Extremely slow file writing with many small files on mounted NAS
Posted by TupTupBoom in UNIX for Dummies Questions & Answers on 06-08-2015 at 06:03 PM

I am working on a CentOS 6.4 server which has two mounted NAS devices: one running FreeBSD with a ZFS raidz2 pool of 20 x 3 TB HDDs, and one I don't know much about, except that it has 7 HDDs in RAID-6.

I was running tar -zxvf on an 80 MB tarball containing 50,000 small files. Even with no other programs running, it was taking a very long time (more than 2 days before I cancelled it, and it would probably have taken far longer). The problem only occurs when the working directory is on the NAS (the FreeBSD one: ZFS raidz2, 60 TB, 20 x 3 TB drives).
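
For reference, this is roughly how I timed the two cases (the paths are examples, not my real directories):

  time tar -zxf files.tar.gz -C /data/local/test   # working dir on the server's own HDD
  time tar -zxf files.tar.gz -C /mnt/nas/test      # working dir on the ZFS NAS mount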

If I run the same command with the tarball and working directory on the server itself (on the HDD inside the server), it completes in 3 minutes. I also tried it on the other NAS, and that took 4-5 hours: much faster, but still extremely slow.

I tried copying the directory containing the unpacked tarball contents over to the NAS, which ran at more or less the same speed as unpacking on the NAS itself (not surprising). As a test I unpacked the tarball on my desktop PC and tried scp to copy the tree over, but the scp command failed with exit code 255. I also tried this from a Mac desktop with completely different commands and got the same result. In the other direction, scp completed in about a minute when copying the unpacked tree from the HDD attached to the server to the desktop PC. FTP from my desktop could write files to the NAS, but at the same slow speed as copying from the CentOS machine.
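
For completeness, the copies I attempted looked roughly like this (hostnames and paths are made up for illustration):

  scp -r unpacked_dir user@centos-server:/mnt/nas/dest   # desktop -> NAS mount: failed, exit 255
  scp -r user@centos-server:/data/local/unpacked_dir .   # server HDD -> desktop: ~1 minute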

During these operations I watched iotop and saw a huge amount of iowait: 99.99% for the slower, larger NAS and ~30% for the 7-drive NAS. I also see disk utilization oscillating around, and often pegged at, 100% on all the drives. I checked zpool status and didn't see any degraded volumes. For whatever reason I can't get smartctl to work on my FreeBSD NAS, and I can't ssh into the other NAS at all to do anything with it, so I have not run the comprehensive SMART tests I would like. In theory I could use another tool.
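
The checks I ran (or tried to run) were along these lines; the device name is just an example:

  iotop -o                # on the CentOS client: show only processes actually doing I/O
  iostat -x 5             # per-device utilization and await times (sysstat package)
  zpool status -v         # on the FreeBSD NAS: pool health and per-vdev error counters
  zpool iostat -v 5       # per-vdev ops and throughput while the extraction runs
  smartctl -a /dev/ada0   # per-disk SMART data; would need to be repeated per drive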

My next step is to use Wireshark and see whether there is some problem with the TCP settings; perhaps something is tuned in a way that is not ideal. I also recognize that this CentOS version is quite old.
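
Assuming the share is NFS (I haven't confirmed the protocol), I plan to capture and check along these lines; the interface and address are examples:

  tcpdump -i eth0 -w nas.pcap host 192.168.1.50   # capture traffic to the NAS for Wireshark
  nfsstat -m                                      # negotiated rsize/wsize, proto, timeo per mount
  mount | grep nfs                                # mount options currently in effect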

Any suggestions would be helpful.
 
