Performance check without a counterpart


 
# 1  
Old 12-04-2009

Hi,

Is it possible to check the network and disk speed limits of a box without a remote counterpart for netio and similar tools?

I transferred 1 terabyte last night, which is not really much, and I need to find the bottleneck. The remote server is soon going to be retired, and I still need to copy the full 8 TB to the new machine. At this speed that will take at least a week!
# 2  
Old 12-04-2009
A Solaris box? What version?
# 3  
Old 12-04-2009
Solaris 10/Intel, latest patch level.
# 4  
Old 12-04-2009
It would help if you provided some metrics.
What protocol are you using?
On what interface/speed/mode?
What actual rate are you observing?
What components are between the two boxes?
What is the disk layout/performance on each side?
What do standard tools like "iostat -xtc 5 5" and "netstat -i 5 5" report during one of these transfers?
# 5  
Old 12-04-2009
Well, that was not my initial question, but if it helps, here you go:

It's a tar-over-netcat transfer via a Gigabit connection, 1000 full duplex (the pipeline is sketched below).

The rate is 1 terabyte in 10 hours.

There is just one Netgear switch between them, Gigabit of course.

The sender has an ext3 filesystem on a 16-port RAID 5 behind a 3ware controller.

The receiver has a ZFS raidz on an Areca 1260 controller in JBOD mode.
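
For reference, the pipeline looks roughly like this; the port and paths are placeholders, and netcat option syntax varies between implementations:

Code:
# receiver (obelix): listen and unpack; run from the target directory
cd /tank/copy && nc -l -p 7000 | tar xf -

# sender: stream the tree into netcat
tar cf - /data | nc obelix 7000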

Here's iostat:

Code:
Linux 2.6.26-2-686 (Sender)       12/04/09        _i686_

Time: 13:08:34
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           1.01    0.00    0.83    1.78    0.00   96.38

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               4.08    99.04    8.88    9.38    63.53    68.07     7.21     0.09    5.07   2.69   4.91
sda1              0.02     0.00    0.00    0.00     0.02     0.00    45.63     0.00   12.33   9.13   0.00
sda2              0.01     0.00    0.00    0.00     0.01     0.00    63.98     0.00    7.24   4.53   0.00
sda3              0.01     1.94    0.04    0.40     0.91    18.75    43.76     0.01   20.34   8.72   0.39
sda4              0.00     0.00    0.00    0.00     0.00     0.00     2.00     0.00   13.33  13.33   0.00
sda5              0.01    18.21    0.12    0.73     2.30   151.48   181.64     0.08   99.30   2.70   0.23
sda6              0.00     4.54    0.06    4.54     1.19    72.67    16.05     0.25   54.75   5.22   2.40
sda7              0.00     0.00    0.00    0.00     0.00     0.01    38.96     0.00   29.93  24.06   0.00
sda8              4.02    74.35    8.65    3.71    59.10   226.37    23.09     0.15   12.02   2.68   3.31
sdb               3.12    47.52    7.11    2.63   179.03     2.62    18.66     0.19   19.13   2.96   2.88
sdb1              3.12    47.52    7.11    2.63   179.03     2.62    18.66     0.19   19.13   2.96   2.88
sdc               3.10    87.86    7.20    3.90   241.43   338.17    52.24     0.00    0.05   2.64   2.93
sdc1              3.10    87.86    7.20    3.90   241.43   338.17    52.24     0.00    0.05   2.64   2.93
sdd               0.44     8.70    1.10    0.39   229.58    73.09   202.68     0.11   73.84   3.90   0.58
sdd1              0.44     8.70    1.10    0.39   229.58    73.09   202.68     0.11   73.84   3.90   0.58
dm-0              0.00     0.00   34.93  230.48   307.94   239.04     2.06     0.34    1.23   0.31   8.10
sde               0.00     0.77    0.00    0.04     0.00    12.81   297.92     0.01  155.43   1.43   0.01
sde1              0.00     0.77    0.00    0.04     0.00    12.81   297.96     0.01  155.42   1.42   0.01
sde9              0.00     0.00    0.00    0.00     0.00     0.00    25.96     0.00  206.45 160.57   0.00
sdf               0.00     0.77    0.00    0.02     0.00     6.29   293.46     0.01  288.21   2.08   0.00
sdf1              0.00     0.77    0.00    0.02     0.00     6.29   293.54     0.01  288.24   2.07   0.00
sdf9              0.00     0.00    0.00    0.00     0.00     0.00    26.50     0.00  195.58 143.33   0.00
sdg               0.00     0.77    0.00    0.02     0.00     6.29   293.43     0.01  289.77   2.09   0.00
sdg1              0.00     0.77    0.00    0.02     0.00     6.29   293.51     0.01  289.80   2.08   0.00
sdg9              0.00     0.00    0.00    0.00     0.00     0.00    25.44     0.00  187.52 147.36   0.00
sdh               0.00     0.14    0.00    0.00     0.00     1.15   292.28     0.00  274.15   2.23   0.00
sdh1              0.00     0.14    0.00    0.00     0.00     1.15   292.70     0.00  274.26   2.17   0.00
sdh9              0.00     0.00    0.00    0.00     0.00     0.00    25.96     0.00  221.06 190.37   0.00
dm-1              0.00     0.00    0.00    2.53     0.00    26.55    10.50     0.34  133.89   0.06   0.02

Receiver:

                 extended device statistics                    tty         cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  tin tout  us sy wt id
fd0       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0    0  254   1 18  0 81
sd0       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
sd2       0.1    1.6    6.8   16.4  0.0  0.0    2.2   0   0
sd3       0.1    1.6    4.6   16.4  0.0  0.0    3.0   0   0
sd4       1.3   31.1   86.2 2178.7  0.9  0.7   51.6   5   6
sd5       1.3   31.1   86.0 2178.7  0.9  0.8   50.8   5   6
sd6       1.3   31.1   86.3 2178.7  1.0  0.7   52.0   5   6
sd7       1.3   31.1   86.0 2178.7  0.9  0.7   51.4   5   6
sd8       1.3   31.1   86.0 2178.7  1.0  0.7   52.8   5   6
sd9       1.3   31.1   86.0 2178.7  1.0  0.7   52.0   5   6
sd10      1.4   31.1   86.5 2178.7  0.9  0.8   51.1   5   6
sd11      1.3   31.1   86.2 2179.0  0.9  0.7   51.7   5   6
sd12      1.3   31.1   85.9 2179.0  1.0  0.7   52.9   5   6
sd13      1.3   31.1   86.1 2179.0  1.0  0.7   52.5   5   6
sd14      1.3   31.1   86.1 2179.0  1.0  0.7   52.0   5   6
sd15      1.3   31.1   86.3 2179.0  1.0  0.7   52.7   5   6
sd16      1.3   31.1   86.0 2179.1  0.9  0.7   51.2   5   6
sd17      1.4   31.1   86.6 2179.1  1.0  0.7   51.9   5   6
st7       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
st8       0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0
nfs1      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0

This is netstat:

Code:
Sender:

Kernel Interface table
Iface   MTU Met   RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
bond0      1500 0  2408030752    615 1238006016 0      2825662764      0      0      0 BMmRU
eth0       1500 0  3030502353    594 1570237466 0      3487222518      0      0      0 BMsRU
eth1       1500 0  1700551651      4 1594719503 0      1836835136      0      0      0 BMsRU
eth2       1500 0  2046842103     10 1572882366 0      2168071782      0      0      0 BMsRU
eth3       1500 0  1421965044      2 1749190229 0      533378842      0      0      0 BMsRU
eth4       1500 0  1776744686      4 1646652888 0      2526363523      0      0      0 BMsRU
eth5       1500 0  1021359507      1 1694258156 0      863725555      0      0      0 BMsRU
lo        16436 0  10305047      0      0 0      10305047      0      0      0 LRU

Receiver:

Name  Mtu  Net/Dest      Address        Ipkts  Ierrs Opkts  Oerrs Collis Queue
lo0   8232 loopback      localhost      179    0     179    0     0      0
lo0   8232 loopback      localhost      0      N/A   115    N/A   N/A    0
aggr1 1500 obelix        obelix         1601598744 0     796084178 0     0      0
aggr1 1500 obelix        obelix         1601361709 N/A   795934688 N/A   N/A    0

What I noticed is that the sender accesses the RAID like crazy, while the receiver's HDDs merely write every twenty seconds and then pause. So it must be either the sender or the receiver's network connection.

---------- Post updated at 07:59 AM ---------- Previous update was at 07:13 AM ----------

Running a parallel copy between two RAID arrays, one connected via internal PCI-X, the other via SCSI:

Code:
keto:~/iostat-2.2# df -m /dev/mapper/external-external; sleep 120 ; df -m /dev/mapper/external-external
Filesystem           1M-blocks      Used Available Use% Mounted on
/dev/mapper/external-external
                       6570914     74220   6162911   2% /external_raid
Filesystem           1M-blocks      Used Available Use% Mounted on
/dev/mapper/external-external
                       6570914     78347   6158784   2% /external_raid


An awesome 2 gigabytes per minute (4,127 MB in 120 s, about 34 MB/s). Hell breaks loose...
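
For repeated measurements, a small wrapper along these lines prints the rate directly; this is a hypothetical helper, and the awk field position assumes POSIX df output (hence -P):

Code:
#!/bin/sh
# write throughput on a mount point, measured via the used-space delta
MP=/external_raid        # mount point under test
INTERVAL=120             # seconds between the two samples

used() { df -Pm "$MP" | awk 'NR==2 {print $3}'; }

a=$(used)
sleep "$INTERVAL"
b=$(used)
echo "$(( (b - a) / INTERVAL )) MB/s"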

# 6  
Old 12-04-2009
Please use code tags, not quote tags; that would make your stats more readable. Also, stats with a single sample are useless, at least with Solaris, where the values shown are averages since last boot.

As you are using ZFS, "zpool iostat 2 2" would also be useful.
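
To come back to your original question: you can get a rough peer-less ceiling by looping traffic back to the same host, which exercises the TCP stack without the wire, and by reading the array without the network. A minimal sketch, assuming GNU userland on the Linux sender; the port is arbitrary and netcat flags vary between implementations:

Code:
# terminal 1: discard whatever arrives on the local listener
nc -l -p 9999 > /dev/null

# terminal 2: push ~10 GB of zeros through the local TCP stack
# (some netcats need -q 0 to exit on EOF)
dd if=/dev/zero bs=1M count=10000 | nc localhost 9999

# raw sequential read rate of the source array, no network involved
dd if=/dev/sdb of=/dev/null bs=1M count=10000

dd prints a transfer rate when it finishes; if either number is close to the ~28 MB/s you are seeing end to end, you have found your bottleneck.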

With 1 terabyte in 10 hours, and if I'm not mistaken in my calculation (10^12 bytes / 36,000 s ≈ 27.8 MB/s ≈ 222 Mbps), you are using about 222 Mbps (22%) of payload network bandwidth, which isn't optimal but not that bad either. You might want to try enabling jumbo frames to see if that improves that part.
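
For what it's worth, on the Linux side raising the MTU is a one-liner; on Solaris 10 it is driver-dependent, so treat the e1000g example below as an assumption about your hardware rather than a recipe. The Netgear switch and both ends all have to support the larger frames:

Code:
# Linux sender: raise the MTU on the bond (propagates to the slaves)
ip link set dev bond0 mtu 9000

# Solaris 10 receiver: driver-specific; for e1000g, set
# MaxFrameSize=3 (up to 16K frames) in /kernel/drv/e1000g.conf,
# then after a reconfigure reboot:
ifconfig aggr1 mtu 9000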
# 7  
Old 12-04-2009
Well, we could argue about 1 TB in 10 hours over the network, but 2 GIGAbytes in 60 seconds for a local disk-to-disk copy, roughly 34 MB/s, is not quite what I expect!