Operating Systems → AIX: Performance issues for LPAR with GPFS 3.4
Posted by aixromeo on Tuesday, 16 August 2011, 02:00 AM
Performance issues for LPAR with GPFS 3.4

Hi,

We have GPFS 3.4 installed on two AIX 6.1 nodes, with three GPFS mount points:

/abc01 4 TB (14 x 300 GB disks from the XIV SAN)
/abc02 4 TB (14 x 300 GB disks from the XIV SAN)
/abc03 1 TB (multiple 300 GB disks from the XIV SAN)

All forty-odd disks have been assigned from the XIV to the VIOS, and from the VIOS they are all mapped to the LPAR through a single vhost (virtual SCSI server adapter).

We have Oracle RAC installed and are seeing 100% disk busy in topas, with degraded performance. I have already raised queue_depth to 32 for all disks on both the LPAR and the VIOS.
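For reference, this is a minimal sketch of how queue_depth is usually inspected and changed on AIX; the hdisk name is an example, and on the VIOS the equivalent AIX commands are run from a root shell (oem_setup_env):

```shell
# Show the current queue depth for one disk
lsattr -El hdisk4 -a queue_depth

# Raise it; the disk must not be in use, or add -P to apply the
# change in the ODM only, taking effect at the next reconfiguration/reboot
chdev -l hdisk4 -a queue_depth=32 -P

# After the change, watch extended disk statistics (service times,
# queue fill) to see whether the queue is still saturating
iostat -D hdisk4 5 3
```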

The GPFS maxFilesToCache parameter is set to 5000, but there is still no improvement. I suspect I should have created more vSCSI server adapters (vhosts) and spread the disk mappings across them on the LPAR. Could this be the issue? If it is:
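For completeness, GPFS cache settings can be inspected and adjusted with the mm* commands; a hedged sketch follows, where the values are illustrative only, not tuning recommendations:

```shell
# Show the current cluster configuration, including maxFilesToCache
mmlsconfig

# Raise the stat/inode cache and the data cache (example values)
mmchconfig maxFilesToCache=10000
mmchconfig pagepool=1G

# Most cache changes take effect only after GPFS is restarted on the nodes
mmshutdown -a && mmstartup -a

# Long waiter threads suggest whether GPFS itself or the disk path is slow
mmfsadm dump waiters | head
```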

Is it possible to unmap some of the disks from the existing vhost and map them to a new vhost while keeping the data on the disks intact? I assume Oracle RAC should not misbehave?
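Remapping a backing disk to a different vhost only changes the virtual path; the data on the LUN itself is untouched. A hedged sketch of the VIOS side follows, where the vtscsi/vhost/hdisk names are examples, and the disk should be varied off or its access stopped on the client LPAR first (for RAC, one node at a time):

```shell
# On the VIOS: list current mappings to find the virtual target device
lsmap -vadapter vhost0

# Remove the existing mapping (vtscsi5 is an example name); only the
# mapping is deleted, the backing device and its data are untouched
rmvdev -vtd vtscsi5

# Recreate the mapping on a new vhost (created beforehand via the HMC/DLPAR)
mkvdev -vdev hdisk12 -vadapter vhost1

# On the client LPAR: rescan and confirm the disk returns with its PVID
cfgmgr
lspv
```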

Any help would be appreciated.

Regards,



 

vgchgid(1M)

NAME
     vgchgid - modify the Volume Group ID (VGID) on a given set of physical devices

SYNOPSIS
     vgchgid PhysicalVolumePath [PhysicalVolumePath] ...

DESCRIPTION
     The vgchgid command changes the LVM Volume Group ID (VGID) on a supplied set of disks.  vgchgid works with any type
     of storage, but it is primarily targeted at disk arrays that are able to create "snapshots" or "clones" of mirrored
     LUNs.  vgchgid accepts a set of raw physical devices and ensures that they all belong to the same volume group
     before altering the VGID (see the WARNINGS section).  The same VGID is set on all the disks; note that for multi-PV
     volume groups, all the physical volumes should be supplied in a single invocation of the command.

   Options
     vgchgid recognizes the following options and arguments:

     PhysicalVolumePath    The raw device path name of a physical volume.

   Background
     Some storage subsystems have a feature which allows a user to split off a set of mirror copies of physical storage
     (termed Business Copy volumes, or BCVs), much as LVM splits off copies of mirrored logical volumes.  As a result of
     the "split", the split-off devices have the same VGID as the original disks.  vgchgid is needed to modify the VGID
     on the BCV devices.  Once the VGID has been altered, the BCV disks can be imported into a new volume group using
     vgimport(1M).

WARNINGS
     Once the VGID has been changed, the original VGID is lost until a disk device is re-mirrored with the original
     devices.  If vgchgid is used on a subset of the disk devices (for example, two out of four), the two groups of
     disks cannot be imported into the same volume group, since they now carry different VGIDs.  The solution is to
     re-mirror all four disk devices, re-run vgchgid on all four BCV devices at the same time, and then use vgimport(1M)
     to import them into the same new volume group.

     If a disk is newly added to an existing volume group and no subsequent LVM operation has been performed that alters
     the LVM structures (in other words, an operation which performs an automated vgcfgbackup(1M)), then a subsequent
     vgchgid may fail, reporting that the disk does not belong to the volume group.  This may be overcome by performing
     a structure-changing operation on the volume group.

     It is the system administrator's responsibility to make sure that the devices provided on the command line are all
     Business Copy volumes of the existing standard physical volumes and are in the ready state and writable.  Mixing
     the standard and BC volumes in the same volume group can cause data corruption.

RETURN VALUE
     vgchgid returns the following values:

          0    The VGID was modified with no error.
          1    The VGID was not modified.
EXAMPLES
     An example showing how vgchgid might be used:

     1. The system administrator creates the Business Continuity (BCV or BC) copy using the array vendor's mirror-split
        commands (the exact commands differ between, for example, EMC Symmetrix and XP disk arrays).  Three BCV disks
        are created.

     2. Change the VGID on the BCV disks.

     3. Make a new volume group for the BCV disks.  This step can be skipped, as the group file will be created
        automatically; if the file is created manually it will have different major and minor numbers (see lvm(7)).

     4. Import the BCV disks into the new volume group.

     5. Activate the new volume group.

     6. Back up the new volume group's LVM data structure.

     7. Mount the associated logical volumes.
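The steps above can be sketched as an HP-UX command sequence; the device paths, the volume group name vgbcv, and the minor number in the mknod call are hypothetical examples, not values taken from this page:

```shell
# 2. Change the VGID on all three split-off BCV disks in one invocation
vgchgid /dev/rdsk/c2t0d0 /dev/rdsk/c2t0d1 /dev/rdsk/c2t0d2

# 3. Create the new volume group's directory and group file (optional;
#    LVM major number is 64, the minor number here is an example)
mkdir /dev/vgbcv
mknod /dev/vgbcv/group c 64 0x020000

# 4. Import the BCV disks into the new volume group
vgimport /dev/vgbcv /dev/dsk/c2t0d0 /dev/dsk/c2t0d1 /dev/dsk/c2t0d2

# 5. Activate, 6. back up the LVM structures, 7. mount the logical volumes
vgchange -a y /dev/vgbcv
vgcfgbackup /dev/vgbcv
mount /dev/vgbcv/lvol1 /mnt/bcv
```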
SEE ALSO
     vgimport(1M), vgscan(1M), vgcfgbackup(1M).