Full Discussion: disk issue
Operating Systems > AIX, Post 302111520 by karthikosu, Wednesday 21 March 2007, 01:39 PM
disk issue

Hi, I have an AIX 4.3 box here with problems caused by a disk in the volume group below.
volume group: workvg

# lspv
hdisk4    000166789869ab2d    workvg
hdisk5    000166789869b96b    workvg

Now hdisk4 has failed, and because quorum was enabled, workvg was varied off.
I have to replace the disk. I tried the commands below to see whether hdisk4 and hdisk5 were mirrored, but got the following errors:
# lsvg -l workvg
0516-010 : Volume group must be varied on; use varyonvg command.

# varyonvg workvg
PV Status:    hdisk4    000166789869ab2d    PVNOTFND
              hdisk5    000166789869b96b    PVINVG
0516-052 varyonvg: Volume group cannot be varied on without a
quorum. More physical volumes in the group must be active.
Run diagnostics on inactive PVs.
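
Here PVNOTFND indicates that hdisk4 can no longer be found on the system, while PVINVG means hdisk5 is still a recognized member of the group. With quorum lost, a normal varyonvg will keep failing. A common next step, sketched here on the assumption that the surviving hdisk5 holds a good copy of the VG metadata, is to force the group online:

    varyonvg -f workvg     # force activation without quorum; anything that
                           # lived only on the missing hdisk4 stays unavailable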

# chvg Qn workvg
0516-306 getlvodm: Unable to find Qn in the Device
Configuration Database.
0516-732 chvg: Unable to change volume group Qn.
0516-1260 chvg: Device configuration database has been updated with new information.
Since the volume group workvg is not varied on, if the workvg is of Big
Volume group type, chvg command must be run with the volume group varied
on for these attributes to be saved across exportvg/importvg operation.
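
The first two errors suggest the dash was simply dropped: chvg has parsed "Qn" as a volume group name. The quorum flag is spelled as below (a sketch; note that chvg generally expects the volume group to be varied on, and the quorum change takes effect at the next activation):

    chvg -Qn workvg        # turn off the quorum requirement for workvg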


Since the /u01 filesystem (Oracle) resides on this VG, can someone tell me how I should go about replacing the disk?

THANK YOU!
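
One possible replacement sequence, sketched on the assumption that the forced varyon above succeeds and that lsvg -l workvg then shows the logical volumes mirrored across both disks (mirrored LVs report PPs = 2 x LPs); the hdisk numbering is taken from the post and may differ once the disk is swapped:

    lsvg -l workvg                 # confirm the LVs really are mirrored
    unmirrorvg workvg hdisk4       # drop the mirror copies held on the failed disk
    reducevg -df workvg hdisk4     # remove it from workvg; if the disk is no longer
                                   # seen, use its PVID 000166789869ab2d instead
    rmdev -dl hdisk4               # delete the stale device definition
    # physically replace the disk, then:
    cfgmgr                         # configure the new disk (often returns as hdisk4)
    extendvg workvg hdisk4         # add the new disk to workvg
    mirrorvg workvg hdisk4         # re-create and synchronize the mirror copies

If the logical volumes turn out not to be mirrored, the data that lived on hdisk4 is lost and /u01 would have to be restored from backup after the new disk is added. Later AIX levels also offer replacepv, which collapses several of these steps for a mirrored disk.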
 

10 More Discussions You Might Find Interesting

1. UNIX for Advanced & Expert Users

Solaris 8 disk/mirroring issue

Hello! I recently inherited system administration duties for a Sun V880 box. The system has 6 physical hard disks. In doing some basic maintenance, I found they're configured for mirroring. I ran the metastat and metadb commands, and many of the mirrors are showing they are in need of... (5 Replies)
Discussion started by: ghuber

2. UNIX for Advanced & Expert Users

partition disk issue

hi guys, I've got a strange issue; maybe one of you has experienced this. SunOS 5.10 Generic_118833-33 sun4u sparc SUNW,Sun-Fire-V440. Everything is mirrored. My issue is that I have an empty directory that nevertheless seems to have data on it. Let me show you: # df -h /data Filesystem size used... (10 Replies)
Discussion started by: moustik

3. Solaris

Disk Suite issue

Solaris 9. We had a problem server where only root was not mirrored (before my time). When I tried to mirror it, the live root slice bailed with errors at 97%, so it couldn't be mirrored. It's a matched pair of boxes (nfs1 and nfs2) and they are interchangeable with regards to the NFS... (0 Replies)
Discussion started by: BOFH

4. AIX

Disk I/O Issue

We have a filesystem that spans 8 hard disks, but I am facing a disk I/O issue because data is not spreading across all the disks. Is there any way I can check how data is spread, and is there any parameter we need to change to spread data across all disks? OS: AIX 5.3 (3 Replies)
Discussion started by: ukatru

5. AIX

Disk I/O Issue using LVM

We have a filesystem that spans 8 hard disks, but I am facing a disk I/O issue because data is not spreading across all the disks. Is there any way I can check how data is spread, and is there any parameter we need to change to spread data across all disks? OS: AIX 5.3 (1 Reply)
Discussion started by: ukatru

6. Filesystems, Disks and Memory

Issue available disk space while using xdd

Good morning, I seem to be running into an issue with some drives I have attached to my Solaris server. The drives are attached correctly, the partitions are arranged with fdisk, the ext3 filesystem is set up using mkfs, and finally the drive is mounted. When I use xdd to perform read/write... (3 Replies)
Discussion started by: mrpogo07

7. UNIX Desktop Questions & Answers

Issue with disk space usage

I have the following line in my "df -h" output:
Filesystem    Size   Used   Avail   Capacity   Mounted on
/dev/ad4s1a   496M   495M   -39M    109%       /
What is the issue with having 9% excess utilisation? How can I find out what this partition is... (2 Replies)
Discussion started by: figaro

8. Solaris

T5220 disk mapping issue

Hi, More a Sun T5220 problem than a Solaris 10 problem, but perhaps someone had a similar issue. For starters, the output with 1 disk in slot 0 of the server. It points to PhyNum 5, where I would expect PhyNum 0. {0} ok probe-scsi MPT Version 1.05, Firmware Version 1.22.00.00 Target... (2 Replies)
Discussion started by: ejdv

9. Solaris

Server disk issue need help

Hello all, Our Solaris 9 Sun Fire 480R backup server (in another city) is throwing disk errors such as these repeatedly:
WARNING: vxvm:vxio: Subdisk rootdisk-02 block 24037056: Uncorrectable read error
WARNING: vxvm:vxio: Subdisk rootdisk-02 block 7767072: Uncorrectable write error ... (18 Replies)
Discussion started by: RyanV

10. Solaris

Solaris 11 disk issue

I have 2 disks in my system. I recently added a zpool to the disk, but today I changed my mind and destroyed the zpool (zpool destroy -f extra). The zpool is now deleted and I want to partition the disk, so I deleted the only partition on the disk. Now when I run format again, format... (13 Replies)
Discussion started by: cbtshare
vgmove(1M)																vgmove(1M)

NAME
     vgmove - move data from an old set of disks in a volume group to a new
     set of disks

SYNOPSIS
     vgmove [-A autobackup] [-p] -f diskmapfile vg_name
     vgmove [-A autobackup] [-p] -i diskfile -f diskmapfile vg_name

DESCRIPTION
     The vgmove command migrates data from the existing set of disks in a
     volume group to a new set of disks. After the command completes
     successfully, the new set of disks will belong to the same volume
     group. The command is intended to migrate data on a volume group from
     old storage to new storage.

     The diskmapfile specifies the list of source disks to move data from
     and the list of destination disks to move data to. The user may choose
     to list only a subset of the existing physical volumes in the volume
     group that need to be migrated to a new set of disks. The format of the
     diskmapfile is shown below:

          source_pv_1    destination_pv_1_1 destination_pv_1_2 ...
          source_pv_2    destination_pv_2_1 destination_pv_2_2 ...
          ...
          source_pv_n    destination_pv_n_1 destination_pv_n_2 ...

     If a destination disk is not already part of the volume group, it will
     be added; see vgextend(1M). Upon successful completion, the source disk
     will be automatically removed from the volume group; see vgreduce(1M).
     After successful migration, the destination disks are added to the LVM
     configuration files, and the source disks, along with their alternate
     links, are removed from the LVM configuration files.

     A sample diskmapfile is shown below:

          /dev/disk/disk1    /dev/disk/disk51 /dev/disk/disk52
          /dev/disk/disk2    /dev/disk/disk51
          /dev/disk/disk3    /dev/disk/disk53

     The diskmapfile can be created manually, or it can be generated
     automatically using the -i diskfile and -f diskmapfile options. The
     argument diskfile contains a list of destination disks, one per line,
     such as the sample file below:

          /dev/disk/disk51
          /dev/disk/disk52
          /dev/disk/disk53

     When the -i option is given, vgmove reads the list of destination disks
     from diskfile, generates the source to destination mapping, and saves
     it to diskmapfile.

     The volume group must be activated before running the vgmove command.
     If the command is interrupted before it completes, the volume group is
     left in the same state it was in at the beginning of the command. The
     migration can be continued by running the command again with the same
     options and disk mapping file.

   Options and Arguments
     The vgmove command recognizes the following options and arguments:

     vg_name          The path name of the volume group.

     -A autobackup    Set automatic backup for this invocation of vgmove.
                      autobackup can have one of the following values:

                      y    Automatically back up configuration changes made
                           to the volume group. This is the default. After
                           this command executes, the vgcfgbackup command is
                           executed for the volume group; see
                           vgcfgbackup(1M).

                      n    Do not back up configuration changes this time.

     -f diskmapfile   Specify the name of the file containing the source to
                      destination disk mapping. If the -i option is also
                      given, vgmove will generate the disk mapping and save
                      it to this file. (Note that if the diskmapfile already
                      exists, the file will be overwritten.) Otherwise,
                      vgmove will perform the data migration using this
                      diskmapfile.

     -i diskfile      Specify the name of the file containing the list of
                      destination disks. This option is used with the -f
                      option to generate the diskmapfile. When the -i option
                      is used, no volume group data is moved.

     -p               Preview the actions to be taken but do not move any
                      volume group data.

   Shared Volume Group Considerations
     For volume group versions 1.0 and 2.0, vgmove cannot be used if the
     volume group is activated in shared mode. For volume groups version 2.1
     (or higher), vgmove can be performed when the volume group is activated
     in shared, exclusive, or standalone mode. Note that the lvmpud daemon
     must be running on all the nodes sharing a volume group activated in
     shared mode; see lvmpud(1M).

     When a node wants to share the volume group, the user must first
     refresh that node's copy of the volume group configuration if physical
     volumes were moved in or out of the volume group while the volume group
     was not activated on that node.

     LVM shared mode is currently only available in Serviceguard clusters.

EXTERNAL INFLUENCES
   Environment Variables
     LANG determines the language in which messages are displayed. If LANG
     is not specified or is null, it defaults to "C" (see lang(5)). If any
     internationalization variable contains an invalid setting, all
     internationalization variables default to "C" (see environ(5)).

EXAMPLES
     Move the data in volume group /dev/vg01 according to the source to
     destination mapping in mapfile; after the migration, the source disks
     are removed from the volume group:

          vgmove -f mapfile /dev/vg01

     Generate a source to destination disk map file for /dev/vg01, where the
     destination disks are those listed in diskfile:

          vgmove -i diskfile -f mapfile /dev/vg01

SEE ALSO
     lvmpud(1M), pvmove(1M), vgcfgbackup(1M), vgcfgrestore(1M),
     vgextend(1M), vgreduce(1M), intro(7), lvm(7).
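
Putting the man page together, an end-to-end migration might look like the sketch below. The volume group name (/dev/vg01) and file names (newdisks, mapfile) are illustrative, and the -i/-f/-p spellings follow the option reconstruction above:

    # newdisks lists the destination disks, one per line, e.g.:
    #   /dev/disk/disk51
    #   /dev/disk/disk52
    #   /dev/disk/disk53

    vgmove -i newdisks -f mapfile /dev/vg01   # generate the source-to-destination map; no data moves
    vgmove -p -f mapfile /dev/vg01            # preview the planned migration
    vgmove -f mapfile /dev/vg01               # migrate; source disks are removed on success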