How to reclaim hard disks and IP's in AIX?


# 1  
How to reclaim hard disks and IP's in AIX?

Hello

I recently received a request to reclaim hard disks and IP addresses within some AIX systems. The file systems are no longer in use and the client has indicated that it is OK to remove them, reclaim the disks, and release the IPs. Now, since the file systems belong to a Volume group I am assuming I would need to take the VG offline and remove the file systems via smitty. Since the disk belongs to a SAN, I would need to remove the disk definition from ODM and erase the path. What I am looking for is an outline of the steps to follow in order to accomplish the task. Any help would be greatly appreciated.

Thanks in advance

Joe
# 2  
Hello Joseph Sabo, and welcome!

One nice thing about AIX is that it protects itself: you cannot remove anything that the OS believes is in use. You could detach the LUNs at the SAN end, but that would cause hardware alerts.

Are you relinquishing all IP addresses? Is this part of a cluster perhaps?

I presume you mean that you want to reclaim the disks/LUNs for use on other servers/partitions so:
  • You cannot remove these safely from AIX without them first being removed from the relevant volume group.
  • You cannot remove them from the volume group unless they are empty (no logical volumes).
  • You cannot remove the logical volumes if they are in use, either as raw devices (such as for a database) or as formatted and mounted filesystems.
  • You cannot unmount filesystems if they are in use, i.e. there are open files or any process has its current directory within the tree.
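As a quick illustration of that last point, you can check in advance whether anything would block the unmount. This is just a sketch; /data/app1 is a made-up example mount point, not one from your system:

```shell
# List processes with open files or a current directory under the
# filesystem (-c: treat the name as a mount point, -u: show the
# owning user, -x: report on files that would otherwise be skipped)
fuser -cux /data/app1

# If nothing is listed, the unmount should succeed
umount /data/app1
```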

Before going through the process to unwind them: will anything be left of the server at the end of this, or is it being turned off? If it is being decommissioned, shut it down and use the SAN to detach/dis-associate the LUNs from the client AIX node (or whatever terminology your SAN uses). To reuse the server at a later date, you would be best to re-install over the root volume group, so that it doesn't try to use devices that have been removed, or IP addresses that may have been reallocated elsewhere.


If you need to do a partial removal, it would be helpful to know what we're dealing with. Can you share the output (pasted in CODE tags) from the following:-
Code:
oslevel -s
df
lsvg
lsvg | while read vg; do lsvg -p "$vg"; lsvg -l "$vg"; done
lspv


Thanks, in advance,
Robin

Last edited by rbatte1; 01-12-2017 at 10:36 AM.. Reason: Added welcome
# 3  
Hi Robin, and thank you for replying so quickly.

Code:
oslevel -s
6100-09-06-1543

This is the disk in question. The command listed over 100 disks in total, so I have shown only the relevant one.

Code:
lspv
hdisk9          <disk_id>                    <vg_name>        active

As for the rest of the outputs you requested, I would need client permission to post them. I can say that for this server there are a handful of LVs and they all belong to one VG. hdisk9 is the only disk belonging to this VG. There are 3 servers in all, so whatever I do on one can be replicated on the others.

Code:
<vg_name>:
LV NAME             TYPE       LPs     PPs     PVs  LV STATE      MOUNT POINT
lv_name         jfs2log    1       1       1    open/syncd    
lv_name         jfs2       7       7       1    open/syncd    
lv_name         jfs2       64      64      1    closed/syncd  
lv_name         jfs2       223     223     1    open/syncd    
lv_name         jfs2       6       6       1    open/syncd    
lv_name         jfs2       6       6       1    open/syncd

As for the IP, the file systems are databases that have been decommissioned, so there is 1 IP related to the lv's.

---------- Post updated at 09:56 AM ---------- Previous update was at 09:54 AM ----------

Also, Robin, this only concerns these file systems; the server will remain in service as it is being used for other things as well. So this is a partial removal.

Thanks again

Joe

Last edited by rbatte1; 01-12-2017 at 11:22 AM.. Reason: Changed icode tags everywhere to code tags
# 4  
Can we have the output from the other commands too please? They are:-
Code:
df
lsvg | while read vg; do lsvg -p "$vg"; done

Please wrap the output in CODE tags, not ICODE tags. Use the icon that is a white square and has black "co" over "de" on it, rather than the </> one, thanks. I've changed them in your post.

I'm a bit confused by this:-
Quote:
As for the IP, the file systems are databases that have been decommissioned, so there is 1 IP related to the lv's.
I don't understand how an IP relates to a filesystem. If there are multiple IP addresses offered and there is no clustering in use, then you can just remove them. Can you additionally show us the output from:-
Code:
ifconfig -a


Thanks again,
Robin
# 5  
Quote:
Originally Posted by Joseph Sabo
Now, since the file systems belong to a Volume group I am assuming I would need to take the VG offline and remove the file systems via smitty. Since the disk belongs to a SAN, I would need to remove the disk definition from ODM and erase the path.
In AIX all (really all) filesystems are managed by a logical volume manager. As a rule of thumb you have:

- one or more filesystems belong to a volume group
- one or more disks provide the space for this volume group (each disk belongs to exactly one VG)

If a system is set up sensibly volume groups build logical groups of FSes, i.e. all FSes for one application are in one VG. Therefore you most probably want to remove all FSes from one VG.


If so:

- lsvg -p <vgname> to get the disks belonging to the VG (man lsvg). Write these down.
- umount all the FSes in question.
- varyoff the VG after unmounting all the FSes in it. (man varyoffvg)
- export the VG (man exportvg)

The last command will remove the FS definitions from /etc/filesystems as well as the VG. Finally you can remove the disk(s) which (formerly) belonged to the VG:

- rmdev -Rdl <disk> (man rmdev)

Afterwards, remove the zoning/LUN mapping, and it might be necessary to remove the disks from the VIOS too (if they are vSCSI).

Finally do a cfgmgr on the LPAR to make sure the disks are out of the system cleanly (if some artefacts are configured again, you know you have more work to do).
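Putting the steps above together, the whole removal might look like the sketch below. The VG name and mount points are placeholders (only hdisk9 comes from the thread); substitute your real names:

```shell
# 1. Note which disks back the VG (here it shows hdisk9)
lsvg -p myvg

# 2. Unmount every filesystem in the VG
#    (closed/syncd LVs are already unmounted)
umount /fs1
umount /fs2

# 3. Deactivate the VG, then export it; exportvg also removes
#    the stanzas from /etc/filesystems
varyoffvg myvg
exportvg myvg

# 4. Remove the disk definition (and its child paths) from the ODM
rmdev -Rdl hdisk9

# 5. After the SAN admin unmaps the LUN, confirm nothing comes back
cfgmgr
lspv | grep hdisk9    # should return nothing
```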

Note that there was a lot of assuming going on when writing this. In principle the procedure will cover the most common cases, but you might have to change details for, e.g., an EMC SAN running PowerPath drivers. As long as we have no detailed description of your system, we can't tell you in full detail what to do and how.

I hope this helps.

bakunin
# 6  
Hello

@bakunin and @rbatte1 thank you for your responses.

@rbatte1, the request from my customer was that the database was decommissioned on three servers (LPARs) and that it was OK to remove the file systems (LVs) and reclaim the disks, which belong to a SAN. Because these disks are SAN storage there is a cost to them, hence the reason for reclaiming them. Also included in the request was to reclaim one DNS entry and IP per server, as they too are no longer needed. What the IP was used for, I do not know.


@bakunin - The setup on all three servers is that the list of 4 or 5 LVs belongs to one volume group, and each VG has 1 disk assigned to it, with the exception of one VG that has 2 disks. From the looks of it, the general how-to you posted is fine, and I have a good direction I can follow. All I really need now is guidance on removing the IP.

Thank you

Joe
# 7  
If the IP address was offered as part of a cluster, then that makes a difference. Was this server perhaps part of an Oracle RAC, an IBM PowerHA cluster, or something similar?

Can we see the output from ifconfig -a?

The content of /etc/hosts might be useful too.

We might need to look at the output from other network commands to be sure, but we'll get to that later.
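For reference, if the address turns out to be a plain alias on a standalone server (no clustering, which is exactly what the questions above are checking), removing it would look something like this. The interface name and addresses are made-up examples, not from the thread:

```shell
# See what interfaces and aliases exist first
ifconfig -a

# Remove the alias from the running interface (example address)
ifconfig en0 delete 192.0.2.15

# Remove it from the ODM too, so it does not return at reboot.
# The delalias4 attribute exists on recent AIX levels; on older
# levels the persistent alias has to be removed via smitty instead.
chdev -l en0 -a delalias4=192.0.2.15,255.255.255.0

# Finally, ask the DNS admin to retire the matching A record
```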