Mounted and unmounted

# 1

Hi Guys
I'm new here, and I need urgent help.
These are my first steps toward becoming an AIX admin, and I have been given this task:
- install an Oracle database on an AIX machine and create a mount point /u02 of size 100GB for an Oracle standalone database installation.
- download and install the following OS patches:
- IV42025
- IV42024
- IV41380
- IV37790
- IV40002
- IV40001
many thanks

Last edited by Don Cragun; 03-13-2018 at 05:10 AM.. Reason: Remove duplicated text.
# 2  
Welcome khaled_ly84,

There are several large chunks of work here, and much that we don't know from the limited description. I'm afraid I have questions about them all.

Install Oracle database:
  • Do you need to install the Oracle software?
  • Is the Oracle DB software installed and you need to create an instance?
  • Is there an instance that you need to create a schema in?

Create filesystem
  • Do you have more than one volume group? (Use command lsvg to see if there's more than rootvg displayed)
  • Do you need to create a mirrored filesystem (simple local disk) or not (some other hardware protection, e.g. RAID or SAN-provided storage)?

Install OS patches
  • Do you have a subscription to the IBM site to download patches?
  • What is your current OS version? Something like oslevel -s might help here, but I can't be sure of the syntax.
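If you do have access, a sketch of how you might check what is already installed (the APAR numbers are the ones from your original list; this is from memory, so verify the syntax on your system):

```shell
# Show the current OS level including technology level / service pack
oslevel -s

# Check whether a specific APAR is already installed
instfix -ik IV42025

# Check all of the listed APARs in one loop
for apar in IV42025 IV42024 IV41380 IV37790 IV40002 IV40001; do
    instfix -ik "$apar"
done
```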

Before we know a bit more, I'd be worried about proceeding.

Kind regards,
# 3  
Thanks, Rbatte1
what's important to me is the "create filesystem" part, because the one who is going to install Oracle is another guy, and he asked me to
1- create a new volume group and mount it under /u02 with a size of 100GB. (I created a new VG "VG1"), but I want to know how to mount it under /u02 with a size of 100GB
2- no need to mirror the filesystem.
- Install OS patches
I have a subscribed version of AIX 7.1, and what I know is that OS patches means APARs (instfix/APAR).
What I want to know is mounting and unmounting, and also how to increase the paging space.
The commands below may be informative to you:
# oslevel -s
# lsvg

thanks, and sorry if anything is not clear
# 4  
Originally Posted by khaled_ly84
what's important to me is the "create filesystem" part, because the one who is going to install Oracle is another guy, and he asked me to
1- create a new volume group and mount it under /u02 with a size of 100GB. (I created a new VG "VG1"), but I want to know how to mount it under /u02 with a size of 100GB
2- no need to mirror the filesystem.
OK, I will start with this problem, otherwise the post will perhaps become too long. In the following I will try to explain some concepts as well as answer your question. Try to understand the concepts rather than just copying the commands, because this is what will help you in the long run more than anything else.

The Logical Volume Manager

The LVM is a mechanism to deal with real (storage) hardware on an abstracted level, so that it is possible to logically treat a very diverse collection of things (single disks, RAID-sets, LUNs from a storage box, ...) in the same coherent way. The LVM has a layered architecture and we will explore one layer after the other.

First layer: disks and other devices that provide raw storage capacity

When we attach disks (or, rather, the devices I mentioned in the title - things that provide something to a system which can be measured in kilobytes) to a system, we need to make these available to the LVM first before we can use them. Furthermore, we want to be able to group these disks to reflect that some of them provide storage for the same purpose.

Suppose we have a system with three different applications: there are, say, 7 disks connected to the system, and 2 are used for appl1, one for appl2 and the rest for appl3. We will want to organise the disks so that it immediately becomes clear which disks provide capacity for which application.

For this we have "volume groups", or VGs for short. Every "disk" (we treat this term very loosely here, including RAID-sets, LUNs, and what not) is attached to exactly one such VG. When we make a disk part of a VG it becomes a "physical volume", or PV. Here are some commands to deal with physical volumes:

lsvg              # show a list of all VGs
lsvg <VG>         # show the attributes of volume group VG
lsvg -p <VG>      # show a list of PVs in VG
lspv              # show a list of all hdisks and to which VG they belong
lspv <hdisk>      # show the attributes of hdisk and a free-distribution

When you create a VG you have to make some decisions which can't be changed (at least not easily) afterwards. These are:

a) the type of the VG
The logical volume manager is a piece of software which is over 20 years old. This means that some of its concepts (especially where it deals with disk sizes) come from a time when disk space was measured in MB rather than GB or TB. The "classical" VG had a maximum of 32 disks with a maximum of 1019 physical partitions (see below) each. This became a problem at some point in time, and IBM developed a way to convert such "classical" VGs to "Big VGs", which alleviated some restrictions a bit. Later another type, the "Scalable VG", was introduced, which loosened even more restrictions. Today there is no reason to create anything other than a scalable VG.

If you have a VG and don't know which type it is (there is no direct way to display it) do the following (handle the following low-level commands with EXTREME care!) to display it:

readvgda <one-PV-of-the-VG> | head -n 7

b) the size of the physical partitions (PPs)
When a disk becomes part of a VG (and thus a PV) it is partitioned into so-called physical partitions. These PPs are the smallest units of space that can be assigned. What makes the LVM so versatile is that you don't have to be concerned where (that is: from which disk) a PP comes from. As I said above, the classical restriction of 1019 PPs per PV no longer applies with scalable VGs, but it still pays to select a reasonable size for the PPs. "2MB" for a database is just nonsense. Ask yourself what would be the smallest unit you would reasonably add to a filesystem and use this as the PP size. For most cases a PP size of 512MB-2GB is a reasonable choice.
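As a worked example of how the PP size determines allocation granularity: a 100GB logical volume with a 512MB PP size occupies 200 PPs (and thus 200 LPs with a single copy). A quick shell sketch of that arithmetic:

```shell
# Size of the desired LV in MB (100GB) and the chosen PP size in MB
lv_mb=$((100 * 1024))
pp_mb=512

# Number of PPs (= LPs with one copy) needed to hold the LV
pps=$((lv_mb / pp_mb))
echo "$pps PPs of ${pp_mb}MB each"    # prints: 200 PPs of 512MB each
```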

c) create the VG concurrent-capable?
If there is any chance that the VG is (or will become) part of a cluster, you should create it as concurrent-capable. This means the VG can be opened at more than one system at the same time. If you have a strict single-system this won't matter in any way.

d) The major number
Every VG has its own major number. Within clusters, where you have more than one system that can activate the VG, it is good practice to make sure the major number is maintained throughout the cluster, so that the VG has the same major number on every node. You do not need this to make the cluster work, but it makes administration easier. If you have a (strict) single system this doesn't matter so much.
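Pulling points a) through d) together, creating a scalable VG could look like the following sketch (the VG name oradatavg and the disk hdisk2 are examples, not taken from your system):

```shell
# Find a disk that is not yet in any VG ("None" in the VG column)
lspv

# Create a scalable VG (-S) with 512MB PPs (-s), named oradatavg, on hdisk2
mkvg -S -s 512 -y oradatavg hdisk2

# Optional: pick an explicit major number (-V) and make the VG
# concurrent-capable (-C) if it will ever be part of a cluster:
# mkvg -S -C -V 60 -s 512 -y oradatavg hdisk2

# Verify the result
lsvg oradatavg
```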

OK, so much for the first part. I will write some more later today, but for now this has to suffice. I hope you found that helpful.

One more point: it helps if you introduce naming conventions and stick to them. Don't name a VG "VG1", because once you have ten VGs, named VG1-VG10, you won't be able to tell which one is what. Give them descriptive names like "oraclevg" or "myorainstvg" or whatever - but something that tells you immediately what belongs where.

My own convention (but that is as good as anything else, just be consistent) is to name VGs with a "vg" at the end, because the default "rootvg", which holds the system FSes, is named that way. So I might have a "rootvg", an "orabinvg", an "oradatavg", etc. on one system.

I hope this helps.

# 5  
As promised, here is the second part:

First, there are some commands to add and remove disks to existing VGs. You should know these:

extendvg <VGname> <hdisk>      # adds the disk to the VG
reducevg <VGname> <hdisk>      # removes the disk from the VG

Second layer: Logical Volumes (LVs)

After having created a VG you can start to create logical volumes within it. Notice that an LV is NOT a filesystem - a logical volume is rather some space where you can put a filesystem. Another option is to use a logical volume as a "raw disk" - some databases do this because bypassing the filesystem layer comes with a (nowadays minuscule) performance advantage. It is also possible to put a swap space onto such an LV.

Also, when creating LVs you are making some decisions which cannot be reversed later - you'd have to recreate the LV in such a case with different attributes and move the data. So, again, plan thoroughly, then revise your plans, then plan again and only then implement them.

First, like with VGs, I'd suggest a naming convention and being consistent with it. Personally I always name LVs with an "lv" at the end (like the VGs have "vg" at the end) and some hint about the usage of the LV. You can have the LVs named automatically when you create them, but I suggest NOT doing that. Ending up with LVs named "lv00"-"lv127" pretty easily makes you lose orientation, and once you have deleted the wrong one because you confused "lv87" with "lv86" you are in deep kimchi.

Now, after so many warnings, what is an LV actually? It is a collection of so-called Logical Partitions (LPs). A Logical Partition is like a PP, has the same size, and it can have 1, 2 or 3 PPs representing it. That means you can even have mirrored data by means of the LVM itself.

Because you can later alter the number of PPs by which an LP is represented (even while the filesystem is online and in use), you can move LVs across disks: suppose you have an LV residing on PPs coming only from one disk. Now that you have introduced a new disk and want to replace the old one with it, you need to move the data to the new disk somehow: you create a mirror with a second set of PPs from the new disk to represent each LP, wait until everything is synchronised, then remove the mirror with the set of PPs coming from the old disk. Everything is now on the new disk without any downtime.
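This migration can be sketched like this (the LV, VG and hdisk names are hypothetical placeholders):

```shell
# Add a second copy of each LP of applv, placed on the new disk hdisk3
mklvcopy applv 2 hdisk3

# Synchronize the new copies (may take a while; the FS stays online)
syncvg -l applv

# Drop the copies residing on the old disk, going back to one copy per LP
rmlvcopy applv 1 hdisk2

# The old disk can now be removed from the VG
reducevg myvg hdisk2
```

For whole-disk moves there is also the migratepv command, which wraps this copy-and-synchronise logic into a single step.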

It is possible to completely control which PP represents which LP (via a so-called "map file"), but this is rarely done. Usually you rely on the automatisms built into the LVM and only make high-level requests about the layout of the mapping. You can request that each copy (if you have more than one PP representing each LP) resides on a separate disk, which makes sense because if you have mirrored data you won't want the mirrors to end up on the same device. This is called the "strictness" attribute.

You can also request the PPs to come from as many disks as possible (this is to engage as many disks as possible so that the load on each disk levels out) or as few as possible (this will make the layout less complicated and more easily maintainable but will come with a slight performance penalty). This is - rather unintuitively - called "inter-policy". Request "maximum" to spread the LV over as many disks as possible, "minimum" to use as few disks as possible.

You can also control where on the disk the LV is placed. The fastest part of the disk is the center, and it gets slower the closer to the edges it gets, because the read-/write-heads will have the longest way to travel there. This is called the "intra-policy" and is not to be confused with the "inter-policy" from above.

Notice that all these performance considerations can be skipped if you deal with LUNs from a storage box. All of the above applies only to real, physical hard disks consisting of rotating magnetic platters. It also does not apply to flash disks and the like.

Here are the most important commands for dealing with LVs:

lsvg -l <VGname>          # list all LVs in the VG and their status
lslv <LVname>             # show the attributes of an LV
lslv -m <LVname>          # show the exact mapping of PPs to LPs in an LV
mklv                      # create an LV. Has a lot of options, see the man page
rmlv                      # remove an LV, the LV has to be closed
chlv                      # changes attributes of an LV, also see the man page
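Putting the policy attributes together, creating an LV for your 100GB case might look like this sketch (oradatalv and oradatavg are example names, and the LP count assumes a 512MB PP size, so 200 LPs make 100GB):

```shell
# Create a JFS2-type LV of 200 LPs (= 100GB at 512MB PP size),
# spread over as many disks as possible (-e x, inter-policy "maximum"),
# placed at the disk center (-a c), named oradatalv, in VG oradatavg
mklv -y oradatalv -t jfs2 -e x -a c oradatavg 200

# Check the result and the PP-to-LP mapping
lslv oradatalv
lslv -m oradatalv
```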

Third layer: filesystems

At last we come to the filesystems: you create them by basically formatting an LV, which turns it into an FS. Notice that an FS resides on an LV but these two are different things - or rather different logical layers. It doesn't really help to clarify things that the command to create an FS (crfs) will create an LV automatically if there is none, and the command to remove an FS (rmfs) will automatically remove the underlying LV if not invoked with special options. Still, LVs and FSs are NOT the same.

When creating FSs you don't have to consider as much as you used to: disk space is cheap and plentiful today, and you need not concern yourself at all over the waste of a few KB. Fine-tuning the number of inodes and similar things is rarely done any more, because using a few MB of space more or less will not affect anything.

What you need to take into account, though, is if you work in a cluster environment or not: if so, you need to make sure the information about LVs, FSs etc. is propagated throughout the cluster nodes consistently. You either do that with "learning imports" (see the importvg command) or by using the cluster commands instead of the normal commands to create or manage VGs, LVs and FSs.

When creating FSs in a cluster make sure they are NOT mounted automatically at boot time! The cluster software itself will activate them when a VG is activated on a certain node.

Another thing you want to take into consideration is the placement of the log volume: AIX uses a "journaled file system" and somewhere the JFS-log has to be placed. Historically there was a special LV for that in each VG but nowadays with JFS2 it is better to use "inline logs", which set aside a part of the FS to do the same.
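To come back to your original /u02 question: with a VG in place, the whole third layer for a 100GB filesystem could be sketched like this (oradatavg and oradatalv are example names; on a standalone box mounting at boot with -A yes is fine, in a cluster use -A no as explained above):

```shell
# Variant 1: let crfs create the LV itself in VG oradatavg (-g),
# sized 100GB, with an inline JFS2 log, mounted at /u02
crfs -v jfs2 -g oradatavg -m /u02 -a size=100G -a logname=INLINE -A yes

# Variant 2: put the FS on an LV you created beforehand (-d)
# crfs -v jfs2 -d oradatalv -m /u02 -a logname=INLINE -A yes

# Mount it and check
mount /u02
df -g /u02

# Unmount again when needed (the FS must not be busy)
umount /u02

# Grow it later, even while mounted, e.g. by another 10GB
chfs -a size=+10G /u02
```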

A feature you also want to use is "mount groups": you can make FSs part of a mount group, and mount groups can be used to mount or unmount groups of FSs together. This is a very practical way of making sure all FSs related to each other (like all FSs of a certain application) are mounted or unmounted together, which saves a lot of unnecessary work and headache when managing a system. If you forgot to put an FS into a mount group, just edit the file /etc/filesystems, which contains information about all the filesystems in stanza format. Here is an example:

         dev        = /dev/myappbinlv
         vol        = "binlv"
         mount      = true
         check      = true
         log        = INLINE

Add the line "type = <groupname>" to such a stanza to add a filesystem to a mount group, like this:

         dev        = /dev/myappbinlv
         vol        = "binlv"
         mount      = true
         check      = true
         log        = INLINE
         type       = myapp
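With the type attribute in place, the whole group can then be handled together (myapp is the example group name from the stanza above):

```shell
# Mount every filesystem whose stanza has "type = myapp"
mount -t myapp

# ... and unmount them all together again
umount -t myapp
```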

If you have questions please ask.

I hope this helps.

# 6  
Thank you sooooooooo much, that was so informative for me. I really appreciate your effort and time.
Really, thank you.