

AIX AIX is IBM's industry-leading UNIX operating system that meets the demands of applications that businesses rely upon in today's marketplace.

Mounted and unmounted

AIX


Tags
faq, lvm

    #1  
03-13-2018, khaled_ly84 (Registered User)
Mounted and unmounted

Hi guys,
I'm new here and I need urgent help.
These are my first steps toward becoming an AIX admin, and I have the following task:
- Install Oracle Database on an AIX machine and create a mount point /u02 of size 100GB for an Oracle standalone database installation.
- Download and install the following OS patches:
- IV42025
- IV42024
- IV41380
- IV37790
- IV40002
- IV40001
Many thanks

Last edited by Don Cragun; 03-13-2018 at 04:10 AM.. Reason: Remove duplicated text.
    #2  
03-13-2018, rbatte1 (Forum Staff, Lancashire, UK)
Welcome khaled_ly84,

There are several large chunks of work here, and much that we don't know from the limited description. I'm afraid I have questions about all of them.

Install Oracle database:
  • Do you need to install the Oracle software itself?
  • Is the Oracle DB software already installed, so that you need to create an instance?
  • Is there an existing instance in which you need to create a schema?

Create filesystem
  • Do you have more than one volume group? (Use the command lsvg to see whether more than rootvg is displayed.)
  • Do you need to create a mirrored filesystem (simple local disk), or not (some other hardware protection, e.g. RAID or a SAN-provided LUN)?

Install OS patches
  • Do you have a subscription to the IBM site to download patches?
  • What is your current OS version? Something like oslevel -s might help here, but I can't be sure of the syntax.


Until we know a bit more, I'd be wary of proceeding.



Kind regards,
Robin
    #3  
03-14-2018, khaled_ly84 (Registered User)
Thanks, rbatte1.
What's important to me is creating the filesystem, because the person who is going to install Oracle is another guy, and he asked me to:
1. Create a new volume group and mount it under /u02 with a size of 100GB. (I created a new VG, "VG1", but I want to know how to mount it under /u02 with a size of 100GB.)
2. No need to mirror the filesystem.
Install OS patches:
I have a subscribed version of AIX 7.1, and as far as I know, OS patches means APARs (instfix/APAR).
What I want to know is how to mount and unmount, and also how to increase paging space.
The commands below may be informative to you:
# oslevel -s
7100-03-04-1441
# lsvg
rootvg
VG1

Thanks, and sorry if anything is unclear.
    #4  
03-14-2018, bakunin (Forum Staff)
Quote:
Originally Posted by khaled_ly84
What's important to me is creating the filesystem, because the person who is going to install Oracle is another guy, and he asked me to:
1. Create a new volume group and mount it under /u02 with a size of 100GB. (I created a new VG, "VG1", but I want to know how to mount it under /u02 with a size of 100GB.)
2. No need to mirror the filesystem.
OK, I will start with this problem; otherwise the post will perhaps become too long. In the following I will try to explain some concepts as well as answer your question. Try to understand the concepts rather than just copying the commands, because that will help you more in the long run than anything else.

The Logical Volume Manager
==========================


The LVM is a mechanism to deal with real (storage) hardware on an abstracted level, so that it is possible to logically treat a very diverse collection of things (single disks, RAID-sets, LUNs from a storage box, ...) in the same coherent way. The LVM has a layered architecture and we will explore one layer after the other.


First layer: disks and other devices that provide raw storage capacity

When we attach disks (or, rather, the devices I mentioned in the title - things that provide something to a system which can be measured in kilobytes) to a system, we first need to make them available to the LVM before we can use them. Furthermore, we want to be able to group these disks to reflect that some of them provide storage for the same purpose.

Suppose we have a system with three different applications: there are, say, 7 disks connected to the system, and 2 are used for appl1, one for appl2 and the rest for appl3. We will want to organise the disks so that it immediately becomes clear which disk provides capacity for which application.

For this we have "volume groups", or VGs for short. Every "disk" (we treat this term very loosely here, including RAID-sets, LUNs, and what not) is attached to exactly one such VG. When we make a disk part of a VG it becomes a "physical volume", PV. Here are some commands to deal with physical volumes:



Code:
lsvg              # show a list of all VGs
lsvg <VG>         # show the attributes of volume group VG
lsvg -p <VG>      # show a list of PVs in VG
lspv              # show a list of all hdisks and to which VG they belong
lspv <hdisk>      # show the attributes of hdisk and a free-distribution

When you create a VG you have to make some decisions which can't be changed (at least not easily) afterwards. These are:

a) the type of the VG
The Logical Volume Manager is a piece of software that is over 20 years old. This means that some of its concepts (especially where it deals with disk sizes) come from a time when disk space was measured in MB rather than GB or TB. The "classical" VG had a maximum of 32 disks with a maximum of 1019 physical partitions (see below) each. This became a problem at some point, and IBM developed a way to convert such "classical" VGs to "Big VGs", which alleviated some restrictions a bit. Later another type, the "Scalable VG", was introduced, which loosened even more restrictions. Today there is no reason to create anything other than a scalable VG.
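To make this concrete, here is a sketch of creating a scalable VG. The disk name hdisk1 and the VG name oraclevg are just example placeholders; check the mkvg man page on your AIX level before running anything:

```shell
# -S : create a scalable VG
# -s : physical partition size in MB (here 512MB, see point b below)
# -y : the name of the new VG
mkvg -S -s 512 -y oraclevg hdisk1
```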

If you have a VG and don't know which type it is (there is no direct way to display it), do the following to find out - and handle low-level commands like this with EXTREME care:



Code:
readvgda <one-PV-of-the-VG> | head -n 7

b) the size of the physical partitions (PPs)
When a disk becomes part of a VG (and thus a PV) it is partitioned into so-called physical partitions. These PPs are the smallest units of space that can be assigned. What makes the LVM so versatile is that you don't have to be concerned about where (that is, from which disk) a PP comes. As I said above, the classical restriction of 1019 PPs per PV is gone with scalable VGs, but it still pays to select a reasonable size for the PPs. "2MB" for a database is just nonsense. Ask yourself what would be the smallest unit you would reasonably add to a filesystem and use that as the PP size. For most cases a PP size of 512MB-2GB is a reasonable choice.

c) create the VG concurrent-capable?
If there is any chance that the VG is (or will become) part of a cluster, you should create it as concurrent-capable. This means the VG can be opened at more than one system at the same time. If you have a strict single-system this won't matter in any way.

d) The major number
Every VG has its own major number. Within clusters, where you have more than one system that can activate the VG, it is good practice to make sure the major number is maintained throughout the cluster so that the VG has the same major number on every node. You do not need this to make the cluster work, but it makes administration easier. If you have a (strict) single system this doesn't matter so much.


-----------------
OK, so much for the first part. I will write some more later today, but for now this has to suffice. I hope you found it helpful.

One more point: it helps if you introduce naming conventions and stick to them. Don't name a VG "VG1", because once you have ten VGs named VG1-VG10 you won't be able to tell which one is which. Give them speaking names like "oraclevg" or "myorainstvg" or whatever - something that tells you immediately what belongs where.

My own convention (but that is as good as anything else, just be consistent) is to name VGs with a "vg" at the end, because the default "rootvg", which holds the system FSes, is named that way. So I might have a "rootvg", an "orabinvg", an "oradatavg", etc. on one system.

I hope this helps.

bakunin
    #5  
03-14-2018, bakunin (Forum Staff)
As promised, here is the second part:

First, there are some commands for adding disks to and removing disks from existing VGs. You should know these:



Code:
extendvg <VGname> <hdisk>      # adds the disk to the VG
reducevg <VGname> <hdisk>      # removes the disk from the VG


Second layer: Logical Volumes (LVs)

After having created a VG you can start to create logical volumes within it. Notice that an LV is NOT a filesystem - a logical volume is rather some space where you can put a filesystem. Another option is to use a logical volume as a "raw device" - some databases do this because bypassing the filesystem layer comes with a (nowadays minuscule) performance advantage. It is also possible to put swap space onto such an LV.

Also, when creating LVs you are making some decisions which cannot be reversed later - you would have to recreate the LV with different attributes and move the data. So, again, plan thoroughly, then revise your plans, then plan again, and only then implement them.

First, like with VGs, I'd suggest a naming convention and being consistent with it. Personally I always name LVs with an "lv" at the end (like the VGs have "vg" at the end) plus some hint about the usage of the LV. You can have LVs named automatically when you create them, but I suggest NOT doing that. Ending up with LVs named "lv00"-"lv127" pretty easily makes you lose orientation, and once you have deleted the wrong one because you confused "lv87" with "lv86" you are in deep kimchi.

Now, after so many warnings, what is an LV actually? It is a collection of so-called Logical Partitions (LPs). A Logical Partition is like a PP - it has the same size - and it can have 1, 2 or 3 PPs representing it. That means you can even have mirrored data by means of the LVM itself.

Because you can later alter the number of PPs by which an LP is represented (even while the filesystem is online and in use), you can move LVs across disks. Suppose you have an LV residing on PPs coming from only one disk. Now that you have introduced a new disk and want to replace the old one with it, you need to move the data somehow: you create a mirror with a second set of PPs from the new disk to represent each LP, wait until everything is synchronised, then remove the mirror with the set of PPs coming from the old disk. Everything is now on the new disk without any downtime at all.
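As a sketch of that migration procedure (the LV name datalv and the disk names are placeholders; verify the options against the man pages):

```shell
mklvcopy datalv 2 hdisk2   # represent every LP by a second PP, taken from the new disk
syncvg -l datalv           # synchronise the stale copies
rmlvcopy datalv 1 hdisk1   # drop the copies on the old disk, back to 1 PP per LP
```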

It is possible to completely control which PP represents which LP (via a so-called "map file"), but this is rarely done. Usually you rely on the automatic mechanisms built into the LVM and only make high-level requests about the layout of the mapping. You can request that each copy (if you have more than one PP representing each LP) resides on a separate disk, which makes sense because if you have mirrored data you won't want the mirrors to end up on the same device. This is called the "strictness" attribute.

You can also request the PPs to come from as many disks as possible (this is to engage as many disks as possible so that the load on each disk levels out) or as few as possible (this will make the layout less complicated and more easily maintainable but will come with a slight performance penalty). This is - rather unintuitively - called "inter-policy". Request "maximum" to spread the LV over as many disks as possible, "minimum" to use as few disks as possible.

You can also control where on the disk the LV is placed. The fastest part of the disk is the center, and it gets slower towards the edges, because the read/write heads have the longest way to travel there. This is called the "intra-policy", not to be confused with the "inter-policy" from above.

Notice that all these performance considerations can be skipped if you deal with LUNs from a storage box. All of the above applies only to real, physical hard disks consisting of rotating magnetic platters. It also does not apply to flash disks and the like.

Here are the most important commands for dealing with LVs:



Code:
lsvg -l <VGname>          # list all LVs in the VG and their status
lslv <LVname>             # show the attributes of an LV
lslv -m <LVname>          # show the exact mapping of PPs to LPs in an LV
mklv                      # create an LV. Has a lot of options, see the man page
rmlv                      # remove an LV, the LV has to be closed
chlv                      # changes attributes of an LV, also see the man page
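Tying this together with the original 100GB requirement: with a PP size of 512MB, 100GB is 200 LPs. A sketch (LV name oradatalv and VG name oraclevg are placeholders) might look like:

```shell
# -y   : name of the new LV
# -t   : LV type, here jfs2
# -e x : inter-policy "maximum", i.e. spread over as many disks as possible
# 200  : number of LPs; with 512MB PPs this gives 100GB
mklv -y oradatalv -t jfs2 -e x oraclevg 200
```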


Third layer: filesystems

At last we come to the filesystems: you create them by basically formatting an LV, which turns it into an FS. Notice that an FS resides on an LV, but the two are different things - or rather, different logical layers. It doesn't help to clarify matters that the command to create an FS (crfs) will create an LV automatically if there is none, and the command to remove an FS (rmfs) will automatically remove the underlying LV unless invoked with special options. Still, LVs and FSs are NOT the same.

When creating FSs you don't have to consider as much as you used to: disk space is cheap and plentiful today, and you need not concern yourself over the waste of a few KB. Fine-tuning the number of inodes and similar things is rarely done any more, because using a few MB of space more or less will not affect anything.

What you need to take into account, though, is if you work in a cluster environment or not: if so, you need to make sure the information about LVs, FSs etc. is propagated throughout the cluster nodes consistently. You either do that with "learning imports" (see the importvg command) or by using the cluster commands instead of the normal commands to create or manage VGs, LVs and FSs.

When creating FSs in a cluster make sure they are NOT mounted automatically at boot time! The cluster software itself will activate them when a VG is activated on a certain node.

Another thing you want to take into consideration is the placement of the log volume: AIX uses a "journaled file system" and somewhere the JFS-log has to be placed. Historically there was a special LV for that in each VG but nowadays with JFS2 it is better to use "inline logs", which set aside a part of the FS to do the same.
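Continuing the sketch from above for the original /u02 question (the LV name oradatalv is a placeholder, and crfs options can vary between AIX levels, so verify with the man page):

```shell
# -v : filesystem type    -d : the LV to put the FS on    -m : mount point
# -a logname=INLINE : use an inline JFS2 log    -A yes : mount at boot
crfs -v jfs2 -d oradatalv -m /u02 -a logname=INLINE -A yes
mount /u02
```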

A feature you will also want to use is "mount groups" for groups of FSs - that is, making FSs part of a mount group. Mount groups can be used to mount or unmount groups of FSs together, and they are a very practical way of making sure all FSs related to each other (like all FSs of a certain application) are mounted or unmounted together. This saves a lot of unnecessary work and headache when managing a system. If you forgot to put an FS into a mount group, just edit the file /etc/filesystems, which contains information about all the filesystems in stanza format. Here is an example:



Code:
/opt/myapp/bin:
         dev        = /dev/myappbinlv
         vol        = "binlv"
         mount      = true
         check      = true
         log        = INLINE

Add the line "type = <groupname>" to such a stanza to add a filesystem to a mount group, like this:



Code:
/opt/myapp/bin:
         dev        = /dev/myappbinlv
         vol        = "binlv"
         mount      = true
         check      = true
         log        = INLINE
         type       = myapp
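Given a mount group like the one above (the group name myapp comes from the example stanza), the whole group can then be handled in one go:

```shell
mount -t myapp    # mount every FS whose stanza contains "type = myapp"
umount -t myapp   # unmount the whole group again
```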

If you have questions please ask.

I hope this helps.

bakunin
    #6  
03-15-2018, khaled_ly84 (Registered User)
Thank you so much, that was really informative for me; I really appreciate your effort and time.
Really, thank you.