Partition management: lvm? fdisk? parted? (on RAID)
Posted by builder88 on Friday, 30 April 2010, 06:52 PM

Hello,

I have a RHEL system with two 500GB hard drives in RAID 1 (I think hardware, but not 100% certain - any way to tell?).
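For reference, a few commands that usually reveal which kind of mirror is in play; the device names below are examples rather than anything from this particular system:

    cat /proc/mdstat           # Linux software (md) RAID lists its arrays here
    lspci | grep -i raid       # a hardware RAID controller normally shows up on the PCI bus
    fdisk -l                   # a hardware mirror presents one logical disk; md RAID shows both members plus /dev/mdX
    mdadm --detail /dev/md0    # details of a software array, if /proc/mdstat shows one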

It looks like it was just set up in the default configuration, with a small boot partition and one huge partition for the rest, which makes up an LVM volume.

I want to break that partition up into at least two separate ones. What I am wondering is:

* What's the difference between using fdisk and parted? Any reason to use one over the other?

* Should I just use lvm instead to shrink the current volume and create a second one? Is there a down-side to using logical volumes without creating a physical partition with fdisk/parted first? (A rough sketch of the lvm route follows below.)

* Given that there are already two disks in a RAID 1 configuration, does partitioning with lvm or fdisk/parted transparently propagate to the mirror disk, or do I need to do something to partition BOTH drives?

Thanks in advance!!
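On the lvm question, a rough sketch of the no-repartitioning route, assuming the stock RHEL volume group name VolGroup00; the size, LV name and mount point are placeholders:

    vgs                                   # the VFree column shows unallocated space in the volume group
    lvs                                   # list the existing logical volumes
    lvcreate -L 100G -n data VolGroup00   # carve a new LV out of free space (hypothetical size and name)
    mkfs.ext3 /dev/VolGroup00/data
    mkdir -p /srv/data
    mount /dev/VolGroup00/data /srv/data

If the existing LV already occupies the whole VG, it would first have to be shrunk (filesystem first, then the LV), which is riskier and needs a backup. And because the mirroring sits below the partition/LVM layer, whichever tool is used only has to be run once, not once per disk.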
 

10 More Discussions You Might Find Interesting

1. Solaris

Recover partition table from fdisk?

I have two disks on a Sun Blade 100. I just installed Solaris 8 on the first disk. The installation was successful, but the problem is that I have now lost all data/partitions on my second hard disk. The possible reason could be: 1. I used the default Web Start install. During the installation I didn't... (2 Replies)
Discussion started by: motor98
2 Replies

2. UNIX for Dummies Questions & Answers

I've created a partition with GNU Parted, how do I mount the partition?

I've created a partition with GNU Parted, how do I mount the partition? The manual at http://www.gnu.org/software/parted/manual/parted.html is good, but I am not sure how to mount the partition afterwards. Thanks, --Todd (1 Reply) (See the sketch below.)
Discussion started by: jtp51
1 Replies
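A minimal sketch of the usual follow-up steps after parted has created the partition; /dev/sdb1, the filesystem type and the mount point are assumptions, not details from that thread:

    mkfs.ext3 /dev/sdb1            # parted only creates the partition; a filesystem still has to be made
    mkdir -p /mnt/newpart
    mount /dev/sdb1 /mnt/newpart
    # to make it permanent, add a line like this to /etc/fstab:
    # /dev/sdb1   /mnt/newpart   ext3   defaults   0 2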

3. UNIX for Dummies Questions & Answers

Fdisk v/s parted

Just started understanding Linux filesystems and partition utilities. I was going through some video tutorials by CBT Nuggets, and the author was cursing fdisk as a fuzzy tool and recommending parted instead. In our job environment I have seen almost everyone using the fdisk utility for... (1 Reply)
Discussion started by: pinga123
1 Replies

4. Shell Programming and Scripting

Partition with parted

Hello folks, I have 2.4TB of SAN storage and I want to make 4 partitions of 600GB each. I used fdisk, but this is not possible because of fdisk's limitation. So I tried parted, but it made the 2.4TB into one partition, and with parted I could not make 4 partitions of 600GB each; maybe I am doing it wrong, can... (6 Replies) (See the sketch below.)
Discussion started by: learnbash
6 Replies
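A rough sketch of one way to do this with a GPT label, which sidesteps the 2TB limit of the MBR partition table that fdisk enforces; /dev/sdb stands in for the SAN LUN:

    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart primary 0% 25%
    parted -s /dev/sdb mkpart primary 25% 50%
    parted -s /dev/sdb mkpart primary 50% 75%
    parted -s /dev/sdb mkpart primary 75% 100%
    parted -s /dev/sdb print          # should now show four partitions of roughly 600GB each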

5. UNIX for Dummies Questions & Answers

How to convert a non-LVM root partition to LVM?

Hi guys, I am using Red Hat 6 and I have installed the root partition as non-LVM. Is there any way I can convert it to LVM? (1 Reply)
Discussion started by: pinga123
1 Replies

6. Shell Programming and Scripting

Non-interactive fdisk partition in script

Hi, how can I run fdisk partitioning in a script without interactive input? In the manual procedure, I run fdisk on the device, select n, select p, press Enter for the default start number (1), press Enter for the default end number, then select w to write the partition table. The command looks like... (3 Replies) (A non-interactive sketch follows below.)
Discussion started by: hce
3 Replies
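A minimal sketch of driving fdisk non-interactively by piping it the same keystrokes the post lists; /dev/sdb is an assumed target disk, and the two empty answers accept the default start and end:

    # n = new partition, p = primary, 1 = partition number, two defaults, w = write
    printf 'n\np\n1\n\n\nw\n' | fdisk /dev/sdb
    # parted can do the same thing without simulated keystrokes:
    parted -s /dev/sdb mklabel msdos
    parted -s /dev/sdb mkpart primary 0% 100%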

7. Red Hat

Can I change the partition type with fdisk without disrupting data?

Hello, I have been going through our environment and I see we have a few servers with LVMs set up where the partition type is still set to "83" in fdisk. If I change this to "8e", will it hurt the data or cause any loss? I need to know for sure before I make the change. (1 Reply) (See the sketch below.)
Discussion started by: s ladd
1 Replies
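Changing the Id byte only rewrites the partition-table entry, not the data inside the partition, although a backup before any partition-table change is still sensible. A sketch with fdisk, where the disk and partition number are placeholders:

    # t = change type, 2 = partition number, 8e = Linux LVM, w = write
    printf 't\n2\n8e\nw\n' | fdisk /dev/sda
    # note: fdisk only asks for a partition number when the disk has more than one partition
    partprobe /dev/sda      # ask the kernel to re-read the partition table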

8. Red Hat

Shrink LVM partition & create new Linux Primary partition

Hello all, I have a Red Hat Linux 5.9 server installed with one hard disk and two partitions created on it as follows: /boot is a Linux partition, and the other is LVM, with one VG and, under that, 5-6 logical volumes (var, opt, home etc.). My requirement is to take 1GB of space out of the LVM (any logical... (5 Replies) (See the sketch below.)
Discussion started by: gr8_usk
5 Replies
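Carving a true primary partition back out of an LVM physical volume is a longer procedure; the simpler variant, shrinking one logical volume and giving the freed 1GB to a new LV in the same VG, looks roughly like this. The volume names, sizes and ext3 filesystem are assumptions, the LV has to be unmounted first, and a backup is strongly advised:

    umount /dev/VolGroup00/LogVol01
    e2fsck -f /dev/VolGroup00/LogVol01         # fsck is required before shrinking
    resize2fs /dev/VolGroup00/LogVol01 8G      # shrink the filesystem below the new LV size
    lvreduce -L 9G /dev/VolGroup00/LogVol01    # then shrink the LV to the target size
    resize2fs /dev/VolGroup00/LogVol01         # grow the filesystem back to fill the LV exactly
    lvcreate -L 1G -n newvol VolGroup00        # the freed extents back the new 1GB LV
    mkfs.ext3 /dev/VolGroup00/newvol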

9. UNIX for Dummies Questions & Answers

RAID autodetect in fdisk -l

Hello, please refer to the output below:

# fdisk -l /dev/sda

Disk /dev/sda: 598.9 GB, 598999040000 bytes
255 heads, 63 sectors/track, 72824 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   * ... (1 Reply)
Discussion started by: admin_db
1 Replies

10. UNIX for Dummies Questions & Answers

Using parted command to create LVM partitions

Oracle Linux 6.6. To create physical volumes for volume groups (LVM), the disk needs to be partitioned with the LVM type, i.e. 'Linux LVM'. In fdisk, this can be done by choosing 8e when prompted for the partition type. Since it is easy to script (non-interactive), I use the parted command rather than... (1 Reply) (See the sketch below.)
Discussion started by: John K
1 Replies
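A minimal non-interactive sketch: parted's lvm flag plays the same role as fdisk's 8e type code. /dev/sdc and the volume group name are assumptions:

    parted -s /dev/sdc mklabel gpt
    parted -s /dev/sdc mkpart primary 0% 100%
    parted -s /dev/sdc set 1 lvm on       # mark partition 1 for LVM use
    pvcreate /dev/sdc1
    vgcreate datavg /dev/sdc1             # or vgextend an existing VG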
raidtab(5)							File Formats Manual							raidtab(5)

NAME
raidtab - configuration file for md (RAID) devices

DESCRIPTION
/etc/raidtab is the default configuration file for the raid tools (raidstart and company). It defines how RAID devices are configured on a system.

FORMAT
/etc/raidtab has multiple sections, one for each md device which is being configured. Each section begins with the raiddev keyword. The order of items in the file is important. Later raiddev entries can use earlier ones (which allows RAID-10, for example), and the parsing code isn't overly bright, so be sure to follow the ordering in this man page for best results.

Here's a sample md configuration file:

    #
    # sample raiddev configuration file
    # 'old' RAID0 array created with mdtools.
    #
    raiddev /dev/md0
        raid-level              0
        nr-raid-disks           2
        persistent-superblock   0
        chunk-size              8
        device                  /dev/hda1
        raid-disk               0
        device                  /dev/hdb1
        raid-disk               1

    raiddev /dev/md1
        raid-level              5
        nr-raid-disks           3
        nr-spare-disks          1
        persistent-superblock   1
        parity-algorithm        left-symmetric
        device                  /dev/sda1
        raid-disk               0
        device                  /dev/sdb1
        raid-disk               1
        device                  /dev/sdc1
        raid-disk               2
        device                  /dev/sdd1
        spare-disk              0

Here is more information on the directives which are in raid configuration files; the options are listed in this file in the same order they should appear in the actual configuration file.

raiddev device
    This introduces the configuration section for the stated device.

nr-raid-disks count
    Number of raid devices in the array; there should be count raid-disk entries later in the file. (The current maximum limit for RAID devices, including spares, is 12 disks. This limit is already extended to 256 disks in experimental patches.)

nr-spare-disks count
    Number of spare devices in the array; there should be count spare-disk entries later in the file. Spare disks may only be used with RAID4 and RAID5, and allow the kernel to automatically build new RAID disks as needed. It is also possible to add/remove spares at runtime via raidhotadd/raidhotremove; care has to be taken that the /etc/raidtab configuration exactly follows the actual configuration of the array. (raidhotadd/raidhotremove does not change the configuration file.)

persistent-superblock 0/1
    Newly created RAID arrays should use a persistent superblock. A persistent superblock is a small disk area allocated at the end of each RAID device; this helps the kernel to safely detect RAID devices even if disks have been moved between SCSI controllers. It can be used for RAID0/LINEAR arrays too, to protect against accidental disk mixups. (The kernel will either correctly reorder disks, or will refuse to start up an array if something has happened to any member disk. Of course, for the 'fail-safe' RAID variants (RAID1/RAID5), spares are activated if any disk fails.) Every member disk/partition/device has a superblock, which carries all information necessary to start up the whole array. (For autodetection to work, all the 'member' RAID partitions should be marked type 0xfd via fdisk.) The superblock is not visible in the final RAID array and cannot be destroyed accidentally through usage of the md device files; all RAID data content is available for filesystem use.

parity-algorithm which
    The parity algorithm to use with RAID5. It must be one of left-asymmetric, right-asymmetric, left-symmetric, or right-symmetric. left-symmetric is the one that offers maximum performance on typical disks with rotating platters.

chunk-size size
    Sets the stripe size to size kilobytes. Has to be a power of 2 and has a compile-time maximum of 4M (MAX_CHUNK_SIZE in the kernel driver). Typical values are anything from 4k to 128k; the best value should be determined by experimenting on a given array, and a lot depends on the SCSI and disk configuration.

device devpath
    Adds the device devpath to the list of devices which comprise the raid system. Note that this command must be followed by one of raid-disk, spare-disk, or parity-disk. Also note that it's possible to recursively define RAID arrays, i.e. to set up a RAID5 array of RAID5 arrays (thus achieving two-disk failure protection, at the price of more disk space spent on RAID5 checksum blocks).

raid-disk index
    The most recently defined device is inserted at position index in the raid array.

spare-disk index
    The most recently defined device is inserted at position index in the spare disk array.

parity-disk index
    The most recently defined device is moved to the end of the raid array, which forces it to be used for parity.

failed-disk index
    The most recently defined device is inserted at position index in the raid array as a failed device. This allows you to create raid 1/4/5 devices in degraded mode - useful for installation. Don't use the smallest device in an array for this; put this after the raid-disk definitions!

NOTES
The raidtools are derived from the md-tools and raidtools packages, which were originally written by Marc Zyngier, Miguel de Icaza, Gadi Oxman, Bradley Ward Allen, and Ingo Molnar.

SEE ALSO
raidstart(8), raid0run(8), mkraid(8), raidstop(8)

raidtab(5)