Best RAID settings for Debian Server? Help!! (1+0 or 5 or NAS)
Posted by Marcus Aurelius on Friday, 30 December 2011

I am installing a Debian server on the following hardware:

HP ProLiant DL380 G4
Dual CPUs, 3.20 GHz / 800 MHz FSB / 1 MB L2 cache
5120 MB RAM
6 hard disks on an HP Smart Array 6i controller (36.4 GB Ultra320 SCSI each)


I will be using this server to capture VHS video, then encode, compress, cut, and edit it, author and rip DVDs, re-encode as needed, and generally work with large uncompressed files as a matter of routine.
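Before picking a layout, it is worth estimating the sustained write rate that capture actually needs. A back-of-the-envelope sketch (my assumption: PAL SD at 8-bit 4:2:2; adjust the constants to your real capture format):

Code:
# Rough estimate of the sustained write rate needed for uncompressed
# VHS capture. Assumes PAL SD at 8-bit 4:2:2 (2 bytes per pixel);
# adjust for your actual capture format.
WIDTH, HEIGHT = 720, 576     # PAL SD frame size
BYTES_PER_PIXEL = 2          # 8-bit 4:2:2 chroma subsampling
FPS = 25                     # PAL frame rate

bytes_per_second = WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS
print(f"~{bytes_per_second / 1e6:.1f} MB/s sustained write")
# -> ~20.7 MB/s

Six Ultra320 spindles in either layout should comfortably exceed that for sequential writes, so raw capture throughput alone probably will not decide this.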

I have two options:

#1) RAID 5 (5 disks) with one hot spare (6th disk)
I liked this option because of the spare. My disks are getting older and are going to break down eventually, and I really need redundancy. However, I thought the parity processing would really slow down the encoding process (see the write-penalty sketch below).
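For what it is worth, the classic concern here is the RAID 5 small-write penalty: each small random write costs four physical I/Os (read old data, read old parity, write new data, write new parity), while large sequential writes that fill a whole stripe compute parity from data already in hand and avoid it. A minimal model of that arithmetic (the 100 IOPS per spindle figure is a made-up illustration, not a benchmark of the Smart Array 6i):

Code:
# Illustrative model of random-write throughput under each layout.
def raid5_random_write_iops(per_disk_iops, disks):
    # Each logical write becomes 4 physical I/Os spread over the array.
    return per_disk_iops * disks / 4

def raid10_random_write_iops(per_disk_iops, disks):
    # Each logical write hits both halves of one mirror pair: 2 I/Os.
    return per_disk_iops * disks / 2

print(raid5_random_write_iops(100, 5))   # -> 125.0
print(raid10_random_write_iops(100, 4))  # -> 200.0

Video work is dominated by large sequential transfers rather than small random writes, so the penalty matters less here than those numbers suggest, and a hardware controller like the Smart Array 6i does the parity math itself rather than burdening the host CPUs.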

#2) RAID 1+0
I thought this would be faster, but with 4 disks mirrored and striped I really get the usable capacity of only two disks (2 x 36.4 GB). That would leave me 1 spare and one SCSI disk sitting unused in the server (somewhat of a waste, unless I need a quick replacement).

With RAID 5 I have 5 disks in the array (usable capacity of 4 disks, since one disk's worth goes to parity) plus 1 hot spare,
OR
with RAID 1+0 I have the usable capacity of only 2 disks, but I have two spares and full mirroring. (The capacity arithmetic is worked out below.)
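To make that trade-off concrete, here is the usable-capacity arithmetic for both layouts (simple n-1 and n/2 rules, assuming equal-size members):

Code:
# Usable capacity of the two candidate layouts (equal-size members).
DISK_GB = 36.4

def raid5_usable(disks, size_gb=DISK_GB):
    return (disks - 1) * size_gb   # one disk's worth goes to parity

def raid10_usable(disks, size_gb=DISK_GB):
    return (disks // 2) * size_gb  # half the disks are mirror copies

print(f"RAID 5,   5 disks + 1 spare:  {raid5_usable(5):.1f} GB usable")   # 145.6
print(f"RAID 1+0, 4 disks + 2 spares: {raid10_usable(4):.1f} GB usable")  # 72.8

So RAID 5 roughly doubles the usable space while still surviving one disk failure; RAID 1+0 gives up capacity in exchange for parity-free writes and the chance of surviving certain two-disk failures.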

I am also considering purchasing a NAS.

Given these scenarios (RAID 5, RAID 1+0, or either of them combined with a NAS):

What is the best configuration for my needs?
Thank you.
 
