Full Discussion: RAID 0 for SSD
Post 302779837 by fpmurphy on Wednesday 13th of March 2013 11:54:13 AM
Hardware RAID0 is almost always faster than a single disk because reads and writes are striped across the two disks and serviced in parallel. However, I have never come across a case where throughput actually doubled; typically you will see something like a 50% increase.

Software RAID0 can also improve throughput, but typically by less than hardware RAID0.
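If you want to see what the scaling actually looks like on your own hardware rather than guess, a quick mdadm plus fio test on a Linux box will tell you. This is only a sketch: the device names /dev/sdb and /dev/sdc are placeholders for your two disks, and the mdadm step destroys whatever is on them.

# 1. Baseline: sequential read from one disk, direct I/O so the page
#    cache does not inflate the numbers
fio --name=single --filename=/dev/sdb --readonly --rw=read --bs=1M --size=4G --direct=1

# 2. Build a two-disk software RAID0 array (wipes both disks)
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# 3. Repeat the same read test against the array and compare the bandwidth
fio --name=raid0 --filename=/dev/md0 --rw=read --bs=1M --size=4G --direct=1

Comparing the two bandwidth figures gives the real scaling factor for your disks and controller, which, as noted above, usually lands well short of 2x.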
 

8 More Discussions You Might Find Interesting

1. BSD

Using SSD in FreeBSD

Now that SSD drives are becoming mainstream, I had a few questions on installing a SSD drive in a FreeBSD environment. Can FreeBSD be made SSD aware, that is, somehow let FreeBSD know that reads and writes should be limited or deferred to extend the disk's life? Is there a setting for wear... (0 Replies)
Discussion started by: figaro

2. UNIX for Dummies Questions & Answers

RAID software vs hardware RAID

Hi, can someone tell me what the differences are between software and hardware RAID? Thanks for the help. (2 Replies)
Discussion started by: presul

3. AIX

SSD with GPFS ?

Hi, does anyone here happen to know if I could run GLVM or GPFS on solid state disks? I have a high-volume / high-transaction Sybase HACMP cluster currently set up with SRDF to the DR datacentre. My business is now considering moving everything to SSD storage, but we still need to get the data to... (0 Replies)
Discussion started by: zxmaus

4. AIX

SCSI PCI - X RAID Controller card RAID 5 AIX Disks disappeared

Hello, I have a SCSI PCI-X RAID controller card on which I had created a disk array of 3 disks. When I typed lspv I used to see 3 physical disks (two local disks and one RAID 5 disk). Suddenly the RAID 5 disk array disappeared, so the hardware engineer thought the problem was with SCSI... (0 Replies)
Discussion started by: filosophizer

5. Solaris

Software RAID on top of Hardware RAID

Server model: T5120 with 146G x4 disks. OS: Solaris 10, installed on c1t0d0. I plan to use software RAID (Veritas Volume Manager) on the c1t2d0 disk. After formatting and labeling the disk, I am still not able to detect it using vxdiskadm. Question: should I remove the hardware RAID on c1t2d0 first? My... (4 Replies)
Discussion started by: KhawHL

6. Red Hat

RAID Configuration for IBM Serveraid-7k SCSI RAID Controller

Hello, I want to delete the RAID configuration an old server has. Since I haven't had the chance to work with this specific RAID controller in the past, can you please help me perform the configuration? I downloaded the IBM ServeRAID Support CD but I wasn't able to configure the video card, so I... (0 Replies)
Discussion started by: @dagio

7. UNIX for Dummies Questions & Answers

What should I format my SSD with?

Hello all, I recently received a new SSD that I am going to use for booting virtual machines. I use VMware Player to boot Windows guest operating systems on my Linux laptop. I currently have an SSD that I use for this exact same purpose, formatted as ext3, and I'm... (3 Replies)
Discussion started by: mrm5102

8. Linux

CentOS 6.6 SSD trim on HP DL380 G2 RAID 0

I'm running glusterfs on two CentOS 6.6 nodes (the SSDs, Samsung 840 1TB x2, are in RAID 0 on each HP DL380 G6), and trimming is not enabled on them: checking /dev/sdb1/xxxxx/discard_max_bytes shows 0. Do I still need trimming? Somehow my filesystem is fine with 30-35% free space and is running very fast. ... (1 Reply)
Discussion started by: itik
DMC(1)                                                                  DMC(1)

NAME
     dmc - controls the Disk Mount Conditioner
SYNOPSIS
     dmc start mount [profile-name|profile-index [-boot]]
     dmc stop mount
     dmc status mount [-json]
     dmc show profile-name|profile-index
     dmc list
     dmc select mount profile-name|profile-index
     dmc configure mount type access-time read-throughput write-throughput
         [ioqueue-depth maxreadcnt maxwritecnt segreadcnt segwritecnt]
     dmc help | -h
DESCRIPTION
     dmc(1) configures the Disk Mount Conditioner. The Disk Mount Conditioner
     is a kernel provided service that can degrade the disk I/O being issued
     to specific mount points, providing the illusion that the I/O is
     executing on a slower device. It can also cause the conditioned mount
     point to advertise itself as a different device type, e.g. the disk type
     of an SSD could be set to an HDD. This behavior consequently changes
     various parameters such as read-ahead settings, disk I/O throttling,
     etc., which normally have different behavior depending on the underlying
     device type.
COMMANDS
     Common command parameters:

     o   mount - the mount point to be used in the command
     o   profile-name - the name of a profile as shown in dmc list
     o   profile-index - the index of a profile as shown in dmc list

     dmc start mount [profile-name|profile-index [-boot]]
             Start the Disk Mount Conditioner on the given mount point with
             the current settings (from dmc status) or the given profile, if
             provided. Optionally configure the profile to remain enabled
             across reboots, if -boot is supplied.

     dmc stop mount
             Disable the Disk Mount Conditioner on the given mount point.
             Also disables any settings that persist across reboot via the
             -boot flag provided to dmc start, if any.

     dmc status mount [-json]
             Display the current settings (including on/off state),
             optionally as JSON.

     dmc show profile-name|profile-index
             Display the settings of the given profile.

     dmc list
             Display all profile names and indices.

     dmc select mount profile-name|profile-index
             Choose a different profile for the given mount point without
             enabling or disabling the Disk Mount Conditioner.

     dmc configure mount type access-time read-throughput write-throughput
         [ioqueue-depth maxreadcnt maxwritecnt segreadcnt segwritecnt]
             Select custom parameters for the given mount point rather than
             using the settings provided by a default profile. See dmc list
             for example parameter settings for various disk presets.

             o   type - 'SSD' or 'HDD'. The type determines how various
                 system behaviors like disk I/O throttling and read-ahead
                 algorithms affect the issued I/O. Additionally, choosing
                 'HDD' will attempt to simulate seek times, including drive
                 spin-up from idle.
             o   access-time - latency in microseconds for a single I/O. For
                 SSD types this latency is applied exactly as specified to
                 all I/O. For HDD types, the latency scales based on a
                 simulated seek time (thus making the access-time the maximum
                 latency or seek penalty).
             o   read-throughput - integer specifying megabytes-per-second
                 maximum throughput for disk reads
             o   write-throughput - integer specifying megabytes-per-second
                 maximum throughput for disk writes
             o   ioqueue-depth - maximum number of commands that a device can
                 accept
             o   maxreadcnt - maximum byte count per read
             o   maxwritecnt - maximum byte count per write
             o   segreadcnt - maximum physically disjoint segments processed
                 per read
             o   segwritecnt - maximum physically disjoint segments processed
                 per write

     dmc help | -h
             Display help text.
EXAMPLES
     dmc start / '5400 HDD'
             Turn on the Disk Mount Conditioner for the boot volume, acting
             like a 5400 RPM hard drive.

     dmc configure /Volumes/ExtDisk SSD 100 100 50
             Configure an external disk to use custom parameters to degrade
             performance as if it were a slow SSD with 100 microsecond
             latencies, 100MB/s read throughput, and 50MB/s write throughput.
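     A fuller session might look like the following sketch. It assumes an
     external volume mounted at /Volumes/ExtDisk and uses only the
     subcommands documented above; profile names vary between systems, so
     take them from dmc list.

     dmc list                                # show the available profiles
     dmc start /Volumes/ExtDisk '5400 HDD'   # condition the volume
     dmc status /Volumes/ExtDisk -json       # confirm the active settings
     dmc stop /Volumes/ExtDisk               # restore normal performance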
IMPORTANT
     The Disk Mount Conditioner is not a 'simulator'. It can only degrade (or
     'condition') the I/O such that a faster disk device behaves like a
     slower device, not vice versa. For example, a 5400 RPM hard drive cannot
     be conditioned to act like an SSD that is capable of a higher throughput
     than the theoretical limitations of the hard disk.

     In addition to running dmc stop, rebooting is also a sufficient way to
     clear any existing settings and disable the Disk Mount Conditioner on
     all mount points (unless started with -boot).
SEE ALSO
     nlc(1)

                                 January 2018                           DMC(1)