UNIX for Dummies Questions & Answers: What should I format my SSD with? Post 302922847 by Corona688 on Tuesday 28th of October 2014 04:29:17 PM
SMART statistics are just index numbers. The drive doesn't actually tell the computer "flying head time is too long"; it just spits out some numbers -- an attribute number, a value, and the acceptable range (so your program doesn't have to know what the attribute means to know it's bad). So whatever attribute number "head flying hours" is on one drive may mean something totally different on your SSD. Look up the manual for your drive or ask the manufacturer.
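Since it really is all just numbered attributes, you can check this yourself. Here's a rough Python sketch -- the sample lines are made-up `smartctl -A`-style output, and attribute 240 is Head_Flying_Hours on many hard drives but may mean something else entirely on an SSD -- that flags any attribute whose normalized value has dropped to its threshold:

```python
# Hedged sketch: parse lines in the format `smartctl -A` prints
# (ID, name, flag, normalized value, worst, threshold, ...) and
# flag any attribute whose value has hit its failure threshold.
sample = """\
  9 Power_On_Hours          0x0032   095   095   000    Old_age   Always       -       23011
240 Head_Flying_Hours       0x0012   001   001   097    Old_age   Always       -       43000
"""

def failing_attributes(text):
    bad = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 6 or not fields[0].isdigit():
            continue  # skip headers and anything that isn't an attribute row
        attr_id, name = int(fields[0]), fields[1]
        value, thresh = int(fields[3]), int(fields[5])
        if value <= thresh:  # normalized value at/below threshold = failing
            bad.append((attr_id, name))
    return bad

print(failing_attributes(sample))  # [(240, 'Head_Flying_Hours')]
```

Note the program never needs to know what attribute 240 *means* -- only that its value crossed the drive's own threshold.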

I recommend ext4 over ext3 for heavy-duty things since it's faster on large partitions (ever tried to fsck a 100GB ext3 partition? It takes a while). Also, ext4 can be defragmented without unmounting it, which could end up being very important for the long-term performance of your virtual machines.
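For the record, the online defragmenter is e4defrag from e2fsprogs. A sketch -- the image path is just a hypothetical example, substitute your own VM image:

```shell
# e4defrag works on a *mounted* ext4 filesystem, so there is no need to
# take the VM offline. -c only reports fragmentation, changing nothing.
img="/var/lib/libvirt/images/vm1.img"   # hypothetical VM disk image path
if command -v e4defrag >/dev/null 2>&1 && [ -e "$img" ]; then
    e4defrag -c "$img"   # check the fragmentation score first
    e4defrag "$img"      # then defragment it in place, still mounted
fi
```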

Otherwise, making a filesystem work well with an SSD is mostly about fine-tuning it to match the drive's block sizes and boundaries. If you get it wrong, it won't explode, but performance will be slightly worse. See SSD - Gentoo Wiki for some general advice.
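As a concrete (hypothetical) example: if your SSD erases in 512 KiB blocks and you use 4 KiB filesystem blocks, you can tell mkfs.ext4 how many filesystem blocks make up one erase block. A quick sketch of the arithmetic -- double-check the exact -E option names against mke2fs(8), and the erase-block size against your drive's datasheet, since 512 KiB is only an assumed example value:

```python
# Hedged sketch: derive mkfs.ext4 alignment hints from an SSD's
# erase-block size. Neither number is universal; check your drive.
def ext4_alignment(erase_block_kib=512, fs_block_kib=4):
    blocks = erase_block_kib // fs_block_kib   # fs blocks per erase block
    return (f"mkfs.ext4 -b {fs_block_kib * 1024} "
            f"-E stride={blocks},stripe-width={blocks} /dev/sdX1")

print(ext4_alignment())
# mkfs.ext4 -b 4096 -E stride=128,stripe-width=128 /dev/sdX1
```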

Also, an fstrim once in a while is good for the SSD; it helps wear-levelling work better by informing the SSD which blocks it no longer has to care about. See the wiki again for that.
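fstrim ships with util-linux. Something like this -- it needs root, so the sketch only actually runs it when privileged:

```shell
# fstrim -a trims every mounted filesystem that supports discard;
# -v reports how many bytes were trimmed. Needs root to do anything.
trim_cmd="fstrim -av"
if [ "$(id -u)" -eq 0 ] && command -v fstrim >/dev/null 2>&1; then
    $trim_cmd || true   # some filesystems/devices may not support discard
else
    echo "not root; would run: $trim_cmd"
fi
# Many distros also ship a weekly timer: systemctl enable --now fstrim.timer
```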

Last edited by Corona688; 10-28-2014 at 05:39 PM..
DMC(1)                                                                  DMC(1)

NAME
     dmc - controls the Disk Mount Conditioner

SYNOPSIS
     dmc start mount [profile-name|profile-index [-boot]]
     dmc stop mount
     dmc status mount [-json]
     dmc show profile-name|profile-index
     dmc list
     dmc select mount profile-name|profile-index
     dmc configure mount type access-time read-throughput write-throughput
         [ioqueue-depth maxreadcnt maxwritecnt segreadcnt segwritecnt]
     dmc help | -h

DESCRIPTION
     dmc(1) configures the Disk Mount Conditioner. The Disk Mount Conditioner
     is a kernel-provided service that can degrade the disk I/O issued to
     specific mount points, providing the illusion that the I/O is executing
     on a slower device. It can also cause the conditioned mount point to
     advertise itself as a different device type, e.g. the disk type of an
     SSD could be set to an HDD. This consequently changes various parameters
     such as read-ahead settings, disk I/O throttling, etc., which normally
     behave differently depending on the underlying device type.

COMMANDS
     Common command parameters:

     o mount - the mount point to be used in the command
     o profile-name - the name of a profile as shown in dmc list
     o profile-index - the index of a profile as shown in dmc list

     dmc start mount [profile-name|profile-index [-boot]]
         Start the Disk Mount Conditioner on the given mount point with the
         current settings (from dmc status) or the given profile, if
         provided. Optionally configure the profile to remain enabled across
         reboots, if -boot is supplied.

     dmc stop mount
         Disable the Disk Mount Conditioner on the given mount point. Also
         disables any settings that persist across reboot via the -boot flag
         provided to dmc start, if any.

     dmc status mount [-json]
         Display the current settings (including on/off state), optionally
         as JSON.

     dmc show profile-name|profile-index
         Display the settings of the given profile.

     dmc list
         Display all profile names and indices.

     dmc select mount profile-name|profile-index
         Choose a different profile for the given mount point without
         enabling or disabling the Disk Mount Conditioner.

     dmc configure mount type access-time read-throughput write-throughput
         [ioqueue-depth maxreadcnt maxwritecnt segreadcnt segwritecnt]
         Select custom parameters for the given mount point rather than
         using the settings provided by a default profile. See dmc list for
         example parameter settings for various disk presets.

         o type - 'SSD' or 'HDD'. The type determines how various system
           behaviors like disk I/O throttling and read-ahead algorithms
           affect the issued I/O. Additionally, choosing 'HDD' will attempt
           to simulate seek times, including drive spin-up from idle.
         o access-time - latency in microseconds for a single I/O. For SSD
           types this latency is applied exactly as specified to all I/O.
           For HDD types, the latency scales based on a simulated seek time
           (thus making the access-time the maximum latency or seek penalty).
         o read-throughput - integer specifying megabytes-per-second maximum
           throughput for disk reads
         o write-throughput - integer specifying megabytes-per-second maximum
           throughput for disk writes
         o ioqueue-depth - maximum number of commands that a device can accept
         o maxreadcnt - maximum byte count per read
         o maxwritecnt - maximum byte count per write
         o segreadcnt - maximum physically disjoint segments processed per read
         o segwritecnt - maximum physically disjoint segments processed per write

     dmc help | -h
         Display help text.

EXAMPLES
     dmc start / '5400 HDD'
         Turn on the Disk Mount Conditioner for the boot volume, acting like
         a 5400 RPM hard drive.

     dmc configure /Volumes/ExtDisk SSD 100 100 50
         Configure an external disk to use custom parameters to degrade
         performance as if it were a slow SSD with 100 microsecond latencies,
         100MB/s read throughput, and 50MB/s write throughput.

IMPORTANT
     The Disk Mount Conditioner is not a 'simulator'. It can only degrade
     (or 'condition') the I/O such that a faster disk device behaves like a
     slower device, not vice-versa. For example, a 5400 RPM hard drive
     cannot be conditioned to act like an SSD that is capable of a higher
     throughput than the theoretical limitations of the hard disk. In
     addition to running dmc stop, rebooting is also a sufficient way to
     clear any existing settings and disable the Disk Mount Conditioner on
     all mount points (unless started with -boot).

SEE ALSO
     nlc(1)

January 2018                                                            DMC(1)