Solid-State Disk Deployment Guidelines

Solid-state disks (SSD) are storage devices that use NAND flash chips to persistently store data. This sets them apart from previous generations of disks, which store data in rotating, magnetic platters. In an SSD, the access time for data across the full Logical Block Address (LBA) range is constant; whereas with older disks that use rotating media, access patterns that span large address ranges incur seek costs. As such, SSD devices have better latency and throughput.
Performance degrades as the number of used blocks approaches the disk capacity. The degree of performance impact varies greatly by vendor. However, all devices experience some degradation.
To address the degradation issue, the host system (for example, the Linux kernel) may use discard requests to inform the storage that a given range of blocks is no longer in use. An SSD can use this information to free up space internally, using the free blocks for wear-leveling. Discards are issued only if the storage advertises support for them through its storage protocol (be it ATA or SCSI). Discard requests are issued to the storage using the negotiated discard command specific to the storage protocol (the TRIM command for ATA, and the WRITE SAME command with the UNMAP bit set, or the UNMAP command, for SCSI).
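As an illustration, the discard parameters a device advertises to the kernel can be inspected with the lsblk utility from util-linux; the device name sda below is an assumption, and a DISC-GRAN value of 0 in the output indicates that the device does not support discards:

# lsblk --discard /dev/sda
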
Enabling discard support is most useful when the following two points are true:

Free space is still available on the file system.
Most logical blocks on the underlying storage device have already been written to.

For more information about TRIM, refer to the Data Set Management T13 Specifications from the
following link: http://t13.org/Documents/UploadedDocuments/docs2008/e07154r6-Data_Set_Management_Proposal_for_ATA-ACS2.doc
For more information about UNMAP, refer to section 4.7.3.4 of the SCSI Block Commands 3 T10
Specification from the following link:
http://www.t10.org/cgi-bin/ac.pl?t=f&f=sbc3r26.pdf
Note
Not all solid-state devices on the market have discard support. To determine whether your solid-state device has discard support, check for

/sys/block/sda/queue/discard_granularity.
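
For example, assuming the device is sda, the discard granularity can be read directly; a non-zero value (in bytes) indicates that the device supports discards:

# cat /sys/block/sda/queue/discard_granularity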

Deployment Considerations

Because of the internal layout and operation of SSDs, it is best to partition devices on an internal erase block boundary. Partitioning utilities in Red Hat Enterprise Linux 7 choose sane defaults if the SSD exports topology information.
However, if the device does not export topology information, Red Hat recommends that the first partition be created at a 1MB boundary.
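For example, a first partition aligned to a 1MB boundary could be created with parted as follows; the device name /dev/sda and the use of a GPT label are assumptions, so adjust for the actual device and partitioning scheme:

# parted /dev/sda mklabel gpt
# parted /dev/sda mkpart primary 1MiB 100%
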
The Logical Volume Manager (LVM), the device-mapper (DM) targets that LVM uses, and MD (software RAID) support discards. The only DM targets that do not support discards are dm-snapshot, dm-crypt, and dm-raid45. Discard support for dm-mirror was added in Red Hat Enterprise Linux 6.1, and as of Red Hat Enterprise Linux 7.0, MD supports discards.
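For example, LVM can additionally be configured to issue discards to a physical volume when logical volume space is released (for instance, by lvremove or lvreduce); this is controlled by the issue_discards option in the devices section of /etc/lvm/lvm.conf and is independent of any file-system-level discard mount option:

devices {
    issue_discards = 1
}
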
Red Hat also warns that software RAID levels 1, 4, 5, and 6 are not recommended for use on SSDs. During the initialization stage of these RAID levels, some RAID management utilities (such as mdadm) write to all of the blocks on the storage device to ensure that checksums operate properly. This will cause the performance of the SSD to degrade quickly.
As of Red Hat Enterprise Linux 6.4, ext4 and XFS are the only fully-supported file systems that support discard. In previous versions of Red Hat Enterprise Linux 6, only ext4 fully supported discard.

To enable discard commands on a device, use the mount option discard. For example, to mount /dev/sda2 to /mnt with discard enabled, run:

# mount -t ext4 -o discard /dev/sda2 /mnt

By default, ext4 does not issue the discard command. This is mostly to avoid problems on devices which may not properly implement the discard command. The Linux swap code will issue discard commands to discard-enabled devices, and there is no option to control this behavior.
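To make the discard option persistent across reboots, it can also be added to the file system's entry in /etc/fstab; the device, mount point, and ext4 file system below mirror the example above and are assumptions:

/dev/sda2    /mnt    ext4    defaults,discard    0 2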

Tuning Considerations

This section describes several factors to consider when configuring settings that may affect SSD performance.
I/O Scheduler
Any I/O scheduler should perform well with most SSDs. However, as with any other storage type, Red Hat recommends benchmarking to determine the optimal configuration for a given workload.
When using SSDs, Red Hat advises changing the I/O scheduler only for benchmarking particular workloads. For more information about the different types of I/O schedulers, refer to the I/O Tuning Guide (also provided by Red Hat). The following kernel document also contains instructions on how to switch between I/O schedulers:

/usr/share/doc/kernel-version/Documentation/block/switching-sched.txt
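
For example, the active scheduler for a device can be viewed and changed at runtime through sysfs; the device name sda and the choice of cfq below are assumptions, and the change does not persist across reboots:

# cat /sys/block/sda/queue/scheduler
# echo cfq > /sys/block/sda/queue/scheduler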

As of Red Hat Enterprise Linux 7.0, the default I/O scheduler is deadline, except for SATA drives, which use CFQ as the default. For faster storage, deadline can outperform CFQ, leading to better I/O performance without the need for specific tuning. However, the default may not be suitable for some disks (such as SAS rotational disks); in this case, change the I/O scheduler to CFQ.
Virtual Memory
Like the I/O scheduler, the virtual memory (VM) subsystem requires no special tuning. Given the fast nature of I/O on SSDs, it should be possible to turn down the vm_dirty_background_ratio and vm_dirty_ratio settings, as increased write-out activity should not negatively impact the latency of other operations on the disk. However, this can generate more overall I/O, and so is not generally recommended without workload-specific testing.
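For example, these settings correspond to the vm.dirty_background_ratio and vm.dirty_ratio sysctl parameters and can be lowered at runtime with sysctl; the values below are illustrative assumptions, not recommendations:

# sysctl -w vm.dirty_background_ratio=5
# sysctl -w vm.dirty_ratio=10
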
Swap
An SSD can also be used as a swap device, and is likely to produce good page-out/page-in performance.
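For example, assuming a dedicated partition /dev/sda3 (a hypothetical device name) is set aside for swap, it could be initialized and enabled as follows:

# mkswap /dev/sda3
# swapon /dev/sda3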