Deadline scheduler (deadline)

is a latency-oriented I/O scheduler. Each I/O request has a deadline assigned to it. Usually, requests are stored in two queues (read and write) sorted by sector number. The DEADLINE algorithm maintains two additional queues (read and write) in which the requests are sorted by deadline. As long as no request has timed out, the "sector" queue is used. When timeouts occur, requests from the "deadline" queue are served until there are no more expired requests. Generally, the algorithm prefers reads over writes.
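The interplay of the two queue orderings can be sketched as a toy model in Python. The class and parameter names are illustrative, not kernel internals; the expiry values mirror the kernel's default tunables (read_expire = 500 ms, write_expire = 5 s), but the dispatch logic here is a deliberate simplification.

```python
import heapq

class Request:
    def __init__(self, sector, is_read, submit_time,
                 read_expire=0.5, write_expire=5.0):
        self.sector = sector
        self.is_read = is_read
        # Reads get a much shorter deadline than writes (illustrative
        # values modeled on the kernel defaults: 500 ms vs. 5 s).
        self.deadline = submit_time + (read_expire if is_read else write_expire)

class DeadlineScheduler:
    """Toy model of the deadline elevator: each direction keeps a
    sector-sorted queue plus a FIFO queue ordered by deadline."""
    def __init__(self):
        self.sector_q = {True: [], False: []}  # keyed by is_read
        self.fifo_q = {True: [], False: []}    # FIFO order == deadline order

    def add(self, req):
        # id(req) is only a tie-breaker so equal sectors never compare Requests
        heapq.heappush(self.sector_q[req.is_read], (req.sector, id(req), req))
        self.fifo_q[req.is_read].append(req)

    def dispatch(self, now):
        # Serve expired requests first, preferring reads over writes.
        for is_read in (True, False):
            fifo = self.fifo_q[is_read]
            if fifo and fifo[0].deadline <= now:
                req = fifo.pop(0)
                self.sector_q[is_read] = [
                    e for e in self.sector_q[is_read] if e[2] is not req]
                heapq.heapify(self.sector_q[is_read])
                return req
        # Otherwise serve in sector order, again preferring reads.
        for is_read in (True, False):
            if self.sector_q[is_read]:
                _, _, req = heapq.heappop(self.sector_q[is_read])
                self.fifo_q[is_read].remove(req)
                return req
        return None
```

While nothing has expired, dispatch order follows sectors (minimizing seeks); once a deadline passes, that request jumps the queue regardless of its sector.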

This scheduler can provide higher throughput than the CFQ I/O scheduler in cases where several threads read and write and fairness is not an issue, for example with several parallel readers from a SAN, or with databases (especially when using "TCQ" disks).

Anticipatory scheduler (anticipatory)

It began life as the Deadline I/O scheduler but was extended with an anticipation mechanism. Each read request still has its deadline; unlike the Deadline scheduler, however, after serving a read it sits and waits, doing nothing, for up to 6 milliseconds. Chances are good that the application will issue another read to the same part of the filesystem during that window. By "anticipating" synchronous read operations in this way, it seeks to increase the efficiency of disk utilization.
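The anticipation heuristic can be sketched as a small decision helper, in the same toy style. The class, method names, and window length are illustrative assumptions, not kernel code; only the idea (hold the disk idle briefly after a read rather than seek away) comes from the text above.

```python
class AnticipatoryDispatcher:
    """Toy model of the anticipation heuristic: after a read completes,
    keep the disk idle for up to `antic_window` seconds in the hope that
    a nearby follow-up read arrives, avoiding a seek to distant sectors."""
    def __init__(self, antic_window=0.006):  # ~6 ms, as described above
        self.antic_window = antic_window
        self.waiting_since = None

    def on_read_complete(self, now):
        # Start anticipating a follow-up read from the same process.
        self.waiting_since = now

    def should_dispatch_far_request(self, now, has_far_request):
        if not has_far_request:
            return False               # nothing to decide
        if self.waiting_since is None:
            return True                # not anticipating: dispatch freely
        if now - self.waiting_since < self.antic_window:
            return False               # keep waiting for a nearby read
        self.waiting_since = None      # window expired: seek away after all
        return True
```

The trade-off is visible in the model: during the window, a far-away request is deliberately delayed, betting that the saved seek outweighs the idle time.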

Completely Fair Queuing scheduler (cfq)

is an I/O scheduler for the Linux kernel and the default in many Linux distributions. The algorithm assigns each thread a time slice in which it is allowed to submit I/O to the disk. This way each thread gets a fair share of I/O throughput.
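The fairness idea can be sketched as a round-robin over per-thread queues. This is a simplification under stated assumptions: real CFQ slices are measured in time, while this toy model counts requests per slice, and all names are illustrative.

```python
from collections import deque

def cfq_round_robin(per_thread_queues, slice_len=4):
    """Toy model of CFQ fairness: visit each thread's queue in turn and
    let it dispatch up to `slice_len` requests per slice. Threads with
    remaining work are re-queued, so everyone keeps getting a turn."""
    order = deque(per_thread_queues.items())
    dispatched = []
    while order:
        tid, q = order.popleft()
        for _ in range(slice_len):
            if not q:
                break
            dispatched.append((tid, q.popleft()))
        if q:
            order.append((tid, q))  # thread still has work: back of the line
    return dispatched
```

A thread flooding the queue cannot starve the others: it only ever gets one slice per round.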

Noop scheduler (noop)

is the simplest I/O scheduler for the Linux kernel, based on a plain FIFO queue. It is useful for checking whether the complex I/O scheduling decisions of other schedulers are causing I/O performance regressions.
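In the same toy style, noop reduces to little more than a FIFO. This sketch omits the basic request merging the real noop scheduler still performs; the class name is illustrative.

```python
from collections import deque

class NoopScheduler:
    """Toy model of noop: dispatch in arrival order, with no sector
    sorting, no deadlines, and (in this sketch) no request merging."""
    def __init__(self):
        self.q = deque()

    def add(self, sector):
        self.q.append(sector)

    def dispatch(self):
        return self.q.popleft() if self.q else None
```

Note that requests come back in exactly the order they arrived, however badly that order seeks across the disk; the device itself is expected to reorder if needed.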

In some cases it can be helpful for devices that do I/O scheduling themselves, such as intelligent storage arrays, or for devices that do not depend on mechanical movement, such as SSDs. Usually the DEADLINE I/O scheduler is a better choice for these devices, but thanks to its lower overhead NOOP may produce better performance on certain workloads.