iostat is one of the most widely used performance tools for troubleshooting on Linux. It reports CPU statistics and input/output statistics for devices, partitions and NFS.
Here are some useful command examples:
Basic command examples
Display a single report of statistics since boot for all CPUs and devices
iostat
Display a continuous device report at two second intervals.
iostat -d 2
Display six reports at two second intervals for all devices.
iostat -d 2 6
Omit the first report in multiple-report mode.
In multiple-report mode, the first report always shows statistics since boot. To omit it, use the -y option:
iostat -y -d 2 6
In the case above, only five reports are printed.
Display six reports of extended statistics at two second intervals for devices hda and hdb.
iostat -x hda hdb 2 6
Display six reports at two second intervals for device sda and all its partitions (sda1, etc.)
iostat -p sda 2 6
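Another useful variation (a sketch, assuming a sysstat version that supports the -z flag): display extended statistics every two seconds but omit devices that had no activity during the interval.
iostat -xz 2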
Report output explanation
The iostat command generates two types of reports, the CPU Utilization report and the Device Utilization report.
CPU Utilization Report
The first report generated by the iostat command is the CPU Utilization Report. For multiprocessor systems, the CPU values are global averages among all processors. The report has the following format:
%user
Show the percentage of CPU utilization that occurred while executing at the user level
%nice
Show the percentage of CPU utilization that occurred while executing at the user level
with nice priority.
%system
Show the percentage of CPU utilization that occurred while executing at the system level
%iowait
Show the percentage of time that the CPU or CPUs were idle during which the system had
an outstanding disk I/O request.
%steal
Show the percentage of time spent in involuntary wait by the virtual CPU or CPUs while the
hypervisor was servicing another virtual processor.
%idle
Show the percentage of time that the CPU or CPUs were idle and the system did not have
an outstanding disk I/O request.
Device Utilization Report
The device report provides statistics on a per physical device or partition basis. Block devices and partitions for which statistics are to be displayed may be entered on the command line. If no device or partition is entered, statistics are displayed for every device used by the system, provided that the kernel maintains statistics for it. If the ALL keyword is given on the command line, then statistics are displayed for every device defined by the system, including those that have never been used. Transfer rates are shown in 1K blocks by default, unless the environment variable POSIXLY_CORRECT is set, in which case 512-byte blocks are used (see the example after the field list below). The report may show the following fields, depending on the flags used:
Device:
This column gives the device (or partition) name as listed in the /dev directory.
tps
Indicate the number of transfers per second that were issued to the device.
A transfer is an I/O request to the device. Multiple logical requests can be combined
into a single I/O request to the device. A transfer is of indeterminate size.
Blk_read/s (kB_read/s, MB_read/s)
Indicate the amount of data read from the device expressed in a number of blocks
per second. Blocks are equivalent to sectors and therefore have a size of 512 bytes.
Blk_wrtn/s (kB_wrtn/s, MB_wrtn/s)
Indicate the amount of data written to the device expressed in a number of blocks per second.
Blk_read (kB_read, MB_read)
The total number of blocks (kilobytes, megabytes) read.
Blk_wrtn (kB_wrtn, MB_wrtn)
The total number of blocks (kilobytes, megabytes) written.
rrqm/s
The number of read requests merged per second that were queued to the device.
wrqm/s
The number of write requests merged per second that were queued to the device.
r/s
The number (after merges) of read requests completed per second for the device.
w/s
The number (after merges) of write requests completed per second for the device.
rsec/s (rkB/s, rMB/s)
The number of sectors (kilobytes, megabytes) read from the device per second.
wsec/s (wkB/s, wMB/s)
The number of sectors (kilobytes, megabytes) written to the device per second.
avgrq-sz
The average size (in sectors) of the requests that were issued to the device.
avgqu-sz
The average queue length of the requests that were issued to the device.
await
The average time (in milliseconds) for I/O requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent servicing them.
r_await
The average time (in milliseconds) for read requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent servicing them.
w_await
The average time (in milliseconds) for write requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent servicing them.
svctm
The average service time (in milliseconds) for I/O requests that were issued to the device
Warning!
Do not trust this field any more. This field will be removed in a future sysstat version.
%util
Percentage of elapsed time during which I/O requests were issued to the device
(bandwidth utilization for the device).
Device saturation occurs when this value is close to 100%.
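As mentioned in the introduction to this report, transfer rates default to 1K blocks unless the POSIXLY_CORRECT environment variable is set. A minimal sketch of the difference (sda is just an example device name):
iostat -d sda
POSIXLY_CORRECT=1 iostat -d sda
With POSIXLY_CORRECT set, rates are reported in 512-byte blocks rather than the default 1K blocks, per the description above.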
Display only CPU statistics
# iostat -c 2 2
Linux 2.6.32-504.12.2.el6.x86_64 07/18/15 _x86_64_ (24 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.73 0.00 0.99 0.49 0.00 97.78
avg-cpu: %user %nice %system %iowait %steal %idle
0.44 0.00 0.61 0.15 0.00 98.81
Display only NFS statistics
iostat -n
Linux 2.6.32-504.1.3.el6.x86_64 07/18/15 _x86_64_ (4 CPU)
Filesystem: rBlk_nor/s wBlk_nor/s rBlk_dir/s wBlk_dir/s rBlk_svr/s wBlk_svr/s ops/s rops/s wops/s
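As with the other reports, an interval and count can be added to watch NFS activity over time (a sketch; -n only shows data when NFS filesystems are mounted, and some newer sysstat releases have dropped -n in favour of the separate nfsiostat command):
iostat -n 2 6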
Display only disk statistics
# iostat -d
Linux 2.6.32-504.12.2.el6.x86_64 07/18/15 _x86_64_ (24 CPU)
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 6.97 655.35 407.92 4928340558 3067622213
sdc 10.00 12727.00 7164.52 95709420584 53878546848
sdj 7.37 10625.38 1629.56 79904801512 12254573904
...
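If the block-based columns are inconvenient, the -k flag reports throughput in kilobytes per second instead (a sketch; the interval and count are optional):
iostat -d -k 2 3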
Display I/O statistics for one or more disks
iostat -p sda,sdc
Linux 2.6.32-504.12.2.el6.x86_64 07/18/15 _x86_64_ (24 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.73 0.00 0.99 0.49 0.00 97.78
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 6.97 655.33 407.90 4928434150 3067640781
sda1 0.00 0.00 0.00 12809 1109
sda2 0.63 29.94 0.16 225149943 1191880
sda3 1.85 34.05 14.01 256090276 105332840
sda4 4.49 591.33 393.73 4447179714 2961114952
sdc 10.00 12727.05 7164.27 95714993512 53879554624
Note: with the '-p' option, iostat displays statistics for block devices and all of their partitions, as shown for sda in the example above.
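The -p option also accepts the ALL keyword, which reports every block device along with all of its partitions (a sketch):
iostat -p ALL 2 2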
Display one or more disks by persistent device name
This is especially useful if you have multipath devices. Here is the syntax:
-j { ID | LABEL | PATH | UUID | ... } [ device [...] | ALL ]
Display persistent device names. Options ID, LABEL, etc. specify the type of the persistent name. These options are not limited; the only prerequisite is that the directory with the required persistent names is present in /dev/disk. Optionally, multiple devices can be specified in the chosen persistent name type.
# iostat -m -j ID dm-name-lun8 dm-name-lun0
Linux 2.6.32-504.12.2.el6.x86_64 07/18/15 _x86_64_ (24 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.73 0.00 0.99 0.49 0.00 97.79
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
dm-name-lun0 15.73 10.70 2.01 80508087 15138946
dm-name-lun8 14.56 10.78 1.56 81079650 11733230
Note: The persistent name is from /dev/disk/by-id/
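If you are not sure which persistent names exist on your system, list the matching directory under /dev/disk; the by-id directory below corresponds to the ID keyword used above (a sketch):
ls /dev/disk/by-id/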
Display I/O statistics for a group of devices
# iostat -g OSdisk sda sdi 6 2
Linux 2.6.32-504.12.2.el6.x86_64 07/18/15 _x86_64_ (24 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 0.07 0.08 0.00 99.85
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 28.11 1782.01 0.85 1953498023 932957
sdi 28.09 1781.82 0.85 1953294311 934109
OSdisk 56.20 3563.83 1.70 3906792334 1867066
Don't want to show individual disks in the group output? Add the -H option:
# iostat -g OSdisk -H sda sdi 6 2
Linux 2.6.32-504.12.2.el6.x86_64 07/18/15 _x86_64_ (24 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.00 0.00 0.07 0.08 0.00 99.85
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
OSdisk 56.20 3563.83 1.70 3906792334 1867066
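Groups are not limited to explicitly listed devices: the ALL keyword builds a group from every device defined by the system (a sketch; 'total' is just an arbitrary group name):
iostat -g total -H ALL 6 2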
Display I/O data in MB per second
The default output is in blocks or kB per second; the -m option shows throughput in MB per second.
iostat -m -j ID dm-name-lun8 dm-name-lun0 2 2
..
avg-cpu: %user %nice %system %iowait %steal %idle
0.29 0.00 0.99 0.00 0.00 98.72
Device: tps MB_read/s MB_wrtn/s MB_read MB_wrtn
dm-name-lun0 0.00 0.00 0.00 0 0
dm-name-lun8 36.00 72.00 0.00 144 0
Display output with timestamps
Want a timestamp in the output? Use the -t option:
iostat -t -p sdc 2 2
Linux 2.6.32-504.12.2.el6.x86_64 07/18/15 _x86_64_ (24 CPU)
07/18/15 13:40:56
avg-cpu: %user %nice %system %iowait %steal %idle
0.73 0.00 0.99 0.49 0.00 97.79
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sdc 10.00 12725.99 7162.96 95724608304 53879612960
07/18/15 13:40:58
avg-cpu: %user %nice %system %iowait %steal %idle
0.19 0.00 0.65 0.00 0.00 99.16
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sdc 0.00 0.00 0.00 0 0
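The timestamp format follows the current locale settings; if you prefer ISO 8601 timestamps, sysstat honors the S_TIME_FORMAT environment variable (a sketch):
S_TIME_FORMAT=ISO iostat -t -p sdc 2 2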
Display extended statistics info
iostat -j ID dm-name-lun8 dm-name-lun0 -x -m 2 2
Linux 2.6.32-504.12.2.el6.x86_64 07/18/15 _x86_64_ (24 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.73 0.00 0.99 0.49 0.00 97.79
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
dm-name-lun0 4.47 1.04 13.24 1.92 10.57 1.94 1689.57 0.15 9.86 2.31 3.51
dm-name-lun8 4.65 0.85 12.87 1.68 10.78 1.56 1736.35 0.02 1.49 2.49 3.63
Where
rrqm/s : The number of read requests merged per second that were queued to the hard disk
wrqm/s : The number of write requests merged per second that were queued to the hard disk
r/s : The number of read requests per second
w/s : The number of write requests per second
rsec/s : The number of sectors read from the hard disk per second
wsec/s : The number of sectors written to the hard disk per second
avgrq-sz : The average size (in sectors) of the requests that were issued to the device.
avgqu-sz : The average queue length of the requests that were issued to the device
await : The average time (in milliseconds) for I/O requests issued to the device to be served.
This includes the time spent by the requests in queue and the time spent servicing them.
svctm : The average service time (in ms) for I/O requests that were issued to the device
%util : Percentage of elapsed time during which I/O requests were issued to the device.
Device saturation occurs when this value is close to 100%.
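A quick worked example using dm-name-lun0 from the output above: avgrq-sz is 1689.57 sectors, and with 512-byte sectors that is about 1689.57 x 512 / 1024 ≈ 845 kB per request, which matches the combined throughput of roughly 12.5 MB/s spread over about 15 requests per second.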