The zdb utility displays information about a ZFS pool that is useful for debugging, and performs some amount of consistency checking. It shows the on-disk structure of a ZFS pool, but the output of most invocations is not documented.
Oracle recommends running this tool only under the guidance of a support engineer, but in some cases we still want to investigate on our own. Here are some examples:

1. zdb <pool>

With no options beyond the pool name, zdb's output includes the following sections:

    Cached pool configuration (-C)
    Uberblock (-u)
    Datasets (-d)
    Block statistics (-b)
    Report of statistics on zdb's I/O (-s), similar to the first interval of zpool iostat

2. Display pool configuration: zdb -C <pool>

Display information about the pool configuration. If specified with no other options, display information about the cache file (/etc/zfs/zpool.cache) instead.
# zdb -C zpool_1

MOS Configuration:
        version: 5000
        name: 'zpool_1'
        state: 0
        txg: 2174467
        pool_guid: 7192340891682188223
        errata: 0
        hostname: 'fibrevillage.com'
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 7192340891682188223
            create_txg: 4
            children[0]:
                type: 'raidz'
                id: 0
                guid: 3149437435388132785
                nparity: 2
                metaslab_array: 33
                metaslab_shift: 36
...

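Since zdb prints the configuration as indented name: value pairs, individual fields are easy to pull out with standard text tools. A minimal sketch using a fragment of the output above saved to a file (the awk pattern is my own illustration of post-processing, not a zdb feature):

```shell
# Save a fragment of the `zdb -C` output shown above.
cat > /tmp/zdb_config.txt <<'EOF'
MOS Configuration:
        version: 5000
        name: 'zpool_1'
        state: 0
        txg: 2174467
        pool_guid: 7192340891682188223
EOF

# Print the value of the top-level pool_guid field.
awk '$1 == "pool_guid:" { print $2 }' /tmp/zdb_config.txt
```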
3. Display zpool Uberblock

The ZFS uberblock is the root of a giant dynamic tree whose leaves contain data.
Most other file systems instead use a superblock (plus copies of it) and a static collection of fixed-size inode maps.
ZFS has no inode maps: inode equivalents (dnodes) are created and destroyed dynamically.
# zdb -u zpool_1
Uberblock:
        magic = 0000000000bab10c
        version = 5000
        txg = 3500525
        guid_sum = 5779117982243443140
        timestamp = 1434696856 UTC = Thu Jun 18 23:54:16 2015
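The timestamp field is a plain Unix epoch value, so if you only have the raw number you can convert it yourself with date(1) (GNU date syntax shown; it prints in the machine's local time zone, add -u for UTC):

```shell
# The uberblock timestamp is seconds since the Unix epoch.
date -d @1434696856
```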

4. Display the vdev labels for a device: zdb -l <device>

Display the vdev labels from the specified device. If the -u option is also specified, also display the uberblocks on this device.
# zdb -l /dev/disk/by-vdev/c0t2-part1
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 5000
    name: 'zpool_1'
    state: 0
    txg: 1510816
    pool_guid: 17027971859810456307
    errata: 0
    hostname: 'fibrevillage.com'
    top_guid: 4646251685471427072
    guid: 17893509949341848938
    vdev_children: 1
    vdev_tree:
        type: 'raidz'
        id: 0
        guid: 4646251685471427072
        nparity: 2
        metaslab_array: 33
        metaslab_shift: 36
        ashift: 9
        asize: 11002089832448
        is_log: 0
        create_txg: 4
        children[0]:
            type: 'disk'
            id: 0
            guid: 4022273922860863564
            path: '/dev/disk/by-vdev/c0t1-part1'
            whole_disk: 1
            DTL: 201
            create_txg: 4
        children[1]:
            type: 'disk'
            id: 1
            guid: 10431796400892251768
            path: '/dev/disk/by-vdev/c1t1-part1'
            whole_disk: 1
            DTL: 200
            create_txg: 4
...

5. Display information about datasets

Specified once, -d displays basic dataset information: ID, creation transaction (cr_txg), size, and object count. Specified multiple times, it provides greater and greater verbosity.
If object IDs are specified, display information about those specific objects only.
# zdb -d zpool_1
Dataset mos [META], ID 0, cr_txg 4, 59.9M, 202 objects
Dataset zpool_1 [ZPL], ID 21, cr_txg 1, 4.68T, 3871 objects
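These per-dataset lines are convenient to post-process. As a sketch, totalling the object counts from the two lines above saved to a file (the field position is an assumption about this output layout, not a documented interface):

```shell
# Save the dataset listing shown above.
cat > /tmp/zdb_datasets.txt <<'EOF'
Dataset mos [META], ID 0, cr_txg 4, 59.9M, 202 objects
Dataset zpool_1 [ZPL], ID 21, cr_txg 1, 4.68T, 3871 objects
EOF

# The object count is the next-to-last whitespace-separated field.
awk '{ sum += $(NF-1) } END { print sum }' /tmp/zdb_datasets.txt
```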

6. Display deduplication statistics

Display deduplication statistics, including the deduplication ratio (dedup), compression ratio (compress), inflation due to the zfs copies property (copies), and an overall effective ratio (dedup * compress / copies).
If specified twice, display a histogram of deduplication statistics, showing the allocated (physically present on disk) and referenced (logically referenced in the pool) block counts and sizes by reference count.
If specified a third time, display the statistics independently for each deduplication table. If specified a fourth time, dump the contents of the deduplication tables describing duplicate blocks. If specified a fifth time, also dump the contents of the deduplication tables describing unique blocks.
# zdb -D zpool_1
All DDTs are empty
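The effective ratio is just the arithmetic dedup * compress / copies. Since the pool above has empty DDTs, here is the calculation with made-up illustrative values (dedup = 1.50, compress = 1.20, copies = 1.00):

```shell
# Overall effective ratio: dedup * compress / copies.
# The three values below are hypothetical, for illustration only.
awk 'BEGIN { dedup = 1.50; compress = 1.20; copies = 1.00
             printf "%.2f\n", dedup * compress / copies }'
```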

7. Intent log (ZIL)

Display information about intent log (ZIL) entries relating to each dataset.  If specified multiple times, display counts of each intent log transaction type.
# zdb -i zpool_1
Dataset mos [META], ID 0, cr_txg 4, 59.9M, 202 objects
Dataset zpool_1 [ZPL], ID 21, cr_txg 1, 4.68T, 3871 objects

8. Display the offset, spacemap, and free space of each metaslab

When specified twice, also display information about the on-disk free space histogram associated with each metaslab. When specified three times, display the maximum contiguous free space, the in-core free space histogram, and the percentage of free space in each space map. When specified four times, display every spacemap record.
# zdb -m zpool_1
Metaslabs:
    vdev          0
    metaslabs   160   offset                spacemap          free      
    ---------------   -------------------   ---------------   -------------
    metaslab      0   offset            0   spacemap     36   free    2.56G
    metaslab      1   offset   1000000000   spacemap     37   free    2.26G
    metaslab      2   offset   2000000000   spacemap     38   free    1.68G
    metaslab      3   offset   3000000000   spacemap     39   free     490M
    metaslab      4   offset   4000000000   spacemap     40   free    33.1G
...
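The offset column is hexadecimal and advances by 2^metaslab_shift per metaslab. With metaslab_shift = 36 from the label shown earlier, each metaslab spans 64 GiB, which matches the offsets above (the arithmetic below is my own illustration, not zdb output):

```shell
# metaslab_shift = 36, so each metaslab spans 2^36 bytes:
echo $((1 << 36))                          # bytes per metaslab
echo $(( (1 << 36) / 1024 / 1024 / 1024 )) # the same, in GiB
# The hex offset column advances by exactly that amount;
# 0x1000000000 is the offset of metaslab 1 in decimal:
echo $((0x1000000000))
```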

9. Report statistics on zdb's I/O

Display operation counts, bandwidth, and error counts of I/O to the pool from zdb.
# zdb -s zpool_1
                            capacity   operations   bandwidth  ---- errors ----
description                used avail  read write  read write  read write cksum
zpool_1                 5.76T 4.24T   363     0  866K     0     0     0     0
  raidz2                  5.76T 4.24T   363     0  866K     0     0     0     0
    /dev/disk/by-vdev/c0t1-part1        582     0 1.61M     0     0     0     0
    /dev/disk/by-vdev/c1t1-part1        575     0 1.16M     0     0     0     0
    /dev/disk/by-vdev/c2t0-part1        569     0 1.14M     0     0     0     0
    /dev/disk/by-vdev/c2t1-part1        565     0 1.12M     0     0     0     0
    /dev/disk/by-vdev/c3t0-part1        565     0 1.15M     0     0     0     0
    /dev/disk/by-vdev/c3t1-part1        580     0 1.14M     0     0     0     0
    /dev/disk/by-vdev/c4t0-part1        574     0 1.15M     0     0     0     0
    /dev/disk/by-vdev/c4t1-part1        566     0 1.12M     0     0     0     0
    /dev/disk/by-vdev/c5t0-part1        570     0 1.13M     0     0     0     0
    /dev/disk/by-vdev/c5t1-part1        574     0 1.16M     0     0     0     0
    /dev/disk/by-vdev/c0t2-part1        566     0 1.13M     0     0     0     0

10. Display block statistics

Display statistics regarding the number, size (logical, physical, and allocated), and deduplication of blocks. To gather them, zdb traverses every block in the pool, so this can take a long time on a large pool.

# zdb -b zpool_1

Traversing all blocks to verify nothing leaked ...

    No leaks (block sum matches space maps exactly)

    bp count:        39549349
    bp logical:    5147193008640      avg: 130146
    bp physical:   5143966270464      avg: 130064     compression:   1.00
    bp allocated:  6332902768128      avg: 160126     compression:   0.81
    bp deduped:             0    ref>1:      0   deduplication:   1.00
    SPA allocated: 6332902768128     used: 57.60%
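The ratios in this summary can be reproduced from the byte counts: the 1.00 compression figure on the physical line appears to be bp logical / bp physical, and the 0.81 figure on the allocated line bp logical / bp allocated (raidz2 parity overhead pushes allocated above logical, hence a ratio below 1). A quick check with awk, using the numbers from the output above (my reading of the fields, not documented behaviour):

```shell
awk 'BEGIN {
  logical   = 5147193008640   # bp logical,  from the output above
  physical  = 5143966270464   # bp physical
  allocated = 6332902768128   # bp allocated
  printf "logical/physical:  %.2f\n", logical / physical
  printf "logical/allocated: %.2f\n", logical / allocated
}'
```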