Queue depth is the number of I/O requests (SCSI commands) that can be queued at one time on a storage controller. Each I/O request from the host's initiator HBA to the storage controller's target adapter consumes a queue entry. Typically, a higher queue depth equates to better performance. However, if the storage controller's maximum queue depth is reached, that storage controller rejects incoming commands by returning a QFULL response to them. If a large number of hosts are accessing a storage controller, plan carefully to avoid QFULL conditions, which significantly degrade system performance and can lead to errors on some systems.
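If you suspect a host is hitting QFULL, one quick check on AIX is the extended disk statistics from iostat. As a rough illustration (hdisk6 is just an example device name), the sqfull counter in the queue section of iostat -D output shows how often that disk's service queue has filled up:
# iostat -D hdisk6 5 3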
Changing the queue_depth on an hdisk with chdev -P updates the device's ODM information only, not its running configuration. The new value takes effect the next time the system is rebooted, so we now have a different queue_depth in the ODM compared to the device's current running configuration (in the kernel).
What if we make this change to the ODM, forget about it, and don't reboot the system for many months? Until someone complains about an I/O performance issue, we may still believe the change is in effect. But it is not. Why?
How do we know whether the ODM matches the device's running configuration?
For example, we start with a queue_depth of 3, which is confirmed by both the lsattr (ODM) and kdb (running configuration) output:
# lsattr -El hdisk6 -a queue_depth
queue_depth 3 Queue DEPTH True
# echo scsidisk hdisk6 | kdb | grep queue_depth
ushort queue_depth = 0x3; < In Hex.
Now we change the queue_depth using chdev -P, i.e. updating only the ODM:
# chdev -l hdisk6 -a queue_depth=256 -P
hdisk6 changed
# lsattr -El hdisk6 -a queue_depth
queue_depth 256 Queue DEPTH True
kdb reports that the disk's running configuration still has a queue_depth of 3:
# echo scsidisk hdisk6 | kdb | grep queue_depth
ushort queue_depth = 0x3;
Now if we vary off the VG and change the disk's queue_depth without -P, both lsattr (the ODM) and kdb (the running configuration) show the same value:
# umount /test
# varyoffvg testvg
# chdev -l hdisk6 -a queue_depth=256
hdisk6 changed
# varyonvg testvg
# mount /test
# lsattr -El hdisk6 -a queue_depth
queue_depth 256 Queue DEPTH True
# echo scsidisk hdisk6 | kdb | grep queue_depth
ushort queue_depth = 0x100; < In Hex = Dec 256.
# echo "ibase=16 ; 100" | bc
256
Comparing the lsattr and kdb output like this is one way of checking whether a queue_depth change in the ODM has actually made it into the device's running configuration, i.e. whether we have rebooted (or reconfigured the device) since changing it.
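To check more than one disk at a time, a small loop can print both values side by side (remember the kdb value is in hex). This is only a rough sketch built from the commands above; it assumes the kdb output format shown earlier and that it is run as root:
for d in $(lsdev -Cc disk -F name)
do
  echo "$d: ODM queue_depth = $(lsattr -El $d -a queue_depth -F value)"
  echo scsidisk $d | kdb | grep queue_depth
done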
Setting queue depths on AIX hosts:
You can change the queue depth on AIX hosts using the chdev command. Changes made using the chdev command persist across reboots.
Examples:
To change the queue depth for the hdisk7 device, use the following command:
chdev -l hdisk7 -a queue_depth=32
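If hdisk7 is in use, chdev may refuse to change the attribute because the device is busy. In that case you can either vary off the owning volume group first (as shown earlier) or, as a deferred alternative, update only the ODM with -P so the change takes effect at the next reboot:
chdev -l hdisk7 -a queue_depth=32 -P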
To change the queue depth for the fcs0 HBA, use the following command:
chdev -l fcs0 -a num_cmd_elems=128
The default value for num_cmd_elems is 200. The maximum value is 2,048.
Note: It might be necessary to take the HBA offline to change num_cmd_elems and then bring it back online, using the rmdev -l fcs0 -R and mkdev -l fcs0 -P commands.
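Before changing it, you can check the current value with lsattr; and if taking the adapter offline is not practical, one alternative (a sketch using the same chdev -P behaviour described above, not a vendor-specific procedure) is to update only the ODM and let the new value take effect at the next reboot:
lsattr -El fcs0 -a num_cmd_elems
chdev -l fcs0 -a num_cmd_elems=128 -P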