HDS Pools Hit the Target in RMF for Hitachi Dynamic Tiering

By Gilbert Houtekamer, Ph.D.

In previous blogs we talked about metrics that we would like to see added to RMF and SMF records. We also discussed the challenges that EMC and HDS face in fitting their measurement data into the IBM-defined RMF instrumentation.

A good example of what can be achieved given the constraints is what HDS did for their Hitachi Dynamic Provisioning (HDP) pools. HDP pools are the basis for thin provisioning and dynamic tiering in the Hitachi architecture. An HDP pool consists of a number of array groups. Arrays with different drive technologies can be combined in a pool with dynamic tiering, in a mix that you feel is appropriate for your workload.

With the introduction of the HDP pools, HDS decided to assign ‘IBM extent pool’ numbers to each HDP pool. IBM uses a two-byte extent pool number in their architecture. With HDP, HDS now returns hex ‘01xx’ for HDP Pool (hex) xx. Since there can be up to 128 HDP pools in a storage array, this fits nicely.
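This numbering scheme is easy to decode in software. The sketch below is illustrative only; the function name is ours, not an RMF or HDS API:

```python
def hdp_pool_from_extent_pool(extent_pool):
    """Map a two-byte IBM extent pool number to a VSP HDP pool number.

    Per the scheme described above: extent pool hex 01xx denotes HDP
    pool xx, and extent pool 0 means the array group is not part of
    any HDP pool.
    """
    if extent_pool == 0:
        return None  # stand-alone array group, not in an HDP pool
    if 0x0100 <= extent_pool <= 0x01FF:
        return extent_pool & 0xFF  # low-order byte is the HDP pool number
    raise ValueError(f"unexpected extent pool number: {extent_pool:#06x}")
```

For example, extent pool hex 0105 maps to HDP pool 5.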

As with IBM, each VSP array group or concatenated array group gets a rank number. The RMF 74.8 records provide the relationship between an IBM extent pool and its ranks, so for the VSP this gives the relationship between an HDP pool and its array groups.

Sample information for one Dynamic Tiering Pool:

Storage Pool  RAID Group ID  # Drives  Drive Tier      RAID Type
HDP 00        0001: 1-1      4         USP: SSD 200GB  RAID 5
HDP 00        000E: 1-14     8         USP: 15K 300GB  RAID 5
HDP 00        000F: 1-15     8         USP: 15K 300GB  RAID 5
HDP 00        0027: 3-7      8         USP: 10K 300GB  RAID 5
HDP 00        0028: 3-8      8         USP: 10K 300GB  RAID 5
HDP 00        0029: 3-9      8         USP: 10K 300GB  RAID 5
HDP 00        003A: 4-10     8         USP: 10K 300GB  RAID 5
HDP 00        003D: 4-13     8         USP: 15K 300GB  RAID 5
HDP 00        003E: 4-14     8         USP: 15K 300GB  RAID 5
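Rows like those in the sample report can be grouped to show how many drives a pool draws from each tier. A minimal sketch using hand-copied values from the report above; the tuple layout is our own, not an RMF record format:

```python
from collections import defaultdict

# Hand-copied sample rows: (hdp_pool, raid_group_id, drives, drive_tier).
# Illustrative data from the report above, not an RMF record layout.
rows = [
    (0x00, "0001: 1-1",  4, "SSD 200GB"),
    (0x00, "000E: 1-14", 8, "15K 300GB"),
    (0x00, "000F: 1-15", 8, "15K 300GB"),
]

# Total drive count per (pool, tier) combination.
drives_per_tier = defaultdict(int)
for pool, group, drives, tier in rows:
    drives_per_tier[(pool, tier)] += drives

for (pool, tier), n in sorted(drives_per_tier.items()):
    print(f"HDP {pool:02X}  {tier}: {n} drives")
```

With the three sample rows, HDP pool 00 has 4 SSD drives and 16 15K drives.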

For array groups that are not part of an HDP pool, extent pool zero (0) is returned. This can be misleading: each (concatenated) array group acts as an individual extent pool, in the sense that z/OS volumes are created directly on the array group. Array groups outside HDP pools are therefore not all part of one big pool 'zero'.

The new support is particularly significant for automatic tiering. As on the DS8000, you can now see the activity and response time for each tier. And since the array group counters include all activity, this covers the inter- and intra-tier migration activity as well.

[Figure: HDSpools]
With these counters it is possible to assess whether each tier is getting the I/O intensity that you would expect: high activity on SSD and low activity on SATA. Normally this will be the case because of the skew in almost all workloads: some datasets are very active, others are hardly used. When you put constraints on the dynamic tiering algorithm with policy rules, this might impact the ability of the VSP to achieve optimal balance. It will attempt to use the tiers in the ratio that you specify instead of optimizing. Reports like the above provide a great way to assess whether you provided reasonable and useful objectives.
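One simple way to run this check is to divide each tier's I/O rate by its capacity and verify that access density falls from SSD down to the slower tiers. A sketch with hypothetical numbers (not measured data):

```python
# Hypothetical per-tier figures (not measured data): usable capacity
# in GB and back-end I/O rate in I/Os per second.
tiers = {
    "SSD":      {"capacity_gb": 800,  "io_rate": 4000},
    "15K disk": {"capacity_gb": 4800, "io_rate": 1500},
    "10K disk": {"capacity_gb": 9600, "io_rate": 500},
}

# Access density (I/Os per second per GB). With the skew present in
# almost all workloads, density should decrease from SSD downwards.
density = {name: t["io_rate"] / t["capacity_gb"] for name, t in tiers.items()}

for name, d in density.items():
    print(f"{name}: {d:.3f} IO/s per GB")
```

If a slower tier shows a higher density than a faster one, that is a hint the tiering policy constraints are preventing the VSP from placing the busiest data on the fastest drives.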

We are very pleased to see that HDS made these changes. They greatly increase your ability to gain insight into the back-end and auto-tiering workload for the VSP.

Download this white paper to see IntelliMagic’s support for HDS VSP G1000.

One thought on “HDS Pools Hit the Target in RMF for Hitachi Dynamic Tiering”

  1. Joe Winterbotham says:

    Hi Gilbert,

Good article. One point I would make is that neither HDS nor HP made this change. Neither company makes the firmware for these boxes. All hardware and firmware development is done by Hitachi LTD, Japan. Hitachi supplies HP and HDS with the same firmware and hardware. Using HDS as you have in this article gives the public the impression that HDS creates the hardware and firmware. This is not the case. HDS is not the same thing as Hitachi LTD.
