HDS G1000 and Better Protecting Availability of z/OS Disk Storage


By Brent Phillips


If your job includes avoiding service disruptions on z/OS infrastructure, you may not have everything you need to do your job.

Disk storage in particular is typically the least visible part of the z/OS infrastructure. It is largely a black box: you can see what goes in and what comes out, but not what happens inside. Storage arrays these days are very complex devices whose internal components may become overloaded and introduce unacceptable service time delays for production work without providing any early warning.

Consequently, in the 10+ years I have been with IntelliMagic I have yet to meet a single mainframe site that (prior to using IntelliMagic) automatically monitors for threats to availability due to storage component overload.

One of the most important strategic advantages of z/OS and the z Systems platform over alternative enterprise platforms is the incomparable richness of the machine-generated data (SMF and RMF). Though this data is vastly underutilized, it is now possible to automatically mine the RMF and SMF data using built-in expert knowledge about the physical infrastructure's capabilities, evaluated against the requirements placed on it by the specific workloads in each data interval.

This type of data mining produces valuable intelligence about threats to the continuous availability of the storage environment, such as physical components approaching critical levels of utilization during peak periods. This is what IntelliMagic Vision for z/OS Disk does for our clients, and it dramatically reduces the risk of disruptions due to saturated components. The resulting intelligence also dramatically accelerates the time to fix problems that are not predictable, such as faulty fibre connections or other hardware or microcode failures, untested application changes, and unpredictable workload spikes.
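The core idea can be illustrated with a minimal sketch. This is not IntelliMagic's actual implementation; the component names, field names, and threshold values below are all assumptions chosen for illustration. The sketch simply checks each component's measured utilization in a reporting interval against a per-component ceiling and flags breaches:

```python
# Illustrative sketch only: flag storage components whose utilization in an
# RMF/SMF-style reporting interval exceeds a "healthy" ceiling. Component
# names, thresholds, and record fields are hypothetical.

HEALTHY_LIMITS = {       # assumed per-component utilization ceilings (fractions)
    "mp_board": 0.70,
    "cache": 0.80,
    "device_adapter": 0.60,
}

def flag_overloads(interval_samples):
    """Return (component, utilization) pairs that breach their ceiling.

    interval_samples: list of dicts like
        {"component": "mp_board", "utilization": 0.83}
    """
    alerts = []
    for sample in interval_samples:
        limit = HEALTHY_LIMITS.get(sample["component"])
        if limit is not None and sample["utilization"] > limit:
            alerts.append((sample["component"], sample["utilization"]))
    return alerts

samples = [
    {"component": "mp_board", "utilization": 0.83},  # over its 0.70 ceiling
    {"component": "cache", "utilization": 0.55},     # comfortably under 0.80
]
print(flag_overloads(samples))
```

In practice the "expert knowledge" is far richer than a static table of limits, but the principle is the same: interpret raw interval measurements against known hardware capabilities rather than waiting for service times to degrade.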

What allows us to provide this new type of intelligence about the storage infrastructure, as well as the other infrastructure areas we monitor, is the automated interpretation of existing measurement data using built-in expert knowledge. Every mainframe storage vendor writes platform-specific measurement data to RMF and SMF that goes significantly beyond I/O rates and service times. Using built-in expert domain knowledge about the hardware, IntelliMagic can produce far more visibility and value than even expert SAS programmers can extract from the same data source.
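Beyond flagging a breach after it happens, interpreted interval data can give early warning while a component is still healthy. The following is a hypothetical sketch of one simple approach, a least-squares trend line over recent utilization samples that estimates how many intervals remain before a critical threshold is crossed; it is not the method any product actually uses, and all numbers are illustrative:

```python
# Hypothetical early-warning sketch: fit a straight line through recent
# utilization samples (one per reporting interval) and project how many
# intervals remain before a critical threshold is crossed.

def intervals_until(history, threshold):
    """Estimate intervals until utilization crosses `threshold`.

    history: utilization fractions, oldest first.
    Returns None if the trend is flat/falling or there are too few samples;
    returns 0 if the threshold is already breached.
    """
    n = len(history)
    if n < 2:
        return None
    # least-squares slope over x = 0 .. n-1
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    denom = sum((x - x_mean) ** 2 for x in xs)
    slope = sum((x - x_mean) * (y - y_mean)
                for x, y in zip(xs, history)) / denom
    if slope <= 0:
        return None
    current = history[-1]
    if current >= threshold:
        return 0
    return (threshold - current) / slope

# MP-board utilization climbing about 2 points per interval toward a
# hypothetical 70% ceiling: roughly three intervals of headroom remain.
print(intervals_until([0.58, 0.60, 0.62, 0.64], 0.70))
```

A real implementation would account for workload seasonality and measurement noise, but even this naive projection shows how existing SMF/RMF interval data can be turned into advance warning rather than after-the-fact diagnosis.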

In our opinion, the protection of availability for the production site (and for replication as well) is further enhanced if storage vendors provide even more instrumentation to SMF. Our founder, Dr. Gilbert Houtekamer, has been an advocate of this principle and has worked closely with storage vendors to make it happen. This advocacy is a good example of why he was recently inducted into the Mainframe Hall of Fame's list of luminaries.

To this end, Hitachi has stepped up to the plate: on August 4, 2015, it announced the Mainframe Analytics Recorder. The Mainframe Analytics Recorder is a no-charge microcode upgrade for the VSP G1000 storage array that writes additional information to SMF about the utilization of internal components such as the Multiprocessor Boards and Cache.

IntelliMagic believes that this capability marks a very important step in the evolution of z/OS storage, as it has the potential to give unprecedented visibility into what happens inside the storage box. While not everything that IntelliMagic has advocated for (such as automatic tiering data or additional replication information) is available in the first release of the Mainframe Analytics Recorder, the information provided on the utilization of internal components is important and valuable. And Mainframe Analytics Recorder does provide a framework for enhancing SMF with this additional information in future releases.

IntelliMagic has worked closely with Hitachi on this new capability and has enhanced IntelliMagic Vision so that it supports, from day one, the newly available data about the VSP G1000. Rather than duplicating information about that reporting here, I refer you to our new white paper, “IntelliMagic Vision for Hitachi VSP G1000 SMF Records from Mainframe Analytics Recorder.” Contact your HDS sales representative for more information from HDS about Mainframe Analytics Recorder.

2 thoughts on “HDS G1000 and Better Protecting Availability of z/OS Disk Storage”

  1. Don Mayfield says:

    Yes. This measuring was done at Chevron Info Tech Company in the 1980s. Not only did it allow monitoring, it also allowed billing for resources.

    1. Brent Phillips says:

Hi Don, you are correct that SMF records have been written for mainframe I/O for many years. Back in the 1980s, storage devices were far different than they are today, and the queuing would occur on the host side. Today they are far more powerful and complex devices with their own processors, cache, etc., and the queuing for I/O most often occurs inside the storage array rather than on the host channels. The point of this blog is that with the G1000, HDS is the first mainframe storage hardware vendor to write these specific kinds of metrics to SMF/RMF, such as the utilization of the internal processors in the storage array. When interpreted using built-in expert knowledge, this data can give IT managers visibility into approaching bottlenecks instead of only finding out about constraints after production workloads are already negatively affected. The resulting visibility also provides quick understanding of what the cause is and how to rectify it.
