Application Design Issues Cause Low Throughput to Virtual Tape

By Dave Heggen

Our story begins as our stories usually do: somewhere in the middle, after the customer has been working on a problem (in this case, low throughput to virtual tape) for a while and is just about to give up.

The customer had a group of tape jobs that frequently would not finish on time. "On time" means the jobs complete within the batch window; "not on time" means they run beyond the batch window and compete with the online activity. Some days a job would run in less than an hour, while on other days the same job would run for 10 to 15 hours.

Low Throughput to VSM Tape Systems

The customer investigated the issue when the jobs ran long and found that throughput to the VSM tape systems was low for the jobs in question. A joint investigation with Oracle was started under the assumption that the problem was caused by the VSM tape systems. My intel tells me that six people from the customer worked on the investigation for a whole month, and Oracle was unable to find anything wrong from a hardware or software perspective.
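
To see why the long runs pointed at throughput, it helps to put numbers on the variability. The sketch below is my own illustration, not the customer's tooling; the job names, data volumes, and the 10 MB/s cutoff are all hypothetical. It flags runs whose effective data rate collapses even though the amount of data written is the same:

```python
# Hypothetical per-job statistics: (job name, GB written, elapsed hours).
# The same job moving the same data in 0.9 vs. 14.5 hours is the pattern
# described above.
JOBS = [
    ("NIGHTLY1", 180.0, 0.9),
    ("NIGHTLY1", 180.0, 14.5),
]

THRESHOLD_MB_PER_SEC = 10.0  # arbitrary cutoff for "low throughput"

for name, gb_written, hours in JOBS:
    mb_per_sec = (gb_written * 1024) / (hours * 3600)
    status = "OK" if mb_per_sec >= THRESHOLD_MB_PER_SEC else "LOW THROUGHPUT"
    print(f"{name}: {mb_per_sec:6.1f} MB/s over {hours:4.1f} h -> {status}")
```

The same 180 GB works out to roughly 57 MB/s on a good day and under 4 MB/s on a bad one: an order-of-magnitude drop with no change in the work itself, which is what made the tape hardware the obvious (if, as the title gives away, wrong) suspect.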


zHyperLink: The Holy Grail of Mainframe I/O?

By Gilbert Houtekamer, Ph.D.

Now that it has become harder and harder to make the processor faster, IBM is looking for other ways to make its mainframes perform better.

This has resulted in new co-processors for compression and encryption, and now, with the z14 processor, in a new technology called zHyperLink. This new I/O connectivity aims to significantly reduce I/O response time without increasing the processor (CP) load.

This new technology comes with a set of promises and restrictions that will cause you to rethink the design of your storage and replication infrastructure. The days of distance limitations are back, which has big implications for synchronous replication in particular.
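
To make the distance point concrete, here is a back-of-the-envelope sketch. The only assumptions are physical, not from IBM documentation: light travels through optical fiber at roughly 200,000 km/s, and a synchronous write cannot complete before at least one round trip to the remote site.

```python
# Rough speed of light in optical fiber (about two-thirds of c in vacuum).
FIBER_KM_PER_SEC = 200_000.0

def sync_write_penalty_us(distance_km: float) -> float:
    """Minimum added delay per synchronous write, in microseconds:
    one round trip to the remote site and back."""
    return 2 * distance_km / FIBER_KM_PER_SEC * 1_000_000

for km in (1, 10, 50, 100):
    print(f"{km:4d} km -> at least {sync_write_penalty_us(km):6.0f} us per write")
```

With zHyperLink aiming at response times measured in tens of microseconds, and cable runs limited to the order of 150 meters, even 10 km of synchronous replication distance adds around 100 microseconds per write, several times the link latency itself. That is why the restrictions force a fresh look at replication design.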


HDS Pools Hit the Target in RMF for Hitachi Dynamic Tiering

By Gilbert Houtekamer, Ph.D.

In previous blogs we talked about metrics that we would like to see added to RMF and SMF records. We also discussed the challenges that EMC and HDS face in fitting their measurement data into the IBM-defined RMF instrumentation.

A good example of what can be achieved within those constraints is what HDS did for their Hitachi Dynamic Provisioning (HDP) pools. HDP pools are the basis for thin provisioning and dynamic tiering in the Hitachi architecture. An HDP pool consists of a number of array groups; with dynamic tiering, array groups with different drive technologies can be combined in a pool, in whatever mix you feel is appropriate for your workload.
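
To make the pool structure concrete, here is a minimal sketch of that arrangement, under my reading of the description above. The class names, tier labels, and capacities are illustrative, not Hitachi terminology:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ArrayGroup:
    tier: str          # drive technology, e.g. "flash", "SAS", "NL-SAS"
    capacity_tb: float

@dataclass
class HDPPool:
    name: str
    array_groups: list  # the array groups that make up the pool

    def capacity_by_tier(self) -> dict:
        """Sum pool capacity per drive technology."""
        totals = defaultdict(float)
        for ag in self.array_groups:
            totals[ag.tier] += ag.capacity_tb
        return dict(totals)

# A dynamic-tiering pool mixing three drive technologies, in a ratio
# chosen (hypothetically) to suit the workload.
pool = HDPPool("POOL01", [
    ArrayGroup("flash", 20.0),
    ArrayGroup("SAS", 80.0),
    ArrayGroup("NL-SAS", 200.0),
])
print(pool.capacity_by_tier())  # {'flash': 20.0, 'SAS': 80.0, 'NL-SAS': 200.0}
```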