What’s Using Up All My Tapes? – Using Tape Management Catalog Data

By Dave Heggen


Most of the data processed by IntelliMagic Vision for z/OS Tape is performance, event, or activity driven, obtained from SMF and the virtual tape hardware. Did you know that in addition to the SMF and TS7700 BVIR data, IntelliMagic Vision can also process information from a Tape Management Catalog (TMC)? Having this type of data available and processing it correctly is critical to answering the question “What’s using up all my tapes?”

We’ve all set up and distributed scratch lists. This is a necessary (and generally manual) part of maintaining a current tape library, and it does require participation to ensure compliance. Expiration dates, catalog management, and cycle management also have their place in automating the expiration end of the tape volume cycle. This blog is intended to address the issues that neither compliance nor automation cover.
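To illustrate the kind of question TMC data can answer, here is a minimal sketch in Python. It assumes the TMC has been exported to a CSV file; the column names (DSNAME, EXPDT) and the permanent-retention markers are hypothetical, since actual field names vary by tape management product.

```python
import csv
from collections import Counter

# Hypothetical column names for a TMC extract exported to CSV; real
# field names vary by tape management product (CA 1, DFSMSrmm, etc.).
PERMANENT = {"99365", "PERMANENT", "CATALOG"}  # assumed retention markers

by_application = Counter()   # volumes per high-level qualifier (HLQ)
never_expires = Counter()    # volumes held under permanent retention

with open("tmc_extract.csv", newline="") as f:
    for row in csv.DictReader(f):
        hlq = row["DSNAME"].split(".")[0]      # group by HLQ
        by_application[hlq] += 1
        if row["EXPDT"].upper() in PERMANENT:
            never_expires[hlq] += 1

print("Top tape consumers by HLQ:")
for hlq, count in by_application.most_common(10):
    print(f"{hlq:<10} {count:>6} volumes, {never_expires[hlq]} never expire")
```

A report like this quickly shows which applications hold the most volumes, and how many of those are parked under permanent retention where scratch lists and expiration processing never touch them.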

Continue reading

Game Changer for Transaction Reporting


By Todd Havekost

Periodically, a change comes to an industry that introduces a completely new and improved way to accomplish an existing task that had previously been difficult, if not daunting. Netflix transformed the home movie viewing industry by offering video streaming that was convenient, affordable, and technically feasible – a change so far-reaching that it ultimately led to the closing of thousands of Blockbuster stores. We feel that IBM recently introduced a similar “game changer” for transaction reporting for CICS, IMS and DB2.

Continue reading

Flash Performance in High-End Storage


By Dr. Cor Meenderinck

This is a summary of the white paper of the same title, which won the Best Paper Award at the 2016 CMG imPACt conference. It is a great example of the research we do to build the expert knowledge we embed in our products.

Flash-based storage is revolutionizing the storage world. Flash drives can sustain a very large number of operations and are extremely fast. It is for those reasons that manufacturers eagerly embraced this technology in high-end storage systems. As the price per gigabyte of flash storage rapidly decreases, experts predict that flash will soon be the dominant medium in high-end storage.

But how well are they really performing inside your high-end storage systems? Do the actual performance metrics when deployed within a storage array live up to the advertised Flash latencies of around 0.1 milliseconds? Continue reading
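To see why the answer is often “not quite,” consider a back-of-the-envelope estimate. This is our own illustration with assumed overhead values, not numbers from the paper: the latency a host observes includes more than the flash medium itself.

```python
# Back-of-the-envelope estimate with assumed values (not from the paper):
# the service time a host observes includes more than the flash medium.
advertised_flash_ms = 0.1          # vendor-quoted flash media latency

# Assumed array-internal contributors for a small-block read miss;
# real values differ per platform and should be measured from RMF/SMF.
overheads_ms = {
    "host adapter and channel": 0.05,
    "cache/directory handling": 0.05,
    "internal fabric transfer": 0.10,
    "back-end adapter + flash": advertised_flash_ms,
}

total_ms = sum(overheads_ms.values())
print(f"Estimated read-miss service time: {total_ms:.2f} ms")
print(f"Flash medium is only {advertised_flash_ms / total_ms:.0%} of that")
```

Under these assumed numbers the medium accounts for only a third of the observed service time, which is why measured latencies inside a storage array rarely match the advertised figure.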

Which Workloads Should I Migrate to the Cloud?


By Brett Allison

By now, we have just about all heard it from our bosses: “Alright folks, we need to evaluate our workloads and determine which ones are a good fit for the cloud.” After feeling a tightening in your chest, you remember to breathe and ask yourself, “How the heck do I accomplish this? I know very little about the cloud, and to be honest, it seems crazy to move data there!”

According to this TechTarget article, “A public cloud is one based on the standard cloud computing model, in which a service provider makes resources, such as applications and storage, available to the general public over the internet. Public cloud services may be free or offered on a pay-per-usage model.” Most organizations have private clouds, and some have moved workloads into public clouds. For the purpose of this conversation, I will focus on the public cloud. Continue reading
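One way to make that evaluation concrete is to score each workload against explicit criteria. The sketch below is purely illustrative: the attributes, thresholds, and weights are assumptions for the sake of the example, not a vetted methodology.

```python
# A hypothetical scoring sketch for public-cloud fit; the criteria and
# weights are illustrative assumptions, not an established methodology.
def cloud_fit_score(workload: dict) -> float:
    """Higher score = better candidate for the public cloud."""
    score = 0.0
    score += 2.0 if not workload["latency_sensitive"] else -2.0
    score += 1.5 if workload["data_gb"] < 5_000 else -1.0   # data gravity
    score += 1.0 if not workload["regulated_data"] else -2.0
    score += 1.0 if workload["bursty_demand"] else 0.0      # elasticity pays
    return score

workloads = [
    {"name": "dev/test", "latency_sensitive": False, "data_gb": 500,
     "regulated_data": False, "bursty_demand": True},
    {"name": "core banking", "latency_sensitive": True, "data_gb": 80_000,
     "regulated_data": True, "bursty_demand": False},
]
for w in sorted(workloads, key=cloud_fit_score, reverse=True):
    print(f"{w['name']:<14} score = {cloud_fit_score(w):+.1f}")
```

Even a crude rubric like this turns “which workloads?” from a gut feeling into a ranked list you can defend, and the criteria can be refined with measured I/O and capacity data.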

HDS G1000 and Better Protecting Availability of z/OS Disk Storage


By Brent Phillips

 

If your job includes avoiding service disruptions on z/OS infrastructure, you may not have everything you need to do your job.

Disk storage in particular is typically the least visible part of the z/OS infrastructure. It is largely a black box. You can see what goes in and what comes out, but not what happens inside. Storage arrays these days are very complex devices with internal components that may become overloaded and introduce unacceptable service time delays for production work without providing an early warning.

Consequently, in the 10+ years I have been with IntelliMagic I have yet to meet a single mainframe site that (prior to using IntelliMagic) automatically monitors for threats to availability due to storage component overload. Continue reading
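To illustrate what automated monitoring for component overload could look like, here is a minimal sketch of a per-interval threshold check. The component names and thresholds are assumptions for the example, not IntelliMagic's actual health ratings.

```python
# A minimal sketch of threshold-based early warning on storage component
# utilization; component names and limits are illustrative assumptions.
THRESHOLDS = {          # utilization above which delays typically grow
    "front_end_port": 0.60,
    "cache_write_pending": 0.30,
    "back_end_raid_rank": 0.50,
}

def check_interval(samples: dict) -> list:
    """Return warnings for components exceeding their threshold."""
    return [
        f"WARNING: {name} at {value:.0%} (limit {THRESHOLDS[name]:.0%})"
        for name, value in samples.items()
        if name in THRESHOLDS and value > THRESHOLDS[name]
    ]

# Example: one measurement interval of component utilizations (made up).
for msg in check_interval({"front_end_port": 0.72, "back_end_raid_rank": 0.41}):
    print(msg)
```

The point is not the specific numbers but the principle: evaluate every interval against component-level limits automatically, rather than waiting for production work to slow down.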

What HDS VSP and HP XP P9500 Should Be Reporting in RMF/SMF – But Aren’t


By Gilbert Houtekamer, Ph.D.

This is the last blog post in a series of four in which we share our experience with the instrumentation available through RMF and SMF for the IBM DS8000, EMC VMAX, and HDS VSP or HP XP P9500 storage arrays. This post is about the Hitachi high-end storage array that is sold by HDS as the VSP and by HP as the XP P9500.

RMF has been developed over the years by IBM, based on IBM storage announcements. Even for the IBM DS8000, not nearly all functions are covered; see the “What IBM DS8000 Should Be Reporting in RMF/SMF – But Isn’t” blog post. For the other vendors it is harder still: they have to make do with what IBM provides in RMF, or create their own SMF records.

Hitachi has supported the RMF 74.5 cache counters for a long time, and those counters are fully applicable to the Hitachi arrays. For other RMF record types though, it is not always a perfect match. The Hitachi back-end uses RAID groups that are very similar to IBM’s. This allowed Hitachi to use the RMF 74.5 RAID Rank and 74.8 Link records that were designed for the IBM ESS. But for Hitachi arrays with concatenated RAID groups, not all information was properly captured. To interpret data from those arrays, additional external information from configuration files was needed.

With their new Hitachi Dynamic Provisioning (HDP) architecture, the foundation for both Thin Provisioning and automated tiering, Hitachi updated their RMF 74.5 and 74.8 support such that each HDP pool is reflected in the RMF records as if it were an IBM Extent Pool. This allows you to track the back-end activity on each of the physical drive tiers, just like for IBM.
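In practice that means back-end activity can be summarized per pool and per tier. Below is a minimal sketch, assuming the relevant RMF records have already been parsed into Python dicts; the field names here are hypothetical, not actual SMF field names.

```python
# Sketch of per-tier back-end activity tracking; assumes RMF 74.x records
# were already parsed into dicts. Field names (pool, tier, reads, writes)
# are hypothetical placeholders, not real SMF field names.
from collections import defaultdict

records = [
    {"pool": "HDP01", "tier": "SSD",   "reads": 120_000, "writes": 40_000},
    {"pool": "HDP01", "tier": "SAS",   "reads":  30_000, "writes": 15_000},
    {"pool": "HDP02", "tier": "NLSAS", "reads":   5_000, "writes":  2_000},
]

activity = defaultdict(int)
for rec in records:
    activity[(rec["pool"], rec["tier"])] += rec["reads"] + rec["writes"]

for (pool, tier), ops in sorted(activity.items()):
    print(f"{pool} / {tier:<6} {ops:>8} back-end ops")
```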

This does not provide information about the dynamic tiering process itself, however. Just like for the other vendors, there is no information per logical volume on what portion of its data is stored on each drive tier. Nor are there any metrics available about the migration activity between the tiers.

Overall, we would like to see the following information in the RMF/SMF recording:

  • Configuration data about replication. Right now, you need to issue console or Business Continuity Manager commands to determine replication status. Since proper and complete replication is essential for any DR usage, the replication status should instead be recorded every RMF interval.
  • Performance information on Universal Replicator, Hitachi’s implementation of asynchronous mirroring. Important metrics include the delay time for the asynchronous replication, the amount of write data yet to be copied, and the activity on the journal disks.
  • ShadowImage, FlashCopy and Business Copy activity metrics. These functions provide logical copies that can involve significant back-end activity which is currently not recorded separately. This activity can easily cause hard-to-identify performance issues, hence it should be reflected in the measurement data.
  • HDP Tiering Policy definitions, tier usage, and background migration activity. From z/OS, you would want visibility into the migration activity, and you’d want to know the policies for a pool and the actual drive tiers that each volume is using.

Unless IBM is going to provide an RMF framework for these functions, the best approach for Hitachi is to create custom SMF records from the mainframe component that Hitachi already uses to control the mainframe-specific functionality.
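To make that concrete, here is a sketch of the kind of content such a custom SMF record could carry for replication status. The record type and all field names are illustrative assumptions, not an existing Hitachi record layout.

```python
# Hypothetical custom SMF record content for replication status; the
# record type (from the user range 128-255) and every field below are
# illustrative assumptions, not an existing Hitachi record layout.
from dataclasses import dataclass

@dataclass
class ReplicationStatusRecord:
    smf_type: int            # user-range SMF record type, e.g. 203
    interval_end: str        # matching RMF interval timestamp
    serial: str              # storage array serial number
    pairs_total: int         # replication pairs defined
    pairs_suspended: int     # pairs not currently mirroring
    journal_backlog_mb: int  # write data not yet copied (Universal Replicator)
    async_delay_ms: int      # current asynchronous replication delay

rec = ReplicationStatusRecord(203, "2017-03-01T10:15", "53041",
                              pairs_total=2048, pairs_suspended=3,
                              journal_backlog_mb=850, async_delay_ms=420)
if rec.pairs_suspended or rec.async_delay_ms > 1000:
    print(f"DR exposure on {rec.serial}: {rec.pairs_suspended} suspended pairs")
```

Writing one such record per RMF interval would let the same reporting tools that process RMF and SMF today also track DR readiness over time.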

It is good to see that Hitachi works to fit its data into the framework defined by RMF for the IBM DS8000. Yet we would like to see more information from the HDS VSP and HP XP P9500 reflected in the RMF or SMF records.

So when considering your next HDS VSP or HP XP P9500 purchase, also discuss the need to manage it with the tools that you use on the mainframe for this purpose: RMF and SMF.  If your commitment to the vendor is significant, they may be responsive.