How to Measure the Impact of a Zero RPO Strategy

By Merle Sadler

Have you ever wondered about the impact of zero RPO on Mainframe Virtual Tape for business continuity or disaster recovery? This post focuses on the impact to jobs of using the Oracle/STK VSM Enhanced Synchronous Replication capability while delivering an RPO of zero.

A recovery point objective, or "RPO", is defined in business continuity planning as the maximum targeted time period in which data might be lost from an IT service due to a major incident.
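To make the definition concrete, here is a toy sketch of why synchronous replication yields an RPO of zero. The function and timestamps are illustrative only, not part of any VSM tooling:

```python
from datetime import datetime, timedelta

def worst_case_data_loss(last_replicated: datetime, failure_time: datetime) -> timedelta:
    """Data written after the last replicated point is lost on failure."""
    return failure_time - last_replicated

# Asynchronous replication every 15 minutes: anything written since the
# last copy is at risk, so the RPO is the replication interval.
last_copy = datetime(2016, 6, 1, 12, 0)
failure = datetime(2016, 6, 1, 12, 14)
print(worst_case_data_loss(last_copy, failure))  # 0:14:00

# Synchronous replication: every write is mirrored before it completes,
# so the replicated point equals the failure point and the loss is zero.
print(worst_case_data_loss(failure, failure))  # 0:00:00
```

The design point is that zero RPO is not achieved by replicating faster, but by making the write and its remote copy a single atomic step.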

Zero RPO - Recovery Point Objective

Continue reading

What Good is a zEDC Card?

By Dave Heggen

Informatics Inc: You Need Our Shrink!

Compression technologies have been looking for a home on z/OS for many years. There have been numerous implementations, all with the same goal: reducing the number of bits needed to store or transmit data. Host-based implementations ultimately trade MIPS for MB; outboard hardware implementations avoid this trade-off.

Examples of Compression Implementations

The first commercial product I remember was from Informatics, named Shrink, sold in the late 1970s and early 1980s. It used host cycles to perform compression, generally achieved about a 2:1 reduction in file size and, in the case of the IMS product, worked through exits so programs didn't require modification. Sharing data compressed this way required expanding it with the same software that compressed it.
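The MIPS-for-MB trade described above is easy to demonstrate with any host-based compressor. This sketch uses Python's `zlib` purely as a stand-in; the repetitive sample records compress far better than the roughly 2:1 typical of real business data, and, as with Shrink, the same library is needed to expand the data again:

```python
import zlib

# Highly repetitive fixed-format records, the kind of data that
# host-based compressors handled well. (Sample data is made up.)
records = b"CUST0001|SMITH     |ACTIVE  |0001200\n" * 1000

# Compressing burns CPU cycles (MIPS) to save storage (MB)...
compressed = zlib.compress(records, level=6)
ratio = len(records) / len(compressed)
print(f"{len(records)} -> {len(compressed)} bytes ({ratio:.1f}:1)")

# ...and reading the data back requires the matching decompressor.
assert zlib.decompress(compressed) == records
```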

Continue reading

Mainframe Capacity “Through the Looking Glass”

By Todd Havekost

With the recent release of “Alice Through the Looking Glass” (my wife is a huge Johnny Depp fan), it seems only appropriate to write on a subject epitomized by Alice’s famous words:

“What if I should fall right through the center of the earth … oh, and come out the other side, where people walk upside down?”  (Lewis Carroll, Alice in Wonderland)

Along with the vast majority of the mainframe community, I had long embraced the perspective that running mainframes at high levels of utilization was essential to operating in the most cost-effective manner. Based on carefully constructed capacity forecasts, our established process involved implementing just-in-time upgrades designed to keep peak utilizations slightly below 90%.

It turns out we’ve all been wrong.  Continue reading

Is Your Car or Mainframe Better at Warning You?

By Jerry Street

 

Imagine driving your car when, without warning, all of the dashboard lights come on at the same time. Yellow lights, red lights. Some blinking, others sounding audible alarms. You would be unable to identify the problem because you'd have too many warnings, too much input, too much display. You'd probably panic!

That's not likely, but if your car's warning systems did operate that way, would they make any sense to you? Conversely, if your car didn't have any dashboard at all, how would you determine if your car was about to have a serious problem like very low oil pressure or coolant? Could you even operate it safely without an effective dashboard? Even the least expensive cars include sophisticated monitoring that interprets metrics into good and bad indicators on the dashboard.

You need a similar dashboard for your z/OS mainframe to alert you. When any part of the infrastructure is at risk of not performing well, you need to know it, and sooner is better. By being warned of a risk in an infrastructure component's ability to handle your peak workload, you can avoid the problem before it impacts production users, or fix what is minor before the impact becomes major. The only problem is that the dashboards and reporting you're using today for your z/OS infrastructure, and most monitoring tools, do not provide this type of early warning.
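The difference between monitoring and early warning can be sketched in a few lines: instead of alarming only after a metric crosses its limit, project the trend and flag it while there is still time to act. All names and thresholds below are hypothetical, not taken from any particular z/OS monitoring product:

```python
def early_warning(samples: list, limit: float, horizon: int = 3) -> bool:
    """Flag a metric whose linear trend would cross `limit`
    within `horizon` future sampling intervals."""
    if len(samples) < 2:
        return False
    # Average change per interval across the window.
    slope = (samples[-1] - samples[0]) / (len(samples) - 1)
    projected = samples[-1] + slope * horizon
    return projected >= limit

# CPU busy rising ~5% per interval: still under 90%, but projected
# to breach it soon, so warn now rather than after the fact.
cpu_busy = [70, 74, 79, 85]
print(early_warning(cpu_busy, limit=90))  # True

# A flat metric at the same current level raises no warning.
print(early_warning([85, 85, 85, 85], limit=90))  # False
```

A threshold-only dashboard would treat both series identically until the limit was actually crossed; the trend projection is what buys you the lead time.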

Continue reading