Finding Hidden Time Bombs in Your SAN Connectivity

By Brett Allison

Do you have any SAN connectivity risks? Chances are you do. Unfortunately, there is no easy way to see them. That’s because seeing the real end-to-end risks from the VMware guest through the SAN fabric to the storage LUN is difficult in practice: it requires correlating many relationships from a variety of sources.

A complete end-to-end picture requires:

  • VMware guests to ESX hosts
  • ESX host initiators to targets
  • ESX hosts and datastores, VM guests and datastores, and ESX datastores to LUNs
  • Zone sets
  • Target ports to host adapters, LUNs, and storage ports

For seasoned SAN professionals, none of this information is very difficult to comprehend. The trick is tying it all together in a cohesive way so you can visualize these relationships and quickly identify any asymmetry.
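To make that concrete, here is a minimal sketch, not IntelliMagic Vision’s implementation, of how such relationships might be joined programmatically to flag asymmetry. The mapping names and sample data are entirely hypothetical:

```python
# Minimal sketch: join VMware/SAN relationships and flag asymmetric paths.
# All mapping names and sample data are hypothetical.

guest_to_host = {"guest01": "esx01", "guest02": "esx02"}

# ESX host initiator -> storage target ports it is zoned to.
host_to_targets = {
    "esx01": ["ctrlA_p1", "ctrlB_p1"],   # zoned to both controllers
    "esx02": ["ctrlA_p2"],               # zoned to only one controller
}

# Storage target port -> controller it belongs to.
target_to_controller = {"ctrlA_p1": "A", "ctrlA_p2": "A", "ctrlB_p1": "B"}

for guest, host in guest_to_host.items():
    controllers = {target_to_controller[t] for t in host_to_targets.get(host, [])}
    if len(controllers) < 2:
        print(f"ASYMMETRY: {guest} on {host} only reaches controllers {sorted(controllers)}")
```

In a real environment these mappings would come from vCenter, the fabric zone sets, and the storage configuration, and the checks would cover far more than controller reachability, but the principle of joining the relationships and looking for imbalance is the same.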

Why is asymmetry important? Let’s look at an actual example:

Continue reading

No Budget for a Storage Management Solution

By Morgan Oats

Every department in every industry has the same problem: how can I stretch my budget to get the necessary work done, make my team more effective, reduce costs, and stay ahead of the curve? This is equally true for performance and capacity planning teams. In many cases, it’s difficult to get budget approval to purchase the right software solution to help accomplish these goals. Management wants to stay under budget while IT is concerned with getting a solution that solves their problems. When trying to get approval for the right solution, it’s important to be able to show how you will get a good return on investment.

Continue reading

Bridging the z/OS Mainframe Performance & Capacity Skills Gap

By Brent Phillips

Many, if not most, organizations that depend on mainframes are experiencing the effects of the mainframe skills gap, or shortage. This gap is the result of a largely baby-boomer workforce that is now retiring without a new generation of experts in place with the same capabilities. At the same time, the scale, complexity, and rate of change in the mainframe environment continue to accelerate. Performance and capacity teams are a mission-critical function, and this skills gap represents a great risk to ongoing operations. It demands both immediate attention and a new, more effective approach to bridging the gap.


Continue reading

How Much Flash Do I Need Part 2: Proving the Configuration

By Jim Sedgwick

Before making a costly Flash purchase, it’s always a good idea to use some science to forecast whether the new storage hardware configuration, and especially the expensive Flash component, will be able to handle your workload. Does your planned purchase have more performance capacity than you need, so that you aren’t getting your money’s worth? Or, even worse, is your planned hardware purchase too little?

In Part 1 of this blog, we discovered that our customer just might be planning to purchase more Flash capacity than their unique workload requires. In part 2 we will demonstrate how we were able to use modeling techniques to further understand how the proposed new storage configuration will handle their current workload. We will also project how this workload will affect response times when the workload increases into the future, as workloads tend to do.
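As a rough illustration of the kind of modeling involved (this is not the model used in the study, and every figure below is invented), a simple queueing approximation shows how response time grows non-linearly as utilization climbs:

```python
# Rough sketch of a queueing-style projection: response time vs. workload growth.
# Service time, current load, and the configuration's limit are all invented.

service_time_ms = 0.5      # average back-end service time per I/O
current_iops = 40_000
max_iops = 100_000         # throughput the proposed configuration could sustain

for growth in (1.0, 1.25, 1.5, 1.75):
    iops = current_iops * growth
    utilization = iops / max_iops
    response_ms = service_time_ms / (1 - utilization)   # simple open-queue approximation
    print(f"growth {growth:4.2f}x  utilization {utilization:4.0%}  "
          f"projected response {response_ms:.2f} ms")
```

Real models account for cache hit ratios, tiering, and device-level concurrency, but even this sketch shows why a configuration that is a little too small can hurt far more than one that is a little too big.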

Continue reading

How Much Flash Do I Need? Part 1

By Jim Sedgwick

Flash, Flash, Flash. It seems that every storage manager has a new favorite question to ask about Flash storage. Do we need to move to Flash? How much of our workload can we move to Flash? Can we afford to move to Flash? Can we afford NOT to move to Flash?

Whether or not Flash is going to magically solve all our problems (it’s not), it’s here to stay. We know Flash has super-fast response times as well as other benefits, but for a little while yet, it’s still going to end up costing you more money. If you subscribe to the notion that it’s good to make sure you only purchase as much Flash as your unique workload needs, read on.

Continue reading

How to Measure the Impact of a Zero RPO Strategy

By Merle Sadler

Have you ever wondered about the impact of zero RPO on Mainframe Virtual Tape for business continuity or disaster recovery? This blog focuses on the impact of jobs using the Oracle/STK VSM Enhanced Synchronous Replication capability while delivering an RPO of 0.

A recovery point objective, or “RPO”, is defined by business continuity planning. It is the maximum targeted time period in which data might be lost from an IT service due to a major incident.
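As a purely hypothetical illustration, the achieved recovery point is simply the gap between the moment of the incident and the last point to which data had been safely replicated:

```python
# Hypothetical example: achieved RPO = incident time minus last safely replicated point.
from datetime import datetime

last_replicated = datetime(2017, 3, 1, 14, 55)   # last data hardened at the DR site
incident_time   = datetime(2017, 3, 1, 15, 10)   # primary site lost

print(f"Data loss window: {incident_time - last_replicated}")   # 0:15:00 -> 15-minute RPO

# With synchronous replication every write is hardened remotely before it completes,
# so last_replicated effectively equals incident_time and the RPO is zero.
```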


Continue reading

The High Cost of “Unpredictable” IT Outages and Disruptions

By Curtis Ryan


It is no secret that IT service outages and disruptions can cost companies anywhere from thousands to millions of dollars per incident – plus significant damage to company reputation and customer satisfaction. In the most high-profile cases, such as recent IT outages at Delta and Southwest Airlines, the costs can soar to over $150 million per incident (Delta Cancels 280 Flights Due to IT Outage). Quite suddenly, IT infrastructure performance can become a CEO-level issue (Unions Want Southwest CEO Removed After IT Outage).

While those kinds of major incidents make the headlines, thousands of lesser-known, but equally disruptive, service level disruptions and outages happen daily in just about every sizeable enterprise.

The costs of these incidents, which often occur daily (such as an unexpected slowdown in the response time of a key business application during prime shift), can add up to a significant cumulative financial impact that may not be readily visible in the company’s accounting system.

Continue reading

What’s Using Up All My Tapes? – Using Tape Management Catalog Data

By Dave Heggen


Most of the data processed by IntelliMagic Vision for z/OS Tape is performance, event, or activity driven, obtained from SMF and the virtual tape hardware. Did you know that in addition to the SMF and TS7700 BVIR data, IntelliMagic Vision can also process information from a Tape Management Catalog (TMC)? Having this type of data available and processing it correctly is critical to answering the question “What’s using up all my tapes?”.

We’ve all set up and distributed scratch lists. This is a necessary (and generally manual) part of maintaining a current tape library, and it does require participation for compliance. Expiration dates, catalog management, and cycle management also have their place in automating the expiration end of the tape volume cycle. This blog is intended to address issues that neither compliance nor automation addresses.

Continue reading

Clogged Device Drain? Use Your Data Snake!

By Lee LaFrese

Have you ever run into high I/O response times that simply defy explanation? You can’t find anything wrong with your storage to explain why performance is degraded. It could be a classic “slow drain device” condition. Unfortunately, you can’t just call the data plumbers to clean it out! What is a storage handyman to do?

Continue reading

SRM: The “Next” As-a-Service

By Brett Allison

You may have seen this article published by Forbes, stating that Storage Resource Management (SRM) is the “Next as-a-Service.” The benefits cited include the simplicity and visibility provided by as-a-service dashboards and the increasing sophistication through predictive analytics.

IntelliMagic Vision is used as-a-Service for some of the world’s largest companies, and has been since 2013. Although we do much more than your standard SRM by embedding deep expert knowledge into our software, SRM, SPM, and ITOA all fall under our umbrella of capabilities. So, while we couldn’t agree more with the benefits of as-a-service offerings for SRM software, the word “Next” in the article seems less applicable. We might even say: “We’ve been doing that for years!”

Continue reading

Noisy Neighbors: Finding Root Cause of Performance Issues in IBM SVC Environments

By Jim Sedgwick

At some point or another, we have probably all experienced noisy neighbors, either at home, at work, or at school. There are just some people who don’t seem to understand the negative effect their loudness has on everyone around them.

Our storage environments also have these “noisy neighbors” whose presence or actions disrupt the performance of the rest of the storage environment. In this case, we’re going to take a look at an SVC all flash storage pool called EP-FLASH_3. Just a few bad LUNs have a profound effect on the I/O experience of the entire IBM Spectrum Virtualize (SVC) environment.
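A minimal sketch of the idea behind spotting such LUNs (the pool name from the post is reused, but the LUN names, numbers, and threshold are invented): rank the LUNs in the pool by their share of its I/O and flag the outliers.

```python
# Hypothetical sketch: rank LUNs in a pool by IOPS share to spot "noisy neighbors".
pool = "EP-FLASH_3"
lun_iops = {"lun_001": 1200, "lun_002": 900, "lun_017": 18500,
            "lun_023": 16700, "lun_031": 750}

total = sum(lun_iops.values())
for lun, iops in sorted(lun_iops.items(), key=lambda kv: kv[1], reverse=True):
    share = iops / total
    flag = "  <-- noisy neighbor?" if share > 0.25 else ""
    print(f"{pool}/{lun}: {iops:>6} IOPS ({share:5.1%}){flag}")
```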

Continue reading

Getting the Most out of zEDC Hardware Compression

By Todd Havekost

One of the challenges our customers tell us they face with their existing SMF reporting is keeping up with emerging z/OS technologies. Whenever a new element is introduced in the z infrastructure, IBM adds raw instrumentation for it to SMF. This is of course very valuable, but the existing SMF reporting toolset, often a custom SAS-based program, subsequently needs to be enhanced to support these new SMF metrics in order to properly manage the new technology.

z Enterprise Data Compression (zEDC) is one of those emerging technologies that is rapidly gaining traction with many of our customers, and for good reasons:

  • It is relatively straightforward and inexpensive to implement.
  • It can be leveraged by numerous widely used access methods and products.
  • It reduces disk storage requirements and I/O elapsed times by delivering good compression ratios.
  • The CPU cost is minimal since almost all of the processing is offloaded to the hardware.

Continue reading

Game Changer for Transaction Reporting

By Todd Havekost

Periodically, a change comes to an industry that introduces a completely new and improved way to accomplish an existing task that had previously been difficult, if not daunting. Netflix transformed the home movie viewing industry by offering video streaming that was convenient, affordable, and technically feasible – a change so far-reaching that it ultimately led to the closing of thousands of Blockbuster stores. We feel that IBM recently introduced a similar “game changer” for transaction reporting for CICS, IMS and DB2.

Continue reading

How to Prevent an “Epic” EMR System Outage

By Curtis Ryan

Protecting the availability of your IT storage is vital for performance, but it can also be critical for life. No one knows this better than the infrastructure departments of major healthcare providers. Application slowdowns or outages in Electronic Medical Record (EMR) or Electronic Health Record (EHR) systems – such as Epic, Meditech, or Cerner – can risk patient care, open hospitals up to lawsuits, and cost hundreds of thousands of dollars.

Nobody working in IT storage in any industry wants to get a call about a storage or SAN service outage, but even minor service disruptions can halt business operations until the root cause of the issue is diagnosed and resolved. That kind of time cannot always be spared in the ‘life and death’ environment in which healthcare providers’ EMR systems operate.

Continue reading

The Circle of (Storage) Life


By Lee LaFrese

Remember the Lion King? Simba starts off as a little cub, and his father, Mufasa, is king. Over time, Simba goes through a lot of growing pains but eventually matures to take over his father’s role despite the best efforts of his Uncle Scar to prevent it. This is the circle of life. It kind of reminds me of the storage life cycle only without the Elton John score!

Hardware Will Eventually Fail and Software Will Eventually Work

New storage technologies are quickly maturing and replacing legacy platforms. But will they be mature enough to meet your high availability, high performance IT infrastructure needs?

Continue reading

Compressing Wisely with IBM Spectrum Virtualize

By Brett Allison

 

Compression of data in an IBM SVC/Spectrum Virtualize environment may be a good way to gain back capacity, but there can be hidden performance problems if compressible workloads are not identified first. Visualizing these workloads is key to determining when and where to use compression successfully. In this blog, we help you identify the right workloads so that you can achieve capacity savings in your IBM Spectrum Virtualize environments without compromising performance.

Today, all vendors have compression capabilities built into their hardware. The advantage of compression is that you need less real capacity to service the needs of your users. Compression reduces your managed capacity, directly reducing your storage costs.
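A back-of-the-envelope sketch of the capacity side of that decision (the workload names, sizes, and ratios below are invented): estimate the capacity saved per workload from an expected compression ratio, and only compress where the ratio justifies it.

```python
# Hypothetical arithmetic: capacity saved if selected workloads are compressed.
workloads = {
    # name: (allocated_TB, expected_compression_ratio)
    "database_logs": (40, 3.0),    # highly compressible
    "file_shares":   (120, 1.8),
    "media_archive": (80, 1.05),   # already compressed -> poor candidate
}

for name, (tb, ratio) in workloads.items():
    saved_tb = tb * (1 - 1 / ratio)
    verdict = "compress" if ratio >= 1.5 else "skip"
    print(f"{name:14s} {tb:4d} TB  ratio {ratio:4.2f}:1  "
          f"saves {saved_tb:5.1f} TB  -> {verdict}")
```

The performance side of the decision, checking that the compressing nodes have the CPU and latency headroom for those workloads, is where the measurement data discussed in the post comes in.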

Continue reading

What Good is a zEDC Card?

By Dave Heggen


Compression technologies have been looking for a home on z/OS for many years. There have been numerous implementations, all with the desired goal of reducing the number of bits needed to store or transmit data. Host-based implementations ultimately trade MIPS for MB; outboard hardware implementations avoid this trade-off.

Examples of Compression Implementations

The first commercial product I remember was Shrink from Informatics, sold in the late 1970s and early 1980s. It used host cycles to perform compression, generally achieved about a 2:1 reduction in file size, and, in the case of the IMS product, worked through exits so programs didn’t require modification. Sharing data compressed in this manner required expanding it with the same software that compressed it.
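To illustrate the MIPS-for-MB trade-off of host-based compression in general terms (this has nothing to do with the original Informatics product), a few lines of Python with the standard zlib library show both the space saved and the host CPU time spent earning it:

```python
# Illustration of host-based compression: space saved vs. host CPU time spent.
import time
import zlib

# Highly repetitive sample "records" (real data compresses far less dramatically).
data = (b"CUSTOMER RECORD 0001 FIELDS AAAA BBBB CCCC " * 50) * 1000

start = time.perf_counter()
compressed = zlib.compress(data, level=6)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"original:   {len(data):>10,} bytes")
print(f"compressed: {len(compressed):>10,} bytes "
      f"({len(data) / len(compressed):.1f}:1 ratio)")
print(f"host cycles spent compressing: {elapsed_ms:.1f} ms")
```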

Continue reading

How’s Your Flash Doing?

By Joe Hyde

Assessing Flash Effectiveness

How’s your Flash doing? Admittedly, this is a bit of a loaded question. It could come from your boss, a colleague, or someone trying to sell you the next storage widget. Since most customers let the vendors’ proprietary storage management algorithms optimize their enterprise storage automatically, you may not have had the time or tools to quantify how your Flash is performing.

The Back-end Activity

First, let’s use the percentage of back-end activity to Flash as the metric to answer this question. Digging a little deeper we can look at back-end response times for Flash and spinning disks (let’s call these HDD for Hard Disk Drives). I’ll also look at the amount of sequential activity over the day to help explain the back-end behavior.
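In rough terms, that metric is just the share of back-end operations served by Flash; the sketch below uses made-up interval data to show how it, and a blended back-end response time, might be computed:

```python
# Hypothetical sketch: share of back-end activity served by Flash vs. HDD per interval.
intervals = [
    # (backend_flash_ops, backend_hdd_ops, flash_resp_ms, hdd_resp_ms)
    (180_000, 220_000, 0.4, 6.5),
    (260_000, 190_000, 0.5, 7.1),
    ( 90_000, 310_000, 0.4, 8.0),
]

for flash_ops, hdd_ops, flash_ms, hdd_ms in intervals:
    pct_flash = flash_ops / (flash_ops + hdd_ops)
    blended_ms = pct_flash * flash_ms + (1 - pct_flash) * hdd_ms
    print(f"Flash serves {pct_flash:5.1%} of back-end ops; "
          f"blended back-end response ~{blended_ms:.1f} ms")
```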

Below are five weekdays’ worth of data from an IBM DS8870 installed at a Fortune 500 company. Although it’s possible to place data statically on Flash storage in the IBM DS8870, in this case IBM’s Easy Tier is used for the automatic placement of data across the Flash and HDD storage tiers. Let’s refer to this scheme generically as auto-tiering. For this IBM DS8870, Flash capacity was roughly 10% of the total storage capacity. Continue reading

Flash Performance in High-End Storage

By Dr. Cor Meenderinck

This is a summary of the white paper of the same title, which won the Best Paper Award at the 2016 CMG imPACt conference. It is a great example of the research we do that leads to the expert knowledge we embed in our products.

Flash-based storage is revolutionizing the storage world. Flash drives can sustain a very large number of operations and are extremely fast. It is for those reasons that manufacturers eagerly embraced this technology for inclusion in high-end storage systems. As the price per gigabyte of flash storage rapidly decreases, experts predict that flash will soon be the dominant medium in high-end storage.

But how well are they really performing inside your high-end storage systems? Do the actual performance metrics when deployed within a storage array live up to the advertised Flash latencies of around 0.1 milliseconds? Continue reading

Beat the Annual MLC Software Price Increase

By Todd Havekost

In August, IBM announced their annual 4% increase in z Systems Monthly License Charge (MLC) software prices. The announcement letter indicated that the timing is designed to give customers sufficient lead time to adjust their budgets for the following year. This cost increase may put additional strain on already tight budgets and force some shops to make unpleasant decisions. We at IntelliMagic think you have a better alternative to the MLC expense increases.

For several months, IntelliMagic has been delivering free MLC Reduction Assessments showing mainframe sites ways MLC expenses can be reduced. These assessments apply the visibility that IntelliMagic Vision provides into SMF data from your environment, exploring several potential areas of opportunity for savings, and for a majority of mainframe sites they have helped identify significant potential MLC reductions.
Continue reading

Which Workloads Should I Migrate to the Cloud?

By Brett Allison

By now, we have just about all heard it from our bosses: “Alright folks, we need to evaluate our workloads and determine which ones are a good fit for the cloud.” After feeling a tightening in your chest, you remember to breathe and ask yourself, “How the heck do I accomplish this task? I know very little about the cloud, and to be honest it seems crazy to move data to the cloud!”

According to this TechTarget article, “A public cloud is one based on the standard cloud computing model, in which a service provider makes resources, such as applications and storage, available to the general public over the internet. Public cloud services may be free or offered on a pay-per-usage model.” Most organizations have private clouds, and some have moved workloads into public clouds. For the purpose of this conversation, I will focus on the public cloud. Continue reading

5 Reasons Why All-Flash Arrays Won’t Magically Solve All Your Problems

By Brett Allison

 
In the last few years, flash storage has turned from very expensive into quite affordable. Vendors that sell all-flash arrays advertise extremely low latencies, and those are indeed truly impressive. So it may feel like all-flash systems will solve all your performance issues. But the reality is that even with game-changing technological advances like flash, the complexity of the entire infrastructure ensures there are still plenty of problems to run into. Continue reading

Mainframe Capacity “Through the Looking Glass”

By Todd Havekost

 

With the recent release of “Alice Through the Looking Glass” (my wife is a huge Johnny Depp fan), it seems only appropriate to write on a subject epitomized by Alice’s famous words:

“What if I should fall right through the center of the earth … oh, and come out the other side, where people walk upside down?”  (Lewis Carroll, Alice in Wonderland)

Along with the vast majority of the mainframe community, I had long embraced the perspective that running mainframes at high levels of utilization was essential to operating in the most cost-effective manner. Based on carefully constructed capacity forecasts, our established process involved implementing just-in-time upgrades designed to ensure peak utilizations remained slightly below 90%.

It turns out we’ve all been wrong.  Continue reading

Achieving Significant Software Cost Reduction on the IBM z13

By Brent Phillips

 

While most mainframe shops have explored how to reduce mainframe software costs, at IntelliMagic we are finding significant latent savings opportunities still exist at even the best run sites.

Since software cost reduction is always important, we thought it would be helpful to pass along a valuable resource from the March 2016 SHARE Conference for mainframe users, which included a new session by Todd Havekost of USAA, a Fortune 100 financial services company.

Mr. Havekost’s presentation, ‘Achieving Significant Capacity Improvements on the IBM z13,’ outlined the results of their software cost optimization initiatives. Part of the story is that some historical capacity planning assumptions no longer apply, and that lowering RNI reduced both MIPS and the cost of IBM Monthly License Charge (MLC) software. The session was recognized as outstanding and won the SHARE Best Session Award. Continue reading

Performance Virtual Reality – Seeking the Truth in Storage Benchmarks

By Lee LaFrese

 

Performance analysts like myself have a love/hate relationship with benchmarks. On the one hand, benchmarks are perceived as a great way to quantify the ‘feeds and speeds’ of storage hardware. However, it is very difficult for benchmarks to be truly representative of how real applications work. Thus, I consider benchmarks a form of ‘virtual reality’; and like virtual reality, benchmarks may seem very realistic, but they can deceive you. I’ve written this article to expand your knowledge of how benchmarks work so you stay rooted in the real world.

Continue reading

How Effective is Your Adaptive Flash Cache?

By Brett Allison

 

Have you ever wondered whether you should enable Adaptive Flash Cache on your HPE 3PAR?

Adaptive Optimization (AO) is HPE 3PAR’s automatic tiering solution. It provides the user with several performance- and capacity-related parameters for influencing the behavior of the automatic tiering, which I covered in detail in a recent whitepaper about HPE 3PAR AO. One of the findings from that study was that in this particular customer’s environment there were too many I/Os on the 450 GB 10K RPM drives and not enough I/Os on the SSDs. As a result, the 450 GB 10K RPM drives were running at nearly 100% busy all the time. My suggestion was to enable Adaptive Flash Cache (AFC) by allocating some of the under-utilized SSD capacity. AFC supplements DRAM with NAND flash devices to cache small (<64 KB), frequently accessed read blocks and ultimately improve read response times. Continue reading
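As a rough follow-on illustration (the sample data and field layout below are invented, not 3PAR output), you could gauge AFC’s potential by counting the read workload that is both small-block (<64 KB) and frequently re-read, since that is approximately what AFC can cache:

```python
# Invented sample: estimate how much read activity Adaptive Flash Cache could help.
reads = [
    # (transfer_kb, read_ops_per_hour) per region -- not real 3PAR output
    (8, 4200), (16, 3100), (32, 900), (64, 500), (256, 1200),
]

total_ops = sum(ops for _, ops in reads)
afc_eligible = sum(ops for kb, ops in reads if kb < 64)   # small, frequently read blocks
print(f"~{afc_eligible / total_ops:.0%} of read activity is small-block and AFC-eligible")
```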

How to Diagnose IBM SVC/Storwize V7000 (Spectrum Virtualize) Replication Performance Issues: Part 2 Diagnostics

By Brett Allison

 

In part 1 of this blog series we talked about how to select the SVC/V7000 replication technology that matches your business requirements, or more likely, your budget.

Now we need to think about how you can monitor and diagnose SVC/V7000 performance issues that may be caused by replication. I run into SVC/V7000 replication issues quite frequently, and have found that not all monitoring and diagnostic tools provide a comprehensive picture of SVC/V7000 replication. Further complicating matters, the nature of the technology you have selected will influence expectations and approach to problem determination.

Continue reading

How to Choose the Best IBM SVC/Storwize V7000 (Spectrum Virtualize) Replication Technology: Part I Introduction

By Brett Allison


Choosing the wrong V7000/SVC replication technology can put your entire availability strategy at risk.

For most customers, there seems to be a bit of a mystery in how replication works. On the surface, it is simple. Data is written to a primary copy and either synchronously or asynchronously copied to a secondary location with the expectation that a loss of data at the primary site would result in minimal data loss and a very minimal recovery effort.

There are several types of replication, and each type has its nuances. Each of these technologies should be evaluated in light of the following business requirements:

1. Recovery Point Objective (RPO): This is the amount of data, expressed in time units (typically minutes), that you will lose should there be a failover to the secondary site.   Continue reading

This is alarming

By Stuart Plotkin

 

Don’t Ignore that Alarm!

Ignore an alarm? Why would someone do that? Answer: because some tools send too many!

To avoid getting overloaded with meaningless alarms, it is important to implement best practices. The first best practice is to implement a software solution that is intelligent. It should:

  • Understand the limitations of your hardware
  • Take into consideration your particular workload
  • Let you know that you are heading for a problem before the problem begins
  • Eliminate useless alarms
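A minimal sketch of the difference that kind of intelligence makes (the rated limit and trend data are invented): a workload-aware alarm fires only when the projected value is heading past what this particular hardware can sustain, rather than on every momentary spike.

```python
# Illustrative sketch: alarm only when the trend points past a rated limit.
# The rated limit and the recent workload samples are invented.
rated_limit_mb_s = 800                          # what this hardware can sustain
recent_throughput = [430, 460, 510, 555, 600]   # MB/s per day for this workload

daily_growth = (recent_throughput[-1] - recent_throughput[0]) / (len(recent_throughput) - 1)
days_to_limit = (rated_limit_mb_s - recent_throughput[-1]) / daily_growth

if days_to_limit < 30:
    print(f"ALERT: projected to reach the rated limit in ~{days_to_limit:.0f} days")
else:
    print("No alarm: comfortably below the limit for at least the next 30 days")
```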

If you have followed this first best practice, congratulations! You are headed in the right direction. Continue reading