Credit Card Transaction Timeouts – IOSQ Analysis

By Joe Hyde

Black Friday is one of the busiest transaction days of the year, and it often seems like an easy payday for the participating companies. But have you ever wondered what performance preparations must be made to accommodate the enormous spike in credit card transaction volume?

A large global bank was struggling because the latest version of their credit card swipe application was failing high-volume load tests. In preparation for Black Friday they needed the application to handle a much higher number of credit card swipes, but periodically their credit card transactions were timing out.

When we became involved, they had already spent weeks and thousands of man-hours on the issue and had incurred significant financial penalties because of the delays. They had spent the past two weeks on day-long conference calls with over 100 people on the phone (often forcing some off the line so others could join), all pointing fingers at one another. The performance team, application team, storage team, and the vendor all blamed one another for the timeouts.

You see, the delays had a significant revenue impact on their business: any credit card approval that timed out had to be sent over a competitor’s exchange, incurring significant fees. After two weeks of conference calls proved unsuccessful in determining the root cause of the problem, they called us in. We took a deep dive into some of the key storage metrics and, with a few days of research and additional data acquisition, were able to provide the key insight into the root cause of the timeouts.

Continue reading

IBM z/OS’s Microscope – GTF

By Joe Hyde

Remember the first time you looked at pond water under a microscope? Who knew such creatures even existed, let alone in a drop of water that appears clear to the naked eye.

IBM z/OS also provides a microscope. It’s called the Generalized Trace Facility, GTF for short. With a GTF I/O summary trace you can look deeply into the inner world of your storage systems. What appears innocuous at the RMF level can have some surprising characteristics when put under the GTF microscope. However, GTF produces so much data that it is not easy to “focus” this microscope and extract the information you need. Fortunately, IntelliMagic has now created software to process and analyze GTF I/O summary traces, so that with IntelliMagic Vision you can focus on the gems hidden in GTF I/O traces.
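To see why interval averages can hide what a trace reveals, consider a toy example. The sketch below is my own illustration, not IntelliMagic Vision, and the record format is hypothetical: it assumes each parsed GTF I/O summary record yields a (timestamp in seconds, response time in milliseconds) pair, and shows how a one-second burst of slow I/Os vanishes into a 15-minute average.

```python
from statistics import mean

def summarize(records, bucket_seconds=1.0):
    """Group (timestamp_s, response_ms) pairs into fine-grained time buckets."""
    buckets = {}
    for ts, resp_ms in records:
        buckets.setdefault(int(ts // bucket_seconds), []).append(resp_ms)
    return {b: (len(v), mean(v), max(v)) for b, v in sorted(buckets.items())}

# 15 minutes of steady 0.5 ms I/Os, one per second...
records = [(float(t), 0.5) for t in range(900)]
# ...plus a one-second burst of 100 slow I/Os at t = 450 s.
records += [(450.0 + i / 100, 25.0) for i in range(100)]

# An RMF-style interval average looks harmless (about 3 ms here),
# while per-second buckets expose the 25 ms burst at second 450.
print("15-minute average: %.2f ms" % mean(r for _, r in records))
for second, (count, avg, mx) in summarize(records).items():
    if mx > 5.0:
        print(f"second {second}: {count} I/Os, avg {avg:.1f} ms, max {mx:.1f} ms")
```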

To illustrate the value of analyzing GTF data, here is a story on how I used this feature to show something otherwise invisible. I recently blogged about DB2 work volumes that were exhibiting high device busy delay times. A DS8870 firmware upgrade eliminated this problem, but only when I analyzed the GTF I/O summary trace using the new feature could I really explain why the firmware upgrade made such a marked improvement. Continue reading

z/OS Petabyte Capacity Enablement

By Dave Heggen

We work with many large z/OS customers and have seen only one requiring more than a petabyte (PB) of primary disk storage in a single sysplex. Additional z/OS environments may exist, but we’ve not yet seen them (if you are that site, we’d love to hear from you!). The larger environments are 400-750 TB per sysplex and growing, so it’s likely those will reach a petabyte requirement soon.

IBM has already stated that the 64K device limitation will not be lifted. Customers requiring more than 64K devices have gotten relief by migrating to larger devices (3390-54 and/or Extended Address Volumes) and by exploiting Multiple Subchannel Sets (MSS) for PAV aliases and for Metro Mirror (PPRC) secondary and FlashCopy target devices.
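As a back-of-the-envelope illustration (my own arithmetic, using the commonly cited 3390 geometry of 56,664 bytes per track and 15 tracks per cylinder; verify against your own configuration), here is how volume size interacts with the 64K device limit:

```python
BYTES_PER_TRACK = 56_664          # 3390 track capacity
TRACKS_PER_CYL = 15
MAX_DEVICES = 65_536              # the 64K device limit

volumes = {
    "3390-9":  10_017,            # cylinders per volume
    "3390-54": 65_520,
    "EAV":     1_182_006,         # current EAV maximum
}

for name, cyls in volumes.items():
    gb_per_volume = cyls * TRACKS_PER_CYL * BYTES_PER_TRACK / 1e9
    pb_total = MAX_DEVICES * gb_per_volume / 1e6
    print(f"{name:8s} {gb_per_volume:8.1f} GB/volume -> {pb_total:7.1f} PB at 64K devices")
```

At 3390-9 sizes, 64K devices tops out well under a petabyte, while 3390-54 and EAV raise the ceiling into the multi-petabyte range, which is why larger volumes, together with MSS to keep aliases and copy targets from consuming base device addresses, form the growth path.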

The purpose of this blog is to discuss strategies for positioning existing and future technologies to allow for this required growth. Continue reading

Less is More – Why 32 HyperPAVs are Better than 128

By Gilbert Houtekamer, Ph.D.

When HyperPAV was announced, the extinction of IOSQ was expected to follow shortly. And indeed, for most customers IOSQ time is more or less an endangered species. Yet in some cases a bit of IOSQ remains, and even queuing on HyperPAV aliases may be observed. The reflexive reaction of a good performance analyst is to interpret the queuing as a shortage that can be addressed by adding more Hypers. But is this really a good idea? Adding aliases will only increase overhead and decrease WLM’s ability to handle important I/Os with priority. Let me explain why.

HyperPAV, like many I/O-related things in z/OS, works on an LCU basis. LCUs are a management concept in z/OS: each LCU can support up to 8 channels for data transfer, and up to 256 device addresses. With HyperPAV, some of the 256 addresses are used for regular volumes (“base addresses”), and some are available as “aliases”. You do not need to use all 256 addresses; it is perfectly valid to define, say, only 64 base addresses and 32 aliases in an LCU.
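To put a rough number on why 32 aliases are usually plenty, the sketch below (my own back-of-the-envelope model, not from the post) treats the alias pool of an LCU as an M/M/c queue and uses the Erlang C formula to estimate the probability that an I/O finds all aliases busy. The arrival rate and service time are assumptions chosen for illustration.

```python
from math import factorial

def erlang_c(c, a):
    """Erlang C: probability an arrival must wait in an M/M/c queue,
    where a = arrival_rate * service_time (erlangs); requires a < c."""
    top = a**c / factorial(c) * (c / (c - a))
    bottom = sum(a**k / factorial(k) for k in range(c)) + top
    return top / bottom

lam, s = 4000, 0.0005    # assumed: 4,000 I/Os per second per LCU, 0.5 ms service time
a = lam * s              # offered load on the alias pool: 2 erlangs
for aliases in (4, 8, 16, 32, 128):
    print(f"{aliases:3d} aliases: P(all busy) = {erlang_c(aliases, a):.1e}")
```

Under these assumed rates, the wait probability is already vanishingly small at 32 aliases, so going to 128 buys essentially nothing while consuming four times the device addresses and diluting WLM’s priority control.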

Continue reading