Impact of z14 on Processor Cache and MLC Expenses

By Todd Havekost

Expense reduction initiatives among IT organizations typically prioritize efforts to reduce IBM Monthly License Charge (MLC) software expense, which commonly represents the single largest line item in the mainframe budget.

On current (z13 and z14) mainframe processors, at least one-third and often more than one-half of all machine cycles are spent waiting for instructions and data to be staged into level one processor cache so that they can be executed. Since such a significant portion of CPU consumption is dependent on processor cache efficiency, awareness of your key cache metrics and the actions you can take to improve cache efficiency are both essential.
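To make that concrete, the back-of-the-envelope sketch below estimates the share of cycles spent waiting on cache from CPU MF (SMF 113) style figures. The counter values and the "estimated finite CPI" input are hypothetical illustrations, and the calculation is a simplification rather than IBM's published, model-specific formulas.

```python
# Illustrative sketch only: estimating the share of CPU cycles spent waiting
# for cache, given CPU MF-style measurements. The sample numbers below are
# hypothetical, and the calculation simplifies IBM's model-specific formulas.

def cache_wait_share(cycles, instructions, estimated_finite_cpi):
    """Return total CPI and the fraction of cycles attributable to
    sourcing instructions and data from beyond level one cache."""
    total_cpi = cycles / instructions
    # "Finite CPI" approximates the cycles per instruction lost to L1 misses;
    # dividing by total CPI gives the share of all cycles spent waiting.
    return total_cpi, estimated_finite_cpi / total_cpi

# Hypothetical one-interval figures for a single LPAR:
cpi, waiting = cache_wait_share(
    cycles=5.2e12,             # total CPU cycles in the interval
    instructions=1.3e12,       # instructions completed
    estimated_finite_cpi=1.9,  # estimated CPI lost to L1 cache misses
)
print(f"CPI: {cpi:.2f}, share of cycles waiting on cache: {waiting:.0%}")
```

With these assumed inputs the result is a CPI of 4.0 and roughly 48% of cycles spent waiting on cache, in line with the one-third to one-half range noted above.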

This is the final article in a four-part series on this vital but often overlooked subject area. (You can read Article 1, Article 2, and Article 3.) It examines the changes in processor cache design for the z14 processor model. The z14 reflects evolutionary changes in processor cache from the z13, in contrast to the revolutionary changes that occurred between the zEC12 and z13. The z14 cache design changes are particularly intended to help workloads that place high demands on processor cache; these “high RNI” workloads frequently experienced a negative impact when migrating from the zEC12 to the z13.

Continue reading

Getting the Most out of zEDC Hardware Compression

By Todd Havekost

One of the challenges our customers tell us they face with their existing SMF reporting is keeping up with emerging z/OS technologies. Whenever a new element is introduced in the z infrastructure, IBM adds raw instrumentation for it to SMF. This is of course very valuable, but the existing SMF reporting toolset, often a custom SAS-based program, subsequently needs to be enhanced to support these new SMF metrics in order to properly manage the new technology.

z Enterprise Data Compression (zEDC) is one of those emerging technologies that is rapidly gaining traction with many of our customers, and for good reasons:

  • It is relatively straightforward and inexpensive to implement.
  • It can be leveraged by numerous widely used access methods and products.
  • It reduces disk storage requirements and I/O elapsed times by delivering good compression ratios (see the sizing sketch after this list).
  • The CPU cost is minimal, since almost all of the processing is offloaded to the hardware.
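As a rough illustration of that storage benefit, the sketch below converts a compression ratio into a disk-space reduction; the 4:1 ratio and 10 TB starting point are assumed numbers, not measured zEDC results.

```python
# Back-of-the-envelope sizing sketch. The 4:1 ratio and 10 TB figure are
# assumptions for illustration, not measured zEDC results.

def compressed_footprint(original_gb, compression_ratio):
    """Return the compressed size (GB) and percent saved for a ratio
    expressed as n:1 (e.g. 4.0 means 4:1)."""
    compressed_gb = original_gb / compression_ratio
    percent_saved = (1 - compressed_gb / original_gb) * 100
    return compressed_gb, percent_saved

size_gb, saved = compressed_footprint(original_gb=10_000, compression_ratio=4.0)
print(f"10 TB compresses to {size_gb:,.0f} GB, a {saved:.0f}% reduction")
```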

Continue reading