Impact of z14 on Processor Cache and MLC Expenses

By Todd Havekost

Expense reduction initiatives among IT organizations typically prioritize efforts to reduce IBM Monthly License Charge (MLC) software expense, which commonly represents the single largest line item in the mainframe budget.

On current (z13 and z14) mainframe processors, at least one-third and often more than one-half of all machine cycles are spent waiting for instructions and data to be staged into level one processor cache so that they can be executed. Since such a significant portion of CPU consumption is dependent on processor cache efficiency, awareness of your key cache metrics and the actions you can take to improve cache efficiency are both essential.
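To see why this matters, a bit of back-of-the-envelope arithmetic helps. The sketch below uses made-up counter values, not actual SMF 113 fields or IBM's published formulas, to show how an observed cycles-per-instruction figure translates into a share of cycles spent waiting on cache.

```python
# Illustrative arithmetic only: the counter values are hypothetical and the
# "infinite cache" CPI is an assumption, not an exact SMF 113 calculation.

cycles = 1_000_000_000        # CPU cycles consumed in a measurement interval
instructions = 400_000_000    # instructions completed in that interval
infinite_cache_cpi = 1.2      # assumed cycles/instruction if every fetch hit L1

observed_cpi = cycles / instructions
waiting_cycles = cycles - instructions * infinite_cache_cpi
waiting_share = waiting_cycles / cycles

print(f"Observed CPI:            {observed_cpi:.2f}")
print(f"Cycles waiting on cache: {waiting_share:.0%}")
# With these sample numbers, just over half of all cycles are spent waiting
# for instructions and data to be staged into level 1 cache.
```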

This is the final article in a four-part series focusing on this vital but often overlooked subject area. (You can read Article 1, Article 2, and Article 3.) This article examines the changes in processor cache design for the z14 processor model. The z14 reflects evolutionary changes in processor cache from the z13, in contrast to the revolutionary changes that occurred between the zEC12 and z13. The z14’s cache design changes are aimed particularly at helping workloads that place high demands on processor cache; these “high RNI” workloads frequently experienced a negative impact when migrating from the zEC12 to the z13.

Continue reading

Optimizing MLC Software Costs with Processor Configurations

By Todd Havekost

This is the third article in a four-part series focusing largely on a topic that has the potential to generate significant cost savings, but which has not received the attention it deserves, namely processor cache optimization. (Read part one here and part two here.) Without an understanding of the vital role processor cache plays in CPU consumption and clear visibility into the key cache metrics in your environment, significant opportunities to reduce CPU consumption and MLC expense may not be realized.

This article highlights how optimizing physical hardware configurations can substantially improve processor cache efficiency and thus reduce MLC costs. Three approaches to maximizing work executing on Vertical High (VH) logical CPs through increasing the number of physical CPs will be considered. Restating one of the key findings of the first article, work executing on VHs optimizes processor cache effectiveness, because its 1-1 relationship with a physical CP means it will consistently access the same processor cache.
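As a rough illustration of why adding physical CPs helps, the sketch below models the HiperDispatch share calculation in a very simplified way. The LPAR weight and CP counts are hypothetical, and the real PR/SM polarization algorithm has additional rules (for example, around Vertical Medium assignment) that this ignores.

```python
# Simplified model of HiperDispatch vertical polarization; treat it as a
# rough illustration, not the actual PR/SM algorithm.
import math

def guaranteed_share(lpar_weight, total_weight, physical_cps):
    """An LPAR's guaranteed share of the shared physical CPs."""
    return lpar_weight / total_weight * physical_cps

def approx_vertical_highs(share):
    """Roughly, the whole-number portion of the share becomes VH logical CPs."""
    return math.floor(share)

# Hypothetical LPAR holding 60% of the total weight:
for physical_cps in (8, 10, 12):
    share = guaranteed_share(600, 1000, physical_cps)
    print(f"{physical_cps} physical CPs -> share {share:.1f} -> ~{approx_vertical_highs(share)} VHs")
# Adding physical CPs raises the whole-number share, so more of the LPAR's
# work runs on VHs that always dispatch to the same physical CP and cache.
```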

Continue reading

Reduce MLC Software Costs by Optimizing LPAR Configurations

By Todd Havekost

A prominent theme among IT organizations today is an intense focus on expense reduction. For mainframe departments, this routinely involves seeking to reduce IBM Monthly License Charge (MLC) software expense, which commonly represents the single largest line item in their budget.

This is the second article in a four-part series focusing largely on a topic that has the potential to generate significant cost savings but which has not received the attention it deserves, namely processor cache optimization. (Read part one here.) Without an understanding of the vital role processor cache plays in CPU consumption and clear visibility into the key cache metrics in your environment, significant opportunities to reduce CPU consumption and MLC expense may not be realized.

This article focuses on changes to LPAR configurations that can improve cache efficiency, as reflected in lower RNI values. The two primary aspects covered are optimizing LPAR topology and increasing the amount of work executing on Vertical High (VH) CPs through optimizing LPAR weights. Restating one of the key findings of the first article, work executing on VHs optimizes processor cache effectiveness, because its 1-1 relationship with a physical CP means it will consistently access the same processor cache.
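The same simplified share arithmetic also illustrates the weight lever. The LPAR names, weights, and CP count below are hypothetical, and this again glosses over the finer points of how PR/SM actually assigns Vertical Mediums.

```python
# Simplified view only: shifting weight toward a busy LPAR converts
# fractional share into whole VHs.
import math

PHYSICAL_CPS = 10
for weights in ({"PROD": 550, "TEST": 450}, {"PROD": 700, "TEST": 300}):
    total = sum(weights.values())
    shares = {name: w / total * PHYSICAL_CPS for name, w in weights.items()}
    vhs = {name: math.floor(s) for name, s in shares.items()}
    print(weights, "->", {n: f"share {shares[n]:.1f}, ~{vhs[n]} VHs" for n in weights})
# Raising PROD's weight from 550 to 700 lifts its share from 5.5 to 7.0 CPs,
# so more of its work is guaranteed a dedicated physical CP and its cache.
```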

Continue reading

Lower MLC Software Costs with Processor Cache Optimization

By Todd Havekost

It is common in today’s challenging business environments to find IT organizations intensely focused on expense reduction. For mainframe departments, this typically means a high-priority initiative to reduce IBM Monthly License Charge (MLC) software expense, which usually represents the single largest line item in their budget.

This article begins a four-part series focusing largely on a topic that has the potential to generate significant cost savings but which has not received the attention it deserves, namely processor cache optimization. The potential to reduce CPU consumption, and thus MLC expense, through optimizing processor cache is unlikely to be fully realized unless you understand the underlying concepts and have clear visibility into the key metrics in your environment.

Subsequent articles in the series will focus on ways to improve cache efficiency through optimizing LPAR weights and processor configurations, and finally on the value of additional visibility into the data commonly viewed only through the IBM Sub-Capacity Reporting Tool (SCRT) report. Insights into the potential impact of various tuning actions will be brought to life with numerous real-life case studies, drawn from analysis of detailed processor cache data from 45 sites across five countries.

Processor cache utilization plays a significant role in CPU consumption for all z processors, but that role is more prominent than ever on z13 and z14 models. Achieving the rated 10% capacity increase on a z13 processor versus its zEC12 predecessor (despite a clock speed that is 10% slower) is very dependent on effective utilization of processor cache. This article will begin by introducing the key processor cache concepts and metrics that are essential for understanding the vital role processor cache plays in CPU consumption.
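A quick worked example, using only the rounded figures quoted above, shows how much of that capacity gain has to come from getting more work done per cycle rather than from the clock.

```python
# Back-of-the-envelope arithmetic using the rounded figures quoted above;
# actual capacity ratings come from IBM's LSPR benchmarks.
clock_ratio = 0.90      # z13 clock roughly 10% slower than zEC12
capacity_ratio = 1.10   # rated capacity roughly 10% higher

# Capacity is roughly clock speed times work completed per cycle, so:
work_per_cycle_ratio = capacity_ratio / clock_ratio
print(f"Work per cycle must improve by ~{(work_per_cycle_ratio - 1):.0%}")
# Roughly 22% more work per cycle -- much of which has to come from keeping
# the pipeline fed, i.e. from effective use of the larger processor caches.
```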

Continue reading

DB2 for z/OS Buffer Pool Simulation

By Jeff Berger

For many years the price of z/OS memory has been decreasing, and IBM has been pushing the idea of large amounts of memory. DB2® for z/OS has virtually eliminated its virtual storage constraints.

DB2 performs best when it has lots of memory (that is, real memory). Memory is still not free, but large amounts of it can save money by reducing CPU consumption while also reducing DB2 transaction response time. More memory also improves DB2 availability in cases where it becomes necessary to dump the DB2 address space: if the dump causes paging, it takes longer, and DB2 is not available during that time.

DB2 Buffer Pool Analyzer for z/OS

The first thing that comes to mind for the use of large memory is to increase the size of DB2 buffer pools. This can reduce the number of synchronous I/Os by increasing the buffer hit ratio. Furthermore, reducing the number of synchronous I/Os will reduce CPU consumption, because I/Os cost CPU time.
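The kind of sizing arithmetic involved can be sketched as follows. Every number here (request rate, hit ratios, CPU cost per synchronous I/O) is an assumption chosen for illustration, not a measurement.

```python
# Rough sizing sketch with made-up numbers: the CPU cost per synchronous
# read and the hit-ratio improvement are assumptions, not measured values.
reads_per_second = 5_000     # read requests driven by getpages
hit_ratio_before = 0.90      # buffer pool hit ratio today
hit_ratio_after = 0.96       # assumed hit ratio after enlarging the pool
cpu_us_per_sync_io = 30      # assumed CPU microseconds per synchronous read

sync_ios_avoided = reads_per_second * (hit_ratio_after - hit_ratio_before)
cpu_seconds_saved_per_hour = sync_ios_avoided * cpu_us_per_sync_io * 3600 / 1_000_000

print(f"Sync I/Os avoided per second: {sync_ios_avoided:.0f}")
print(f"CPU seconds saved per hour:   {cpu_seconds_saved_per_hour:.0f}")
# 300 fewer sync I/Os per second at ~30 CPU microseconds each is roughly
# 32 CPU seconds per hour -- before counting the response time improvement.
```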

Continue reading