
How data layout affects memory performance


The mental model most people have of how computer memory (aka Random Access Memory or RAM) operates is inaccurate. The assumption that any access to any byte in memory has the same low cost does not hold on modern processors. In this article, I’ll explain what developers need to know about modern memory and how data layout can affect performance.

Current memory is starting to look more like an extremely fast block storage device. Rather than reading or writing individual bytes, the processor is reading or writing groups of bytes that fill a cache line (commonly 32 to 128 bytes in size). An access to memory requires well over a hundred clock cycles, two orders of magnitude slower than executing an instruction on the processor. Thus, programmers might reconsider the data structures used in their program if they are interested in obtaining better performance.
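
On Linux with glibc you can query the cache line size directly. Below is a minimal sketch; note that _SC_LEVEL1_DCACHE_LINESIZE is a glibc extension rather than standard POSIX, so it may be unavailable on other systems. On most current x86-64 processors it reports 64 bytes.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* _SC_LEVEL1_DCACHE_LINESIZE is a glibc extension; it may
           return 0 or -1 where the value is unknown. */
        long line = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);
        if (line > 0)
            printf("L1 data cache line size: %ld bytes\n", line);
        else
            printf("Cache line size not reported by sysconf\n");
        return 0;
    }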



Latency is not improving

The first thing to note about memory is that the latency of accessing main memory is not improving. Much of the bandwidth improvement seen on processors comes from transferring larger groups of bytes in a single transaction. In the 1980s, processors typically transferred a few (4 or fewer) bytes at a time. Current processors move much larger groups of 32 to 128 bytes per memory operation, the amount of data that fits in a single cache line. Every memory access incurs some delay due to the setup time for selecting the location in memory being accessed. Spreading that latency across a larger group of bytes reduces the cost per byte. This is analogous to a bus: it is no faster than a car, but its greater carrying capacity moves more people between two points in a given amount of time.
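
As a rough back-of-the-envelope illustration (the numbers are assumed for the sake of the arithmetic, not measurements): if the fixed setup cost of a memory transaction is on the order of 100 ns, then moving 4 bytes per transaction costs about 25 ns per byte, while moving a 64-byte cache line in one transaction costs under 2 ns per byte. The latency is the same; it is simply amortized over sixteen times as much data.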

However, these wider memory operations assume that all the data read or written is actually used by the processor. If the processor fetches a 64-byte chunk of memory, modifies only one byte, and then stores that changed byte back to memory, more than 98% of the memory bandwidth has been wasted. Data structures may be padded for data alignment, as mentioned in "How to avoid wasting megabytes of memory a few bytes at a time." Those unused bytes used to align fields in the data structure contribute to the wasted bandwidth every time the data structure is loaded from memory or stored to memory. Organizing data structures to avoid padding for alignment can lead to higher effective bandwidth.
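
The effect of field ordering is easy to demonstrate. The sketch below assumes a typical 64-bit ABI (such as x86-64 System V), where a uint64_t requires 8-byte alignment; exact padding is compiler- and platform-dependent.

    #include <stdio.h>
    #include <stdint.h>

    /* Fields ordered carelessly: the compiler inserts padding after
       each small field to align the following larger one. */
    struct padded {
        uint8_t  a;   /* 1 byte + 7 bytes of padding */
        uint64_t b;   /* 8 bytes                     */
        uint8_t  c;   /* 1 byte + 7 bytes of padding */
    };                /* typically 24 bytes total    */

    /* Same fields, largest first: padding shrinks to the minimum
       needed to round the struct up to its alignment. */
    struct reordered {
        uint64_t b;   /* 8 bytes                     */
        uint8_t  a;   /* 1 byte                      */
        uint8_t  c;   /* 1 byte + 6 bytes of padding */
    };                /* typically 16 bytes total    */

    int main(void)
    {
        printf("padded:    %zu bytes\n", sizeof(struct padded));
        printf("reordered: %zu bytes\n", sizeof(struct reordered));
        return 0;
    }

An array of the reordered structure moves a third fewer bytes across the memory bus for the same payload.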

The processor may also attempt to hide memory access latency by speculatively fetching data. The hardware analyzes the sequence of memory accesses and detects accesses that have a constant number of bytes between them. Once these strides through memory are detected, the processor starts prefetching the memory before the code actually requests it, which reduces the latency observed by the code. For this approach to work, the access patterns in the code need to be very simple, such as touching every nth element of an array. The memory latency of random accesses due to pointer chasing through linked lists will not be reduced by the prefetch mechanisms.
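
The contrast is easy to see in a sketch. In the first function below, the addresses advance by a constant stride that the hardware prefetcher can detect; in the second, the address of each node is known only after the previous node has been loaded, so every cache miss pays the full memory latency.

    #include <stddef.h>

    /* Sequential walk: addresses advance by a constant stride
       (sizeof(long)), a pattern the hardware prefetcher detects. */
    long sum_array(const long *a, size_t n)
    {
        long sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += a[i];
        return sum;
    }

    struct node {
        long value;
        struct node *next;
    };

    /* Pointer chasing: the next address depends on the current load,
       so the prefetcher has nothing to predict. */
    long sum_list(const struct node *head)
    {
        long sum = 0;
        for (const struct node *p = head; p != NULL; p = p->next)
            sum += p->value;
        return sum;
    }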

Vector-style instructions

Newer processors include vector-style instructions such as Advanced Vector Extensions (AVX), which can perform four or eight operations in parallel. However, to use these instructions, the operands need to be groups of adjacent elements in an array. Using an Array of Structures (AoS) may prevent using vector-style instructions on the fields of multiple structures. Developers may want to use a Structure of Arrays (SoA) instead to get a data layout that allows the use of the vector instructions. Keeping like elements together in arrays can also reduce padding in the data, resulting in more effective memory bandwidth.
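
A minimal sketch of the two layouts, using a hypothetical particle structure (the names and the three-float payload are illustrative only):

    #include <stddef.h>

    #define N 1024

    /* Array of Structures (AoS): the x, y, z of one particle are
       adjacent, but consecutive x values are 12 bytes apart, which
       hinders loading several of them into one vector register. */
    struct particle {
        float x, y, z;
    };
    static struct particle aos[N];

    /* Structure of Arrays (SoA): all x values are contiguous, so a
       loop over them is a straightforward candidate for AVX-style
       auto-vectorization. */
    static struct {
        float x[N];
        float y[N];
        float z[N];
    } soa;

    /* Strided accesses: 12-byte gaps between the x fields. */
    void scale_x_aos(float factor)
    {
        for (size_t i = 0; i < N; i++)
            aos[i].x *= factor;
    }

    /* Unit-stride accesses over adjacent elements: vectorizes cleanly. */
    void scale_x_soa(float factor)
    {
        for (size_t i = 0; i < N; i++)
            soa.x[i] *= factor;
    }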

Given the way the processor treats memory, developers might improve the performance of memory-intensive applications by designing their data structures more like files on a block device:

  1. Arrange the layout to minimize reading/writing useless bytes (padding for alignment)
  2. Minimize random accesses
  3. Access elements with a predictable stride, ideally sequentially (stride 1); see the sketch after this list
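
As a sketch of points 2 and 3, compare the two traversal orders of a two-dimensional array below. C stores arrays in row-major order, so only the first loop touches memory sequentially; the second jumps by an entire row (8 KB here) on every access.

    #include <stddef.h>

    #define ROWS 1024
    #define COLS 1024

    /* Column index innermost: stride-1, prefetch-friendly. */
    double sum_row_major(double (*m)[COLS])
    {
        double sum = 0.0;
        for (size_t i = 0; i < ROWS; i++)
            for (size_t j = 0; j < COLS; j++)
                sum += m[i][j];
        return sum;
    }

    /* Row index innermost: the stride is COLS * sizeof(double) =
       8192 bytes, so each access lands on a different cache line. */
    double sum_column_major(double (*m)[COLS])
    {
        double sum = 0.0;
        for (size_t j = 0; j < COLS; j++)
            for (size_t i = 0; i < ROWS; i++)
                sum += m[i][j];
        return sum;
    }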

For additional details on optimizing memory performance, refer to Ulrich Drepper's "What Every Programmer Should Know About Memory," which offers a great deal of useful information about how memory actually works. The Intel® 64 and IA-32 Architectures Optimization Reference Manual also goes into great detail on how to structure code to obtain better performance from memory.

