pmem.io: libvmemcache - buffer-based LRU cache
source link: https://pmem.io/2019/05/07/libvmemcache.html
Introduction
libvmemcache is a volatile key-value store optimized for operating on NVDIMM-based space. However, it can work with any filesystem, whether it resides in memory (tmpfs) or on any storage device. Note, however, that libvmemcache will be significantly less performant when backed by a storage device other than an NVDIMM.
libvmemcache is an embeddable and lightweight in-memory caching solution. It is designed to fully take advantage of large capacity memory, such as persistent memory with DAX through memory mapping in an efficient and scalable way.
Creation of cache
At the very beginning you have to call vmemcache_new() in order to create a new vmemcache instance:
cache = vmemcache_new();
It creates an empty unconfigured vmemcache instance initialized with the default values.
Next, you can configure the cache parameters to change their default values.
You can set the size of the cache - it will be rounded up to a whole page size (4KB on x86):
vmemcache_set_size(cache, new_size);
You can set the block size of the cache (256 bytes minimum; strongly recommended to be a multiple of 64 bytes). If the cache is backed by a non-byte-addressable medium, the block size should be 4096 bytes (or a multiple thereof), or performance will suffer greatly.
vmemcache_set_extent_size(cache, block_size);
You can also set the replacement policy that defines what happens when an element is inserted into a full cache:
vmemcache_set_eviction_policy(cache, repl_p);
- VMEMCACHE_REPLACEMENT_NONE - an insert into a full cache will fail (only manual eviction is possible)
- VMEMCACHE_REPLACEMENT_LRU - the least recently accessed entry will be evicted to make space when needed
Then you have to call the vmemcache_add() function in order to associate the cache with a backing storage medium at the given path:
vmemcache_add(cache, "/path/to/backing/storage/medium");
which may be a /dev/dax device or a directory on a regular filesystem (which may or may not be mounted with -o dax, either on persistent memory or any other backing storage).
The cache is now ready to be used.
Use of cache
There are three basic operations on the cache.
You can put a new element into the cache using the vmemcache_put() function:
vmemcache_put(cache, key, key_size, value, value_size);
It inserts the given (key, value) pair into the cache.
You can get an element from the cache using the vmemcache_get() function:
vmemcache_get(cache, key, key_size, vbuf, vbufsize, offset, vsize);
It searches for an entry with the given key.
You can also evict an element from the cache using the vmemcache_evict() function:
vmemcache_evict(cache, key, ksize);
It removes the entry with the given key from the cache.
Callbacks
You can register a hook to be called during eviction or after a cache miss, using vmemcache_callback_on_evict() or vmemcache_callback_on_miss(), respectively:
vmemcache_callback_on_evict(cache, callback_on_evict, arg);
vmemcache_callback_on_miss(cache, callback_on_miss, arg);
The extra arg will be passed to your function.
The ‘on evict’ callback function is called when an entry is being removed from the cache. The function cannot prevent the eviction, but the entry remains available for queries until the callback returns. The thread that triggered the eviction is blocked in the meantime.
The ‘on miss’ callback function is called when a get query fails, providing an opportunity to insert the missing key. If the callback puts that specific key, the get will return its value even if it does not fit into the cache.
Miscellaneous
It is possible to obtain statistics about the cache using the vmemcache_get_stat() function:
vmemcache_get_stat(cache, statistic, value, value_size);
The statistic can be one of the following:
- VMEMCACHE_STAT_PUT – count of puts
- VMEMCACHE_STAT_GET – count of gets
- VMEMCACHE_STAT_HIT – count of gets that were served from cache
- VMEMCACHE_STAT_MISS – count of gets that were not present in cache
- VMEMCACHE_STAT_EVICT – count of evictions
- VMEMCACHE_STAT_ENTRIES – current number of cache entries
- VMEMCACHE_STAT_DRAM_SIZE_USED – current amount of DRAM used
- VMEMCACHE_STAT_POOL_SIZE_USED – current usage of data pool
- VMEMCACHE_STAT_HEAP_ENTRIES – current number of heap entries
Statistics are enabled by default. They can be disabled at compile time by setting the STATS_ENABLED CMake option of the vmemcache library to OFF.
A human-friendly description of the last error can be retrieved using the vmemcache_errormsg() function:
vmemcache_errormsg();
Delete cache
At the end you have to free the structures associated with the cache:
vmemcache_delete(cache);
The complete example code can be found in the vmemcache repository.
Documentation
The complete libvmemcache manual can be found at pmem.io.
The contents of this web site and the associated GitHub repositories are BSD-licensed open source.