This article shows how to implement an LRU (least-recently-used) cache using the C++ standard library. Below is a simplified, template-based LRU cache that is ready for immediate use.
Introduction
An LRU cache is a key-value container that provides caching with a least-recently-used replacement policy, a useful performance-optimization tool in any programming language. The LRU algorithm discards the least recently used items first. This requires keeping track of what was used and when, which is expensive if one wants to guarantee that the algorithm always discards the least recently used item. General implementations of this technique therefore maintain a record of how recently each item was used.
The Need for a Cache
Generally, the need for caching arises to preserve the result of a heavy or expensive operation so that it can be reused without performing the same operation again and again. An expensive task could be anything that requires complex calculation or performs time-consuming work, e.g. an I/O operation. If the total number of such results dealt with over the lifetime of the system does not consume excessive memory, it may suffice to store them in a simple key-value container (e.g. std::map), with the key being the input to the expensive function and the value being the result.
Limiting Cache Size
Applications that use such a caching mechanism can run into excessive memory usage over time, either because they place no limit on the number of stored items or because the items they store are themselves large.
To avoid such problems, it is worth bounding the size of the cache statically. For example, an online media player that displays image files (album art) while the user browses content metadata might include a cache holding images of several sizes, so that recently viewed images need not be reloaded when the user revisits them. By taking the size in bytes of the stored images into account, the maximum total memory consumed by cached objects can be controlled, and the number of images cached adjusts automatically in response to their aggregate size.
Implementation
Standard Library based LRU-cache:
#ifndef LRU_CACHE_HPP
#define LRU_CACHE_HPP
#include <list>
#include <unordered_map>
#include <cassert>
#include <iostream>
#include <cstddef> // std::size_t
// Class template representing an LRU cache of generic key-value pairs.
template <typename Key, typename Value, std::size_t CacheSize>
class LRUCache
{
public:
/**
* @brief Constructor.
*
*/
LRUCache()
{
}
/**
* @brief Destructor.
*
*/
~LRUCache()
{
cache.clear();
lruList.clear();
}
/**
* @brief Check if the key exists in a cache.
* @param key Key
* @return true if key exists otherwise false.
*
*/
bool exist(const Key& key) const
{
return cache.find(key) != cache.end();
}
/*!
* @brief Clears the cache.
*
* Removes all cache entries and drops all usage count data and so on.
*
*/
void clear()
{
cache.clear();
lruList.clear();
}
/**
* @brief Fetches the cached value if the key exists in the cache.
*
* If the given key exists, the key and its associated value become
* the most recently used entry (the pair moves to the end of the list).
*
* @param key Key to look up.
* @return the value stored for the given key.
*
*/
Value fetch(const Key& key)
{
// method should be called only on a cache hit
assert(cache.find(key) != cache.end());
// update access record by moving the accessed key to the back of the list
touch(key);
print(); // debug: dump cache contents in LRU order
// return the retrieved value
return cache[key].first;
}
/**
* @brief Adds the given key-value pair to the cache.
*
* If the key is already cached, its value is updated in place.
* Either way, the entry becomes the most recently used one.
*
* @param key Key
* @param value Value
*
*/
void add(const Key& key, const Value& value)
{
if (cache.find(key) != cache.end())
{
// key already cached: update the value in place
cache[key].first = value;
}
else
{
// cache miss: insert, evicting the LRU entry if necessary
insert(key, value);
}
touch(key);
}
private:
// Debug helper: prints cached values from most to least recently used.
// Assumes Value is streamable with operator<<.
void print()
{
typename LRUList::reverse_iterator it = lruList.rbegin();
for (; it != lruList.rend(); ++it)
{
std::cout << cache[*it].first << std::endl;
}
std::cout << "------------------" << std::endl;
}
/*!
* @brief Marks the entry for the given key as most recently used.
*
* Touches an entry in the cache, simulating access to it: the key is
* spliced to the back of the usage list, reducing its risk of eviction.
*
* @param key key to touch
*
*/
void touch(const Key& key)
{
if (!lruList.empty())
{
lruList.splice(lruList.end(), lruList, cache[key].second);
}
}
/*!
* @brief Creates a key-value entry linked to a usage record and adds it
* to the cache, making space first if necessary. The new key is recorded
* as the most recently used one.
*
* @param key key to insert.
* @param value value to insert.
*
*/
void insert(const Key& key, Value value)
{
// make space if necessary
if (lruList.size() == CacheSize)
{
erase();
}
// record key as most-recently-used key
typename std::list< Key >::iterator it = lruList.insert(lruList.end(), key);
// create key-value entry, linked to the usage record
cache.insert(std::make_pair(key, std::make_pair(value, it)));
}
/*!
* @brief Identifies and removes the least recently used entry
* from the cache.
*
*/
void erase()
{
assert(!lruList.empty());
// identify least-recently-used key
const typename UnorderedCache::iterator it = cache.find(lruList.front());
//erase both elements to completely purge record
cache.erase(it);
lruList.pop_front();
}
private:
// Key access history; most recently used at the back
typedef std::list<Key> LRUList;
// Maps each key to its value and its position in the access history
typedef std::unordered_map<Key, std::pair<Value, typename LRUList::iterator> > UnorderedCache;
LRUList lruList;
UnorderedCache cache;
};
#endif
Design
In the above class, the key type, the value type, and the maximum size are passed as parameters to the LRUCache class template.
Users of the C++ Standard Library needing an LRU-replacement cache generally gravitate towards std::map or std::unordered_map, because of their support for efficient keyed access (O(log n) and O(1) average complexity, respectively). The problem then is how to implement the eviction strategy.
The most obvious solution is to use a std::list<Key> to track usage order.
In the example above, the eviction mechanism is implemented using a std::list that keeps the key of the least recently used item at the head and the key of the most recently used item at the tail. Whenever an item is accessed, its key is moved to the tail of the list. When the number of items in the cache reaches the capacity, the item at the head of the list is evicted. Notice that the (key, value) map is actually stored as (key, (value, list_iterator)), where list_iterator points at the item's position in the LRU tracker. With this, we never need to search the list for a key, which would be a relatively expensive O(n) operation. Using std::unordered_map for the (key, value) map keeps the average time complexity of the add operation at O(1).
How to use the Class
The code snippet below shows the actual usage of the LRUCache class: a cache that keeps data of type std::string under keys of type int.
typedef LRUCache<int, std::string, 10> Cache;
Cache myCache;
The cache object 'myCache' may hold at most 10 items. The cache size is fixed at compile time via the template parameter and cannot be changed later.
You can put objects into the cache using the cache::add call. When the cache is full, add places the newly inserted item at the most recently used position and deletes (via cache::erase) the item at the least recently used position:
myCache.add(1, "One");
myCache.add(2, "Two");
myCache.add(3, "Three");
myCache.add(4, "Four");
myCache.add(5, "Five");
myCache.add(6, "Six");
myCache.add(7, "Seven");
myCache.add(8, "Eight");
myCache.add(9, "Nine");
myCache.add(10, "Ten");
myCache.add(11, "Eleven");
Now that you have some data in the cache, you may want to retrieve it. The cache::fetch call returns the value associated with the supplied key.
for (int i = 10; i > 0; i--)
{
if (myCache.exist(i))
{
myCache.fetch(i);
}
}
The cache::exist call checks whether the key is present in the cache; only if the key is present do we fetch its value.
The cache::fetch call internally calls the cache::touch function, which touches an entry in the cache, simulating access to it. This splices the entry to the most recently used position, reducing its risk of eviction.
Another use of the LRU cache could be an image cache, as mentioned earlier:
typedef enum
{
CACHE_TYPE_SMALL,
CACHE_TYPE_MEDIUM,
CACHE_TYPE_LARGE
}CACHE_TYPE;
typedef LRUCache<CACHE_TYPE, std::pair<std::string, std::weak_ptr<uint8_t> >, 100> Large;
typedef LRUCache<CACHE_TYPE, std::pair<std::string, std::weak_ptr<uint8_t> >, 300> Medium;
typedef LRUCache<CACHE_TYPE, std::pair<std::string, std::weak_ptr<uint8_t> >, 500> Small;
Large largeSizeCache;
Medium mediumSizeCache;
Small smallSizeCache;
std::shared_ptr<uint8_t> JPEG = ImageScaler.GetImageRawData(JpegImagePath);
largeSizeCache.add(CACHE_TYPE_LARGE, std::make_pair("JPEG_HASH", std::weak_ptr<uint8_t>(ImageScaler.ScaleLarge(JPEG))));
mediumSizeCache.add(CACHE_TYPE_MEDIUM, std::make_pair("JPEG_HASH", std::weak_ptr<uint8_t>(ImageScaler.ScaleMedium(JPEG))));
smallSizeCache.add(CACHE_TYPE_SMALL, std::make_pair("JPEG_HASH", std::weak_ptr<uint8_t>(ImageScaler.ScaleSmall(JPEG))));
std::shared_ptr<uint8_t> PNG = ImageScaler.GetImageRawData(PngImagePath);
largeSizeCache.add(CACHE_TYPE_LARGE, std::make_pair("PNG_HASH", std::weak_ptr<uint8_t>(ImageScaler.ScaleLarge(PNG))));
mediumSizeCache.add(CACHE_TYPE_MEDIUM, std::make_pair("PNG_HASH", std::weak_ptr<uint8_t>(ImageScaler.ScaleMedium(PNG))));
smallSizeCache.add(CACHE_TYPE_SMALL, std::make_pair("PNG_HASH", std::weak_ptr<uint8_t>(ImageScaler.ScaleSmall(PNG))));
std::shared_ptr<uint8_t> GIF = ImageScaler.GetImageRawData(GifImagePath);
largeSizeCache.add(CACHE_TYPE_LARGE, std::make_pair("GIF_HASH", std::weak_ptr<uint8_t>(ImageScaler.ScaleLarge(GIF))));
mediumSizeCache.add(CACHE_TYPE_MEDIUM, std::make_pair("GIF_HASH", std::weak_ptr<uint8_t>(ImageScaler.ScaleMedium(GIF))));
smallSizeCache.add(CACHE_TYPE_SMALL, std::make_pair("GIF_HASH", std::weak_ptr<uint8_t>(ImageScaler.ScaleSmall(GIF))));
assert(mediumSizeCache.exist(CACHE_TYPE_MEDIUM));
The above example can be extended with the Cache Mediator pattern; with a mediator, it can be turned into an asynchronous, high-throughput, thread-safe LRU cache. We'll see such an implementation in a future article.
Happy Coding. :-)