Monday, May 9, 2016

Become A Better Programmer



Work on interesting and challenging problems and become a better programmer. HackerRank offers such a platform: it gathers experts in design, development, and data science from around the world to compete on the same problems. Where do you rank?

HackerRank focuses on competitive programming challenges and has an online community of over one million programmers. Its challenges can be solved in a variety of programming languages (including Java, C++, PHP, SQL, and many more) and span multiple computer science domains.

When a programmer submits a solution to a programming challenge, the submission is scored on the accuracy of its output and its execution time. Programmers are then ranked globally on the HackerRank leaderboard and earn badges based on their accomplishments, which drives competition among users. In addition to individual programming challenges, HackerRank also hosts contests in which users compete on the same programming challenges during a set period of time and are ranked at the conclusion of the event.

In addition to supporting a variety of popular programming languages, HackerRank categorizes most of their programming challenges into a number of core Computer Science domains, including:

  • Artificial Intelligence: develop AI bots and pit them against others.
  • Algorithms: traditional algorithmic challenges.
  • Functional Programming: use functional-programming abstractions to solve challenges.
  • Machine Learning: use predictive modeling and analysis to solve challenges.



Friday, May 6, 2016

LRU Cache Using C++


This article shows how to implement an LRU cache using only the C++ standard library. Below is a simplified, ready-to-use LRU cache class template.


Introduction


An LRU cache is a key-value container that provides caching with a least-recently-used replacement policy, a useful performance-optimization tool in any programming language. The LRU algorithm discards the least recently used items first. This requires keeping track of what was used and when, and doing so exactly, so that the algorithm always discards the least recently used item, is the expensive part of the implementation.


The Need for Caching


Generally, the need for caching arises to preserve the result of some heavy or expensive operation so that it can be reused without performing the same operation again and again. An expensive task could be anything that requires complex calculations or performs time-consuming operations, e.g. I/O. If the total number of such results over the lifetime of the system does not consume excessive memory, it may suffice to store them in a simple key-value container (e.g. std::map), with the key being the input to the expensive function and the value being the result.

Limiting Cache Size


Applications that use such a caching mechanism can run into excessive memory usage, either because there is no limit on the number of stored items or because the stored items themselves are heavy; over the lifetime of the application the cache then grows without bound.

To avoid such problems, it is worth bounding the size of the cache. For example, an online media player that loads a collection of image files (album art) while the user browses content metadata might include a cache of images at different sizes, so that recently viewed images need not be reloaded when the user revisits them. By taking the size in bytes of the stored images into account, the maximum total memory consumed by cached objects can be controlled, with the number of cached images adjusting automatically in response to their aggregate size.


Implementation


A standard-library-based LRU cache:

#ifndef LRU_CACHE_HPP
#define LRU_CACHE_HPP

#include <list>
#include <unordered_map>
#include <cassert>
#include <cstddef>
#include <iostream>

// Class representing LRU Cache of any generic items with key-value pair.
template < typename Key, typename Value, std::size_t CacheSize >
class LRUCache
{
public:
    /**
    * @brief Constructor.
    *
    */
    LRUCache()
    {
    }

    /**
    * @brief Destructor.
    *
    */
    ~LRUCache()
    {
        cache.clear();
        lruList.clear();
    }

    /**
    * @brief Check if the key exists in a cache.
    * @param key Key
    * @return true if key exists otherwise false.
    *
    */
    bool exist(Key key)
    {
        if (cache.find(key) == cache.end())
        {
            return false;
        }
        return true;
    }

    /*!
    * @brief Clears the cache.
    *
    * Removes all cache entries and drops all usage count data and so on.
    *
    */
    void clear()
    {
        cache.clear();
        lruList.clear();
    }

    /**
    * @brief Fetches the cache data if the key exists in the cache.
    * @brief If the given key exists, it makes the key and associated value
    * @brief to be the most recently used one by moving the pair to the end of the list.
    *
    * @param  key Key to fetch.
    * @return the value associated with the given key.
    *
    */
    Value fetch(const Key& key)
    {
        assert(cache.find(key) != cache.end());

        // cache-hit
        // Update access record by moving accessed key to back of the list
        touch(key);

        print();

        // return the retrieved value
        return cache[key].first;
    }

    /**
    * @brief Adds the given Key-Value pair into the cache.
    * @param key Key
    * @param value Value
    *
    */
    void add(const Key& key, Value value)
    {
        if (cache.find(key) != cache.end())
        {
            // key already cached: just update the stored value
            cache[key].first = value;
        }
        else
        {
            // cache miss: insert, evicting the LRU entry if the cache is full
            insert(key, value);
        }

        // mark the key as the most recently used one
        touch(key);
    }

private:

    void print()
    {
        // Debug helper: dump values from most- to least-recently used.
        // Requires Value to support operator<<.
        typename LRUList::reverse_iterator it = lruList.rbegin();

        for (; it != lruList.rend(); ++it)
        {
            std::cout << cache[*it].first << std::endl;
        }
        std::cout << "------------------" << std::endl;
    }

    /*!
    * @brief Marks the entry for the given key as most recently used.
    *
    * Touches an entry in the cache, simulating access to it, by splicing
    * its node to the back of the usage list. This reduces its risk of
    * expiration.
    *
    * @param key key to touch
    *
    */
    void touch(const Key &key)
    {
        if (lruList.size() > 0)
        {
            lruList.splice(lruList.end(), lruList, cache[key].second);
        }
    }

    /*!
    * @brief Creates a key-value entry linked to a usage record and adds it
    * to the cache, making space by evicting the LRU entry if necessary.
    * The new key is recorded as the most recently used one.
    *
    * @param key key to insert.
    * @param value value to insert.
    *
    */
    void insert(const Key& key, Value value)
    {
        // make space if necessary
        if (lruList.size() == CacheSize)
        {
            erase();
        }

        // record key as most-recently-used key
        typename std::list< Key >::iterator it = lruList.insert(lruList.end(), key);

        // create key-value entry, linked to the usage record
        cache.insert(std::make_pair(key, std::make_pair(value, it)));
    }

    /*!
    * @brief Identifies and removes the least recently used entry,
    * purging both the cache record and its usage-list node.
    *
    */
    void erase()
    {
        assert(!lruList.empty());

        // identify least-recently-used key
        const typename UnorderedCache::iterator it = cache.find(lruList.front());

        //erase both elements to completely purge record
        cache.erase(it);
        lruList.pop_front();
    }

private:
    // Key access history, most recent at back
    typedef std::list< Key > LRUList;

    // Key to value and key history iterator
    typedef std::unordered_map< Key, std::pair< Value, typename std::list< Key >::iterator> > UnorderedCache;

    LRUList lruList;
    UnorderedCache cache;
};

#endif


Design


In the class above, the key type, the value type, and the maximum cache size are passed as parameters to the LRUCache class template.

Users of the C++ Standard Library needing an LRU-replacement cache generally gravitate towards std::map or std::unordered_map, because of their support for efficient keyed value accesses (O(log n) or O(1) access complexity). The problem then is how to implement the eviction strategy.

The most obvious solution is to use a std::list< Key > for the eviction bookkeeping.

In the example above, the eviction mechanism is implemented using a std::list that keeps the key of the least recently used item at the head and the most recently used item at the tail. Whenever an item is accessed, its key is moved to the tail of the list; when the number of items in the cache reaches the capacity, the item at the head is erased. Notice that the (key, value) map is actually implemented as (key, (value, list_iterator)), where list_iterator points to the item's node in the LRU tracker. This avoids searching the list for the key, which would be a relatively expensive O(n) operation. A std::unordered_map is used for the (key, value) map so that the average time complexity of the add operation is O(1).


How to use the Class


The code snippet below shows actual usage of the LRUCache class: a cache that keeps data of type std::string under keys of type int.


    typedef LRUCache < int, std::string, 10 > Cache;
    Cache myCache;
    

The cache object 'myCache' may hold at most 10 items. The cache size is fixed as a template parameter at compile time and cannot be changed later.

You can put objects into the cache using the add call. When the cache is full, add evicts the item at the least recently used position (via erase) and inserts the newly added item at the most recently used position:

    myCache.add(1, "One");
    myCache.add(2, "Two");
    myCache.add(3, "Three");
    myCache.add(4, "Four");
    myCache.add(5, "Five");
    myCache.add(6, "Six");
    myCache.add(7, "Seven");
    myCache.add(8, "Eight");
    myCache.add(9, "Nine");
    myCache.add(10, "Ten");
    myCache.add(11, "Eleven");
       
 

Now that there is some data in the cache, you may want to retrieve it: the fetch call returns the value associated with the supplied key.

    for (int i = 10; i > 0; i--)
    {
        if (myCache.exist(i))
        {
            myCache.fetch(i);
        }
    }
       

The exist call checks whether the key is present in the cache; only if it is do we try to fetch the associated value.

The fetch call internally calls the touch function, which touches an entry in the cache, simulating access to it: the entry is spliced to the most recently used position, reducing its risk of expiration.

Another use of the LRU cache could be an image cache, as mentioned earlier:

    typedef enum
    {
        CACHE_TYPE_SMALL,
        CACHE_TYPE_MEDIUM,
        CACHE_TYPE_LARGE
    } CACHE_TYPE;

    typedef LRUCache< CACHE_TYPE, std::pair< std::string, std::weak_ptr< uint8_t > >, 100> Large;
    typedef LRUCache< CACHE_TYPE, std::pair< std::string, std::weak_ptr< uint8_t > >, 300> Medium;
    typedef LRUCache< CACHE_TYPE, std::pair< std::string, std::weak_ptr< uint8_t > >, 500> Small;

    Large largeSizeCache;
    Medium mediumSizeCache;
    Small smallSizeCache;

    std::shared_ptr< uint8_t > JPEG = ImageScaler.GetImageRawData(JpegImagePath);
    largeSizeCache.add(CACHE_TYPE_LARGE, std::make_pair("JPEG_HASH", ImageScaler.ScaleLarge(JPEG)));
    mediumSizeCache.add(CACHE_TYPE_MEDIUM, std::make_pair("JPEG_HASH", ImageScaler.ScaleMedium(JPEG)));
    smallSizeCache.add(CACHE_TYPE_SMALL, std::make_pair("JPEG_HASH", ImageScaler.ScaleSmall(JPEG)));

    std::shared_ptr< uint8_t > PNG = ImageScaler.GetImageRawData(PngImagePath);
    largeSizeCache.add(CACHE_TYPE_LARGE, std::make_pair("PNG_HASH", ImageScaler.ScaleLarge(PNG)));
    mediumSizeCache.add(CACHE_TYPE_MEDIUM, std::make_pair("PNG_HASH", ImageScaler.ScaleMedium(PNG)));
    smallSizeCache.add(CACHE_TYPE_SMALL, std::make_pair("PNG_HASH", ImageScaler.ScaleSmall(PNG)));

    std::shared_ptr< uint8_t > GIF = ImageScaler.GetImageRawData(GifImagePath);
    largeSizeCache.add(CACHE_TYPE_LARGE, std::make_pair("GIF_HASH", ImageScaler.ScaleLarge(GIF)));
    mediumSizeCache.add(CACHE_TYPE_MEDIUM, std::make_pair("GIF_HASH", ImageScaler.ScaleMedium(GIF)));
    smallSizeCache.add(CACHE_TYPE_SMALL, std::make_pair("GIF_HASH", ImageScaler.ScaleSmall(GIF)));

    assert(mediumSizeCache.exist(CACHE_TYPE_MEDIUM));

Note that each std::weak_ptr in the cached value is constructed directly from the shared_ptr returned by the scaler; a std::weak_ptr cannot be constructed from a raw pointer.
 

The example above can be extended with a cache mediator, turning it into an asynchronous, high-throughput, thread-safe LRU cache. We'll see such an implementation in a future article.

Happy Coding. :-)

Run any code without any Environment Setup


Run any code, for any platform, from anywhere, for free, using Cloud9. Write, run, and debug your code with Cloud9's powerful and flexible cloud IDE. Collaborate on your workspaces publicly, or keep them private.

Cloud9 combines a powerful online code editor with a full Ubuntu workspace in the cloud. Cloud9 supports more than 40 languages, with class A support for PHP, Ruby, Python, JavaScript, Go, and more.

Cloud9 IDE is an online integrated development environment, open source from version 3.0. It supports hundreds of programming languages, including PHP, Ruby, Perl, Python, JavaScript with Node.js, and Go. It enables developers to start coding immediately with pre-set-up workspaces, collaborate with their peers using collaborative coding features, and use web-development features such as live preview and browser compatibility testing.

It is written almost entirely in JavaScript, and uses Node.js on the back-end.

Cloud9 IDE

Cloud9 makes web development easy: simply pick your configuration and develop your app. There is no need to spend valuable development time on system setup and maintenance, or to install browsers and platforms in virtual machines for compatibility testing. Live Preview lets you interactively see what your web application looks like in any browser; just choose one of the 300+ browser/OS combinations.

Cloud9 Templates for Creating Workspaces

You can create, build and run any development stack in seconds. You can easily handle hundreds of thousands of files in your workspace and hundreds of thousands of lines of code in the editor.

Cloud9 offers powerful features, including:

  • Built-in terminal, with npm and basic Unix commands
  • Code completion for snippets and identifiers
  • Real-time language analysis for JavaScript (marking common JavaScript pitfalls)
  • Variable/function name refactoring for JavaScript
  • Parenthesis, bracket, and quote character matching
  • Line numbers, warnings, and errors in the gutter
  • Debugger
  • Tabbed file management
  • Themes
  • Customizable key-bindings, including presets for Vim, Emacs, and Sublime Text
  • Built-in image editor
  • Code reformatting via JSBeautify and CSSLint
  • Ability to drag-and-drop files into your project
  • Support for the following code repositories:
  • Support for deployment to:
  • Support for public and private projects
  • Plug-in support
  • Syntax highlighting for the following languages: C#, C/C++, Clojure, CoffeeScript, ColdFusion, CSS, Groovy, Java, Javascript, LaTeX, Lua, Markdown, OCaml, PHP, Perl, PowerShell, Python, Ruby, Scala, SCSS, SQL, Textile, X(HTML), XML
