Introduction to Redis

Learn what a cache is and how Redis helps implement caching to improve application performance.

Before learning about Redis, there’s an important concept to understand: the cache. In this lesson, we'll learn about caching in general and how important it is in scaling up the performance of applications.

What is a cache, and why is it needed?


Let’s start with an example. We all know about the Domain Name System (DNS). A DNS server is a directory of domain names and their corresponding IP addresses, so a user's request can reach the right server. For example, when we hit the URL https://www.educative.io in the browser, the following operations are performed:

  • The request hits a DNS server, which can be maintained by our internet service provider (ISP).

  • The DNS server holds a mapping from the domain name to its IP address, since every website is ultimately hosted on a physical machine.

  • The IP address is used to hit the server hosting the website. The server sends a response, and the web page is displayed in our browsers.

This is a very high-level view of DNS, but a lot is going on during this process. Suppose the DNS server isn’t located near us, so it takes around 100 milliseconds (ms) to get the IP address for a URL. If we hit the same URL multiple times over a few hours, that’s an unnecessary 100 ms wait on every request. There should be some mechanism so that the very first request to a URL can take 100 ms, but hitting the same URL again later, even after visiting a few other URLs, doesn't take another 100 ms to get the same IP address. Let’s look at the image below to help us understand the problem better.

The DNS example

So how can we solve this problem?

We can store a mapping of URLs to their corresponding IP addresses in our browser's storage, and the browser can refer to it on subsequent hits to the same URLs. The process of storing precalculated or previously fetched data to avoid unnecessary computation is called caching, and the data structure that’s used to store the data is called a cache. Let’s go through the slides below to understand the scenario better.

Explanation

  • Slide 1: The very first time our machine makes a request for a particular URL, the request goes to the cache. Since there’s no mapping for the IP address yet (also known as a cache miss), we call the DNS server to get the IP address. Once we get it, we store the URL and its IP address as a key-value pair in the cache.

  • Slide 2: After surfing the internet, when we again make a request for the same URL, the cache returns the IP address (also known as a cache hit) with a much shorter response time. This makes the operation much faster. A small Python sketch of this miss/hit flow follows.
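As a concrete illustration, here’s a minimal Python sketch of this miss/hit flow using a plain dictionary as the cache. The actual resolution step uses `socket.gethostbyname`, which asks the operating system's configured DNS resolver; the timing numbers will vary by network.

```python
import socket
import time

dns_cache = {}  # hostname -> IP address

def resolve(hostname):
    """Return the IP for hostname, using the cache when possible."""
    if hostname in dns_cache:                # cache hit: no network round trip
        return dns_cache[hostname]
    ip = socket.gethostbyname(hostname)      # cache miss: ask a DNS server
    dns_cache[hostname] = ip                 # remember it for next time
    return ip

start = time.perf_counter()
resolve("www.educative.io")                  # first call: cache miss
print("miss took:", time.perf_counter() - start, "seconds")

start = time.perf_counter()
resolve("www.educative.io")                  # second call: cache hit
print("hit took: ", time.perf_counter() - start, "seconds")
```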

What is Redis?

We’ve learned how caching can help improve an application's performance. But the question arises: How can this caching be achieved? The answer is Redis. Although there are alternatives, such as Memcached for caching and RabbitMQ for message brokering, our focus in this course will be Redis.

Redis is an in-memory data store that can behave as a database, a caching system, or even a message broker. Due to its in-memory nature, the Redis server keeps all of its data in RAM. This comes with the great advantage that accessing data from Redis is very fast, but there’s a limitation too: if the Redis server is restarted, all of our data is lost. To mitigate this, Redis also supports persisting the data to disk at regular intervals so that it can be recovered after a restart.


Redis is a NoSQL data store and stores data in the form of key-value pairs. It can store different types of data, like lists, sets, strings, etc. It also supports features such as key expiration, transactions, and much more. Let’s discuss some real-world use cases of Redis as a caching system and as a message broker.
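As a rough sketch of these ideas, here’s what storing a few data types and setting an expiration looks like with the `redis-py` client. It assumes a Redis server is running locally on the default port 6379; the key names are arbitrary.

```python
import redis

# Connect to a local Redis server (assumed to be running on localhost:6379).
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Strings: simple key-value pairs.
r.set("greeting", "hello")
print(r.get("greeting"))          # "hello"

# Lists: push items and read a range back.
r.rpush("colors", "red", "green", "blue")
print(r.lrange("colors", 0, -1))  # ["red", "green", "blue"]

# Sets: unordered collections of unique members.
r.sadd("tags", "cache", "database", "cache")
print(r.smembers("tags"))         # {"cache", "database"}

# Expiration: the key disappears automatically after 60 seconds.
r.set("session:42", "active", ex=60)
print(r.ttl("session:42"))        # remaining time to live, in seconds
```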

Using Redis as a caching system

Redis can be used as a caching system to speed up the performance of web applications. An example we already discussed is the DNS lookup of the IP address for a given URL. When Redis is used as a cache, it stores frequently accessed data in memory, which reduces the number of database calls required to retrieve that data. This can significantly improve the performance of web applications by reducing the latency caused by database calls. Redis also allows expiration times to be set on cached data, so stale entries are automatically removed from the cache to make room for new ones. Some of the scenarios where Redis is a great choice as a caching system include the following (a small cache-aside sketch appears after the list):

  • Applications that require low-latency responses

  • Applications that frequently access the same data

  • Applications that need to reduce the load on their database
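The sketch below illustrates the cache-aside pattern described above: check Redis first, fall back to the database on a miss, and store the result with an expiration time. The `query_database` function and the `user:<id>` key format are placeholders for illustration, not part of any particular application.

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def query_database(user_id):
    # Placeholder for a real (and much slower) database call.
    return {"id": user_id, "name": "Alice"}

def get_user(user_id, ttl_seconds=300):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:                        # cache hit: skip the database
        return json.loads(cached)
    user = query_database(user_id)                # cache miss: hit the database
    r.setex(key, ttl_seconds, json.dumps(user))   # cache the result with an expiry
    return user

print(get_user(42))  # first call populates the cache; later calls read from Redis
```

On a hit, the database is never touched, which is exactly how the latency and load reductions listed above are achieved.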

A real-world example is Twitter. It’s one of the most popular social media platforms in the world, with millions of active users tweeting, retweeting, and liking posts every second. To handle such a massive volume of data, Twitter uses a combination of multiple technologies, including Redis as a caching system.

Twitter uses Redis as an in-memory data store to cache frequently accessed data, such as user profiles, tweets, timelines, and trends. By caching data in Redis, Twitter can reduce the number of requests sent to its databases, which helps to improve the performance and scalability of the platform.


Here are some specific ways in which Twitter uses Redis as a caching system:

  • User profiles: Twitter caches user profiles in Redis to reduce the number of database calls required to retrieve user information. Whenever a user logs in to Twitter, their profile information is loaded from the Redis cache, which helps to improve the response time of the platform.

  • Tweets: Twitter caches tweets in Redis to speed up the process of retrieving tweets for users. When a user visits their timeline, Twitter fetches the most recent tweets from the Redis cache, which can significantly reduce the latency of the platform.

  • Timelines: Twitter caches timelines in Redis to reduce the number of database calls required to fetch tweets for a particular user's timeline. When a user requests their timeline, Twitter loads the most recent tweets from the Redis cache, which helps to improve the performance of the platform (a rough sketch of this pattern follows the list).

  • Trends: Twitter caches trending topics in Redis to reduce the number of database calls required to fetch the most popular topics. Whenever a user visits the trending topics section of the platform, Twitter retrieves the data from the Redis cache, which can significantly reduce the response time of the platform.
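One common way to model a cached timeline, sketched below, is a Redis list per user: new tweet IDs are pushed to the front and the list is trimmed to a fixed length so it never grows unbounded. The key names and the length cap are illustrative assumptions, not Twitter's actual schema.

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

TIMELINE_LENGTH = 800  # keep only the most recent entries (illustrative value)

def push_to_timeline(user_id, tweet_id):
    key = f"timeline:{user_id}"
    r.lpush(key, tweet_id)                  # newest tweet goes to the front
    r.ltrim(key, 0, TIMELINE_LENGTH - 1)    # drop anything beyond the cap

def read_timeline(user_id, count=20):
    key = f"timeline:{user_id}"
    return r.lrange(key, 0, count - 1)      # most recent `count` tweet IDs

push_to_timeline("user-1", "tweet-1001")
push_to_timeline("user-1", "tweet-1002")
print(read_timeline("user-1"))              # ["tweet-1002", "tweet-1001"]
```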

Using Redis as a message broker system

Redis can be used as a message broker to facilitate communication between different components of a distributed system. Redis supports a publish-subscribe messaging model, where publishers send messages to a channel, and subscribers listen on those channels for incoming messages. This allows different components of a distributed system to communicate with each other in a decoupled way. Other messaging patterns, such as request-response, where a client sends a message to a server and waits for a reply, can also be built on top of Redis. Some of the scenarios where Redis is a great choice as a message broker include the following (a minimal publish-subscribe sketch appears after the list):

  • Applications that require decoupled communication between components

  • Applications that need to handle high volumes of messages

  • Applications that need to perform messaging operations quickly and efficiently
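Here’s a minimal publish-subscribe sketch with `redis-py`; the channel name is arbitrary, and in a real system the publisher and subscriber would run as separate processes or services.

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Subscriber side (normally a separate process): listen on a channel.
pubsub = r.pubsub()
pubsub.subscribe("notifications")
pubsub.get_message(timeout=1)   # consume the subscribe confirmation

# Publisher side: send a message to the channel.
r.publish("notifications", "order 1234 shipped")

# Subscriber receives the message.
message = pubsub.get_message(timeout=1)
if message and message["type"] == "message":
    print("received:", message["data"])   # "order 1234 shipped"
```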

A real-world example is Uber. It uses Redis as a message broker to facilitate communication between various microservices in its distributed system. Microservices are individual components of a larger system that communicate with each other through APIs or messaging systems like Redis.


In Uber's case, Redis is used to implement a publish-subscribe messaging pattern, where publishers send messages to a channel, and subscribers listen to those channels for incoming messages. Uber uses this pattern to enable communication between various components of its system, including trip management, real-time dispatch, and payment processing.

  • Book a ride: When a passenger requests a ride in the Uber app, a message is published to a Redis channel indicating the passenger's location, destination, and other ride details. The dispatch service, which is responsible for finding available drivers and assigning them to the ride, subscribes to this channel and receives the message. The dispatch service then sends a message back to Redis indicating which driver has been assigned to the ride (a rough sketch of this flow follows the list).

  • Payment processing system: When a ride is completed, a message is published to a Redis channel indicating that the ride has ended, along with the fare amount. The payment processing service subscribes to this channel, receives the message, and then processes the payment for the ride.
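A rough sketch of that ride-request flow is shown below, with ride details serialized as JSON on a dedicated channel. The channel names, message fields, and driver-matching step are illustrative assumptions, not Uber's actual implementation; both sides are shown in one script only for demonstration.

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Dispatch service side (normally its own process): subscribe to ride requests.
pubsub = r.pubsub()
pubsub.subscribe("ride:requests")
pubsub.get_message(timeout=1)  # consume the subscribe confirmation

# Rider app side: publish a ride request with its details as JSON.
ride_request = {
    "rider_id": "rider-42",
    "pickup": {"lat": 40.7128, "lon": -74.0060},
    "destination": {"lat": 40.7306, "lon": -73.9352},
}
r.publish("ride:requests", json.dumps(ride_request))

# Dispatch service receives the request and publishes a driver assignment.
message = pubsub.get_message(timeout=1)
if message and message["type"] == "message":
    request = json.loads(message["data"])
    assignment = {"rider_id": request["rider_id"], "driver_id": "driver-7"}
    r.publish("ride:assignments", json.dumps(assignment))
```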

Using Redis as a message broker provides several benefits for Uber's distributed system. First, it allows for decoupled communication between different microservices, which helps to increase flexibility and scalability. Second, Redis supports high-throughput messaging, which allows Uber to handle the high volumes of messages generated by its system. Finally, Redis features such as Streams support message acknowledgment and redelivery, which helps to ensure the reliability and consistency of Uber's system.