Solution: Logger Rate Limiter

Let's solve the Logger Rate Limiter problem using the Hash Map pattern.

Statement

For the given stream of message requests and their timestamps as input, you must implement a logger rate limiter system that decides whether the current message request is displayed. The decision depends on whether the same message has already been displayed in the last S seconds. If yes, then the decision is FALSE, as this message is considered a duplicate. Otherwise, the decision is TRUE.

Note: Several message requests, though received at different timestamps, may carry identical messages.

Constraint:

  • 1 ≤ request.length ≤ 10^2
  • 0 ≤ timestamp ≤ 10^3
  • Timestamps are in ascending order.
  • Messages consist of lowercase and uppercase English letters.

Solution

So far, you’ve probably brainstormed some approaches and have an idea of how to solve this problem. Let’s explore some of these approaches and figure out which one to follow based on considerations such as time complexity and any implementation constraints.

Naive approach

The problem we’re trying to solve here implies that there’s a queue of messages with timestamps attached to them. Using this information, we can use a queue data structure to process the incoming message requests.

In addition to using a queue, we can use a set data structure to efficiently identify and remove duplicates.

Our naive approach is an event-driven algorithm, where every incoming message prompts us to identify and remove all those messages from the queue whose timestamps are more than S seconds older than the timestamp of the new message. Whenever we remove a message from the queue, we also remove it from the set.

After performing this time limit expiry check, we can be certain that none of the messages in the queue and in the set are more than S seconds older than the new message. Now, if this new message is present in the set, it’s a duplicate and we return FALSE.

Otherwise, we add it to the message queue and to the message set, and return TRUE.

The time and space complexity of this approach are O(n), where n is the number of messages in the queue at any given point in time.
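The naive approach above can be sketched as follows. This is an illustrative implementation, not part of the lesson's code: the class name NaiveRequestLogger, the Entry helper, and the method names are assumptions chosen to mirror the optimized solution's interface.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

// Sketch of the naive queue + set approach: a queue of (message, timestamp)
// pairs for expiry, and a set for constant-time duplicate lookups.
class NaiveRequestLogger {
    // Each queue entry pairs a message with its arrival timestamp.
    private static class Entry {
        final String message;
        final int timestamp;
        Entry(String message, int timestamp) {
            this.message = message;
            this.timestamp = timestamp;
        }
    }

    private final Deque<Entry> queue = new ArrayDeque<>();
    private final Set<String> seen = new HashSet<>();
    private final int limit;

    NaiveRequestLogger(int timeLimit) {
        limit = timeLimit;
    }

    boolean messageRequestDecision(int timestamp, String request) {
        // Evict every message whose timestamp falls outside the time window.
        while (!queue.isEmpty() && timestamp - queue.peekFirst().timestamp >= limit) {
            seen.remove(queue.pollFirst().message);
        }
        // Any message still in the set is a duplicate within the window.
        if (seen.contains(request)) {
            return false;
        }
        queue.addLast(new Entry(request, timestamp));
        seen.add(request);
        return true;
    }
}
```

Note that a rejected duplicate is not re-added to the queue, so its time window is still measured from the earlier accepted occurrence, matching the behavior of the optimized hash map solution.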

Optimized approach using hash maps

We need to know if a message already exists and keep track of its time limit. When thinking about such problems where two associated values need to be checked, we can use a hash map.

We can use all incoming messages as keys and their respective time limits as values. This will help us eliminate duplicates and respect the time limit of S seconds as well.

Here is how we’ll implement our algorithm using hash maps:

  1. Initialize a hash map.

  2. When a request arrives, check if it’s a new request (the message is not among the keys stored in the hash map) or a repeated request (an entry for this message already exists in the hash map). If it’s a new request, accept it and add it to the hash map.

  3. If it’s a repeated request, compare the difference between the timestamp of the incoming request and the timestamp of the previous request with the same message. If this difference is at least the time limit, S, accept it and update the timestamp for that specific message in the hash map. Otherwise, reject it.

Let’s look at the code for this solution below:

import java.util.HashMap;

class RequestLogger {
    // Initialization of the requests hash map
    private HashMap<String, Integer> requests;
    int limit;

    public RequestLogger(int timeLimit) {
        requests = new HashMap<String, Integer>();
        limit = timeLimit;
    }

    // Method to accept or reject message requests
    public boolean messageRequestDecision(int timestamp, String request) {
        // A message seen for the first time is always accepted
        if (!this.requests.containsKey(request)) {
            this.requests.put(request, timestamp);
            return true;
        }
        // Accept a repeated message only if the time limit has elapsed
        if (timestamp - this.requests.get(request) >= limit) {
            this.requests.put(request, timestamp);
            return true;
        } else {
            return false;
        }
    }

    public static void main(String[] args) {
        // Driver code
        int[] times = { 1, 5, 6, 7, 15 };
        String[] messages = {
            "good morning",
            "hello world",
            "good morning",
            "good morning",
            "hello world"
        };
        RequestLogger obj = new RequestLogger(7);
        for (int i = 0; i < messages.length; i++) {
            System.out.print(i + 1);
            System.out.println(".\tTime, message: {" + times[i] + ", '" + messages[i] + "'}");
            System.out.println("\tMessage request decision: " + obj.messageRequestDecision(times[i], messages[i]));
            System.out.println("-".repeat(100));
        }
    }
}

Solution summary

Let’s summarize our optimized algorithm:

  1. After initializing a hash map, whenever a request arrives, we check whether it’s a new request or a repeated request arriving after the assigned time limit has elapsed.

  2. If the request meets either of these conditions, we accept it and update the entry associated with that specific message in the hash map. Otherwise, we reject the request.

Time complexity

The decision function checks whether a message has already been encountered, and if so, how long ago it was encountered. Thanks to the use of hash maps, both operations are completed in constant time—therefore, the time complexity of the decision function is O(1).

Space complexity

The space complexity of the algorithm is O(n), where n is the number of incoming requests that we store.
