Overview
Rate limiting is an essential aspect of API and web service management: it controls the flow of incoming requests, preventing abuse and improving reliability. This tutorial walks through the implementation of an efficient rate limiter using TypeScript, Node.js, and Redis Cache, and explains the pivotal role rate limiting plays in maintaining the stability and security of web applications.
Introduction
Rate Limiter
In the dynamic landscape of web applications and APIs, a rate limiter stands as a gatekeeper, orchestrating the flow of incoming requests to maintain optimal performance, prevent abuse, and ensure system stability. The concept of rate limiting revolves around careful traffic regulation by imposing specific constraints on the number of requests allowed within a defined timeframe. It is a fundamental mechanism to protect servers, services, and APIs from potential overload, denial-of-service (DoS) attacks, and misuse.
TypeScript
TypeScript stands as a powerful and adaptable programming language that extends the capabilities of JavaScript by incorporating static typing. It represents a superset of JavaScript, offering developers an enhanced and more structured approach to building scalable and robust applications. TypeScript introduces a strict typing system that enables the definition of explicit types for variables, function parameters, and return values, fostering clearer and more predictable code.
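As a small, hypothetical illustration (not part of the tutorial's setup), explicit types turn a whole category of runtime mistakes into compile-time errors:

```typescript
// Hypothetical types for illustration: a window definition and a check against it.
interface RateLimitWindow {
    timeInSeconds: number;
    maxRequests: number;
}

function isWithinLimit(window: RateLimitWindow, currentCount: number): boolean {
    return currentCount <= window.maxRequests;
}

// isWithinLimit({ timeInSeconds: 60, maxRequests: 3 }, "4"); // compile-time error: string is not a number
const allowed = isWithinLimit({ timeInSeconds: 60, maxRequests: 3 }, 4); // false
```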
Redis Cache
Redis Cache is a high-performance, in-memory data store known for its speed and versatility. As a key-value store, Redis enables lightning-fast access to data by storing it in memory, providing sub-millisecond response times. It offers many data structures, including strings, lists, sets, and hashes, making it adaptable to various use cases, from caching and session management to real-time analytics and queuing.
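As a quick sketch of that versatility, here is how those data structures look with the node-redis v4 client used later in this tutorial. The keys and values are illustrative only, and a locally running Redis server is assumed:

```typescript
import { createClient } from 'redis';

async function demo() {
    // Assumes a local Redis server, as used later in this tutorial.
    const client = createClient({ socket: { host: '127.0.0.1', port: 6379 } });
    await client.connect();

    await client.set('greeting', 'hello');                              // string
    await client.lPush('job-queue', ['job1', 'job2']);                  // list
    await client.sAdd('unique-ips', '203.0.113.7');                     // set
    await client.hSet('session:42', { user: 'alice', role: 'admin' }); // hash

    console.log(await client.get('greeting')); // "hello"
    await client.quit();
}

demo();
```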
Use Cases of Rate Limiter
- API Throttling: To prevent abuse or overloading of APIs, rate limiting sets boundaries on the number of requests an API can receive within a specific timeframe. This ensures fair usage and protects against malicious attacks or unintentional spikes in traffic.
- User Authentication and Login: Limiting the number of login attempts within a certain period helps thwart brute-force attempts to gain unauthorized access to user accounts. It adds a layer of security by restricting login attempts and safeguarding user credentials (a sketch of such a rule appears after this list).
- Preventing DDoS Attacks: Rate limiting is an effective defense mechanism against Distributed Denial of Service (DDoS) attacks. Limiting the number of requests from an IP address or a specific source helps mitigate the impact of large-scale attacks that aim to overwhelm a server or network.
- Payment Gateway Protection: In payment processing systems, rate limiting secures transactions by restricting the number of payment requests. This guards against fraudulent activities and ensures the system operates smoothly, efficiently handling valid transactions.
- Content Delivery Networks (CDNs): CDNs employ rate limiting to manage traffic flow and deliver content efficiently. CDNs optimize content delivery and maintain a consistent user experience by controlling the number of server requests.
- Subscription Services: Platforms offering subscription-based services utilize rate limiting to regulate access to content or features based on subscription tiers. This ensures that users within different subscription levels receive fair and optimal service based on their plan.
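As a concrete sketch of the authentication use case, the rateLimiter middleware built later in this post could enforce a stricter rule on a login route. The endpoint name and the numbers here are hypothetical:

```typescript
import express from 'express';
import { rateLimiter, RateLimiterRule } from './rateLimiter';

const app = express();

// Hypothetical policy: at most 5 login attempts per IP every 5 minutes.
const LOGIN_RATE_LIMIT_RULE: RateLimiterRule = {
    endpoint: 'login',
    rate_limit: {
        time: 300, // window length in seconds
        limit: 5,  // attempts allowed per window
    },
};

app.use('/login', rateLimiter(LOGIN_RATE_LIMIT_RULE));
```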
Steps to Setup Rate Limiter
Here, we need to create three components:
- TypeScript with NodeJS and Express setup
- Redis setup
- Rate limiter middleware
TypeScript with NodeJS and Express setup
To set up TypeScript in your Node.js project, you can refer to my blog –
A Guide to Set Up Node.js Express with TypeScript
Create a src directory in the root folder and create an app.ts file in it. The code for setting up the server on port 3001 is –
```typescript
import express, { Express, Request, Response } from "express";
import { rateLimiter, RateLimiterRule } from "./rateLimiter";

const app: Express = express();

app.listen(3001, () => {
    console.log("Application started at PORT 3001");
});
```
Setup Redis Client
You can choose between utilizing a locally hosted Redis server or an external service when setting up Redis. This particular setup guide will focus on employing a locally hosted Redis server.
The initial step involves installing Redis and starting the server. The Redis documentation provides comprehensive guidance on installation and setup; for detailed, step-by-step instructions, refer to the following link –
https://redis.io/docs/getting-started/installation/
Now, we will create a Redis Client in a redis.ts file in the src folder.
```typescript
import { createClient, RedisClientType } from 'redis';

const REDIS_CONFIG = {
    HOST: '127.0.0.1',
    PORT: 6379,
};

// A single shared client, exported as a named binding so importers
// always see the connected instance.
export let redisClient: RedisClientType | null = null;

async function redisFunc() {
    if (!redisClient) {
        redisClient = createClient({
            socket: {
                host: REDIS_CONFIG.HOST,
                port: REDIS_CONFIG.PORT,
            },
        });
    }
    await redisClient.connect();
}

redisFunc();
```
Setup Rate Limiter Middleware
Create a file called rateLimiter.ts in the src folder with our type and middleware function.
```typescript
import { NextFunction, Request, Response } from "express";
import { redisClient } from './redis';

export interface RateLimiterRule {
    endpoint: string,
    rate_limit: {
        time: number,
        limit: number,
    },
}

export const rateLimiter = (rule: RateLimiterRule) => {
    const { endpoint, rate_limit } = rule;

    return async (request: Request, response: Response, next: NextFunction) => {
        // One counter per endpoint + client IP combination.
        const ipAddress = request.ip;
        const redisId = `${endpoint}/${ipAddress}`;

        // INCR creates the key at 1 on first use, then increments it.
        const requests = await redisClient?.incr(redisId) ?? 0;

        // On the first request of a window, start the TTL countdown.
        if (requests === 1) {
            await redisClient?.expire(redisId, rate_limit.time);
        }

        if (requests > rate_limit.limit) {
            return response.status(429).send({
                message: 'Too Many Requests',
            });
        }

        next();
    };
};
```
This middleware function is versatile and adaptable for diverse API endpoints, each with specific rate-limiting regulations. Every rule defines both a period and a request limit.
The Redis key combines the endpoint and the client's IP address, so each client is counted separately per endpoint. Every incoming request increments the counter, and on the first request of a window we assign a time-to-live (TTL) so the key expires when the window ends.
Should the request surpass the defined limit, the function responds with a 429 Too Many Requests error, signaling the excessive demand. However, after the TTL duration, the system resumes accepting fresh requests, maintaining a controlled and regulated flow.
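To see this mechanism from Redis's side, a small inspection script can read the counter and its remaining TTL. The IP address and key below are hypothetical:

```typescript
import { redisClient } from './redis';

// Hypothetical key: endpoint "users" combined with an example client IP.
const key = 'users/203.0.113.7';

async function inspect() {
    // After a client's first request, the counter is 1 and the TTL counts down.
    console.log(await redisClient?.get(key)); // e.g. "1"
    console.log(await redisClient?.ttl(key)); // e.g. 57 (seconds remaining)
}

inspect();
```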
Using the middleware –
In app.ts, use this code to apply the middleware –
```typescript
const USER_RATE_LIMIT_RULE: RateLimiterRule = {
    endpoint: 'users',
    rate_limit: {
        time: 60,
        limit: 3,
    },
};

app.use('/users', rateLimiter(USER_RATE_LIMIT_RULE));
```
- Here, we apply the rate-limiting rule to the /users endpoint.
- By passing this rule to the middleware, we can limit the number of requests made to the endpoint.
- The API now limits requests to only three requests within 60 seconds.
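The tutorial's app.ts does not define a handler for /users, so successful requests would otherwise return a 404. A minimal placeholder handler (the response shape here is hypothetical) makes the behavior easy to observe; Request and Response are already imported in app.ts:

```typescript
// Hypothetical handler so the rate-limited endpoint has something to return.
app.get('/users', (request: Request, response: Response) => {
    response.status(200).send({ users: [] });
});
```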
Now run ‘npm run dev’ or ‘yarn run dev’, depending on the package manager you use, to start your TypeScript project.
Now, if we hit the http://localhost:3001/users endpoint more than three times within a 60-second window, it returns a 429 error:
{"message": "Too Many Requests"}
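To verify this end to end, a short probe script can fire several requests in a row. This is a minimal sketch assuming Node.js 18+ (for the built-in fetch) and the server from this tutorial running locally:

```typescript
// Fire 5 requests; with the rule above, requests 4 and 5 should receive HTTP 429.
async function probe() {
    for (let i = 1; i <= 5; i += 1) {
        const res = await fetch('http://localhost:3001/users');
        console.log(`request ${i}: HTTP ${res.status}`);
    }
}

probe();
```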
Conclusion
Rate limiting is a crucial strategy for maintaining the health and integrity of applications, guarding against potential threats like DDoS attacks, and ensuring a consistent user experience.
Drop a query if you have any questions regarding rate limiters, and we will get back to you quickly.
About CloudThat
CloudThat is an official AWS (Amazon Web Services) Advanced Consulting Partner and Training partner, AWS Migration Partner, AWS Data and Analytics Partner, AWS DevOps Competency Partner, Amazon QuickSight Service Delivery Partner, Amazon EKS Service Delivery Partner, Microsoft Gold Partner, and many more, helping people develop knowledge of the cloud and helping businesses aim for higher goals using best-in-industry cloud computing practices and expertise. We are on a mission to build a robust cloud computing ecosystem by disseminating knowledge on technological intricacies within the cloud space. Our blogs, webinars, case studies, and white papers enable all the stakeholders in the cloud computing sphere.
To get started, go through CloudThat’s offerings: our Consultancy page and Managed Services Package.
FAQs
1. Are API request rates limited?
ANS: – Yes. A rate limiter caps how many times a client can call a particular endpoint within a given time frame. Limits are typically tracked per resource or method (so, for example, GET and POST requests to the same endpoint may be counted separately) and per client, commonly identified by IP address or API key.
2. Why are API request rates limited?
ANS: – Rate limiting sets boundaries on the requests an API can receive within a specific timeframe. This ensures fair usage and protects against malicious attacks or unintentional spikes in traffic.
WRITTEN BY Satyam Dhote