API Gateway
Explore the types of API Gateways and their various features.
API Gateway is a fully managed service that allows us to create, publish, monitor, and maintain APIs. Essentially, API Gateway acts as a bridge, enabling seamless interactions between various services, applications, or microservices.
An API acts as the front door for our backend application services. Traditionally, developers would write code to route requests to the backend. However, using a managed service such as an API gateway to route requests to the backend spares the developer from writing this code. With an API gateway, we can put multiple backends behind a single domain, working together as a complete, agile enterprise application.
Support for various protocols
API Gateway supports various API protocols, such as HTTP APIs, REST APIs, WebSocket APIs, Mock APIs, GraphQL APIs, and SOAP APIs.
REST APIs: A widely used, straightforward service integration pattern commonly employed for one-way, request-response communication between clients and servers. They are helpful when providing access to resources inside a Virtual Private Cloud (VPC), offering a secure and isolated environment for communication.
HTTP APIs: A lightweight variation of REST APIs, offering streamlined functionality at a lower cost. They primarily support HTTP proxy integration, where requests are forwarded directly to a backend HTTP endpoint without any intermediate processing or transformation.
WebSocket APIs: These support bidirectional communication, which is helpful in chat applications or systems that display asynchronous notifications.
Understanding these variations allows users to choose the API type that best aligns with their specific use cases and requirements.
API Gateway features
API Gateway offers a range of features to facilitate API development, deployment, and management. This is just a theoretical overview; we’ll discuss them in detail in the lessons ahead.
Service integration
Service integration in API Gateway refers to the process of connecting our API Gateway with backend services or resources. API Gateway serves as a central entry point for managing and securing APIs, and integrating it with various services allows us to seamlessly route, process, and transform requests and responses between clients and backend systems.
HTTP integration
We can configure an API Gateway to act as a proxy for requests directed to a third-party HTTP endpoint. For instance, in an application making calls to a third-party API on RapidAPI, embedding API keys directly in the client-side browser application poses a security risk. Also, managing multiple API invocations from the client can lead to cumbersome and disorganized code. Consequently, consolidating these interactions into a unified API within the API Gateway provides a more logical and secure approach.
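To make the benefit concrete, here is a minimal sketch of the gateway-side proxy idea: the third-party API key lives in the gateway's configuration, and the client never sees it. The endpoint URL, header name, and `send_upstream` callback are hypothetical stand-ins, not real API Gateway internals.

```python
# Hypothetical gateway-side proxy handler. The third-party key is stored
# server-side (e.g., in gateway configuration), never in the browser.
THIRD_PARTY_KEY = "rapidapi-secret-key"  # illustrative placeholder

def proxy_handler(client_request, send_upstream):
    """Forward a client request to a third-party endpoint, injecting the key."""
    upstream_request = {
        # Hypothetical third-party base URL plus the client's requested path.
        "url": "https://example-third-party.test" + client_request["path"],
        "headers": {
            **client_request.get("headers", {}),
            "X-RapidAPI-Key": THIRD_PARTY_KEY,  # added only on the server side
        },
    }
    return send_upstream(upstream_request)
```

The client sends a plain request to our own domain; the gateway handler enriches it with the secret before forwarding, so credentials never leak to the browser.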
Note: HTTP integration is different from HTTP APIs. HTTP integration allows us to integrate our API Gateway endpoint with an existing HTTP endpoint or web service, whereas HTTP APIs are a lightweight alternative to REST APIs.
Throttling
Throttling in the context of APIs refers to limiting the rate at which clients or users can request an API. It is a mechanism implemented to control the amount of incoming traffic to prevent abuse, ensure fair usage, and protect the API infrastructure from being overwhelmed by excessive requests.
API Gateway allows us to define throttling with two components: the rate and the burst. The rate determines the maximum average number of requests per second, while the burst defines the maximum number of concurrent requests.
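The rate-and-burst semantics described above are commonly implemented with a token bucket. The sketch below is an illustrative model of that algorithm, not API Gateway's actual internals: the bucket holds up to `burst` tokens and refills at `rate` tokens per second.

```python
import time

class TokenBucket:
    """Illustrative token bucket: refills at `rate` tokens/second, holds `burst`."""

    def __init__(self, rate, burst):
        self.rate = rate            # steady-state requests per second
        self.burst = burst          # bucket capacity (maximum burst)
        self.tokens = float(burst)  # start full
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens accrued since the last call, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request would be throttled (API Gateway returns HTTP 429)

bucket = TokenBucket(rate=10, burst=5)
results = [bucket.allow() for _ in range(6)]
```

With a burst of 5, the first five back-to-back requests succeed and the sixth is rejected until tokens refill.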
Throttling within API Gateway operates across various levels. Here are the different levels of throttling:
Regional throttling for all accounts: In each AWS region, specific limits are set for the total number of API calls accepted. This constitutes a hard limit designed to enhance the resilience of applications in the event of a potential attack on the AWS region.
Throttling for the AWS region and account: Each AWS account is subject to a soft limit on API Gateway usage within a given AWS region. This limit applies to all API calls the AWS account manages within that specific AWS region.
Understanding these levels of throttling is essential for effectively managing API usage within the AWS infrastructure, ensuring optimal performance, and maintaining security under varying conditions.
Caching
Caching in API Gateway involves storing the response of an API endpoint so that subsequent identical requests can be served directly from the cache without hitting the backend server. This can significantly improve response times and reduce the load on backend resources, as the API Gateway doesn't have to repeat the backend computations.
Here are some of the key cache settings for an API Gateway:
Cache TTL (Time-to-Live): Cache TTL determines the duration for which a cached response remains valid. After the TTL expires, the cached response is considered stale, and the next request triggers a fetch from the backend to refresh the cache.
Cache size: API Gateway provides a cache capacity that defines the maximum number of entries that can be stored in the cache. If the cache reaches its capacity, older entries may be evicted to make room for new ones.
Cache status codes: We can configure which HTTP status codes should be cached. For example, we can configure API Gateway to only cache successful responses with 2xx status codes and exclude error responses.
Edge-based API
API Gateway is a regional service. However, we can access an API across regions. For example, if we have an API Gateway in N. Virginia, it will still respond to requests from Frankfurt, but with a delay in the response.
To mitigate this network latency, we place the API Gateway close to the client. The global infrastructure of the AWS cloud provides Points of Presence (PoPs) at edge locations. These edge locations are closer to the client than the region and carry the request over the strong network backbone of AWS. The PoPs, though not as powerful as regional data centers, are capable of hosting an API.
Security
Publicly accessible web applications are vulnerable to cyber attacks; even a niche application can attract many prying eyes. API Gateway provides several features that can help safeguard the API. Some of these are discussed below:
Authentication and authorization: API Gateway offers multiple authentication mechanisms to control and secure access to the APIs, such as API keys, AWS Identity and Access Management (IAM) roles, Amazon Cognito user pools, JWT authorizers, and custom Lambda authorizers, also called custom authorizers. With wise implementation, we can fortify our API services to ensure they remain secure against breaches.
Data encryption: API Gateway encrypts data at rest and in transit. It supports SSL and TLS to secure communication between clients and the API Gateway endpoint. Furthermore, API Gateway offers SSL offloading, a process in which the SSL/TLS encryption and decryption tasks are moved from a web server to a specialized device or a component within a network infrastructure to relieve the web server from the resource-intensive tasks of handling SSL/TLS encryption and decryption.
Data model and validation: We can define and enforce a data model for the input request. API Gateway validates this model before actually passing it into our service. This forms a protective layer, guarding our services against stray invocation attempts.
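API Gateway defines these data models with JSON Schema; the sketch below is a deliberately simplified stand-in that captures the idea of rejecting malformed requests before they reach the backend. The `ORDER_MODEL` fields are hypothetical examples.

```python
# Hypothetical request model: field name -> (expected type, required?)
ORDER_MODEL = {
    "item_id": (str, True),
    "quantity": (int, True),
    "note": (str, False),
}

def validate(payload, model):
    """Return a list of validation errors; an empty list means the request is valid."""
    errors = []
    for field, (expected_type, required) in model.items():
        if field not in payload:
            if required:
                errors.append(f"missing required field: {field}")
            continue
        if not isinstance(payload[field], expected_type):
            errors.append(f"field {field} must be {expected_type.__name__}")
    for field in payload:
        if field not in model:
            errors.append(f"unexpected field: {field}")
    return errors
```

A gateway performing this check returns a 400-style error to the client immediately, so the backend service never sees the stray invocation.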
Logging and monitoring: We can enable logging using CloudWatch Logs to get insights into API usage, errors, and potential security incidents. To act on these insights, we can configure CloudWatch alarms and send out notifications. In addition, through its integration with the CloudWatch and X-Ray services, API Gateway provides an elegant framework for tracing and debugging individual API requests and responses.
Note: We can assign quotas to individual users by using API keys. For example, we can specify that each API key can authenticate up to 100 daily requests.
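The per-key daily quota in the note can be modeled with a simple counter keyed by API key and calendar day. This is an illustrative sketch of the bookkeeping, not API Gateway's usage-plan implementation:

```python
from datetime import date

class DailyQuota:
    """Sketch of per-API-key daily quotas (e.g., 100 requests per key per day)."""

    def __init__(self, limit):
        self.limit = limit
        self.usage = {}  # (api_key, day) -> request count

    def allow(self, api_key):
        key = (api_key, date.today())
        count = self.usage.get(key, 0)
        if count >= self.limit:
            return False  # quota exhausted: the gateway would reject the request
        self.usage[key] = count + 1
        return True
```

Because the counter is keyed by `(api_key, day)`, each key gets an independent allowance that resets with the calendar date.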
Deployment stages and staging environments
API Gateway facilitates non-disruptive deployments, allowing seamless deployment of newer API versions without impacting the existing ones. This streamlines the development cycle, promoting faster and more frequent deployments.
A staging environment allows us to test a deployment stage before pushing it to production, validate changes to our APIs in a controlled environment, and identify and fix issues early in the development lifecycle. Each stage represents a separate environment where we can deploy and test our API changes independently. We can also configure stage variables to define environment-specific configuration values, such as backend URLs, Lambda function ARNs, or integration settings.
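Stage variables behave like placeholders that are resolved per stage at request time. The sketch below models that substitution with Python's `string.Template`; the stage names and backend URLs are hypothetical examples, and the real service uses its own `${stageVariables.name}` syntax.

```python
import string

# Hypothetical per-stage configuration, analogous to API Gateway stage variables.
STAGE_VARIABLES = {
    "dev":  {"backend_url": "https://dev.internal.example.com"},
    "prod": {"backend_url": "https://api.example.com"},
}

def resolve(template, stage):
    """Substitute ${...} placeholders with the given stage's variable values."""
    return string.Template(template).substitute(STAGE_VARIABLES[stage])

integration_uri = resolve("${backend_url}/orders", "dev")
```

The same API definition can thus point at different backends in `dev` and `prod` without changing the template itself.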
Best practices
Here are some of the best practices when deploying an API Gateway:
Use staging environments: Utilize staging environments to test and validate API configurations before deploying to production. Staging environments allow us to catch issues early and ensure a smooth transition to production.
Implement API throttling: Configure appropriate throttling settings to control the rate of incoming requests. Throttling helps prevent abuse, ensures fair usage, and protects backend resources from being overwhelmed.
Secure APIs with API keys and IAM: Implement security measures, such as API keys or AWS Identity and Access Management (IAM) roles, to control access to our APIs. Use API keys for public APIs and IAM roles for internal or AWS-specific access.