Performance Optimization
Learn how to optimize the performance of an S3 bucket.
Amazon Simple Storage Service (S3) is a cornerstone of cloud storage, offering unparalleled scalability, durability, and flexibility for storing vast amounts of data. However, as data volumes grow and applications demand higher performance, optimizing S3 becomes imperative to ensure efficient and cost-effective operations.
Three key factors govern the performance of an S3 bucket: the request rate the bucket can handle, the latency to receive the first byte of a response, and the throughput available for moving large amounts of data quickly.
AWS offers and encourages various strategies and techniques for enhancing S3 performance. From interpreting the insights provided by S3 performance metrics to applying advanced optimization tactics, organizations can use S3 to its fullest potential and deliver exceptional user experiences. Let's dive in to explore these tactics and strategies.
Prefixes in S3 buckets
Though the structure of a bucket is flat, S3 allows us to organize objects in a hierarchy using prefixes. Let's say we have a bucket named my-bucket and we have organized files in it as shown below:

```
my-bucket/
├── images/
│   ├── cat.png
│   └── dog.png
└── videos/
    └── man-running.mp4
```

Here, my-bucket/images is the prefix to get cat.png, and my-bucket/videos is the prefix to get man-running.mp4. Each prefix in an S3 bucket can handle 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second.
Since each prefix has a maximum request rate, we should avoid hot prefixes that multiple users access simultaneously. For instance, consider a banking application that stores transaction records for each customer ID as JSON objects at the end of every day. At first sight, one might use the pattern /daily_transactions/date/user_id/transactions to store these records. Now, as the day ends, multiple bankers start uploading data under the same date prefix, which only allows a maximum of 3,500 PUT requests per second.
To optimize performance, we need to reduce the dependency on the single date prefix. One approach is to swap user_id and date, modifying the pattern to /daily_transactions/user_id/date/transactions. Now, when the bankers start uploading data, the traffic is spread across multiple customer ID prefixes, letting us leverage the full PUT throughput of the bucket, as the sketch below shows.
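The following is a minimal sketch of this key design, assuming boto3 is installed and AWS credentials are configured; the bucket name, key pattern, and helper names are hypothetical:

```python
# A minimal sketch of prefix partitioning; bucket and key names are hypothetical.
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "my-bank-records"  # hypothetical bucket name


def transaction_key(user_id: str, date: str) -> str:
    # Leading with user_id spreads writes across many prefixes instead of
    # funneling every end-of-day upload into a single date prefix.
    return f"daily_transactions/{user_id}/{date}/transactions.json"


def upload_transactions(user_id: str, date: str, records: list) -> None:
    s3.put_object(
        Bucket=BUCKET,
        Key=transaction_key(user_id, date),
        Body=json.dumps(records).encode("utf-8"),
        ContentType="application/json",
    )


upload_transactions("user-42", "2024-05-01", [{"amount": 100, "type": "credit"}])
```

Because each user_id now begins a distinct prefix, concurrent end-of-day uploads no longer compete for a single prefix's 3,500 PUT requests per second.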
Slow down errors
S3 is designed for 99.999999999% (11 9's) of data durability and 99.99% availability, which translates to roughly 53 minutes of downtime per year. However, when the request rate to an S3 bucket grows beyond a certain limit, we may start experiencing 503 Slow Down errors. These errors indicate that we have areas of improvement.
To monitor our bucket for 503 errors, we can set up CloudWatch alarms. When an alarm fires, we can use S3 Storage Lens advanced metrics to identify which prefix is slowing down request responses, and S3 server access logs to pinpoint the user or application generating these 503s.
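As a sketch of that first step, the alarm below watches the 5xxErrors request metric for a bucket. It assumes request metrics are already enabled on the bucket; the alarm name, filter ID, and threshold are illustrative:

```python
# A minimal sketch: alarm on S3 5xx responses; names and threshold are illustrative.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="s3-slowdown-errors",  # hypothetical alarm name
    Namespace="AWS/S3",
    MetricName="5xxErrors",  # requires S3 request metrics enabled on the bucket
    Dimensions=[
        {"Name": "BucketName", "Value": "my-bucket"},
        {"Name": "FilterId", "Value": "EntireBucket"},  # hypothetical filter ID
    ],
    Statistic="Sum",
    Period=300,  # evaluate in 5-minute windows
    EvaluationPeriods=1,
    Threshold=100,  # illustrative threshold for 5xx responses per period
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)
```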
Maximizing throughput through parallelism
S3 is a distributed system, and AWS encourages scaling requests horizontally by issuing many parallel requests to S3 service endpoints. This approach spreads the traffic within S3 and across multiple paths over the network.
As the size of an object in an S3 bucket increases, so does the latency of a single request for it. To address this problem, we can download objects using byte ranges. Byte ranges in Amazon S3 refer to the ability to request specific portions of an object's data by specifying a byte range in the HTTP Range request header. This allows clients to retrieve only the portions of the object they need, and to fetch several ranges in parallel, rather than downloading the entire object in one request.
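The sketch below fetches an object in fixed-size chunks over parallel connections. The bucket, key, chunk size, and worker count are hypothetical choices:

```python
# A minimal sketch: parallel byte-range download; bucket and key are hypothetical.
from concurrent.futures import ThreadPoolExecutor

import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "my-bucket", "videos/man-running.mp4"
CHUNK = 8 * 1024 * 1024  # 8 MiB per range request


def fetch_range(start: int, end: int) -> bytes:
    # The Range header is inclusive on both ends, e.g. "bytes=0-8388607".
    resp = s3.get_object(Bucket=BUCKET, Key=KEY, Range=f"bytes={start}-{end}")
    return resp["Body"].read()


size = s3.head_object(Bucket=BUCKET, Key=KEY)["ContentLength"]
ranges = [(start, min(start + CHUNK, size) - 1) for start in range(0, size, CHUNK)]

# Download the ranges in parallel, then stitch them back together in order.
with ThreadPoolExecutor(max_workers=8) as pool:
    parts = list(pool.map(lambda r: fetch_range(*r), ranges))
data = b"".join(parts)
```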
Similarly, to upload large objects, use multipart uploads. Multipart upload in Amazon S3 is a feature that allows efficient uploading of large objects by breaking them into smaller parts. Each part is uploaded independently, and once all parts have been uploaded, they are combined to form the complete object. Uploading the parts over multiple parallel connections helps improve the throughput to a bucket.
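boto3's high-level transfer manager performs multipart uploads automatically once a file crosses a size threshold. A minimal sketch, with a hypothetical bucket and file name and illustrative tuning values:

```python
# A minimal sketch: multipart upload via boto3's transfer manager.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

config = TransferConfig(
    multipart_threshold=8 * 1024 * 1024,  # switch to multipart above 8 MiB
    multipart_chunksize=8 * 1024 * 1024,  # upload in 8 MiB parts
    max_concurrency=10,  # number of parts uploaded in parallel
)

# Bucket and file names are hypothetical.
s3.upload_file("backup.tar.gz", "my-bucket", "backups/backup.tar.gz", Config=config)
```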
Amazon S3 Transfer Acceleration
S3 Transfer Acceleration is an effective way to reduce the latency caused by geographic distance. It uses globally distributed edge locations to route data over the AWS network infrastructure instead of the public internet. Since the AWS backbone is purpose-built to carry traffic between regions, this path is much faster.
S3 Transfer Acceleration uses an optimized network path to route data. In general, the greater the distance, the better the speed improvement we experience. Therefore, it is ideal for use cases transferring data across continents.
One notable feature of Transfer Acceleration is that it only uses the accelerated route when it is likely to be faster than a regular S3 transfer, and we are only charged for transfers where acceleration provides a benefit. We can enable and disable S3 Transfer Acceleration on a bucket based on our requirements.
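As a sketch, assuming a hypothetical bucket name, we can enable acceleration on the bucket once and then route requests through the accelerate endpoint:

```python
# A minimal sketch: enable and use Transfer Acceleration; bucket name is hypothetical.
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time setup: enable Transfer Acceleration on the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="my-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Route subsequent requests through the accelerate endpoint
# (my-bucket.s3-accelerate.amazonaws.com).
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("report.pdf", "my-bucket", "reports/report.pdf")
```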