DynamoDB · AWS · TypeScript · Lambda | 10 Min Read

How to Save Money by Using Compression to Store Attribute Values in DynamoDB

AWS DynamoDB places size constraints on us to ensure the platform remains fast. We can compress our attribute values to meet these constraints as well as save money!

AWS DynamoDB is a fully managed NoSQL database that offers fast and scalable performance regardless of the size of your application. To make this possible, DynamoDB places some constraints on you as the developer/user. The two key constraints you need to pay attention to are:

  1. Any individual item in a table cannot exceed 400 KB in size.
  2. Query and Scan operations each read at most 1 MB of data per request (after that, you’ll need to paginate to retrieve the rest; see the sketch below).
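
As a quick illustration of that second limit, here’s a minimal sketch of paginating a Scan with `LastEvaluatedKey`, using `ScanCommand` from `@aws-sdk/lib-dynamodb` and the same client setup we’ll use later in this post (`scanAll` is a hypothetical helper, not part of the example app):

import { DynamoDB } from "@aws-sdk/client-dynamodb";
import { ScanCommand } from "@aws-sdk/lib-dynamodb";

const dynamodb = new DynamoDB({});

// Hypothetical helper: keep scanning until DynamoDB stops returning a
// LastEvaluatedKey, i.e. until every ~1 MB page has been read.
const scanAll = async (tableName: string) => {
  const items: Record<string, any>[] = [];
  let lastEvaluatedKey: Record<string, any> | undefined;

  do {
    const { Items, LastEvaluatedKey } = await dynamodb.send(
      new ScanCommand({
        TableName: tableName,
        ExclusiveStartKey: lastEvaluatedKey,
      })
    );
    items.push(...(Items ?? []));
    lastEvaluatedKey = LastEvaluatedKey;
  } while (lastEvaluatedKey);

  return items;
};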

This means that if your application handles large amounts of data, it’s very easy to hit these limits without realizing it, so it’s important to consider them when designing our application and deciding how to store data.

Besides meeting these constraints, there is another reason to keep our items small: cost. We pay for every Write/Read Capacity Unit (WCU/RCU) we consume, as well as for the total amount of storage the database uses. So, in short, the less space we use overall and per item, the more money we save in the long run!
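
To make that concrete, DynamoDB’s pricing model maps item size to capacity units roughly like this: a standard write costs 1 WCU per 1 KB of item size (rounded up), and an eventually consistent read costs 0.5 RCU per 4 KB (rounded up). A minimal sketch (the helper names here are my own, purely for illustration):

const KB = 1024;

// Hypothetical helper: WCUs consumed by a single standard write of an item this size.
const estimateWriteUnits = (itemSizeBytes: number): number =>
  Math.ceil(itemSizeBytes / KB);

// Hypothetical helper: RCUs consumed by a single eventually consistent read.
const estimateReadUnits = (itemSizeBytes: number): number =>
  Math.ceil(itemSizeBytes / (4 * KB)) * 0.5;

console.log(estimateWriteUnits(32 * KB)); // => 32 WCUs
console.log(estimateReadUnits(32 * KB)); // => 4 RCUs

So shrinking a 32 KB item down to 8 KB cuts the write cost by roughly 4x, an effect we’ll see for ourselves later in this post.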

But luckily, AWS has already thought about this and lists some best practices to follow when working with large items in DynamoDB. Today, we’ll be taking a closer look at one of them in particular: compression. By the end of this post, you’ll have a better understanding of why it’s useful to compress attribute values on DynamoDB items, as well as how to do it, with an example application.

Let’s get into it.

Defining Our Infrastructure

For our example, we’re going to build a small blog-like application that stores the author, title, content, and date for each item. This is a helpful example because blog posts can quickly grow in size as they grow in length, but the information covered in this post will apply to any project using DynamoDB.

So, to get started, create a new CDK project, and then in the `*-stack.ts` file in the `lib` directory, define a new DynamoDB table using the below code.

./lib/*-stack.ts
// 👇 Define the DynamoDB Instance
const dbTable = new Table(this, `DynamoDBTable`, {
  partitionKey: { name: "author", type: AttributeType.STRING },
  sortKey: { name: "title", type: AttributeType.STRING },
  timeToLiveAttribute: "expires",
  removalPolicy: RemovalPolicy.DESTROY,
  billingMode: BillingMode.PAY_PER_REQUEST,
});

Then, with our new table defined, we want to define two Lambda functions, one for writing to the table and one for reading from it, so let’s define them next.

./lib/*-stack.ts
// 👇 Define the Write Lambda
const writeHandler = new NodejsFunction(this, "WriteHandler", {
  runtime: Runtime.NODEJS_18_X,
  entry: "./resources/write-handler.ts",
  handler: "handler",
  timeout: Duration.seconds(30),
  environment: {
    TABLE_NAME: dbTable.tableName,
  },
});

// 👇 Define the Read Lambda
const readHandler = new NodejsFunction(this, "ReadHandler", {
  runtime: Runtime.NODEJS_18_X,
  entry: "./resources/read-handler.ts",
  handler: "handler",
  timeout: Duration.seconds(30),
  environment: {
    TABLE_NAME: dbTable.tableName,
  },
});

Finally, the last thing we need to do in our `*-stack.ts` file is grant the Lambda functions permission to perform their relevant interactions with the database. We can do this by adding two more lines of code at the bottom of the file.

./lib/*-stack.ts
// 👇 Give write permissions to lambdas
dbTable.grantWriteData(writeHandler);

// 👇 Give read permissions to lambdas
dbTable.grantReadData(readHandler);

And that’s it, all of the infrastructure for our test is defined; we just need to write our lambda code, so let’s do that next.

Lambda Code

Let’s start with the `write-handler.ts` lambda. Create a new file at `./resources/write-handler.ts` and add in the below code.

./resources/write-handler.ts
import { DynamoDB } from "@aws-sdk/client-dynamodb";
import { PutCommand } from "@aws-sdk/lib-dynamodb";
import { loremIpsum } from "lorem-ipsum";
import { gzipSync } from "zlib";

const dynamodb = new DynamoDB({});

export const handler = async () => {
  const { TABLE_NAME } = process.env;

  const date = new Date();
  const todayISO = date.toISOString().split("T")[0];

  const text = loremIpsum({
    count: 100,
    units: "paragraph",
    format: "plain",
  });

  const compressedText = gzipSync(text);

  // Uncompressed write to DB
  const { ConsumedCapacity: uncompressedConsumedCapacity } =
    await dynamodb.send(
      new PutCommand({
        TableName: TABLE_NAME,
        Item: {
          author: "cmurphy",
          title: "uncompressed",
          content: text,
          date: todayISO,
        },
        ReturnConsumedCapacity: "TOTAL",
      })
    );

  // Compressed write to DB
  const { ConsumedCapacity: compressedConsumedCapacity } = await dynamodb.send(
    new PutCommand({
      TableName: TABLE_NAME,
      Item: {
        author: "cmurphy",
        title: "compressed",
        content: compressedText,
        date: todayISO,
      },
      ReturnConsumedCapacity: "TOTAL",
    })
  );

  /* eslint-disable no-console */
  console.log(
    `UNCOMPRESSED: Consumed WCU's => ${uncompressedConsumedCapacity?.CapacityUnits}`
  );
  console.log(`UNCOMPRESSED: data size ~ ${Math.round(text.length / 1024)} KB`);
  console.log(
    `COMPRESSED: Consumed WCU's => ${compressedConsumedCapacity?.CapacityUnits}`
  );
  console.log(
    `COMPRESSED: data size ~ ${Math.round(compressedText.length / 1024)} KB`
  );
  /* eslint-enable no-console */
};

In this lambda, we generate a large block of text using the `lorem-ipsum` NPM package and then store it in the database twice: once as the raw, uncompressed version and once as a compressed version created with `gzipSync` from `zlib`. We then log the size of the data for both versions, as well as the WCUs consumed writing each one to DynamoDB. These console logs are what we’ll refer to in the next section when we run our code and inspect the results in AWS CloudWatch.

But before we get to that, let’s finish defining our lambda functions by creating the `read-handler.ts`. Create a new file at `./resources/read-handler.ts` and add in the below code.

./resources/read-handler.ts
import { DynamoDB } from "@aws-sdk/client-dynamodb";
import { GetCommand } from "@aws-sdk/lib-dynamodb";
import { gunzipSync } from "zlib";

const dynamodb = new DynamoDB({});

export const handler = async () => {
  const { TABLE_NAME } = process.env;

  // Uncompressed read from DB
  const {
    ConsumedCapacity: uncompressedConsumedCapacity,
    Item: uncompressedItem,
  } = await dynamodb.send(
    new GetCommand({
      TableName: TABLE_NAME,
      Key: {
        author: "cmurphy",
        title: "uncompressed",
      },
      ReturnConsumedCapacity: "TOTAL",
    })
  );

  // Compressed read from DB
  const { ConsumedCapacity: compressedConsumedCapacity, Item: compressedItem } =
    await dynamodb.send(
      new GetCommand({
        TableName: TABLE_NAME,
        Key: {
          author: "cmurphy",
          title: "compressed",
        },
        ReturnConsumedCapacity: "TOTAL",
      })
    );

  const compressedTextOutput = gunzipSync(
    compressedItem?.content as Buffer
  ).toString();

  /* eslint-disable no-console */
  console.log(
    `UNCOMPRESSED: Consumed RCU's => ${uncompressedConsumedCapacity?.CapacityUnits}`
  );
  console.log(
    `UNCOMPRESSED: length of text => ${
      (uncompressedItem?.content as string)?.length
    } characters`
  );
  console.log(
    `COMPRESSED: Consumed RCU's => ${compressedConsumedCapacity?.CapacityUnits}`
  );
  console.log(
    `COMPRESSED: length of text => ${compressedTextOutput?.length} characters`
  );
  /* eslint-enable no-console */
};

The `read-handler.ts` lambda is much the same as the `write-handler.ts` one, but this time, instead of writing data to the database, we’re reading it, and instead of compressing the data, we’re decompressing it.

We’re also adding some more console logs at the end of the lambda to show the RCUs consumed reading the data, as well as the length of the text content, to prove that no data has been lost in the compression/decompression round trip compared to the never-compressed version.
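
If you want to convince yourself of that round trip in isolation, here’s a minimal standalone sketch (the sample string is a placeholder):

import { gzipSync, gunzipSync } from "zlib";

const original = "Some long blog post content...";

// Compress to a Buffer (as the write lambda does), then decompress
// and decode back to a string (as the read lambda does).
const restored = gunzipSync(gzipSync(original)).toString();

console.log(restored === original); // => true, gzip is lossless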

Finally, with everything defined and ready to go, let’s deploy our stack using `cdk deploy`. After accepting any prompts given to us, our new DynamoDB table and Lambda functions will be deployed, so let’s move on to testing them and seeing the compression in action.

Running Our Tests

Write

The first test we’re going to run is writing data into the database. To do this, trigger your lambda with the AWS CLI using the command `aws lambda invoke --function-name <LAMBDA_NAME> --invocation-type Event response.json`; make sure to switch out `<LAMBDA_NAME>` with the name of your deployed write lambda (the trailing `response.json` is the output file the CLI requires). If you don’t know the name of your function, you can find it by running the command `aws lambda list-functions`.

With your lambda triggered, head over to AWS CloudWatch in the AWS Console and inspect the logs under “Log groups” in the left sidebar. You should see something similar to the output below, although the numbers will likely differ because `lorem-ipsum` generates a different length of text each run.

UNCOMPRESSED: Consumed WCU's => 34
UNCOMPRESSED: data size ~ 32 KB
COMPRESSED: Consumed WCU's => 9
COMPRESSED: data size ~ 8 KB

So, as you can see, the uncompressed version consumed ~275% more WCUs than the compressed version (34 vs 9, in line with writes costing 1 WCU per 1 KB), which is a lot and could prove costly in the long run.

Read

Now, with our data written into the database, let’s try reading it back out and see how many RCUs are consumed in the process. To trigger the `read-handler.ts` lambda, we’re going to use the same command we did for the `write-handler.ts` one, but switch out `<LAMBDA_NAME>` for the read lambda’s name. After triggering the lambda, inspect its logs on AWS CloudWatch again and you should see something similar to the output below.

UNCOMPRESSED: Consumed RCU's => 4
UNCOMPRESSED: length of text => 32533 characters
COMPRESSED: Consumed RCU's => 1.5
COMPRESSED: length of text => 32533 characters

It’s not quite as big a difference this time, but it’s still ~165% more RCUs for the uncompressed version (4 vs 1.5), which would still add up over the long term, and at the end of the day, it’s always good to cut costs. We can also see that no data was lost in the compression/decompression process, which is great news!

Global Secondary Indexes (GSIs)

Before closing out, there is one last test to run, and that is using Global Secondary Indexes (GSIs) on DynamoDB. GSIs are a way to query your data efficiently using attributes that aren’t the table’s originally defined partition and sort keys. You can read more about them here if you’re interested. To define a GSI for our table, go back into your `*-stack.ts` file in the `lib` directory and add the below code after the original DynamoDB table definition.

./lib/*-stack.ts
// 👇 Add Global Secondary Index for the dbTable
dbTable.addGlobalSecondaryIndex({
  indexName: "GSI1",
  partitionKey: { name: "date", type: AttributeType.STRING },
  sortKey: { name: "author", type: AttributeType.STRING },
});

This code defines a new GSI on our table called `GSI1`, with the `partitionKey` of the GSI being `date` and the `sortKey` being `author`. This means we could now query using those attributes instead of the ones we originally defined; a sketch of what that would look like follows below. However, for our purposes, the querying itself isn’t important. What we’re interested in is how AWS manages writes to DynamoDB tables that have GSIs.
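
For completeness, a query against `GSI1` would look roughly like this (`postsByDate` is a hypothetical helper; note that `date` is a DynamoDB reserved word, so it needs an expression attribute name):

import { DynamoDB } from "@aws-sdk/client-dynamodb";
import { QueryCommand } from "@aws-sdk/lib-dynamodb";

const dynamodb = new DynamoDB({});

// Hypothetical helper: fetch every post written on a given date via the GSI.
const postsByDate = async (date: string) => {
  const { Items } = await dynamodb.send(
    new QueryCommand({
      TableName: process.env.TABLE_NAME,
      IndexName: "GSI1",
      KeyConditionExpression: "#date = :date",
      ExpressionAttributeNames: { "#date": "date" },
      ExpressionAttributeValues: { ":date": date },
    })
  );
  return Items;
};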

So, let’s test this out by deploying our stack again with `cdk deploy` and then triggering the write and read lambdas again using the same commands you used earlier. Then head back to CloudWatch and check out the new logs that have been generated.

Write Lambda

UNCOMPRESSED: Consumed WCU's => 64
UNCOMPRESSED: data size ~ 32 KB
COMPRESSED: Consumed WCU's => 18
COMPRESSED: data size ~ 8 KB

Read Lambda

UNCOMPRESSED: Consumed RCU's => 4.5
UNCOMPRESSED: length of text => 33082 characters
COMPRESSED: Consumed RCU's => 1.5
COMPRESSED: length of text => 33082 characters

Looking at the numbers, not much has changed for the `read-handler.ts` lambda; the results are pretty much the same, given we’re reading slightly more text than last time because more text was generated. Where it gets interesting is the `write-handler.ts` lambda: we’re writing a similar amount of text (+/- 500 words) but consuming twice as many WCUs. What’s going on here?

So, this is actually because of the GSI we implemented and is the reason I wanted to include this section. AWS documents it more here, but what’s happening is that because the items we’re writing contain the GSI’s key attributes, each write to the table also has to update the index, and that requires two writes: one to remove the old item from the index and one to add the new item into the index. These extra index writes are what lead to roughly twice as many WCUs being consumed as when we had no GSI.

Destroying Your Stack

Once you’ve finished with your stack and no longer need it, make sure to destroy it from your AWS account using the command `cdk destroy`. This ensures all of the provisioned resources are removed and you’re not charged for resources you’re no longer using.

Closing Thoughts

In this post, we covered why compression is a useful tool for saving money when working with DynamoDB. We also looked at how to implement it when writing and reading items using `zlib`, as well as the percentage differences between uncompressed and compressed reads and writes.

Finally, if you’re interested in seeing the entire code for this CDK project, you can see it here on my GitHub repo along with all of my other CDK tutorials and examples.

And, until next time.

Thank you for reading.


