why even write the messages to SQS in the first place if they are just going to be discarded? The initial plan was to write the webhook body to DynamoDB and only write a single message to SQS pointing to the primary key. However, after computing the cost of the DynamoDB write throughput we would need to support this, we elected to continue posting to SQS and discarding messages we no longer need to process.
The other factor against posting the entire body to DynamoDB is that some platforms send only the ID of the item that changed, whereas others send the full body of the item. This can result in a multi-megabyte body, which exceeds DynamoDB's 400 KB maximum item size.
Multi-megabyte webhook bodies also posed a problem for SQS, which has a maximum message size of 256 KB. We don't receive very many of these large hooks (less than 0.02%), so we strip the body down to the bare essentials (the ID and some metadata) and signal the downstream processors that they will need to back up this item directly from the platform.
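The strip-and-signal step above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: the field names (`id`, `event_type`, `truncated`) are hypothetical placeholders for whatever ID and metadata the real schema carries.

```python
import json

# SQS hard limit: 256 KiB per message.
SQS_MAX_BYTES = 256 * 1024

def prepare_sqs_message(webhook_body: dict) -> dict:
    """Return the webhook body as-is if it fits in an SQS message;
    otherwise strip it down to the ID and some metadata and flag it
    so downstream processors know to fetch the item from the platform."""
    encoded = json.dumps(webhook_body).encode("utf-8")
    if len(encoded) <= SQS_MAX_BYTES:
        return webhook_body
    return {
        "id": webhook_body["id"],             # assumed ID field
        "event_type": webhook_body.get("event_type"),  # assumed metadata
        "truncated": True,  # signal: re-fetch this item from the platform
    }
```

Small payloads pass through unchanged; only the rare oversized ones take the slow path of a direct fetch from the source platform.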
That’s the main architecture for aggregating the webhooks. Along the way we realised some other key benefits as well. Let’s look at those.
Change 3 – Lambda Optimizations
I know what you’re thinking.