The Bug That Kept Me Up at Night
A few months into building Jottings, I noticed something strange in the logs. Users were successfully creating sites—I could see them in the dashboard—but their "total sites" counter wasn't incrementing. Create a site, refresh the page, and it would show zero sites. Sometimes it'd update a few seconds later. Sometimes it wouldn't update at all.
This was a classic case of an incomplete mental model meeting production reality.
I'd written the site creation logic something like this:
- Create the site in the database
- Increment the user's site counter
Simple enough, right? The problem was that these were two separate database operations. And in a serverless, distributed world, "separate" is code for "things can go wrong between them."
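In code, the naive version looked something like this (a simplified sketch, not the exact production handler; SITES_TABLE, USERS_TABLE, newSite, and user stand in for values the real handler already has):

import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand, GetCommand, UpdateCommand } from "@aws-sdk/lib-dynamodb";

const dynamoDBClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Step 1: write the new site record
await dynamoDBClient.send(new PutCommand({
  TableName: SITES_TABLE,
  Item: newSite
}));

// Step 2: read the current counter, then write it back incremented.
// These are separate requests with no tie back to step 1.
const { Item: owner } = await dynamoDBClient.send(new GetCommand({
  TableName: USERS_TABLE,
  Key: { userId: user.id }
}));

await dynamoDBClient.send(new UpdateCommand({
  TableName: USERS_TABLE,
  Key: { userId: user.id },
  UpdateExpression: "SET totalSites = :total",
  ExpressionAttributeValues: { ":total": (owner?.totalSites ?? 0) + 1 }
}));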
If something crashed between steps 1 and 2 (a Lambda timeout, a network blip, or just bad timing), you'd end up with an orphaned site and an out-of-sync counter. The user would see their site exist, but the dashboard would claim they had zero sites.
I needed atomic operations.
Understanding Race Conditions in Databases
Here's the fundamental problem with distributed systems: they're distributed. Your code runs on multiple machines, sometimes simultaneously. When you have two operations that depend on each other, there's a window of time where the system is in an inconsistent state.
In Jottings' case, the sequence was:
Time 1: Create site record ✓
[User's totalSites field still unchanged]
Time 2: Increment totalSites counter ✓
[System is now consistent]
But what happens if Lambda crashes between Time 1 and Time 2? Or what if two creation requests hit at the same time, both reading totalSites: 5, and both trying to increment it to 6? You end up with data that doesn't match reality.
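Laid out the same way as the timeline above, the concurrent case looks like this:

Time 1: Request A reads totalSites: 5
Time 2: Request B reads totalSites: 5
Time 3: Request A writes totalSites: 6 ✓
Time 4: Request B writes totalSites: 6 ✓
[Two sites were created, but the counter only went up by one]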
This isn't a theoretical problem. It happens. And the serverless architecture that makes Jottings so cost-effective—where functions are ephemeral and can be interrupted—makes it even more likely.
DynamoDB Transactions to the Rescue
DynamoDB has a feature specifically designed for this: TransactWriteCommand. It lets you bundle multiple database operations into a single atomic transaction. Either all of them succeed, or all of them roll back. No middle ground.
Here's the pattern I implemented:
const command = new TransactWriteCommand({
  TransactItems: [
    {
      // Write the new site record, but only if one with this id doesn't already exist
      Put: {
        TableName: SITES_TABLE,
        Item: newSite,
        ConditionExpression: "attribute_not_exists(id)"
      }
    },
    {
      // Bump the user's counter in the same atomic transaction
      Update: {
        TableName: USERS_TABLE,
        Key: { userId: user.id },
        UpdateExpression: "ADD totalSites :one",
        ExpressionAttributeValues: { ":one": 1 }
      }
    }
  ]
});

await dynamoDBClient.send(command);
What's happening here:
- Put the new site into the sites table (with a condition that it doesn't already exist)
- Update the user's totalSites counter
- Both operations happen together, atomically
If either one fails—whether it's a validation error, a duplicate key, or anything else—the entire transaction rolls back. Both operations either succeed or fail as a unit.
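On the failure path, the SDK surfaces the cancelled transaction as a TransactionCanceledException, whose CancellationReasons array tells you which item caused it and why. You'd catch it roughly like this (a minimal sketch):

import { TransactionCanceledException } from "@aws-sdk/client-dynamodb";

try {
  await dynamoDBClient.send(command);
} catch (err) {
  if (err instanceof TransactionCanceledException) {
    // One entry per TransactItem; "ConditionalCheckFailed" on the Put means the site id already existed
    console.error("Site creation rolled back:", err.CancellationReasons);
  }
  throw err;
}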
I also added the same pattern to site deletion (delete the site, decrement the counter) and applied it across all counter operations (totalSites, totalJots).
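The deletion transaction is the mirror image. Roughly (the siteId key shape here is illustrative, not lifted verbatim from the codebase):

const deleteCommand = new TransactWriteCommand({
  TransactItems: [
    {
      // Remove the site, but only if it actually exists
      Delete: {
        TableName: SITES_TABLE,
        Key: { id: siteId },
        ConditionExpression: "attribute_exists(id)"
      }
    },
    {
      // ADD with a negative value decrements the counter in the same atomic unit
      Update: {
        TableName: USERS_TABLE,
        Key: { userId: user.id },
        UpdateExpression: "ADD totalSites :minusOne",
        ExpressionAttributeValues: { ":minusOne": -1 }
      }
    }
  ]
});

await dynamoDBClient.send(deleteCommand);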
Why This Matters More Than It Seems
At first glance, this might seem like database-level pedantry. Who cares if a counter is off by one for a few seconds?
But the user experience is what matters. When a creator publishes their first site on Jottings, they want to see that reflected immediately. They want to know their work is there. A counter that says "0 sites" when they just created one creates doubt—did it actually work?—and that doubt cascades into distrust of the platform.
Beyond UX, data integrity compounds. If counters drift out of sync, they stay out of sync. My analytics queries become unreliable. I can't trust my own data about how many sites exist or how active my platform is. Every report I generate comes with an asterisk.
With atomic transactions, I can trust my data. When a site creation request completes, I know both the site exists and the counter is updated. No edge cases. No drift.
The Implementation: Flattening Your Data Model
One thing I learned: the way you structure your data matters for transactions. I originally had user stats nested inside a user object:
// Before: nested stats
{
  userId: "123",
  profile: { ... },
  stats: {
    totalSites: 5,
    totalJots: 42
  }
}
// This made transactions awkward—you'd be updating nested fields
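With that shape, the counter update inside the transaction has to reach into the nested map, something like this sketch (it assumes stats.totalSites already exists, since ADD only works on top-level attributes):

{
  Update: {
    TableName: USERS_TABLE,
    Key: { userId: user.id },
    // SET with arithmetic instead of ADD, because ADD can't target nested attributes
    UpdateExpression: "SET stats.totalSites = stats.totalSites + :one",
    ExpressionAttributeValues: { ":one": 1 }
  }
}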
DynamoDB handles nested updates, but it's verbose. I refactored to flatten the structure:
// After: top-level counters
{
  userId: "123",
  profile: { ... },
  totalSites: 5,
  totalJots: 42
}
Now the ADD operation in my transaction is clean: UpdateExpression: "ADD totalSites :one". And more importantly, any DynamoDB operation—read, write, or transaction—is simpler and less error-prone.
Applying This Across Jottings
Once I had the pattern working, I applied it everywhere:
- Site creation: Atomic put + counter increment
- Site deletion: Atomic delete + counter decrement
- Jot creation/deletion: Same pattern for totalJots
- Subscription changes: Atomic subscription update + counter updates
The consistency rippled through the system. I could now confidently display user stats, knowing they'd always match the underlying data.
The Bigger Lesson
Building distributed systems is hard. The promise of serverless—infinite scale, pay-for-what-you-use, no servers to manage—comes with subtle complexity. You can't assume operations are atomic unless the system guarantees it.
DynamoDB's transaction feature is one way to make that guarantee explicit. When you need multiple pieces of data to stay in sync, use transactions. When you're tempted to do sequential operations, stop and ask: what if something fails between them?
For a bootstrapped microblogging platform, reliability matters. Your users might be casual bloggers, but they're placing trust in you to keep their writing safe and their stats accurate. Transactions help honor that trust.
If you're building on DynamoDB (or any distributed database), I'd recommend making atomic transactions one of your early architectural decisions, not something you bolt on later when consistency problems surface. It's easier to design for atomicity than to retrofit it.
Building Jottings has been a journey in learning that the simplest architectures often hide the most complexity. Transactions are one tool in the belt. If you're curious about other technical decisions behind Jottings, I write about them as I encounter them. Feel free to create your own site and follow along.