Event-Driven Architecture for Small Business: S3 + Lambda + DynamoDB
Most small businesses run on reactions. A customer uploads a document. Someone manually extracts data. A CSV gets emailed around. Mistakes happen. Your team gets slower as you grow.
Event-driven architecture fixes this. Not the way enterprise teams talk about it—with Kafka clusters and stream processors. The way it actually works for you: S3 triggers Lambda, Lambda processes, Lambda writes to DynamoDB. That’s it. And it scales from “$0 cost last month” to enterprise volume without rewriting a single line.
The Pattern That Works
Here’s what we build at Three Moons Network, over and over:
- Something lands in S3. A CSV upload. A PDF from an integration. A webhook dump.
- S3 event notification triggers Lambda. Automatically. No polling. No cron job.
- Lambda processes. Validates, transforms, calls Claude, parses the result.
- Lambda writes to DynamoDB. Structured data, queryable, indexable. Or writes back to S3 for the next stage.
- CloudWatch logs and metrics track it. You know what succeeded and what failed.
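In code, the whole pattern is one small handler. Here’s a minimal sketch, assuming a hypothetical documents table and a stubbed-out process_document; the event parsing and the DynamoDB write are the standard boto3 calls.

```python
import urllib.parse

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("documents")  # assumed table name


def process_document(body: bytes) -> dict:
    # Stub: validate, transform, call Claude -- your business logic goes here
    return {"size_bytes": len(body)}


def handler(event, context):
    # S3 delivers one or more records per event notification
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded (spaces become '+')
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        result = process_document(body)

        # One structured, queryable record per document
        table.put_item(Item={"pk": key, **result})
```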
No servers to patch. No capacity planning. You pay for what you use—literally. 10 documents a month? You’re paying cents. 10,000 documents a month? Still under a few dollars if your Lambda is efficient.
Why This Pattern Wins
Resilience. S3 retries failed event notifications. Lambda retries failed async invocations automatically (twice, by default). If your code keeps crashing, the event lands in the dead-letter queue you’ve configured. You can replay it after you fix the bug.
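Wiring up that dead-letter queue is one API call. A sketch with boto3, assuming an existing SQS queue (the ARN below is a placeholder) and that your function’s role can send to it:

```python
import boto3

lambda_client = boto3.client("lambda")

# Send events that exhaust their retries to SQS (ARN is a placeholder)
lambda_client.update_function_configuration(
    FunctionName="process-document",  # assumed function name
    DeadLetterConfig={
        "TargetArn": "arn:aws:sqs:us-east-1:123456789012:doc-pipeline-dlq"
    },
)
```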
Scaling. Lambda parallelizes across invocations. If you get 1,000 document uploads at once, you get up to 1,000 Lambda invocations running in parallel (the default account concurrency limit). Not queuing. Not waiting. Running. AWS handles the math.
Cost clarity. You pay for Lambda execution time (billed in 1ms increments), S3 storage, and DynamoDB reads/writes, and that’s mostly it. No minimum spend. No idle servers. For a small business doing 50–200 automated jobs per day, this lands at $15–$50/month in compute. For some projects: under $5.
Debugging. Every invocation logs to CloudWatch. Every error is timestamped and traceable. You see exactly which document triggered the failure and what the Lambda code did.
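One habit makes that tracing even easier: log a single JSON line per document, so CloudWatch Logs Insights can filter on any field. A minimal sketch (the field names are just a suggestion):

```python
import json


def log_event(status: str, key: str, **fields):
    # print() goes straight to CloudWatch Logs in Lambda
    print(json.dumps({"status": status, "s3_key": key, **fields}))


# Inside your handler:
#   log_event("ok", key, duration_ms=142)
#   log_event("error", key, error=str(exc))
```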
Real Example: Expense Report Processing
A law firm gets 30 expense reports a month as PDFs. Today: someone reads each one, extracts vendor, amount, category, date. Enters it into accounting software. Takes 3 hours. Mistakes happen.
With event-driven:
- Lawyer drops a PDF into an S3 folder: s3://expensereports/uploads/
- S3 triggers Lambda.
- Lambda sends the PDF to Claude through the Anthropic API: “Extract vendor, amount, category, date from this PDF.”
- Claude returns structured JSON.
- Lambda validates (amount > 0, category in allowed set, date is valid).
- Lambda writes to DynamoDB table expenses: one record per report.
- A separate Lambda (or schedule) reads DynamoDB daily and pushes to the accounting system via API.
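The extraction step is the only interesting code. Here’s a sketch using the Anthropic Python SDK’s PDF support; the model ID, prompt, and category set are assumptions, and production code should handle the occasional malformed JSON reply:

```python
import base64
import json
from datetime import date

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
ALLOWED_CATEGORIES = {"travel", "meals", "filing-fees", "office", "other"}


def extract_expense(pdf_bytes: bytes) -> dict:
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model ID
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                {"type": "document",
                 "source": {"type": "base64",
                            "media_type": "application/pdf",
                            "data": base64.b64encode(pdf_bytes).decode()}},
                {"type": "text",
                 "text": 'Extract vendor, amount, category, date from this '
                         'expense report. Reply with JSON only, e.g. '
                         '{"vendor": "...", "amount": 0.0, '
                         '"category": "...", "date": "YYYY-MM-DD"}'},
            ],
        }],
    )
    expense = json.loads(response.content[0].text)

    # Validate before anything reaches DynamoDB
    if expense["amount"] <= 0:
        raise ValueError("amount must be positive")
    if expense["category"] not in ALLOWED_CATEGORIES:
        raise ValueError(f"unknown category: {expense['category']}")
    date.fromisoformat(expense["date"])  # raises ValueError if malformed

    return expense
```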
Total time for lawyer: upload PDF. Total time for you to build this: 4–6 hours. Total cost: near zero.
The firm’s accountant can now query DynamoDB (“show me all expenses over $500 this month”) or build a dashboard. The process is auditable, repeatable, and scalable to 300 reports a month with no code changes.
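That query is a few lines of boto3. A sketch, assuming amount is stored as a number and date as an ISO string (a scan is fine at 30 records a month; add an index when it isn’t):

```python
import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource("dynamodb").Table("expenses")

# "All expenses over $500 this month" -- the month prefix is a placeholder
response = table.scan(
    FilterExpression=Attr("amount").gt(500) & Attr("date").begins_with("2025-06")
)
for item in response["Items"]:
    print(item["vendor"], item["amount"], item["date"])
```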
Common Mistake: Over-Engineering
Teams often ask: “Should we use SNS to decouple S3 from Lambda? Should we add SQS to buffer traffic? Should we use EventBridge for more complex routing?”
For 5–50 person businesses: almost never. Here’s why:
- S3 → Lambda direct binding is simple and sufficient. It’s fully decoupled. One failure doesn’t block the next invocation.
- SQS adds latency and cost for low-volume use cases. You gain nothing.
- EventBridge is for complex, multi-service pipelines. If you’re talking to 5+ different services with branching logic, maybe. For most small business automations: premature complexity.
Build with S3 + Lambda + DynamoDB first. When you hit a real constraint—not a hypothetical—add the next layer.
Getting Started
You need:
- An S3 bucket. The event source.
- A Lambda function. The worker. Use Python 3.11+ with the Anthropic SDK or boto3.
- IAM role. Least-privilege: S3 read, DynamoDB write, CloudWatch logs, SNS (if you want error notifications).
- Terraform code. ~50 lines to wire it together. Or use CloudFormation. Or the AWS console if you’re just learning.
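For reference, the binding itself is two API calls, whichever tool makes them. A boto3 sketch (bucket, function name, and ARNs are placeholders):

```python
import boto3

lambda_client = boto3.client("lambda")
s3 = boto3.client("s3")

# 1. Allow S3 to invoke the function
lambda_client.add_permission(
    FunctionName="process-document",
    StatementId="allow-s3-invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::expensereports",
)

# 2. Fire the function on every new object under uploads/
s3.put_bucket_notification_configuration(
    Bucket="expensereports",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [{
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012"
                                 ":function:process-document",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [
                {"Name": "prefix", "Value": "uploads/"}
            ]}},
        }]
    },
)
```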
The first time, it takes a few hours. The second time, 30 minutes. By the tenth event-driven pipeline, you’re setting it up with templates.
Why It Matters for Small Business
Scaling from 2 founders to 20 people means automating the stuff that doesn’t scale. Expense reports. Invoice extraction. Data normalization. Customer onboarding workflows. These are all event-driven pipelines.
You don’t need a data engineering team. You don’t need Kafka. You need S3, Lambda, and DynamoDB. And that’s what wins for small businesses.