# Storage
You need object storage for your serverless app — file uploads, generated reports, media assets, data exports. S3 is the obvious choice, but wiring up buckets, event notifications, IAM policies, and Lambda triggers is tedious.
With `defineBucket` you declare the bucket once, and get a typed client, event processing, and automatic IAM wiring — all from a single export.
## A simple bucket
You want to store user uploads. Define the bucket and use it from an HTTP handler.
```typescript
import { defineBucket } from "effortless-aws";

export const uploads = defineBucket().build();
```

After deploy, you get an S3 bucket named `{project}-{stage}-uploads`. Other handlers can reference it via `deps` and get a typed `BucketClient` for `.put()`, `.get()`, `.delete()`, and `.list()`.
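The naming convention can be sketched as a plain string template. This is a hypothetical helper for illustration only; the framework derives the real bucket name at deploy time.

```typescript
// Hypothetical helper showing the {project}-{stage}-{export name} convention;
// not part of effortless-aws — the framework computes this at deploy time.
function bucketName(project: string, stage: string, exportName: string): string {
  return `${project}-${stage}-${exportName}`;
}

console.log(bucketName("myapp", "dev", "uploads")); // "myapp-dev-uploads"
```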
```typescript
import { defineApi } from "effortless-aws";
import { uploads } from "./uploads";

export const uploadFile = defineApi({ basePath: "/upload" })
  .deps(() => ({ uploads }))
  .setup(({ deps }) => ({ uploads: deps.uploads }))
  .post({ path: "/{filename}" }, async ({ req, uploads }) => {
    await uploads.put(req.params.filename, req.body as Buffer);
    return { status: 201, body: { key: req.params.filename } };
  });

export const getFile = defineApi({ basePath: "/files" })
  .deps(() => ({ uploads }))
  .setup(({ deps }) => ({ uploads: deps.uploads }))
  .get({ path: "/{filename}" }, async ({ req, uploads }) => {
    const file = await uploads.get(req.params.filename);
    if (!file) return { status: 404, body: { error: "Not found" } };
    return {
      status: 200,
      body: file.body.toString("base64"),
      headers: {
        "content-type": file.contentType ?? "application/octet-stream",
      },
    };
  });
```

`deps.uploads` is a `BucketClient` — the Lambda gets IAM permissions for S3 operations on that specific bucket, all wired automatically.
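On the caller side, the GET route's response body arrives base64-encoded. A minimal decoding sketch, assuming a Node.js caller (the helper name is ours, not part of the library):

```typescript
// Decode the base64 body returned by the GET route back into bytes.
// Illustrative helper for a Node.js caller.
function decodeFileBody(base64Body: string): Buffer {
  return Buffer.from(base64Body, "base64");
}

const encoded = Buffer.from("hello").toString("base64");
console.log(decodeFileBody(encoded).toString()); // "hello"
```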
## Reading and writing objects
The `BucketClient` provides four operations:
```typescript
// Upload a string or Buffer
await bucket.put("reports/monthly.csv", csvString);
await bucket.put("images/photo.jpg", imageBuffer, { contentType: "image/jpeg" });

// Download — returns undefined if not found
const file = await bucket.get("reports/monthly.csv");
if (file) {
  console.log(file.body.toString()); // Buffer → string
  console.log(file.contentType);     // "text/csv" or undefined
}

// Delete
await bucket.delete("reports/old.csv");

// List objects, optionally by prefix
const allFiles = await bucket.list();
const reports = await bucket.list("reports/");
// [{ key: "reports/monthly.csv", size: 1024, lastModified: Date }, ...]
```

## Reacting to uploads
You want to do something every time a file is uploaded — generate a thumbnail, scan for viruses, update a database. Instead of polling or building a pipeline, you can react to S3 events directly.
Add `onObjectCreated` and your function runs for every new object.
```typescript
import { defineBucket } from "effortless-aws";

export const images = defineBucket({ prefix: "uploads/", suffix: ".jpg" })
  .setup(({ bucket }) => ({ bucket }))
  .onObjectCreated(async ({ event, bucket }) => {
    console.log(`New image: ${event.key}, size: ${event.size} bytes`);
    const file = await bucket.get(event.key);
    if (file) {
      const thumbnail = await generateThumbnail(file.body);
      await bucket.put(`thumbnails/${event.key}`, thumbnail, {
        contentType: "image/jpeg",
      });
    }
  });
```

Use `prefix` and `suffix` to filter which objects trigger the Lambda. Only matching objects invoke your function — the rest are ignored.
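The filter semantics follow S3's native notification filters: a key must start with the prefix and end with the suffix. A minimal sketch of the matching rule (illustrative helper, not part of the library):

```typescript
// Illustrative only: how an S3-style prefix/suffix filter decides whether
// a notification fires for a given object key.
function matchesFilter(key: string, prefix?: string, suffix?: string): boolean {
  if (prefix !== undefined && !key.startsWith(prefix)) return false;
  if (suffix !== undefined && !key.endsWith(suffix)) return false;
  return true;
}

console.log(matchesFilter("uploads/photo.jpg", "uploads/", ".jpg")); // true
console.log(matchesFilter("uploads/notes.txt", "uploads/", ".jpg")); // false
console.log(matchesFilter("tmp/photo.jpg", "uploads/", ".jpg"));     // false
```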
The event object gives you:

- `event.key` — object key (path within the bucket)
- `event.size` — object size in bytes
- `event.eventName` — e.g. `"ObjectCreated:Put"`
- `event.eTag` — object ETag
- `event.eventTime` — ISO 8601 timestamp
- `event.bucketName` — S3 bucket name
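`event.eventName` packs the event category and the specific S3 operation into one string. A small sketch of splitting it, assuming the `"Category:Operation"` shape shown above (the helper is ours, not part of the library):

```typescript
// Hypothetical helper: split an S3 event name such as "ObjectCreated:Put"
// into its category ("ObjectCreated") and operation ("Put") parts.
function parseEventName(eventName: string): { category: string; operation: string } {
  const [category, operation = ""] = eventName.split(":");
  return { category, operation };
}

const parts = parseEventName("ObjectCreated:Put");
// parts.category → "ObjectCreated", parts.operation → "Put"
```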
## Reacting to deletions
A single bucket handler has one terminal callback — either .onObjectCreated(...) or .onObjectRemoved(...), not both. To react to both events, define two buckets: a primary resource-only bucket and a secondary handler that observes the same resource.
Start with a resource-only bucket:
```typescript
import { defineBucket } from "effortless-aws";

export const documents = defineBucket().build();
```

Then wire each event to its own handler, taking the bucket as a dep:
```typescript
import { defineBucket } from "effortless-aws";
import { documents } from "./documents";

export const indexDocument = defineBucket()
  .deps(() => ({ documents }))
  .setup(({ deps }) => ({ documents: deps.documents }))
  .onObjectCreated(async ({ event, documents }) => {
    const file = await documents.get(event.key);
    if (file) await indexDocumentContent(event.key, file.body);
  });

// src/remove-from-index.ts
export const removeFromIndex = defineBucket()
  .onObjectRemoved(async ({ event }) => {
    await deleteFromIndex(event.key);
  });
```

Each handler-bucket pair maps to its own S3 event notification filter and its own Lambda. The underlying S3 bucket can be shared via `deps` when you need the client inside the callback.
## Processing with a database
Most file processors need to read or write data. Define a table and reference it via deps.
```typescript
import { defineTable, defineBucket } from "effortless-aws";

type Invoice = { tag: string; key: string; size: number; uploadedAt: string };

export const invoiceRecords = defineTable<Invoice>().build();

export const invoices = defineBucket({ prefix: "invoices/" })
  .deps(() => ({ invoiceRecords }))
  .setup(({ deps }) => ({ invoiceRecords: deps.invoiceRecords }))
  .onObjectCreated(async ({ event, invoiceRecords }) => {
    await invoiceRecords.put({
      pk: "INVOICE",
      sk: `FILE#${event.key}`,
      data: {
        tag: "invoice",
        key: event.key,
        size: event.size ?? 0,
        uploadedAt: event.eventTime ?? new Date().toISOString(),
      },
    });
  });
```

Each Lambda gets only the IAM permissions it needs — S3 for its own bucket, DynamoDB for the referenced table.
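The handler above writes one item per uploaded file under a fixed partition key, so all invoice files can be queried together while each file stays addressable by its S3 key. A sketch of that key scheme (hypothetical helper mirroring the pk/sk layout shown):

```typescript
// Hypothetical helper mirroring the key layout above: every invoice file
// shares the partition key "INVOICE"; the sort key encodes the S3 object key.
function invoiceItemKey(fileKey: string): { pk: string; sk: string } {
  return { pk: "INVOICE", sk: `FILE#${fileKey}` };
}

const key = invoiceItemKey("invoices/2024-01.pdf");
// key.pk → "INVOICE", key.sk → "FILE#invoices/2024-01.pdf"
```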
## Using a bucket from a table stream
Buckets compose with any handler type, not just HTTP. A table stream handler can write to a bucket via deps:
```typescript
import { defineTable, defineBucket } from "effortless-aws";

export const reports = defineBucket().build();

type Order = { tag: string; amount: number; status: string };

export const orders = defineTable<Order>()
  .deps(() => ({ reports }))
  .setup(({ deps }) => ({ reports: deps.reports }))
  .onRecord(async ({ record, reports }) => {
    if (record.eventName === "INSERT" && record.new) {
      const csv = `${record.new.pk},${record.new.data.amount},${record.new.data.status}\n`;
      await reports.put(`orders/${record.new.pk}.csv`, csv);
    }
  });
```

## Resource-only bucket
When you don’t need event processing — just a bucket that other handlers write to — omit the callbacks entirely. No Lambda is created.
```typescript
export const assets = defineBucket().build();
// No onObjectCreated/onObjectRemoved — just a bucket.
// Reference it with deps from other handlers.
```

## See also
- Definitions reference — `defineBucket` — all configuration options
- Database guide — how to define tables and use them as `deps`
- HTTP API guide — how to use `deps` in HTTP handlers