S3 Adapter

@trokky/adapter-s3 provides scalable storage using AWS S3 for media and DynamoDB for documents, suitable for production deployments on AWS.

```sh
npm install @trokky/adapter-s3
```
Instantiate the adapter directly:

```ts
import { S3Adapter } from '@trokky/adapter-s3';

const storage = new S3Adapter({
  region: 'us-east-1',
  bucket: 'my-trokky-media',
  tableName: 'trokky-documents',
});
```

Or configure it through the Trokky storage options:

```ts
const trokky = await TrokkyExpress.create({
  storage: {
    adapter: 's3',
    region: process.env.AWS_REGION,
    bucket: process.env.S3_BUCKET,
    tableName: process.env.DYNAMODB_TABLE,
  },
  // ...
});
```
| Option | Type | Required | Description |
| --- | --- | --- | --- |
| `region` | string | Yes | AWS region |
| `bucket` | string | Yes | S3 bucket name |
| `tableName` | string | Yes | DynamoDB table name |
| `prefix` | string | No | Key prefix for S3 objects |
| `credentials` | object | No | AWS credentials |
| `endpoint` | string | No | Custom endpoint (for S3-compatible services) |
| `forcePathStyle` | boolean | No | Use path-style URLs (e.g., for MinIO) |
| `cdnUrl` | string | No | CDN base URL for media (see CDN setup below) |
| `signedUrls` | object | No | Signed URL settings for private media |
Create the S3 bucket:

```sh
aws s3 mb s3://my-trokky-media --region us-east-1
```

Configure CORS for the bucket:

```json
{
  "CORSRules": [
    {
      "AllowedHeaders": ["*"],
      "AllowedMethods": ["GET", "PUT", "POST", "DELETE"],
      "AllowedOrigins": ["https://your-domain.com"],
      "ExposeHeaders": ["ETag"]
    }
  ]
}
```
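As a sketch, the same rules can also be applied programmatically with the AWS SDK v3 (the client and command below come from `@aws-sdk/client-s3`; the `aws s3api put-bucket-cors` CLI command works equally well):

```ts
import { S3Client, PutBucketCorsCommand } from '@aws-sdk/client-s3';

// Apply the CORS rules shown above to the media bucket.
const s3 = new S3Client({ region: 'us-east-1' });
await s3.send(
  new PutBucketCorsCommand({
    Bucket: 'my-trokky-media',
    CORSConfiguration: {
      CORSRules: [
        {
          AllowedHeaders: ['*'],
          AllowedMethods: ['GET', 'PUT', 'POST', 'DELETE'],
          AllowedOrigins: ['https://your-domain.com'],
          ExposeHeaders: ['ETag'],
        },
      ],
    },
  }),
);
```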
Create the DynamoDB table:

```sh
aws dynamodb create-table \
  --table-name trokky-documents \
  --attribute-definitions \
    AttributeName=pk,AttributeType=S \
    AttributeName=sk,AttributeType=S \
    AttributeName=type,AttributeType=S \
  --key-schema \
    AttributeName=pk,KeyType=HASH \
    AttributeName=sk,KeyType=RANGE \
  --global-secondary-indexes \
    '[{
      "IndexName": "type-index",
      "KeySchema": [{"AttributeName": "type", "KeyType": "HASH"}],
      "Projection": {"ProjectionType": "ALL"}
    }]' \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-1
```
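Table creation is asynchronous, so it can help to wait for ACTIVE status before booting the app. A minimal sketch using the SDK's built-in waiter from `@aws-sdk/client-dynamodb`:

```ts
import { DynamoDBClient, waitUntilTableExists } from '@aws-sdk/client-dynamodb';

// Block until the table reports ACTIVE (or 120 seconds elapse).
const client = new DynamoDBClient({ region: 'us-east-1' });
await waitUntilTableExists(
  { client, maxWaitTime: 120 },
  { TableName: 'trokky-documents' },
);
```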

Create an IAM policy for Trokky:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3Access",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-trokky-media",
        "arn:aws:s3:::my-trokky-media/*"
      ]
    },
    {
      "Sid": "DynamoDBAccess",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Query",
        "dynamodb:Scan"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:*:table/trokky-documents",
        "arn:aws:dynamodb:us-east-1:*:table/trokky-documents/index/*"
      ]
    }
  ]
}
```
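A sketch of registering the policy with `@aws-sdk/client-iam`; the policy name is illustrative and this assumes the JSON above is saved as `trokky-policy.json`:

```ts
import { readFile } from 'node:fs/promises';
import { IAMClient, CreatePolicyCommand } from '@aws-sdk/client-iam';

const iam = new IAMClient({});
const { Policy } = await iam.send(
  new CreatePolicyCommand({
    PolicyName: 'TrokkyStorageAccess', // illustrative name
    PolicyDocument: await readFile('trokky-policy.json', 'utf8'),
  }),
);
console.log('Attach this policy ARN to your role or user:', Policy?.Arn);
```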
Provide credentials via environment variables:

```sh
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
export AWS_REGION=us-east-1
```

Or pass them explicitly:

```ts
const storage = new S3Adapter({
  region: 'us-east-1',
  bucket: 'my-trokky-media',
  tableName: 'trokky-documents',
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
});
```

When running on AWS compute (EC2, ECS, Lambda, etc.), use IAM roles instead of static credentials:

```ts
// No credentials needed - uses the instance role
const storage = new S3Adapter({
  region: 'us-east-1',
  bucket: 'my-trokky-media',
  tableName: 'trokky-documents',
});
```
Document items:

| Attribute | Type | Description |
| --- | --- | --- |
| `pk` | String | `DOC#{type}` |
| `sk` | String | Document ID |
| `type` | String | Schema type |
| `data` | Map | Document data |
| `createdAt` | String | ISO timestamp |
| `updatedAt` | String | ISO timestamp |
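For illustration, a `post` document would be stored roughly like this (a sketch inferred from the key layout above; the exact encoding is internal to the adapter):

```ts
const exampleItem = {
  pk: 'DOC#post',  // partition key: DOC#{type}
  sk: 'abc123',    // sort key: document ID
  type: 'post',
  data: { title: 'Hello', body: '...' },
  createdAt: '2024-01-15T10:00:00.000Z',
  updatedAt: '2024-01-15T10:00:00.000Z',
};
```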
Media items:

| Attribute | Type | Description |
| --- | --- | --- |
| `pk` | String | `MEDIA` |
| `sk` | String | Media ID |
| `filename` | String | File name |
| `mimeType` | String | MIME type |
| `size` | Number | File size |
| `s3Key` | String | S3 object key |
| `metadata` | Map | Additional metadata |
User items:

| Attribute | Type | Description |
| --- | --- | --- |
| `pk` | String | `USER` |
| `sk` | String | Username |
| `passwordHash` | String | Bcrypt hash |
| `role` | String | User role |
| `createdAt` | String | ISO timestamp |
S3 bucket layout:

```text
my-trokky-media/
├── media/
│   ├── images/
│   │   ├── abc123.jpg
│   │   ├── abc123-thumb.webp
│   │   └── abc123-medium.webp
│   └── files/
│       └── def456.pdf
└── exports/
    └── backup-2024-01-15.json
```
```ts
new S3Adapter(options: S3AdapterOptions)
```

```ts
// Documents
createDocument(collection: string, data: any): Promise<Document>
getDocument(collection: string, id: string): Promise<Document | null>
updateDocument(collection: string, id: string, data: any): Promise<Document>
deleteDocument(collection: string, id: string): Promise<void>
listDocuments(collection: string, options?: ListOptions): Promise<Document[]>

// Media
uploadMedia(file: Buffer, metadata: MediaMetadata): Promise<MediaAsset>
getMedia(id: string): Promise<MediaAsset | null>
deleteMedia(id: string): Promise<void>
listMedia(options?: ListOptions): Promise<MediaAsset[]>
getMediaUrl(key: string): string
```
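A usage sketch for the document methods; the `post` collection and the `id` field on the returned document are assumptions for illustration:

```ts
const post = await storage.createDocument('post', { title: 'Hello' });
const fetched = await storage.getDocument('post', post.id);
await storage.updateDocument('post', post.id, { title: 'Hello again' });
const posts = await storage.listDocuments('post');
await storage.deleteDocument('post', post.id);
```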
To serve media through CloudFront:

  1. Create a CloudFront distribution pointing to the S3 bucket
  2. Configure an origin access identity (OAI)
  3. Update the S3 bucket policy to grant the OAI read access

Then point the adapter at the distribution:
```ts
const storage = new S3Adapter({
  region: 'us-east-1',
  bucket: 'my-trokky-media',
  tableName: 'trokky-documents',
  cdnUrl: 'https://d1234567890.cloudfront.net',
});
```

For private content, enable signed URLs:

```ts
const storage = new S3Adapter({
  region: 'us-east-1',
  bucket: 'my-trokky-media',
  tableName: 'trokky-documents',
  signedUrls: {
    enabled: true,
    expiresIn: 3600, // 1 hour
  },
});
```
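With signed URLs enabled, `getMediaUrl` should hand back a time-limited URL instead of a public one (a sketch; the exact URL format depends on the adapter):

```ts
// Presumably valid for `expiresIn` seconds after it is generated.
const url = storage.getMediaUrl('media/images/abc123.jpg');
// e.g. https://my-trokky-media.s3.amazonaws.com/...?...&X-Amz-Expires=3600
```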
DynamoDB:

  1. Use on-demand capacity for unpredictable traffic
  2. Enable DAX for read-heavy workloads
  3. Use sparse indexes for filtered queries
  4. Batch operations for bulk imports (see the sketch below)
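For bulk imports, a sketch of batched writes straight to the table using `@aws-sdk/lib-dynamodb`, bypassing the adapter (BatchWrite accepts at most 25 items per request, and production code should also retry `UnprocessedItems`):

```ts
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, BatchWriteCommand } from '@aws-sdk/lib-dynamodb';

const doc = DynamoDBDocumentClient.from(new DynamoDBClient({ region: 'us-east-1' }));

async function batchImport(items: Record<string, unknown>[]) {
  // DynamoDB caps BatchWriteItem at 25 items per call.
  for (let i = 0; i < items.length; i += 25) {
    await doc.send(
      new BatchWriteCommand({
        RequestItems: {
          'trokky-documents': items
            .slice(i, i + 25)
            .map((Item) => ({ PutRequest: { Item } })),
        },
      }),
    );
  }
}
```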
S3:

  1. Enable Transfer Acceleration for global uploads
  2. Use multipart uploads for large files (see the sketch below)
  3. Set an appropriate storage class (Standard, IA, etc.)
  4. Enable versioning for content protection
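A sketch of a multipart upload for large files with `@aws-sdk/lib-storage`, which splits the body into parts and uploads them concurrently (the key and file name are illustrative):

```ts
import { createReadStream } from 'node:fs';
import { S3Client } from '@aws-sdk/client-s3';
import { Upload } from '@aws-sdk/lib-storage';

const upload = new Upload({
  client: new S3Client({ region: 'us-east-1' }),
  params: {
    Bucket: 'my-trokky-media',
    Key: 'media/files/large-video.mp4',
    Body: createReadStream('./large-video.mp4'),
  },
  partSize: 10 * 1024 * 1024, // 10 MB parts
});
await upload.done();
```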
```sh
# On-demand backup
aws dynamodb create-backup \
  --table-name trokky-documents \
  --backup-name "trokky-backup-$(date +%Y%m%d)"

# Enable point-in-time recovery
aws dynamodb update-continuous-backups \
  --table-name trokky-documents \
  --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true
```
Enable S3 versioning:

```sh
aws s3api put-bucket-versioning \
  --bucket my-trokky-media \
  --versioning-configuration Status=Enabled
```
```ts
async function exportData(storage: S3Adapter) {
  const schemas = ['post', 'author', 'category'];
  const backup: Record<string, any[]> = {};

  for (const schema of schemas) {
    backup[schema] = await storage.listDocuments(schema);
  }

  // Upload to S3
  await storage.s3.putObject({
    Bucket: storage.bucket,
    Key: `exports/backup-${new Date().toISOString()}.json`,
    Body: JSON.stringify(backup),
    ContentType: 'application/json',
  });
}
```

Use LocalStack for local AWS development:

Start LocalStack:

```sh
docker run -d -p 4566:4566 localstack/localstack
```

Point the adapter at the LocalStack endpoint:

```ts
const storage = new S3Adapter({
  region: 'us-east-1',
  bucket: 'local-bucket',
  tableName: 'local-table',
  endpoint: 'http://localhost:4566',
  credentials: {
    accessKeyId: 'test',
    secretAccessKey: 'test',
  },
});
```
Create the local resources:

```sh
# Create bucket
aws --endpoint-url=http://localhost:4566 s3 mb s3://local-bucket

# Create table
aws --endpoint-url=http://localhost:4566 dynamodb create-table \
  --table-name local-table \
  --attribute-definitions AttributeName=pk,AttributeType=S AttributeName=sk,AttributeType=S \
  --key-schema AttributeName=pk,KeyType=HASH AttributeName=sk,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST
```
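A quick smoke test against LocalStack, reusing the `storage` instance configured above (the `id` field on the returned document is an assumption):

```ts
// Round-trip a document through the LocalStack-backed adapter.
const created = await storage.createDocument('post', { title: 'local test' });
console.log(await storage.getDocument('post', created.id)); // should echo the data
await storage.deleteDocument('post', created.id);
```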
MinIO:

```ts
const storage = new S3Adapter({
  region: 'us-east-1',
  bucket: 'trokky',
  tableName: 'trokky-documents',
  endpoint: 'http://minio.example.com:9000',
  forcePathStyle: true, // MinIO requires path-style addressing
  credentials: {
    accessKeyId: 'minioadmin',
    secretAccessKey: 'minioadmin',
  },
});
```
DigitalOcean Spaces:

```ts
const storage = new S3Adapter({
  region: 'nyc3',
  bucket: 'my-space',
  tableName: 'trokky-documents',
  endpoint: 'https://nyc3.digitaloceanspaces.com',
});
```

Access denied errors: check the IAM policy and ensure all required S3 and DynamoDB actions are allowed.

Slow queries:

  • Add a GSI for frequently filtered attributes
  • Enable DAX for caching
  • Use pagination for large result sets (see the query sketch below)
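As a sketch, querying the type-index GSI directly instead of scanning, using `@aws-sdk/lib-dynamodb` against the table created earlier (`type` is a DynamoDB reserved word, hence the name placeholder):

```ts
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, QueryCommand } from '@aws-sdk/lib-dynamodb';

const doc = DynamoDBDocumentClient.from(new DynamoDBClient({ region: 'us-east-1' }));
const { Items, LastEvaluatedKey } = await doc.send(
  new QueryCommand({
    TableName: 'trokky-documents',
    IndexName: 'type-index',
    KeyConditionExpression: '#t = :type',
    ExpressionAttributeNames: { '#t': 'type' },
    ExpressionAttributeValues: { ':type': 'post' },
    Limit: 50, // paginate by passing LastEvaluatedKey as ExclusiveStartKey
  }),
);
```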
Upload failures:

  • Check the bucket CORS configuration
  • Verify file size limits
  • Check the S3 bucket policy