# S3 Adapter
`@trokky/adapter-s3` provides scalable storage for production deployments, using Amazon S3 for media and DynamoDB for documents.
## Installation

```bash
npm install @trokky/adapter-s3
```

```ts
import { S3Adapter } from '@trokky/adapter-s3';

const storage = new S3Adapter({
  region: 'us-east-1',
  bucket: 'my-trokky-media',
  tableName: 'trokky-documents',
});
```
## With TrokkyExpress

```ts
const trokky = await TrokkyExpress.create({
  storage: {
    adapter: 's3',
    region: process.env.AWS_REGION,
    bucket: process.env.S3_BUCKET,
    tableName: process.env.DYNAMODB_TABLE,
  },
  // ...
});
```
## Configuration

| Option | Type | Required | Description |
|---|---|---|---|
| `region` | string | Yes | AWS region |
| `bucket` | string | Yes | S3 bucket name |
| `tableName` | string | Yes | DynamoDB table name |
| `prefix` | string | No | Key prefix for S3 objects |
| `credentials` | object | No | AWS credentials |
| `endpoint` | string | No | Custom endpoint (for S3-compatible services) |
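For illustration, a helper like the following (hypothetical, not part of the adapter's public API) shows how a `prefix` option is typically joined with object keys so that all of the adapter's S3 objects live under one directory:

```typescript
// Hypothetical sketch of how a `prefix` option is typically applied:
// the prefix and the object key are joined with exactly one slash.
function applyPrefix(prefix: string | undefined, key: string): string {
  if (!prefix) return key;
  // Trim stray slashes so "uploads/" + "/media/a.jpg" still yields one separator
  const cleanPrefix = prefix.replace(/\/+$/, "");
  const cleanKey = key.replace(/^\/+/, "");
  return `${cleanPrefix}/${cleanKey}`;
}

console.log(applyPrefix("uploads/", "/media/images/abc123.jpg"));
// → "uploads/media/images/abc123.jpg"
```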
## AWS Setup

### 1. Create S3 Bucket

```bash
aws s3 mb s3://my-trokky-media --region us-east-1
```

Configure CORS for the bucket:
{ "CORSRules": [ { "AllowedHeaders": ["*"], "AllowedMethods": ["GET", "PUT", "POST", "DELETE"], "AllowedOrigins": ["https://your-domain.com"], "ExposeHeaders": ["ETag"] } ]}2. Create DynamoDB Table
### 2. Create DynamoDB Table

```bash
aws dynamodb create-table \
  --table-name trokky-documents \
  --attribute-definitions \
    AttributeName=pk,AttributeType=S \
    AttributeName=sk,AttributeType=S \
    AttributeName=type,AttributeType=S \
  --key-schema \
    AttributeName=pk,KeyType=HASH \
    AttributeName=sk,KeyType=RANGE \
  --global-secondary-indexes \
    '[{
      "IndexName": "type-index",
      "KeySchema": [{"AttributeName": "type", "KeyType": "HASH"}],
      "Projection": {"ProjectionType": "ALL"}
    }]' \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-1
```
### 3. IAM Policy

Create an IAM policy for Trokky:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3Access",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::my-trokky-media",
        "arn:aws:s3:::my-trokky-media/*"
      ]
    },
    {
      "Sid": "DynamoDBAccess",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Query",
        "dynamodb:Scan"
      ],
      "Resource": [
        "arn:aws:dynamodb:us-east-1:*:table/trokky-documents",
        "arn:aws:dynamodb:us-east-1:*:table/trokky-documents/index/*"
      ]
    }
  ]
}
```
## Credentials

### Environment Variables (Recommended)

```bash
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
export AWS_REGION=us-east-1
```
### Explicit Credentials

```ts
const storage = new S3Adapter({
  region: 'us-east-1',
  bucket: 'my-trokky-media',
  tableName: 'trokky-documents',
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
});
```
### IAM Role (EC2/ECS/Lambda)

When running on AWS services, use IAM roles instead of explicit credentials:

```ts
// No credentials needed - uses instance role
const storage = new S3Adapter({
  region: 'us-east-1',
  bucket: 'my-trokky-media',
  tableName: 'trokky-documents',
});
```
## DynamoDB Schema

### Documents

| Attribute | Type | Description |
|---|---|---|
| `pk` | String | `DOC#{type}` |
| `sk` | String | Document ID |
| `type` | String | Schema type |
| `data` | Map | Document data |
| `createdAt` | String | ISO timestamp |
| `updatedAt` | String | ISO timestamp |
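Assuming the key layout above, a stored document item can be sketched as follows (the `makeDocumentItem` helper and its field values are illustrative, not adapter internals):

```typescript
// Sketch of a DynamoDB document item following the schema above.
// The partition key encodes the schema type; the sort key is the document ID.
interface DocumentItem {
  pk: string;                     // "DOC#{type}"
  sk: string;                     // document ID
  type: string;                   // schema type (also the type-index GSI key)
  data: Record<string, unknown>;  // document data
  createdAt: string;              // ISO timestamp
  updatedAt: string;              // ISO timestamp
}

function makeDocumentItem(
  type: string,
  id: string,
  data: Record<string, unknown>,
): DocumentItem {
  const now = new Date().toISOString();
  return { pk: `DOC#${type}`, sk: id, type, data, createdAt: now, updatedAt: now };
}

const item = makeDocumentItem("post", "abc123", { title: "Hello" });
```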
### Media

| Attribute | Type | Description |
|---|---|---|
| `pk` | String | `MEDIA` |
| `sk` | String | Media ID |
| `filename` | String | File name |
| `mimeType` | String | MIME type |
| `size` | Number | File size |
| `s3Key` | String | S3 object key |
| `metadata` | Map | Additional metadata |
### Users

| Attribute | Type | Description |
|---|---|---|
| `pk` | String | `USER` |
| `sk` | String | Username |
| `passwordHash` | String | Bcrypt hash |
| `role` | String | User role |
| `createdAt` | String | ISO timestamp |
## S3 Organization

```
my-trokky-media/
├── media/
│   ├── images/
│   │   ├── abc123.jpg
│   │   ├── abc123-thumb.webp
│   │   └── abc123-medium.webp
│   └── files/
│       └── def456.pdf
└── exports/
    └── backup-2024-01-15.json
```
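A key-building helper consistent with that layout might look like this (a sketch; the `mediaKey` function and its folder rules are assumptions based on the tree above, not the adapter's actual logic):

```typescript
// Hypothetical helper mirroring the bucket layout: images live under
// media/images/, everything else under media/files/.
function mediaKey(id: string, mimeType: string, ext: string): string {
  const folder = mimeType.startsWith("image/") ? "images" : "files";
  return `media/${folder}/${id}.${ext}`;
}

console.log(mediaKey("abc123", "image/jpeg", "jpg"));
// → "media/images/abc123.jpg"
```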
## API Reference

### Constructor

```ts
new S3Adapter(options: S3AdapterOptions)
```
### Methods

```ts
// Documents
createDocument(collection: string, data: any): Promise<Document>
getDocument(collection: string, id: string): Promise<Document | null>
updateDocument(collection: string, id: string, data: any): Promise<Document>
deleteDocument(collection: string, id: string): Promise<void>
listDocuments(collection: string, options?: ListOptions): Promise<Document[]>

// Media
uploadMedia(file: Buffer, metadata: MediaMetadata): Promise<MediaAsset>
getMedia(id: string): Promise<MediaAsset | null>
deleteMedia(id: string): Promise<void>
listMedia(options?: ListOptions): Promise<MediaAsset[]>
getMediaUrl(key: string): string
```
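For unit-testing code that depends on the document methods, a minimal in-memory stand-in with the same method shapes can be useful. This is a sketch only; the `Document` type and `MemoryAdapter` class are assumptions based on the signatures listed above, not part of the package:

```typescript
// Minimal in-memory stand-in for the document methods, for unit tests.
interface Document {
  id: string;
  [key: string]: unknown;
}

class MemoryAdapter {
  private docs = new Map<string, Map<string, Document>>();

  private collectionOf(name: string): Map<string, Document> {
    if (!this.docs.has(name)) this.docs.set(name, new Map());
    return this.docs.get(name)!;
  }

  async createDocument(collection: string, data: Record<string, unknown>): Promise<Document> {
    const doc: Document = { id: Math.random().toString(36).slice(2), ...data };
    this.collectionOf(collection).set(doc.id, doc);
    return doc;
  }

  async getDocument(collection: string, id: string): Promise<Document | null> {
    return this.collectionOf(collection).get(id) ?? null;
  }

  async deleteDocument(collection: string, id: string): Promise<void> {
    this.collectionOf(collection).delete(id);
  }

  async listDocuments(collection: string): Promise<Document[]> {
    return [...this.collectionOf(collection).values()];
  }
}
```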
## CDN Integration

### CloudFront Setup
Section titled “CloudFront Setup”- Create CloudFront distribution pointing to S3 bucket
- Configure origin access identity (OAI)
- Update S3 bucket policy
```ts
const storage = new S3Adapter({
  region: 'us-east-1',
  bucket: 'my-trokky-media',
  tableName: 'trokky-documents',
  cdnUrl: 'https://d1234567890.cloudfront.net',
});
```
### Signed URLs

For private content:

```ts
const storage = new S3Adapter({
  region: 'us-east-1',
  bucket: 'my-trokky-media',
  tableName: 'trokky-documents',
  signedUrls: {
    enabled: true,
    expiresIn: 3600, // 1 hour
  },
});
```
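With `expiresIn` given in seconds, the expiry arithmetic works out as below (an illustrative calculation, not adapter code; the helper name is hypothetical):

```typescript
// Illustrative only: a signed URL issued at `issuedAtMs` with `expiresInSeconds`
// becomes invalid once the current time reaches issuedAtMs + expiresInSeconds * 1000.
function isSignedUrlExpired(
  issuedAtMs: number,
  expiresInSeconds: number,
  nowMs: number,
): boolean {
  return nowMs >= issuedAtMs + expiresInSeconds * 1000;
}
```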
## Performance

### DynamoDB Best Practices

- Use on-demand capacity for unpredictable traffic
- Enable DAX for read-heavy workloads
- Use sparse indexes for filtered queries
- Batch operations for bulk imports
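On the last point: DynamoDB's `BatchWriteItem` accepts at most 25 items per request, so a bulk import needs to split its items into chunks of that size first. A small sketch (the `chunk` helper is illustrative, not adapter code):

```typescript
// DynamoDB's BatchWriteItem accepts at most 25 items per request,
// so bulk imports are split into chunks before writing.
function chunk<T>(items: T[], size = 25): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

const batches = chunk(Array.from({ length: 60 }, (_, i) => i));
// 60 items → 3 batches: 25, 25, and 10 items
```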
### S3 Best Practices

- Enable Transfer Acceleration for global uploads
- Use multipart uploads for large files
- Set appropriate storage class (Standard, IA, etc.)
- Enable versioning for content protection
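For multipart uploads, S3 requires every part except the last to be at least 5 MiB, with at most 10,000 parts per upload. A quick sanity check on part sizing (an illustrative helper, not adapter code):

```typescript
// S3 multipart upload limits: each part except the last must be >= 5 MiB,
// and an upload may have at most 10,000 parts.
const MIN_PART_SIZE = 5 * 1024 * 1024;
const MAX_PARTS = 10_000;

function partCount(fileSize: number, partSize: number): number {
  if (partSize < MIN_PART_SIZE) {
    throw new Error("part size below the 5 MiB S3 minimum");
  }
  const parts = Math.ceil(fileSize / partSize);
  if (parts > MAX_PARTS) {
    throw new Error("too many parts; increase part size");
  }
  return parts;
}

const parts = partCount(100 * 1024 * 1024, 5 * 1024 * 1024);
// 100 MiB at 5 MiB per part → 20 parts
```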
## Backup and Recovery

### DynamoDB Backup

```bash
# On-demand backup
aws dynamodb create-backup \
  --table-name trokky-documents \
  --backup-name "trokky-backup-$(date +%Y%m%d)"

# Enable point-in-time recovery
aws dynamodb update-continuous-backups \
  --table-name trokky-documents \
  --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true
```
### S3 Versioning

```bash
aws s3api put-bucket-versioning \
  --bucket my-trokky-media \
  --versioning-configuration Status=Enabled
```
### Export Data

```ts
async function exportData(storage: S3Adapter) {
  const schemas = ['post', 'author', 'category'];
  const backup: Record<string, any[]> = {};

  for (const schema of schemas) {
    backup[schema] = await storage.listDocuments(schema);
  }

  // Upload to S3
  await storage.s3.putObject({
    Bucket: storage.bucket,
    Key: `exports/backup-${new Date().toISOString()}.json`,
    Body: JSON.stringify(backup),
    ContentType: 'application/json',
  });
}
```
## Local Development

### LocalStack

Use LocalStack for local AWS development:
```bash
# Start LocalStack
docker run -d -p 4566:4566 localstack/localstack
```

Configure the adapter to point at the LocalStack endpoint:

```ts
const storage = new S3Adapter({
  region: 'us-east-1',
  bucket: 'local-bucket',
  tableName: 'local-table',
  endpoint: 'http://localhost:4566',
  credentials: {
    accessKeyId: 'test',
    secretAccessKey: 'test',
  },
});
```
Section titled “Create Local Resources”# Create bucketaws --endpoint-url=http://localhost:4566 s3 mb s3://local-bucket
# Create tableaws --endpoint-url=http://localhost:4566 dynamodb create-table \ --table-name local-table \ --attribute-definitions AttributeName=pk,AttributeType=S AttributeName=sk,AttributeType=S \ --key-schema AttributeName=pk,KeyType=HASH AttributeName=sk,KeyType=RANGE \ --billing-mode PAY_PER_REQUESTS3-Compatible Services
## S3-Compatible Services

For example, MinIO:

```ts
const storage = new S3Adapter({
  region: 'us-east-1',
  bucket: 'trokky',
  tableName: 'trokky-documents',
  endpoint: 'http://minio.example.com:9000',
  forcePathStyle: true,
  credentials: {
    accessKeyId: 'minioadmin',
    secretAccessKey: 'minioadmin',
  },
});
```
### DigitalOcean Spaces

```ts
const storage = new S3Adapter({
  region: 'nyc3',
  bucket: 'my-space',
  tableName: 'trokky-documents',
  endpoint: 'https://nyc3.digitaloceanspaces.com',
});
```
## Troubleshooting

### Permission Denied
Check the attached IAM policy and ensure all required S3 and DynamoDB actions are allowed.
### Slow Queries

- Add a GSI for frequently filtered attributes
- Enable DAX for caching
- Use pagination for large result sets
### Upload Failures

- Check the bucket CORS configuration
- Verify file size limits
- Check S3 bucket policy
## Next Steps

- Filesystem Adapter - Local development
- Cloudflare Adapter - Edge storage
- Deployment Guide - Production deployment