
Filesystem Adapter

@trokky/adapter-filesystem stores content as JSON files on the local filesystem, making it ideal for development and Git-based content workflows.

Install the adapter:

npm install @trokky/adapter-filesystem

Then instantiate it directly:

import { FilesystemAdapter } from '@trokky/adapter-filesystem';

const storage = new FilesystemAdapter({
  contentDir: './content',
  mediaDir: './uploads',
});

Or reference it by name in the Trokky configuration:

const trokky = await TrokkyExpress.create({
  storage: {
    adapter: 'filesystem',
    contentDir: './content',
    mediaDir: './uploads',
  },
  // ...
});

The adapter accepts the following options:

| Option     | Type    | Default       | Description                               |
| ---------- | ------- | ------------- | ----------------------------------------- |
| contentDir | string  | './content'   | Directory for document JSON files         |
| mediaDir   | string  | './uploads'   | Directory for media files                 |
| pretty     | boolean | true          | Pretty-print JSON (easier to read in Git) |
| indexing   | boolean | true          | Enable index cache for faster queries     |
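
For instance, to keep the index cache but store compact JSON, you could pass the options from the table above (the values here are illustrative):

const storage = new FilesystemAdapter({
  contentDir: './content',
  mediaDir: './uploads',
  pretty: false,   // compact JSON instead of pretty-printed files
  indexing: true,  // keep the index cache for faster queries
});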

Documents are stored as JSON files organized by schema:

content/
├── post/
│   ├── abc123.json
│   ├── def456.json
│   └── ghi789.json
├── author/
│   ├── author-001.json
│   └── author-002.json
├── siteSettings/
│   └── singleton.json
└── _users/
    └── admin.json

uploads/
├── images/
│   ├── photo-abc123.jpg
│   ├── photo-abc123-thumb.webp
│   └── photo-abc123-medium.webp
└── files/
    └── document-def456.pdf
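
Given this layout, a document's path follows content/<type>/<id>.json. The helper below only illustrates that convention; it is not part of the adapter's API:

import path from 'node:path';

// Illustrative only: maps a schema type and document ID to its file path.
function documentPath(contentDir: string, type: string, id: string): string {
  return path.join(contentDir, type, `${id}.json`);
}

documentPath('./content', 'post', 'abc123'); // => 'content/post/abc123.json'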

Each document is a single JSON file containing system fields (_id, _type, _createdAt, _updatedAt) alongside the schema's own fields:

{
  "_id": "abc123",
  "_type": "post",
  "_createdAt": "2024-01-15T10:00:00.000Z",
  "_updatedAt": "2024-01-16T14:30:00.000Z",
  "title": "My First Post",
  "slug": {
    "current": "my-first-post"
  },
  "content": "Hello world!",
  "author": {
    "_type": "reference",
    "_ref": "author-001"
  },
  "status": "published"
}
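
The createDocument() examples later on this page do not pass _id, _createdAt, or _updatedAt, so this sketch assumes the adapter fills in those system fields; it also assumes references are passed in the same { "_type": "reference", "_ref": ... } shape in which they are stored:

const post = await storage.createDocument('post', {
  title: 'My First Post',
  slug: { current: 'my-first-post' },
  content: 'Hello world!',
  author: { _type: 'reference', _ref: 'author-001' },
  status: 'published',
});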

The filesystem adapter is designed for Git workflows. Add the cache and temporary directories to your .gitignore:

# Ignore cache and temporary files
content/.cache/
uploads/.tmp/

# Optionally ignore uploads in Git
# uploads/

With content as files, you can:

  • Track content changes in Git
  • Use pull requests for content review
  • Roll back to previous versions
  • Branch for content experiments

For example:

# Create a content branch
git checkout -b content/new-blog-post
# Make changes in Studio
# ...
# Commit content changes
git add content/
git commit -m "Add new blog post about Trokky"
# Create PR for review
gh pr create --title "New blog post" --body "Review the content"

The adapter maintains an index cache for faster queries:

content/
└── .cache/
    └── index.json

The index stores lightweight metadata for each document:

{
  "post": {
    "abc123": {
      "title": "My First Post",
      "_createdAt": "2024-01-15T10:00:00.000Z",
      "_updatedAt": "2024-01-16T14:30:00.000Z"
    }
  }
}

To disable the index cache:

const storage = new FilesystemAdapter({
  contentDir: './content',
  indexing: false, // Disable cache
});

The constructor accepts the options described above:

new FilesystemAdapter(options: FilesystemAdapterOptions)

createDocument() creates a new document:

const doc = await storage.createDocument('post', {
  title: 'New Post',
  content: 'Content here',
});

getDocument() retrieves a document by ID:

const doc = await storage.getDocument('post', 'abc123');

updateDocument() updates an existing document:

const doc = await storage.updateDocument('post', 'abc123', {
  title: 'Updated Title',
});

deleteDocument() deletes a document:

await storage.deleteDocument('post', 'abc123');

listDocuments() lists documents with optional filtering:

const docs = await storage.listDocuments('post', {
  limit: 10,
  offset: 0,
  orderBy: '_createdAt',
  order: 'desc',
});
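
Assuming listDocuments() resolves to an array of documents (as the backup example below treats it), you can page through a whole collection with the same options; the page size of 100 is arbitrary:

const pageSize = 100;
let offset = 0;
let page;

do {
  page = await storage.listDocuments('post', {
    limit: pageSize,
    offset,
    orderBy: '_createdAt',
    order: 'desc',
  });
  for (const doc of page) {
    // Process each document here, e.g. export or re-render it.
  }
  offset += pageSize;
} while (page.length === pageSize);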

uploadMedia() uploads a media file:

const asset = await storage.uploadMedia(buffer, {
  filename: 'photo.jpg',
  mimeType: 'image/jpeg',
});

getMedia() retrieves media metadata:

const asset = await storage.getMedia('media-abc123');

deleteMedia() deletes a media file and all of its variants:

await storage.deleteMedia('media-abc123');

The filesystem adapter is a good fit for:

  • Development environments
  • Small to medium content volumes (hundreds of documents)
  • Git-based workflows
  • Static site generation
  • Local-first applications

Consider a database adapter instead for:

  • High-traffic production sites
  • Large content volumes (thousands of documents)
  • Multi-server deployments
  • Real-time collaboration

For the best performance:

  1. Enable indexing for faster list queries
  2. Use SSD storage for better I/O performance
  3. Limit media variants to reduce disk usage
  4. Archive old content periodically (see the sketch after this list)
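
For step 4, one approach is to move stale document files out of the content directory while the app is not writing. This is a minimal sketch that assumes the content/<type>/<id>.json layout shown earlier, a one-year cutoff, and an arbitrary archive/ destination:

import { promises as fs } from 'node:fs';
import path from 'node:path';

const contentDir = './content/post';
const archiveDir = './archive/post';   // destination chosen for this example
const cutoff = Date.now() - 365 * 24 * 60 * 60 * 1000;

await fs.mkdir(archiveDir, { recursive: true });

for (const file of await fs.readdir(contentDir)) {
  if (!file.endsWith('.json')) continue;
  const raw = await fs.readFile(path.join(contentDir, file), 'utf-8');
  const doc = JSON.parse(raw);
  // Move documents whose last update is older than the cutoff.
  if (new Date(doc._updatedAt).getTime() < cutoff) {
    await fs.rename(path.join(contentDir, file), path.join(archiveDir, file));
  }
}

// The index cache may now be stale; remove content/.cache/ so it is rebuilt.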

To back up content, export every document to a single JSON file:

import fs from 'node:fs';
import { FilesystemAdapter } from '@trokky/adapter-filesystem';

const storage = new FilesystemAdapter({ contentDir: './content' });
const schemas = ['post', 'author', 'category'];

const backup = {};
for (const schema of schemas) {
  backup[schema] = await storage.listDocuments(schema);
}
fs.writeFileSync('backup.json', JSON.stringify(backup, null, 2));

To restore, recreate each document from the backup file:

const backup = JSON.parse(fs.readFileSync('backup.json', 'utf-8'));
for (const [schema, docs] of Object.entries(backup)) {
  for (const doc of docs) {
    await storage.createDocument(schema, doc);
  }
}

If you see permission errors, make sure the process running Trokky can read and write the content and media directories:

# Ensure write permissions
chmod -R 755 content/
chmod -R 755 uploads/

If queries return stale results, clear the index cache:

# Clear the cache
rm -rf content/.cache/

The cache will be rebuilt automatically on next query.

If you see file locking errors on concurrent writes, consider:

  • Using a database adapter for high-concurrency scenarios
  • Implementing a queue for write operations
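
As a starting point for the second option, a minimal in-process write queue can serialize calls to the adapter so only one write runs at a time. This wrapper is a sketch, not part of @trokky/adapter-filesystem; the class name and the methods it forwards are illustrative:

import { FilesystemAdapter } from '@trokky/adapter-filesystem';

// Chains every write onto the previous one so file writes never overlap.
class SerializedStorage {
  private queue: Promise<unknown> = Promise.resolve();

  constructor(private storage: FilesystemAdapter) {}

  private enqueue<T>(task: () => Promise<T>): Promise<T> {
    const run = this.queue.then(task, task);
    // Keep the chain alive even if a task rejects.
    this.queue = run.catch(() => undefined);
    return run;
  }

  createDocument(type: string, data: Record<string, unknown>) {
    return this.enqueue(() => this.storage.createDocument(type, data));
  }

  updateDocument(type: string, id: string, data: Record<string, unknown>) {
    return this.enqueue(() => this.storage.updateDocument(type, id, data));
  }
}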