Inside Arcellite: How File Storage Works on Your Own Hardware
Most self-hosted storage tools were built for sysadmins, not product teams. We built Arcellite's file layer with a different philosophy — versioning, access control, and upload streaming that just works, without configuring S3 policies.
When we started building Arcellite's file storage layer, we looked at what already existed. Tools like MinIO are powerful but require S3-compatible configuration knowledge most product teams don't have. Tools like Nextcloud are full-featured but sprawling. Neither was built for a team that just wants files to work.
Upload streaming
Arcellite streams multipart upload bodies to disk in fixed-size chunks. A file is never buffered fully in memory, so upload performance is bounded by disk throughput and network speed — not server RAM. This means you can upload multi-gigabyte files on a $5 VPS without OOM errors.
Versioning by default
Every file write creates an immutable version record. Roll back to any previous state from the UI or the API. Version history is stored in the same database as your metadata, so you can query it: "show me all files modified by user X in the last 30 days."
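To make the "query your version history" idea concrete, here is a minimal sketch using SQLite. The schema and function names are hypothetical — Arcellite's actual metadata tables may differ — but the pattern is the same: version rows are append-only, and history questions become plain SQL.

```python
import sqlite3
import time

# Hypothetical schema for illustration; not Arcellite's real table layout.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE file_versions (
        file_id    TEXT NOT NULL,
        version    INTEGER NOT NULL,
        author     TEXT NOT NULL,
        created_at REAL NOT NULL,      -- Unix timestamp
        PRIMARY KEY (file_id, version)
    )
""")

def record_version(file_id, author):
    """Append an immutable version row; existing rows are never updated."""
    cur = conn.execute(
        "SELECT COALESCE(MAX(version), 0) + 1 FROM file_versions WHERE file_id = ?",
        (file_id,),
    )
    next_version = cur.fetchone()[0]
    conn.execute(
        "INSERT INTO file_versions VALUES (?, ?, ?, ?)",
        (file_id, next_version, author, time.time()),
    )
    return next_version

def files_modified_by(author, days=30):
    """'Show me all files modified by user X in the last N days.'"""
    cutoff = time.time() - days * 86400
    cur = conn.execute(
        "SELECT DISTINCT file_id FROM file_versions "
        "WHERE author = ? AND created_at >= ? ORDER BY file_id",
        (author, cutoff),
    )
    return [row[0] for row in cur.fetchall()]
```

Rolling back is then just reading an older row — nothing is ever overwritten, so every prior state remains addressable.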
Access control without IAM policies
Permissions in Arcellite use a simple model: owner, team, or public. No JSON policy documents. No bucket ACLs. No cross-account role assumptions. Teams are defined in your Arcellite user directory, and access follows from there.
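The whole model fits in one small function. This is a sketch under our own naming — `FileMeta`, `can_read`, and the `visibility` field are illustrative, not Arcellite's real API — but it captures why there are no policy documents to evaluate: every access decision is a flat three-way check.

```python
from dataclasses import dataclass

@dataclass
class FileMeta:
    owner: str        # user ID from the Arcellite user directory
    team: str         # owning team, also from the user directory
    visibility: str   # "owner", "team", or "public"

def can_read(user, user_teams, f):
    """Decide read access with one flat check -- no JSON policies, no ACLs."""
    if f.visibility == "public":
        return True
    if f.visibility == "team":
        return f.team in user_teams or user == f.owner
    return user == f.owner  # "owner": private to the owner
```

Because the check has no indirection, an audit is equally simple: a file's access surface is fully described by its three metadata fields.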
This is intentionally less powerful than S3 IAM. That's a feature. The teams that come to us from S3-based stacks consistently say the same thing: they had policies nobody fully understood, and nobody wanted to touch them. A simpler model means fewer mistakes.
What we gave up
We don't support object storage federation across multiple servers yet — files live on the primary node. We don't have a CDN layer. For most teams running internal infrastructure, this is fine. For public-facing file delivery at scale, we're not the right tool yet. That's on the roadmap.