What is S3?
Amazon S3 (Simple Storage Service) was launched in 2006 and has become the de facto standard for object storage. Its API has been so widely adopted that "S3-compatible" has become a meaningful designation for storage services.
What Does S3-Compatible Mean?
When we say ElasticLake is S3-compatible, we mean:
1. API Compatibility
Our API accepts the same requests and returns the same responses as Amazon S3. This includes:
- Bucket operations: Create, list, delete buckets
- Object operations: PUT, GET, DELETE, HEAD objects
- Multipart uploads: For large file uploads
- Presigned URLs: Secure, time-limited access
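Presigned URLs deserve a word of explanation: the server signs a URL with the account's secret key and an expiry time, so anyone holding the URL can access the object until it expires, without needing credentials of their own. The sketch below illustrates the core idea with a simplified HMAC scheme (real S3 and S3-compatible services use AWS Signature Version 4, which is more involved; the `presign` helper and URL layout here are illustrative, not the actual ElasticLake implementation):

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

def presign(secret_key: str, bucket: str, key: str, expires_in: int = 3600) -> str:
    """Build a simplified presigned GET URL (illustration only, not SigV4)."""
    expires = int(time.time()) + expires_in
    # The string to sign binds the method, resource path, and expiry together,
    # so the signature is useless for any other request or after expiry.
    string_to_sign = f"GET\n/{bucket}/{key}\n{expires}"
    signature = hmac.new(
        secret_key.encode(), string_to_sign.encode(), hashlib.sha256
    ).hexdigest()
    query = urlencode({"Expires": expires, "Signature": signature})
    return f"https://api.elasticlake.com/{bucket}/{key}?{query}"

url = presign("YOUR_SECRET_KEY", "my-bucket", "remote_file.txt")
```

On receiving the request, the server recomputes the same HMAC from its copy of the secret and rejects the request if the signatures differ or the expiry has passed.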
2. SDK Compatibility
You can use the official AWS SDKs with ElasticLake. Just change the endpoint:
```python
import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://api.elasticlake.com',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY'
)

# Works exactly like AWS S3
s3.upload_file('local_file.txt', 'lake1--pond1--my-bucket', 'remote_file.txt')
```
3. Tool Compatibility
Popular tools like the AWS CLI, rclone, s3cmd, and Cyberduck work out of the box.
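In practice, pointing a tool at ElasticLake usually means setting its endpoint. Two common examples, sketched here with the standard options each tool provides (credentials are assumed to be configured separately):

```shell
# AWS CLI: override the endpoint per command
aws s3 ls --endpoint-url https://api.elasticlake.com

# rclone: define an S3-compatible remote in rclone.conf
# [elasticlake]
# type = s3
# provider = Other
# endpoint = https://api.elasticlake.com
```

Once the remote or endpoint is set, the rest of each tool's S3 workflow is unchanged.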
Why S3 Compatibility Matters
No Vendor Lock-in
Your code doesn't need to change. If you ever need to migrate to or from ElasticLake, it's as simple as updating an endpoint URL.
Familiar Patterns
Your team already knows how to work with S3. There's no new API to learn.
Ecosystem Benefits
Thousands of tools and libraries support S3. You get access to all of them.
What We Add Beyond S3
While maintaining compatibility, we've added features that make ElasticLake unique:
- Lakes and Ponds: Hierarchical organization above buckets
- Predictable pricing: No surprise egress fees
- Built-in webhooks: Event-driven architecture made easy
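The SDK example above used the bucket name `lake1--pond1--my-bucket`, which suggests lakes and ponds are encoded into the flat bucket name a stock S3 client sees. Assuming that `lake--pond--bucket` convention (our reading of the example, not a documented guarantee), a small helper keeps the names consistent:

```python
def qualified_bucket(lake: str, pond: str, bucket: str) -> str:
    """Join lake/pond/bucket into the flat name an S3 client expects.

    Assumes the 'lake--pond--bucket' convention from the SDK example;
    the individual names must not contain the '--' separator.
    """
    parts = (lake, pond, bucket)
    if any("--" in p or not p for p in parts):
        raise ValueError("names must be non-empty and must not contain '--'")
    return "--".join(parts)

def split_bucket(name: str) -> tuple[str, str, str]:
    """Inverse of qualified_bucket: recover (lake, pond, bucket)."""
    lake, pond, bucket = name.split("--")
    return lake, pond, bucket

full = qualified_bucket("lake1", "pond1", "my-bucket")
```

Because the hierarchy lives in the name, any unmodified S3 SDK or tool can address a pond-scoped bucket directly.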
Getting Started
Ready to try S3-compatible storage with better pricing? Check out our documentation to get started in minutes.