# Hippius S3: A Drop-in Replacement for AWS S3, Powered by Bittensor
If you use AWS S3 today, switching to Hippius takes less than 5 minutes. No SDK changes. No new libraries. Just point your existing code at a different endpoint.
This post covers how it works and why you might want to.
## Why decentralized storage matters for developers
AWS S3 is convenient, but it's also a single company controlling your data. That means outages, price hikes, account suspensions, and regional restrictions are all risks you absorb.
Hippius provides the same S3-compatible API, backed by a decentralized network of 459+ independent storage nodes on Bittensor. Your data is split into shards across the network using Reed-Solomon erasure coding. No single node holds your file. No single company controls your access.
The practical result: S3-compatible storage that's censorship-resistant by design, with no vendor lock-in.
## Get started in 5 minutes
### 1. Create an account
Sign up at console.hippius.com using your Google or GitHub account.
Add credits via Console → Billing. You can pay with a credit card (Stripe) or TAO.
### 2. Create S3 credentials
Go to Console → S3 Storage → Create Master Token.
Save your credentials. Your Access Key ID starts with hip_, and you'll also receive a Secret Key.
### 3. Configure your S3 client

Use the following connection details:

- Endpoint: `https://s3.hippius.com`
- Region: `decentralized`
- Signature: AWS Signature V4
- Addressing style: path
## Code examples
Everything that works with AWS S3 works with Hippius. Here are the most common setups:
### AWS CLI

```bash
export AWS_ACCESS_KEY_ID="hip_your_key"
export AWS_SECRET_ACCESS_KEY="your_secret"
export AWS_DEFAULT_REGION="decentralized"

# Create a bucket
aws s3 mb s3://my-bucket --endpoint-url https://s3.hippius.com

# Upload a file
aws s3 cp file.txt s3://my-bucket/file.txt --endpoint-url https://s3.hippius.com

# Download a file
aws s3 cp s3://my-bucket/file.txt ./file.txt --endpoint-url https://s3.hippius.com

# List objects
aws s3 ls s3://my-bucket/ --endpoint-url https://s3.hippius.com

# Generate a presigned URL (valid for 1 hour)
aws s3 presign s3://my-bucket/file.txt \
  --endpoint-url https://s3.hippius.com \
  --expires-in 3600
```
### Python (boto3)

```python
import boto3
from botocore.config import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.hippius.com",
    aws_access_key_id="hip_your_key",
    aws_secret_access_key="your_secret",
    region_name="decentralized",
    config=Config(
        signature_version="s3v4",
        s3={"addressing_style": "path"},
    ),
)

# Upload
s3.upload_file("local_file.txt", "my-bucket", "remote_file.txt")

# Download
s3.download_file("my-bucket", "remote_file.txt", "local_copy.txt")
```
### Python (minio)

```python
from minio import Minio

client = Minio(
    "s3.hippius.com",
    access_key="hip_your_key",
    secret_key="your_secret",
    secure=True,
    region="decentralized",
)
```
### JavaScript / Node.js (minio)

```javascript
const Minio = require("minio");

const client = new Minio.Client({
  endPoint: "s3.hippius.com",
  port: 443,
  useSSL: true,
  accessKey: "hip_your_key",
  secretKey: "your_secret",
  region: "decentralized",
});
```
## What's supported
Hippius supports the core S3 operations most applications rely on:
| Category | Operations |
|---|---|
| Buckets | CreateBucket, DeleteBucket, ListBuckets, HeadBucket |
| Objects | PutObject, GetObject, HeadObject, DeleteObject, CopyObject |
| Listing | ListObjects, ListObjectsV2 |
| Multipart | InitiateMultipartUpload, UploadPart, CompleteMultipartUpload, AbortMultipartUpload |
| ACL | PutBucketAcl, GetBucketAcl, PutObjectAcl, GetObjectAcl |
| Tags | PutObjectTagging, GetObjectTagging, PutBucketTagging, GetBucketTagging |
| Policy | PutBucketPolicy, GetBucketPolicy, DeleteBucketPolicy |
| Presigned | PresignedGetObject, PresignedPutObject |
| Lifecycle | PutBucketLifecycle |
Not supported (yet): bucket versioning, cross-region replication, S3 Select.
## Rate limits and pricing
The rate limit is 100 requests per minute per account. See hippius.com/pricing for current pricing. Payment is accepted via credit card (Stripe) or TAO.
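Hippius doesn't prescribe a retry strategy, but exponential backoff with jitter is the standard way to handle throttling when you brush against a request limit. A minimal sketch — `ThrottledError` is a hypothetical stand-in for however your client surfaces an HTTP 429 / SlowDown response:

```python
import random
import time

class ThrottledError(Exception):
    """Stand-in for an HTTP 429 / SlowDown response from the endpoint."""

def with_backoff(call, max_attempts=5, sleep=time.sleep):
    """Run `call`, retrying throttled attempts with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return call()
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise
            # Wait 1s, 2s, 4s, ... plus jitter so many clients
            # don't all retry in lockstep
            sleep(2 ** attempt + random.random())
```

Wrap individual S3 calls, e.g. `with_backoff(lambda: s3.list_buckets())`.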
## What's happening under the hood
When you upload a file to Hippius S3, it doesn't go to a single server. Instead, Arion, our custom distributed storage engine, splits your file into 30 shards (10 data + 20 parity) using Reed-Solomon erasure coding and distributes them across independent miners on the Bittensor network.
To retrieve your file, any 10 of those 30 shards are enough. That means the network can lose up to 20 of the 30 shards — 66% of them — and your data remains 100% intact.
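The arithmetic behind that durability claim follows directly from the shard parameters:

```python
# Reed-Solomon parameters as described above: 10 data + 20 parity shards
DATA_SHARDS = 10
PARITY_SHARDS = 20
TOTAL_SHARDS = DATA_SHARDS + PARITY_SHARDS  # 30

# Any DATA_SHARDS of the TOTAL_SHARDS reconstruct the file, so up to
# PARITY_SHARDS shards can disappear without any data loss.
max_lost = TOTAL_SHARDS - DATA_SHARDS      # 20 shards
loss_tolerance = max_lost / TOTAL_SHARDS   # 2/3 of the network

# The price of that durability: raw storage used vs. the file's size
overhead = TOTAL_SHARDS / DATA_SHARDS      # 3.0x

print(f"tolerates losing {max_lost} of {TOTAL_SHARDS} shards")
print(f"storage overhead: {overhead:.1f}x")
```

The trade-off is typical of erasure coding: 3x raw storage buys tolerance for losing two-thirds of the shards, far better than 3x replication, which only survives the loss of two specific copies.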
From your application's perspective, it's just S3.
## Get started
- Create account: console.hippius.com
- Read the docs: docs.hippius.com
- Check pricing: hippius.com/pricing
Questions? Join the community at community.hippius.com or reach us on Discord.
