Object Storage (S3)
S3-compatible object storage for files, backups, media, and application data. Powered by Garage.
HUC Object Storage provides S3-compatible storage for files, backups, media assets, and application data. It's a drop-in replacement for AWS S3 — any tool or SDK that works with S3 works with HUC.
Dashboard: storage.hostupcloud.com
S3 Endpoint: https://s3.hostupcloud.com
Region: blr1 (Bangalore, India)
Storage Classes & Pricing
| Class | Media | Rate (₹/GiB/mo) | Free Egress | Min Duration | Status |
|---|---|---|---|---|---|
| Standard | SATA SSD | ₹0.65 | 256 GiB | None | Active |
| Performance | NVMe SSD | ₹1.40 | 256 GiB | None | Active |
| Nearline | HDD | ₹0.45 | 512 GiB | 30 days | Active |
| Infrequent Access | HDD | ₹0.25 | 256 GiB | 30 days | Coming Soon |
| Active Archive | HDD | ₹0.15 | 256 GiB | 90 days | Coming Soon |
| Cold Archive | HDD | ₹0.08 | 128 GiB | 180 days | Coming Soon |
What's Included Free
- Unlimited ingress — upload as much as you want, no charge
- Unlimited API requests — no per-request fees
- Free egress allowance — 256–512 GiB/month depending on storage class
Egress beyond the free allowance is billed at ₹1.50/GiB, and each account is billed for a minimum of 250 GiB of storage. Billing is monthly, pay-as-you-go: you pay only for storage used and for egress beyond your free allowance.
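Putting the numbers together, a rough monthly estimate looks like this. The rates and free-egress allowances come from the pricing table above; treating the 250 GiB account minimum as a billed floor is an assumption, so use this as an illustration rather than a billing guarantee:

```python
# Rough monthly cost estimator using the rates from the pricing table above.
RATES = {
    # class: (rate in ₹/GiB/month, free egress in GiB/month)
    "standard": (0.65, 256),
    "performance": (1.40, 256),
    "nearline": (0.45, 512),
}
EGRESS_OVERAGE = 1.50  # ₹/GiB beyond the free allowance
MIN_BILLED_GIB = 250   # assumed billed floor from the 250 GiB account minimum

def monthly_cost(storage_class: str, stored_gib: float, egress_gib: float) -> float:
    rate, free_egress = RATES[storage_class]
    billed_storage = max(stored_gib, MIN_BILLED_GIB)
    overage = max(egress_gib - free_egress, 0)
    return round(billed_storage * rate + overage * EGRESS_OVERAGE, 2)

# 500 GiB on Nearline with 600 GiB of egress:
# storage 500 * 0.45 = 225, overage (600 - 512) * 1.50 = 132 → ₹357/month
print(monthly_cost("nearline", 500, 600))
```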
Quick Start
1. Create an Account
Sign in at storage.hostupcloud.com using your HostupCloud account (SSO).
2. Create a Bucket
- Go to Buckets → Create Bucket
- Enter a name (lowercase, 3-63 chars, letters/numbers/hyphens)
- Choose a storage class
- Optionally enable versioning or object locking (WORM)
- Click Create Bucket
3. Create an Access Key
- Go to Access Keys → Create Key
- Give it a name (e.g., "my-app-key")
- Copy and save the Access Key ID and Secret Key — the secret is only shown once
4. Connect with Your Tools
Use the S3 endpoint, region, and access key to connect from any S3-compatible tool.
Connection Examples
AWS CLI
# Configure credentials
aws configure set aws_access_key_id YOUR_ACCESS_KEY
aws configure set aws_secret_access_key YOUR_SECRET_KEY
aws configure set region blr1
# List buckets
aws s3 ls --endpoint-url https://s3.hostupcloud.com
# Upload a file
aws s3 cp myfile.txt s3://my-bucket/ --endpoint-url https://s3.hostupcloud.com
# Download a file
aws s3 cp s3://my-bucket/myfile.txt ./downloaded.txt --endpoint-url https://s3.hostupcloud.com
# Sync a directory
aws s3 sync ./local-folder s3://my-bucket/backup/ --endpoint-url https://s3.hostupcloud.com
rclone
# ~/.config/rclone/rclone.conf
[huc]
type = s3
provider = Other
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = https://s3.hostupcloud.com
region = blr1
# List buckets
rclone lsd huc:
# Copy files
rclone copy ./local-folder huc:my-bucket/backup/
# Sync (mirror local to remote)
rclone sync ./local-folder huc:my-bucket/backup/
# Mount as filesystem
rclone mount huc:my-bucket /mnt/huc-s3 --daemon
Python (boto3)
import boto3
s3 = boto3.client(
's3',
endpoint_url='https://s3.hostupcloud.com',
region_name='blr1',
aws_access_key_id='YOUR_ACCESS_KEY',
aws_secret_access_key='YOUR_SECRET_KEY',
)
# List buckets
for bucket in s3.list_buckets()['Buckets']:
print(bucket['Name'])
# Upload file
s3.upload_file('myfile.txt', 'my-bucket', 'myfile.txt')
# Download file
s3.download_file('my-bucket', 'myfile.txt', 'downloaded.txt')
# Generate presigned URL (valid 1 hour)
url = s3.generate_presigned_url(
'get_object',
Params={'Bucket': 'my-bucket', 'Key': 'myfile.txt'},
ExpiresIn=3600,
)
print(url)
Node.js (AWS SDK v3)
import { S3Client, ListBucketsCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import { readFileSync } from 'fs';
const s3 = new S3Client({
endpoint: 'https://s3.hostupcloud.com',
region: 'blr1',
credentials: {
accessKeyId: 'YOUR_ACCESS_KEY',
secretAccessKey: 'YOUR_SECRET_KEY',
},
forcePathStyle: true,
});
// List buckets
const { Buckets } = await s3.send(new ListBucketsCommand({}));
console.log(Buckets);
// Upload file
await s3.send(new PutObjectCommand({
Bucket: 'my-bucket',
Key: 'myfile.txt',
Body: readFileSync('myfile.txt'),
}));
MinIO Client (mc)
# Configure alias
mc alias set huc https://s3.hostupcloud.com YOUR_ACCESS_KEY YOUR_SECRET_KEY
# List buckets
mc ls huc
# Copy file
mc cp myfile.txt huc/my-bucket/
# Mirror directory
mc mirror ./local-folder huc/my-bucket/backup/
s3cmd
# ~/.s3cfg
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
host_base = s3.hostupcloud.com
host_bucket = %(bucket)s.s3.hostupcloud.com
use_https = True
# List buckets
s3cmd ls
# Upload
s3cmd put myfile.txt s3://my-bucket/
# Download
s3cmd get s3://my-bucket/myfile.txt
Terraform
terraform {
backend "s3" {
bucket = "my-terraform-state"
key = "state/terraform.tfstate"
region = "blr1"
endpoint = "https://s3.hostupcloud.com"
access_key = "YOUR_ACCESS_KEY"
secret_key = "YOUR_SECRET_KEY"
skip_credentials_validation = true
skip_metadata_api_check = true
skip_region_validation = true
force_path_style = true
}
}
Note: on Terraform 1.6 and later, the top-level endpoint argument is replaced by an endpoints { s3 = "https://s3.hostupcloud.com" } block, and force_path_style is renamed use_path_style.
Supported Tools
Any S3-compatible client works with HUC Object Storage:
| Tool | Type | Platform |
|---|---|---|
| AWS CLI | Command line | Linux, macOS, Windows |
| rclone | Command line | Linux, macOS, Windows |
| s3cmd | Command line | Linux, macOS |
| MinIO Client (mc) | Command line | Linux, macOS, Windows |
| Cyberduck | GUI | macOS, Windows |
| Mountain Duck | Drive mount | macOS, Windows |
| WinSCP | GUI | Windows |
| Terraform | IaC | All |
| AWS SDK | Library | JavaScript, Python, Go, Java, .NET, Ruby, PHP |
| boto3 | Library | Python |
| MinIO SDK | Library | JavaScript, Python, Go, Java, .NET |
Features
Core S3 Operations
- PutObject / GetObject / DeleteObject — standard CRUD operations
- HeadObject / CopyObject — metadata and server-side copy
- ListObjects V1 + V2 — paginated object listing with prefix/delimiter
- Multipart upload — upload files up to 5 TiB in parts
- Pre-signed URLs — temporary signed URLs for upload and download
- Pre-signed POST — browser-based uploads with policy documents
- Chunked / streaming upload — upload without knowing content length
- CRC64NVME checksums — data integrity verification
- Conditional reads — If-Match, If-None-Match, If-Modified-Since
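The multipart limits above constrain how you split a large upload. A small planner sketch: it picks the smallest valid part size for a given object, using the documented 5 MiB minimum and 5 GiB maximum part sizes; the 10,000-part ceiling is the standard S3 limit and is assumed here, not stated above:

```python
MIB = 1024 ** 2
GIB = 1024 ** 3
MIN_PART = 5 * MIB     # documented minimum part size
MAX_PART = 5 * GIB     # documented maximum part size
MAX_PARTS = 10_000     # assumed S3-conventional part-count limit

def plan_part_size(object_size: int) -> int:
    """Smallest valid part size that keeps the part count within MAX_PARTS."""
    part = MIN_PART
    # Double the part size until the object fits in MAX_PARTS parts.
    while object_size > part * MAX_PARTS:
        part *= 2
    if part > MAX_PART:
        raise ValueError("object exceeds multipart limits")
    return part

# A 100 GiB object needs 20 MiB parts to stay under 10,000 parts.
print(plan_part_size(100 * GIB) // MIB)
```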
Bucket Management
- Create / delete / list buckets — full bucket lifecycle
- Bucket aliases — rename buckets without copying data
- Path-style + virtual-hosted style — both URL formats supported
- CORS configuration — cross-origin resource sharing rules
- Object versioning — keep multiple versions of objects
- Object locking (WORM) — write-once-read-many immutable storage
Access Control
- Per-key bucket permissions — read, write, owner per bucket per key
- Access key expiry dates — auto-expiring credentials
- Multiple API keys — create as many keys as needed
- SSE-C encryption — encrypt with your own keys
Data Protection
- Multi-node replication — data stored in 2-3 copies across nodes
- Object locking (WORM) — immutable storage for compliance
- TLS/HTTPS — all data encrypted in transit
- 11-nines durability — 99.999999999% designed durability
Static Website Hosting
- Serve bucket as website — host static sites directly from a bucket
- Custom index + error pages — configure default and error documents
- Redirect rules — URL redirect configuration
Admin API
- REST API — full programmatic control via Garage Admin API v2
- Cluster health — real-time node status and capacity monitoring
- OpenTelemetry — distributed tracing support
Limits
| Resource | Limit |
|---|---|
| Buckets per account | 100 |
| Objects per bucket | Unlimited |
| Max object size | 5 TiB |
| Max part size (multipart) | 5 GiB |
| Min part size (multipart) | 5 MiB |
| Bucket name length | 3–63 characters |
| Object key length | Up to 1024 bytes |
| Access keys per account | Unlimited |
| Minimum storage | 250 GiB per account |
What's Not Supported
These AWS S3 features are not available:
- Bucket policies (JSON IAM-style policies)
- IAM users, groups, and roles
- STS temporary credentials
- Server-side encryption (SSE-S3, SSE-KMS)
- Lifecycle rules (auto-expiration, storage class transitions)
- Event notifications (SNS, SQS, Lambda)
- S3 Select / Glacier
- Delete markers
- MFA Delete
- Bucket tagging
- CloudWatch metrics / CloudTrail logging
If your application depends on bucket policies or lifecycle rules, plan around their absence: use per-key bucket permissions for access control, and handle object expiration manually or with scheduled scripts.
Use Cases
Backups & Archives
Store server backups, database dumps, and log archives with low-cost Nearline (HDD) storage:
# Daily backup with rclone
rclone sync /var/backups huc:my-backups/$(date +%Y-%m-%d)/
Media & Assets
Host images, videos, and documents for web applications using presigned URLs:
# Generate a 24-hour download link
url = s3.generate_presigned_url(
'get_object',
Params={'Bucket': 'media', 'Key': 'video.mp4'},
ExpiresIn=86400,
)
Static Website Hosting
Deploy static sites (React, Next.js export, Hugo, Jekyll) directly to a bucket.
Terraform State
Use HUC S3 as a remote backend for Terraform state files (see the Terraform connection example above).
Application Data
Store user uploads, generated reports, and application assets with the AWS SDK.
SLA
| Metric | Value |
|---|---|
| Availability | 99.9% |
| Durability | 99.999999999% (11-nines) |
| Network | 80 Gbps uplink capacity |
| Latency | < 5 ms from Bangalore DC |
See Support & SLA for credit schedules and claim procedures.
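The availability figure translates into a concrete downtime budget. A quick back-of-the-envelope check, assuming a 30-day month:

```python
# Converts an availability percentage into an allowed-downtime budget.
def downtime_minutes(availability_pct: float, days: int = 30) -> float:
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

# 99.9% over a 30-day month allows ~43.2 minutes of downtime.
print(round(downtime_minutes(99.9), 1))
```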
Getting Help
- Dashboard: storage.hostupcloud.com — manage buckets, keys, and usage
- Support: hostupcloud.help — submit a ticket
- Documentation: You're reading it
- S3 Endpoint: https://s3.hostupcloud.com
- Region: blr1