
Object Storage (S3)

S3-compatible object storage for files, backups, media, and application data. Powered by Garage.


HUC Object Storage provides S3-compatible storage for files, backups, media assets, and application data. It's a drop-in replacement for AWS S3 — any tool or SDK that works with S3 works with HUC.

Dashboard: storage.hostupcloud.com
S3 Endpoint: https://s3.hostupcloud.com
Region: blr1 (Bangalore, India)

Storage Classes & Pricing

| Class | Media | Rate (₹/GiB/mo) | Free Egress | Min Duration | Status |
|-------|-------|-----------------|-------------|--------------|--------|
| Standard | SATA SSD | ₹0.65 | 256 GiB | None | Active |
| Performance | NVMe SSD | ₹1.40 | 256 GiB | None | Active |
| Nearline | HDD | ₹0.45 | 512 GiB | 30 days | Active |
| Infrequent Access | HDD | ₹0.25 | 256 GiB | 30 days | Coming Soon |
| Active Archive | HDD | ₹0.15 | 256 GiB | 90 days | Coming Soon |
| Cold Archive | HDD | ₹0.08 | 128 GiB | 180 days | Coming Soon |

What's Included Free

  • Unlimited ingress — upload as much as you want, no charge
  • Unlimited API requests — no per-request fees
  • Free egress allowance — 128–512 GiB/month depending on storage class (see table above)
  • Egress overage: ₹1.50/GiB beyond free allowance
  • Minimum storage: 250 GiB per account

Billing is monthly, pay-as-you-go. You only pay for storage used and egress beyond your free allowance.
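As a rough sketch of how a pay-as-you-go bill works out under the rates above (illustrative only — the assumption here is that the 250 GiB minimum acts as a billing floor on stored data; your dashboard shows actual charges):

```python
# Estimate a monthly bill for the Standard class.
STANDARD_RATE = 0.65      # ₹ per GiB stored per month
FREE_EGRESS_GIB = 256     # free egress allowance for Standard
EGRESS_OVERAGE = 1.50     # ₹ per GiB beyond the free allowance
MIN_STORAGE_GIB = 250     # account minimum (assumed to be a billing floor)

def monthly_bill(stored_gib: float, egress_gib: float) -> float:
    """Storage billed on max(actual, minimum); egress billed only past the free allowance."""
    storage_cost = max(stored_gib, MIN_STORAGE_GIB) * STANDARD_RATE
    egress_cost = max(0, egress_gib - FREE_EGRESS_GIB) * EGRESS_OVERAGE
    return storage_cost + egress_cost

# 500 GiB stored, 300 GiB egress: 500 x 0.65 + (300 - 256) x 1.50 = ₹391
print(monthly_bill(500, 300))
```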

Quick Start

1. Create an Account

Sign in at storage.hostupcloud.com using your HostUpCloud account (SSO).

2. Create a Bucket

  1. Go to Buckets → Create Bucket
  2. Enter a name (lowercase, 3-63 chars, letters/numbers/hyphens)
  3. Choose a storage class
  4. Optionally enable versioning or object locking (WORM)
  5. Click Create Bucket
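The naming rule in step 2 can be checked before calling the API; a minimal sketch (exact validation beyond length and character set is an assumption based on common S3 conventions):

```python
import re

# 3-63 chars, lowercase letters/digits/hyphens, starting and ending with a letter or digit
BUCKET_NAME = re.compile(r'^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$')

def is_valid_bucket_name(name: str) -> bool:
    return bool(BUCKET_NAME.match(name))

print(is_valid_bucket_name('my-bucket'))   # True
print(is_valid_bucket_name('My_Bucket'))   # False: uppercase and underscore
print(is_valid_bucket_name('ab'))          # False: too short
```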

3. Create an Access Key

  1. Go to Access Keys → Create Key
  2. Give it a name (e.g., "my-app-key")
  3. Copy and save the Access Key ID and Secret Key — the secret is only shown once

4. Connect with Your Tools

Use the S3 endpoint, region, and access key to connect from any S3-compatible tool.

Connection Examples

AWS CLI

# Configure credentials
aws configure set aws_access_key_id YOUR_ACCESS_KEY
aws configure set aws_secret_access_key YOUR_SECRET_KEY
aws configure set region blr1

# List buckets
aws s3 ls --endpoint-url https://s3.hostupcloud.com

# Upload a file
aws s3 cp myfile.txt s3://my-bucket/ --endpoint-url https://s3.hostupcloud.com

# Download a file
aws s3 cp s3://my-bucket/myfile.txt ./downloaded.txt --endpoint-url https://s3.hostupcloud.com

# Sync a directory
aws s3 sync ./local-folder s3://my-bucket/backup/ --endpoint-url https://s3.hostupcloud.com

rclone

# ~/.config/rclone/rclone.conf
[huc]
type = s3
provider = Other
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = https://s3.hostupcloud.com
region = blr1

# List buckets
rclone lsd huc:

# Copy files
rclone copy ./local-folder huc:my-bucket/backup/

# Sync (mirror local to remote)
rclone sync ./local-folder huc:my-bucket/backup/

# Mount as filesystem
rclone mount huc:my-bucket /mnt/huc-s3 --daemon

Python (boto3)

import boto3

s3 = boto3.client(
    's3',
    endpoint_url='https://s3.hostupcloud.com',
    region_name='blr1',
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)

# List buckets
for bucket in s3.list_buckets()['Buckets']:
    print(bucket['Name'])

# Upload file
s3.upload_file('myfile.txt', 'my-bucket', 'myfile.txt')

# Download file
s3.download_file('my-bucket', 'myfile.txt', 'downloaded.txt')

# Generate presigned URL (valid 1 hour)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'myfile.txt'},
    ExpiresIn=3600,
)
print(url)

Node.js (AWS SDK v3)

import { S3Client, ListBucketsCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import { readFileSync } from 'fs';

const s3 = new S3Client({
  endpoint: 'https://s3.hostupcloud.com',
  region: 'blr1',
  credentials: {
    accessKeyId: 'YOUR_ACCESS_KEY',
    secretAccessKey: 'YOUR_SECRET_KEY',
  },
  forcePathStyle: true,
});

// List buckets
const { Buckets } = await s3.send(new ListBucketsCommand({}));
console.log(Buckets);

// Upload file
await s3.send(new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: 'myfile.txt',
  Body: readFileSync('myfile.txt'),
}));

MinIO Client (mc)

# Configure alias
mc alias set huc https://s3.hostupcloud.com YOUR_ACCESS_KEY YOUR_SECRET_KEY

# List buckets
mc ls huc

# Copy file
mc cp myfile.txt huc/my-bucket/

# Mirror directory
mc mirror ./local-folder huc/my-bucket/backup/

s3cmd

# ~/.s3cfg
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
host_base = s3.hostupcloud.com
host_bucket = %(bucket)s.s3.hostupcloud.com
use_https = True

# List buckets
s3cmd ls

# Upload
s3cmd put myfile.txt s3://my-bucket/

# Download
s3cmd get s3://my-bucket/myfile.txt

Terraform

# Terraform 1.6+ renamed these arguments: use `use_path_style` and an
# `endpoints { s3 = ... }` block instead of `force_path_style` / `endpoint`.
terraform {
  backend "s3" {
    bucket   = "my-terraform-state"
    key      = "state/terraform.tfstate"
    region   = "blr1"
    endpoint = "https://s3.hostupcloud.com"

    access_key = "YOUR_ACCESS_KEY"
    secret_key = "YOUR_SECRET_KEY"

    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_region_validation      = true
    force_path_style            = true
  }
}

Supported Tools

Any S3-compatible client works with HUC Object Storage:

| Tool | Type | Platform |
|------|------|----------|
| AWS CLI | Command line | Linux, macOS, Windows |
| rclone | Command line | Linux, macOS, Windows |
| s3cmd | Command line | Linux, macOS |
| MinIO Client (mc) | Command line | Linux, macOS, Windows |
| Cyberduck | GUI | macOS, Windows |
| Mountain Duck | Drive mount | macOS, Windows |
| WinSCP | GUI | Windows |
| Terraform | IaC | All |
| AWS SDK | Library | JavaScript, Python, Go, Java, .NET, Ruby, PHP |
| boto3 | Library | Python |
| MinIO SDK | Library | JavaScript, Python, Go, Java, .NET |

Features

Core S3 Operations

  • PutObject / GetObject / DeleteObject — standard CRUD operations
  • HeadObject / CopyObject — metadata and server-side copy
  • ListObjects V1 + V2 — paginated object listing with prefix/delimiter
  • Multipart upload — upload files up to 5 TiB in parts
  • Pre-signed URLs — temporary signed URLs for upload and download
  • Pre-signed POST — browser-based uploads with policy documents
  • Chunked / streaming upload — upload without knowing content length
  • CRC64NVME checksums — data integrity verification
  • Conditional reads — If-Match, If-None-Match, If-Modified-Since

Bucket Management

  • Create / delete / list buckets — full bucket lifecycle
  • Bucket aliases — rename buckets without copying data
  • Path-style + virtual-hosted style — both URL formats supported
  • CORS configuration — cross-origin resource sharing rules
  • Object versioning — keep multiple versions of objects
  • Object locking (WORM) — write-once-read-many immutable storage
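CORS rules can also be applied from code via the standard S3 API; a sketch with boto3 (the allowed origin, methods, and bucket name are placeholder assumptions for a typical web app):

```python
# Example rule set: let a browser app at app.example.com read and upload objects
CORS_CONFIG = {
    'CORSRules': [{
        'AllowedOrigins': ['https://app.example.com'],
        'AllowedMethods': ['GET', 'PUT', 'POST'],
        'AllowedHeaders': ['*'],
        'MaxAgeSeconds': 3600,
    }]
}

def apply_cors(bucket: str) -> None:
    """Push the CORS configuration to a bucket on the HUC endpoint."""
    import boto3
    s3 = boto3.client('s3', endpoint_url='https://s3.hostupcloud.com', region_name='blr1')
    s3.put_bucket_cors(Bucket=bucket, CORSConfiguration=CORS_CONFIG)
```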

Access Control

  • Per-key bucket permissions — read, write, owner per bucket per key
  • Access key expiry dates — auto-expiring credentials
  • Multiple API keys — create as many keys as needed
  • SSE-C encryption — encrypt with your own keys

Data Protection

  • Multi-node replication — data stored in 2-3 copies across nodes
  • Object locking (WORM) — immutable storage for compliance
  • TLS/HTTPS — all data encrypted in transit
  • 11-nines durability — 99.999999999% designed durability

Static Website Hosting

  • Serve bucket as website — host static sites directly from a bucket
  • Custom index + error pages — configure default and error documents
  • Redirect rules — URL redirect configuration

Admin API

  • REST API — full programmatic control via Garage Admin API v2
  • Cluster health — real-time node status and capacity monitoring
  • OpenTelemetry — distributed tracing support

Limits

| Resource | Limit |
|----------|-------|
| Buckets per account | 100 |
| Objects per bucket | Unlimited |
| Max object size | 5 TiB |
| Max part size (multipart) | 5 GiB |
| Min part size (multipart) | 5 MiB |
| Bucket name length | 3–63 characters |
| Object key length | Up to 1024 bytes |
| Access keys per account | Unlimited |
| Minimum storage | 250 GiB per account |

What's Not Supported

These AWS S3 features are not available:

  • Bucket policies (JSON IAM-style policies)
  • IAM users, groups, and roles
  • STS temporary credentials
  • Server-side encryption (SSE-S3, SSE-KMS)
  • Lifecycle rules (auto-expiration, storage class transitions)
  • Event notifications (SNS, SQS, Lambda)
  • S3 Select / Glacier
  • Delete markers
  • MFA Delete
  • Bucket tagging
  • CloudWatch metrics / CloudTrail logging

If your application depends on bucket policies or lifecycle rules, plan around their absence: use per-key permissions for access control, and handle object cleanup manually or via cron scripts.
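A cron-driven cleanup script is straightforward to sketch with boto3; the 30-day retention window and the `my-backups` bucket name here are assumptions, not defaults:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

def is_expired(last_modified, now=None):
    """True once an object is older than the retention window."""
    now = now or datetime.now(timezone.utc)
    return now - last_modified > RETENTION

def purge_old_objects(bucket: str) -> None:
    """Delete every object in the bucket that has aged past RETENTION."""
    import boto3
    s3 = boto3.client('s3', endpoint_url='https://s3.hostupcloud.com', region_name='blr1')
    for page in s3.get_paginator('list_objects_v2').paginate(Bucket=bucket):
        for obj in page.get('Contents', []):
            if is_expired(obj['LastModified']):
                s3.delete_object(Bucket=bucket, Key=obj['Key'])

# Run from cron, e.g. daily:
# purge_old_objects('my-backups')
```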

Use Cases

Backups & Archives

Store server backups, database dumps, and log archives with low-cost Nearline (HDD) storage:

# Daily backup with rclone
rclone sync /var/backups huc:my-backups/$(date +%Y-%m-%d)/

Media & Assets

Host images, videos, and documents for web applications using presigned URLs:

# Generate a 24-hour download link
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'media', 'Key': 'video.mp4'},
    ExpiresIn=86400,
)

Static Website Hosting

Deploy static sites (React, Next.js export, Hugo, Jekyll) directly to a bucket.
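A deploy can be as simple as walking the build directory and uploading each file with a guessed Content-Type; a sketch assuming a `./dist` build output and a `my-site` bucket:

```python
import mimetypes
from pathlib import Path

def content_type(path: Path) -> str:
    """Guess a Content-Type from the file extension, falling back to binary."""
    guessed, _ = mimetypes.guess_type(path.name)
    return guessed or 'application/octet-stream'

def deploy(build_dir: str, bucket: str) -> None:
    """Upload every file under build_dir to the bucket, preserving relative paths as keys."""
    import boto3
    s3 = boto3.client('s3', endpoint_url='https://s3.hostupcloud.com', region_name='blr1')
    root = Path(build_dir)
    for path in root.rglob('*'):
        if path.is_file():
            key = path.relative_to(root).as_posix()
            s3.upload_file(str(path), bucket, key,
                           ExtraArgs={'ContentType': content_type(path)})

# deploy('./dist', 'my-site')
print(content_type(Path('index.html')))   # text/html
```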

Terraform State

Use HUC S3 as a remote backend for Terraform state files with locking.

Application Data

Store user uploads, generated reports, and application assets with the AWS SDK.

SLA

| Metric | Value |
|--------|-------|
| Availability | 99.9% |
| Durability | 99.999999999% (11-nines) |
| Network | 80 Gbps uplink capacity |
| Latency | < 5 ms from Bangalore DC |

See Support & SLA for credit schedules and claim procedures.

Getting Help

  • Dashboard: storage.hostupcloud.com — manage buckets, keys, and usage
  • Support: hostupcloud.help — submit a ticket
  • Documentation: You're reading it
  • S3 Endpoint: https://s3.hostupcloud.com
  • Region: blr1
