Use Storage Buckets for Uploads, Exports, and Assets
Railway Buckets provide S3-compatible object storage inside your project. This guide covers three common patterns: accepting user uploads via presigned URLs, generating files in background jobs, and serving assets through a backend proxy.
Prerequisites
- A Railway project with at least one service
- Familiarity with S3-compatible APIs
Create a bucket
- Open your Railway project
- Click + New on the project canvas and select Bucket
- Choose a region and name. You cannot change the region after creation.
- The bucket deploys with S3-compatible credentials available in the Credentials tab
For full details, see Storage Buckets.
Connect your service to the bucket
Use Railway's variable references to pass bucket credentials to your service:
- Click on your service in the project canvas
- Go to the Variables tab
- Use the auto-inject feature to add bucket credentials for your S3 client library (AWS SDK, Bun S3, etc.)
Railway supports automatic credential injection for common libraries. See Storage Buckets for details.
Pattern 1: User uploads via presigned URLs
When to use: User-generated content, profile pictures, file attachments. Any scenario where the client uploads directly to storage without routing through your server.
A presigned URL is a time-limited link, signed with your bucket credentials, that authorizes one specific operation on one specific object key. The client uses it to upload directly to the bucket, so no binary data passes through your server.
How it works
- Your backend generates a presigned PUT URL using the S3 SDK
- Your frontend uploads directly to that URL
- The file lands in the bucket without touching your server
Backend: generate a presigned URL
Frontend: upload to the presigned URL
CORS configuration
If uploading from a browser, configure CORS on the bucket. See the Storage Buckets docs for CORS setup instructions.
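The configuration mechanism is covered in the Storage Buckets docs; the rule set itself typically follows the standard S3 CORS shape. A sketch, with the origin as a placeholder you would replace with your frontend's domain:

```json
[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["PUT", "GET"],
    "AllowedHeaders": ["Content-Type"],
    "MaxAgeSeconds": 3600
  }
]
```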
Pattern 2: Background exports and report generation
When to use: Scheduled reports, CSV exports, PDF generation. Any file created by a background process that users download later.
Workers or cron jobs can generate files and upload them to the bucket for later retrieval. To choose between cron jobs, workers, and queues, see the dedicated guide.
How it works
- A background worker or cron job generates a report (CSV, PDF, etc.)
- The worker uploads the file to the bucket using the S3 SDK
- When a user requests the file, your backend generates a presigned GET URL
Upload from a worker
Serve via presigned GET URL
Pattern 3: Serving assets through a backend proxy
When to use: Serving private files that require access control, or files that need processing (resizing, watermarking) before delivery.
Railway Buckets are private. There are no public bucket URLs. To serve stored files to end users, route requests through your backend.
How it works
- Store assets in the bucket
- Your backend fetches the object from S3 and streams it to the client
- Add caching headers to reduce repeated fetches
For high-traffic asset serving, consider using presigned GET URLs instead to avoid routing all downloads through your server.
Next steps
- Storage Buckets - Full reference for bucket configuration, CORS, and credential injection
- Running a Cron Job - Schedule report generation
- Private Networking - Connect services over the internal network