# BYO Storage
SnapSharp can upload every screenshot directly to a bucket you control. The returned URL points at your domain (or your provider's), you pay no SnapSharp egress fees, and you keep ownership of the bytes.
All supported providers speak the S3 API, so a single SnapSharp setting covers them all — you just pick the provider and we plug the right endpoint in for you.
## Supported providers
| Provider | Endpoint resolved | Path-style | Notes |
|---|---|---|---|
| AWS S3 | SDK default (regional) | virtual-hosted | Most common choice. |
| Cloudflare R2 | https://{accountId}.r2.cloudflarestorage.com | path-style | Free egress to Cloudflare Workers / public r2.dev. |
| Google Cloud Storage | https://storage.googleapis.com | path-style | Requires interoperability mode + HMAC keys. Bucket names must not contain _. |
| Backblaze B2 | https://s3.{region}.backblazeb2.com | path-style | Region required (e.g. us-west-002). |
| Wasabi | https://s3.{region}.wasabisys.com | path-style | Region required (e.g. us-east-1). |
| MinIO | user-provided | path-style | Self-hosted. Endpoint is mandatory. |
| Custom S3-compatible | user-provided | path-style | Any other vendor exposing the S3 API. |
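The endpoint resolution in the table above can be sketched as a small lookup. This is an illustrative sketch only — the provider identifiers are our own labels, not SnapSharp's internal names:

```python
def resolve_endpoint(provider: str, *, region: str = "", account_id: str = "") -> str:
    """Derive the S3 endpoint for each provider, mirroring the table above."""
    if provider == "aws-s3":
        return ""  # empty: the SDK falls back to its regional default
    if provider == "cloudflare-r2":
        return f"https://{account_id}.r2.cloudflarestorage.com"
    if provider == "gcs":
        return "https://storage.googleapis.com"
    if provider == "backblaze-b2":
        return f"https://s3.{region}.backblazeb2.com"
    if provider == "wasabi":
        return f"https://s3.{region}.wasabisys.com"
    # MinIO and other custom vendors must supply an explicit endpoint
    raise ValueError(f"{provider!r} requires a user-provided endpoint")
```

This is why Region is mandatory for B2 and Wasabi, and Account ID for R2: the endpoint cannot be derived without them.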
## Quick setup
- Open Dashboard → Settings → Storage.
- Pick your provider.
- Fill in:
  - Bucket — the destination bucket name.
  - Region — required for B2 and Wasabi; auto-derives the endpoint.
  - Access Key ID / Secret Access Key — the provider's S3 credentials.
  - Account ID — Cloudflare R2 only.
  - Endpoint URL — required for MinIO / Custom; optional override for the rest.
  - Path Prefix — folder inside the bucket (default `snapsharp/`).
- Click Test connection to verify.
- Save.
Then add `upload_to_s3=true` to any screenshot request:

```shell
curl "https://api.snapsharp.dev/v1/screenshot?url=https://example.com&upload_to_s3=true" \
  -H "Authorization: Bearer YOUR_API_KEY"
```

The public URL is returned in the `X-S3-URL` response header. Use `upload_to_s3_signed_url=true` to get a time-limited signed URL in `X-S3-Signed-URL` instead.
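In application code, the same request can be built with any HTTP client. A minimal Python sketch — the endpoint, query parameters, and header names come from the docs above; the helper itself is hypothetical:

```python
from urllib.parse import urlencode
import urllib.request

API = "https://api.snapsharp.dev/v1/screenshot"

def screenshot_request(target_url: str, api_key: str, signed: bool = False) -> urllib.request.Request:
    """Build a screenshot request whose result is uploaded to your bucket."""
    params = {"url": target_url, "upload_to_s3": "true"}
    if signed:
        # ask for a time-limited signed URL (X-S3-Signed-URL) instead
        params["upload_to_s3_signed_url"] = "true"
    return urllib.request.Request(
        f"{API}?{urlencode(params)}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

# After resp = urllib.request.urlopen(screenshot_request(...)),
# resp.headers["X-S3-URL"] holds the public URL of the uploaded object.
```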
## Per-provider setup
### AWS S3
- Create a bucket in your preferred region.
- Create an IAM policy with the minimum permissions:

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET",
        "arn:aws:s3:::YOUR_BUCKET/*"
      ]
    }]
  }
  ```

  (`s3:ListBucket` is what authorizes the `HeadBucket` call used by the connection test; there is no separate `s3:HeadBucket` action.)
- Create an IAM user, attach the policy, and generate an access key. Paste both into SnapSharp.
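If you provision buckets for several environments, the policy above can be templated. A hypothetical helper (the action list mirrors the policy shown; note that `s3:ListBucket`, not a `HeadBucket` action, authorizes the connection test):

```python
import json

def minimal_policy(bucket: str) -> str:
    """Render the minimal IAM policy for a given bucket name."""
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
            # bucket-level ARN for ListBucket, object-level ARN for Put/Get
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        }],
    }, indent=2)
```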
### Cloudflare R2
- Create an R2 bucket in the Cloudflare dashboard.
- Note your Account ID (top-right of the R2 page).
- Create an R2 API token with Object Read & Write permission scoped to your bucket.
- In SnapSharp, paste the Account ID, Access Key ID, Secret Access Key, and bucket name. The endpoint is derived automatically.
- Optional: set a Public URL Pattern if you've attached a custom domain, or use the `r2.dev` public hostname: `https://pub-xxxxx.r2.dev/{key}`.
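The Public URL Pattern is a simple template with a `{key}` placeholder. Expanding it might look like this (hypothetical helper; the pattern value comes from the step above):

```python
def public_url(pattern: str, key: str) -> str:
    """Expand a Public URL Pattern for a given object key."""
    # keys are stored without a leading slash, so strip one if present
    return pattern.replace("{key}", key.lstrip("/"))

# e.g. public_url("https://pub-xxxxx.r2.dev/{key}", "snapsharp/shot.png")
```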
### Google Cloud Storage
GCS exposes an S3-compatible API via interoperability mode.
- Create a bucket. Bucket names cannot contain underscores (`_`) — GCS rejects them under interoperability.
- In Cloud Storage → Settings → Interoperability, enable interoperability for your project.
- Create a service account with `roles/storage.objectAdmin` on the bucket, then create HMAC keys for that service account.
- In SnapSharp, pick Google Cloud Storage and paste the HMAC access ID and secret. The endpoint is preset to `https://storage.googleapis.com`.
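A sketch of the kind of check a settings UI can run before accepting a GCS bucket name — the function is ours, but the underscore rule is the one described above, and the other constraints (lowercase letters, digits, dashes, dots, 3-63 characters) follow GCS naming rules:

```python
import re

# lowercase letters, digits, dashes and dots; must start and end alphanumeric
_VALID = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def interop_safe_bucket(name: str) -> bool:
    """Reject names GCS interoperability mode won't accept (e.g. underscores)."""
    return "_" not in name and bool(_VALID.match(name))
```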
### Backblaze B2
- Create a B2 bucket. Note the region (e.g. `us-west-002`) — visible in the bucket details.
- Create an Application Key scoped to the bucket with at least the `listFiles`, `readFiles`, and `writeFiles` capabilities.
- In SnapSharp, pick Backblaze B2, set the region (required — the endpoint is derived from it), and paste the keyID + applicationKey into Access Key ID / Secret Access Key.
### Wasabi
- Create a Wasabi bucket and note its region (e.g. `us-east-1`).
- Create an Access Key (IAM-style) with `PutObject`, `GetObject`, and `HeadBucket` on the bucket.
- In SnapSharp, pick Wasabi, set the region, and paste the credentials. The endpoint is derived automatically.
### MinIO / Custom S3
For self-hosted MinIO or any S3-compatible vendor:
- Provide an explicit Endpoint URL (e.g. `https://minio.your-domain.com`).
- Create an access key + secret on your server.
- Make sure the endpoint is reachable from SnapSharp's API egress.
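Path-style addressing — which the table above lists for MinIO and most S3-compatible vendors — puts the bucket in the URL path rather than the hostname. A sketch of the difference (illustrative helper, not SnapSharp code):

```python
def object_url(endpoint: str, bucket: str, key: str, path_style: bool = True) -> str:
    """Build an object URL under path-style or virtual-hosted addressing."""
    endpoint = endpoint.rstrip("/")
    if path_style:
        # e.g. https://minio.your-domain.com/my-bucket/snapsharp/shot.png
        return f"{endpoint}/{bucket}/{key}"
    # virtual-hosted: the bucket becomes a subdomain of the endpoint host
    scheme, host = endpoint.split("://", 1)
    return f"{scheme}://{bucket}.{host}/{key}"
```

Virtual-hosted addressing requires wildcard DNS and TLS for `*.your-endpoint`, which is why self-hosted setups almost always stay path-style.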
## Common pitfalls
- CORS — only relevant if your front-end fetches the screenshot URL directly from a browser. Server-to-server reads (the common case) don't need CORS configured.
- Bucket privacy — by default, objects are private. Either configure the bucket as public, use signed URLs (`upload_to_s3_signed_url=true`), or proxy reads through your own backend.
- GCS underscores — GCS bucket names cannot contain underscores when used via interoperability mode. SnapSharp catches this in the settings UI.
- R2 public URL — R2 doesn't have a single canonical public URL pattern; attach a custom domain or use the `r2.dev` hostname and set it as the Public URL Pattern in SnapSharp.
- Permission scope — at minimum SnapSharp needs `PutObject` on the prefix and bucket-level read/list access for the `HeadBucket` call the connection test performs.
## Out of scope (for now)
- Azure Blob Storage — different auth model; tracked separately.
- Per-folder lifecycle / TTL configuration on the SnapSharp side — set these directly in your provider.
- Per-request bucket selection — the configured bucket applies to every upload.