BYO Storage

SnapSharp can upload every screenshot directly to a bucket you control. The returned URL points at your domain (or your provider's), you pay no SnapSharp egress fees, and you keep ownership of the bytes.

All supported providers speak the S3 API, so a single SnapSharp setting covers them all — you just pick the provider and we plug the right endpoint in for you.

Supported providers

  • AWS S3: SDK default (regional) endpoint, virtual-hosted addressing. Most common choice.
  • Cloudflare R2: endpoint https://{accountId}.r2.cloudflarestorage.com, path-style. Free egress to Cloudflare Workers / public r2.dev.
  • Google Cloud Storage: endpoint https://storage.googleapis.com, path-style. Requires interoperability mode + HMAC keys. Bucket names must not contain _.
  • Backblaze B2: endpoint https://s3.{region}.backblazeb2.com, path-style. Region required (e.g. us-west-002).
  • Wasabi: endpoint https://s3.{region}.wasabisys.com, path-style. Region required (e.g. us-east-1).
  • MinIO: user-provided endpoint (mandatory), path-style. Self-hosted.
  • Custom S3-compatible: user-provided endpoint, path-style. Any other vendor exposing the S3 API.
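
All of these endpoints can be exercised with any standard S3 client, which is a handy way to sanity-check credentials before pasting them into SnapSharp. The sketch below uses the AWS SDK for JavaScript; the endpoint map, placeholder values (ACCOUNT_ID, regions, the MinIO host, YOUR_BUCKET), and environment variable names are illustrative, not part of SnapSharp itself.

import { S3Client, HeadBucketCommand } from "@aws-sdk/client-s3";

// Endpoints mirror the table above; replace the placeholders with your own values.
const endpoints: Record<string, string | undefined> = {
  "aws-s3": undefined,                                            // SDK default (regional)
  "cloudflare-r2": "https://ACCOUNT_ID.r2.cloudflarestorage.com",
  "google-cloud-storage": "https://storage.googleapis.com",
  "backblaze-b2": "https://s3.us-west-002.backblazeb2.com",
  "wasabi": "https://s3.us-east-1.wasabisys.com",
  "minio": "https://minio.your-domain.com",                       // user-provided, mandatory
};

const provider: string = "cloudflare-r2";

const client = new S3Client({
  region: "auto",                          // R2 accepts "auto"; use the bucket's region elsewhere
  endpoint: endpoints[provider],
  forcePathStyle: provider !== "aws-s3",   // only AWS S3 uses virtual-hosted addressing
  credentials: {
    accessKeyId: process.env.ACCESS_KEY_ID!,
    secretAccessKey: process.env.SECRET_ACCESS_KEY!,
  },
});

// The same kind of check SnapSharp's "Test connection" performs.
await client.send(new HeadBucketCommand({ Bucket: "YOUR_BUCKET" }));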

Quick setup

  1. Open Dashboard → Settings → Storage.
  2. Pick your provider.
  3. Fill in:
    • Bucket — the destination bucket name.
    • Region — required for B2 and Wasabi; auto-derives the endpoint.
    • Access Key ID / Secret Access Key — provider's S3 credentials.
    • Account ID — Cloudflare R2 only.
    • Endpoint URL — required for MinIO / Custom; optional override for the rest.
    • Path Prefix — folder inside the bucket (default snapsharp/).
  4. Click Test connection to verify.
  5. Save.

Then add upload_to_s3=true to any screenshot request:

curl "https://api.snapsharp.dev/v1/screenshot?url=https://example.com&upload_to_s3=true" \
  -H "Authorization: Bearer YOUR_API_KEY"

The public URL is returned in the X-S3-URL response header. Use upload_to_s3_signed_url=true to get a time-limited signed URL in X-S3-Signed-URL instead.
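
For example, a server-side caller can read the header straight off the response. A short TypeScript sketch using fetch; the API key environment variable is a placeholder:

// Request a screenshot and let SnapSharp upload it to your bucket.
const res = await fetch(
  "https://api.snapsharp.dev/v1/screenshot?url=https://example.com&upload_to_s3=true",
  { headers: { Authorization: `Bearer ${process.env.SNAPSHARP_API_KEY}` } }
);

// Public object URL from the response header
// (use "X-S3-Signed-URL" instead if you passed upload_to_s3_signed_url=true).
const s3Url = res.headers.get("X-S3-URL");
console.log(s3Url);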

Per-provider setup

AWS S3

  1. Create a bucket in your preferred region.

  2. IAM policy — minimum permissions:

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:GetObject", "s3:HeadBucket"],
        "Resource": [
          "arn:aws:s3:::YOUR_BUCKET",
          "arn:aws:s3:::YOUR_BUCKET/*"
        ]
      }]
    }
  3. Create an IAM user, attach the policy, generate an access key. Paste both into SnapSharp.
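
To confirm the key and policy behave as intended before saving them in SnapSharp, you can run the same two operations yourself. A minimal sketch with the AWS SDK for JavaScript; the region, bucket name, and object key are placeholders, and this is not SnapSharp's own test code:

import { S3Client, HeadBucketCommand, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });   // picks up AWS credentials from the environment

// The connection test needs to see the bucket (s3:ListBucket authorizes HeadBucket).
await s3.send(new HeadBucketCommand({ Bucket: "YOUR_BUCKET" }));

// Uploads need s3:PutObject under the configured path prefix.
await s3.send(new PutObjectCommand({
  Bucket: "YOUR_BUCKET",
  Key: "snapsharp/permission-check.txt",
  Body: "ok",
}));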

Cloudflare R2

  1. Create an R2 bucket in the Cloudflare dashboard.
  2. Note your Account ID (top-right of the R2 page).
  3. Create an R2 API token with Object Read & Write permission scoped to your bucket.
  4. In SnapSharp, paste the Account ID, Access Key ID, Secret Access Key, and bucket name. The endpoint is derived automatically.
  5. Optional: set a Public URL Pattern if you've attached a custom domain or use the r2.dev public hostname: https://pub-xxxxx.r2.dev/{key}.
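
A Public URL Pattern like https://pub-xxxxx.r2.dev/{key} presumably has the object key substituted for {key}. A tiny illustration of that expansion; the hostname and key below are made-up examples:

// Hypothetical expansion of a Public URL Pattern.
const pattern = "https://pub-xxxxx.r2.dev/{key}";
const key = "snapsharp/2f9c1a.png";                 // example object key under the default prefix

const publicUrl = pattern.replace("{key}", key);
// -> "https://pub-xxxxx.r2.dev/snapsharp/2f9c1a.png"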

Google Cloud Storage

GCS exposes an S3-compatible API via interoperability mode.

  1. Create a bucket. Bucket names cannot contain underscores (_) — GCS rejects them under interoperability.
  2. In Cloud Storage → Settings → Interoperability, enable interoperability for your project.
  3. Create a service account with roles/storage.objectAdmin on the bucket, then create HMAC keys for that service account.
  4. In SnapSharp pick Google Cloud Storage and paste the HMAC access ID and secret. The endpoint is preset to https://storage.googleapis.com.
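
Because interoperability mode is plain S3-over-HMAC, the keys can be verified with any S3 client pointed at https://storage.googleapis.com. A rough sketch; the bucket name, object key, and environment variable names are placeholders:

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const gcs = new S3Client({
  region: "us-east-1",                       // required by the SDK for signing; GCS generally accepts it
  endpoint: "https://storage.googleapis.com",
  forcePathStyle: true,                      // path-style, per the table above
  credentials: {
    accessKeyId: process.env.GCS_HMAC_ACCESS_ID!,    // HMAC access ID
    secretAccessKey: process.env.GCS_HMAC_SECRET!,   // HMAC secret
  },
});

await gcs.send(new PutObjectCommand({
  Bucket: "your-bucket-no-underscores",
  Key: "snapsharp/interop-check.txt",
  Body: "ok",
}));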

Backblaze B2

  1. Create a B2 bucket. Note the region (e.g. us-west-002) — visible in the bucket details.
  2. Create an Application Key scoped to the bucket with at least listFiles, readFiles, writeFiles capabilities.
  3. In SnapSharp pick Backblaze B2, set the region (required; the endpoint is derived from it), and paste the keyID + applicationKey into Access Key ID / Secret Access Key.
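
In S3 terms, the B2 keyID plays the role of the Access Key ID and the applicationKey plays the role of the Secret Access Key, spoken against the region-derived endpoint. A small sketch of that mapping; the region, bucket, and environment variable names are placeholders:

import { S3Client, HeadBucketCommand } from "@aws-sdk/client-s3";

const region = "us-west-002";                           // from the bucket details
const b2 = new S3Client({
  region,
  endpoint: `https://s3.${region}.backblazeb2.com`,     // derived from the region
  forcePathStyle: true,
  credentials: {
    accessKeyId: process.env.B2_KEY_ID!,                // B2 keyID
    secretAccessKey: process.env.B2_APPLICATION_KEY!,   // B2 applicationKey
  },
});

await b2.send(new HeadBucketCommand({ Bucket: "YOUR_BUCKET" }));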

Wasabi

  1. Create a Wasabi bucket and note its region (e.g. us-east-1).
  2. Create an Access Key (IAM-style) with PutObject, GetObject, HeadBucket on the bucket.
  3. In SnapSharp pick Wasabi, set the region, paste credentials. Endpoint is derived automatically.

MinIO / Custom S3

For self-hosted MinIO or any S3-compatible vendor:

  1. Provide an explicit Endpoint URL (e.g. https://minio.your-domain.com).
  2. Create an access key + secret on your server.
  3. Make sure the endpoint is reachable from SnapSharp's API egress.
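
Step 3 is easy to overlook with self-hosted endpoints sitting behind firewalls or on private networks, so it helps to verify the endpoint and credentials from another machine first; SnapSharp's Test connection then confirms reachability from its side. A sketch with placeholder endpoint, bucket, and environment variable names:

import { S3Client, HeadBucketCommand } from "@aws-sdk/client-s3";

const minio = new S3Client({
  region: "us-east-1",                         // MinIO does not generally enforce a region, but the SDK needs one
  endpoint: "https://minio.your-domain.com",   // the explicit Endpoint URL from step 1
  forcePathStyle: true,                        // path-style, per the table above
  credentials: {
    accessKeyId: process.env.MINIO_ACCESS_KEY!,
    secretAccessKey: process.env.MINIO_SECRET_KEY!,
  },
});

await minio.send(new HeadBucketCommand({ Bucket: "YOUR_BUCKET" }));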

Common pitfalls

  • CORS — only relevant if your front-end fetches the screenshot URL directly from a browser. Server-to-server reads (the common case) don't need CORS configured.
  • Bucket privacy — by default, objects are private. Either configure the bucket as public, use signed URLs (upload_to_s3_signed_url=true), or proxy reads through your own backend (a minimal proxy sketch follows this list).
  • GCS underscores — GCS bucket names cannot contain underscores when used via interoperability mode. SnapSharp catches this in the settings UI.
  • R2 public URL — R2 doesn't have a single canonical public URL pattern; attach a custom domain or use the r2.dev hostname and set it as the Public URL Pattern in SnapSharp.
  • Permission scope — at minimum SnapSharp needs PutObject on the prefix and HeadBucket on the bucket (for the connection test); on AWS, HeadBucket is authorized by the s3:ListBucket permission.
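
If you choose the proxy route for private objects, the shape is simple: the browser asks your backend, and the backend reads the object with its own credentials and streams it back. A minimal sketch using Node's http module and the AWS SDK; the region, bucket, and route handling are illustrative only, and the client should be configured with your provider's endpoint as shown earlier:

import { createServer } from "node:http";
import { Readable } from "node:stream";
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });   // set endpoint/credentials to match your provider

createServer(async (req, res) => {
  // Treat the request path as the object key, e.g. /snapsharp/2f9c1a.png
  const key = decodeURIComponent((req.url ?? "/").slice(1));
  try {
    const obj = await s3.send(new GetObjectCommand({ Bucket: "YOUR_BUCKET", Key: key }));
    res.writeHead(200, { "Content-Type": obj.ContentType ?? "application/octet-stream" });
    (obj.Body as Readable).pipe(res);               // stream the private object to the caller
  } catch {
    res.writeHead(404).end();
  }
}).listen(3000);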

Out of scope (for now)

  • Azure Blob Storage — different auth model; tracked separately.
  • Per-folder lifecycle / TTL configuration on the SnapSharp side — set these directly in your provider.
  • Per-request bucket selection — the configured bucket applies to every upload.