If you need a screenshot service that handles thousands of requests per second on a single small VM, Go + Fiber is the right shape. Goroutines saturate I/O without thread pool tuning, the standard library has everything you need, and Fiber gives you Express-style routing on top of fasthttp's zero-allocation parser.
This tutorial builds a Go HTTP service that wraps SnapSharp. You'll get a /screenshot endpoint with caching, validation, and graceful shutdown — production-grade in roughly 200 lines of Go.
Prerequisites
- Go 1.22+.
- A free SnapSharp API key from snapsharp.dev/sign-up.
- Familiarity with go mod, context.Context, and HTTP clients.
- (Optional) Redis for shared caching across instances.
Step 1: scaffold the project
mkdir snap-go && cd snap-go
go mod init github.com/yourname/snap-go
go get github.com/gofiber/fiber/v2
go get github.com/redis/go-redis/v9
go get github.com/snapsharp/sdk-go
The official Go SDK is hosted at github.com/snapsharp/sdk-go. You can also call the HTTP API directly with net/http if you want zero dependencies.
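For the zero-dependency route, a direct call looks roughly like the sketch below. The endpoint path and Authorization header here are assumptions for illustration, so verify them against the screenshot API reference:

import (
    "context"
    "fmt"
    "io"
    "net/http"
    "net/url"
)

// captureRaw calls the SnapSharp HTTP API without the SDK. The endpoint path
// and auth header below are assumptions for illustration; confirm both
// against the API reference before relying on them.
func captureRaw(ctx context.Context, apiKey, target string) ([]byte, error) {
    endpoint := "https://api.snapsharp.dev/v1/screenshot?url=" + url.QueryEscape(target)
    req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpoint, nil)
    if err != nil {
        return nil, err
    }
    req.Header.Set("Authorization", "Bearer "+apiKey)

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    if resp.StatusCode != http.StatusOK {
        return nil, fmt.Errorf("snapsharp returned %s", resp.Status)
    }
    return io.ReadAll(resp.Body)
}

The rest of this tutorial uses the SDK, since it keeps the handlers short.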
Step 2: a minimal Fiber app
main.go:
package main
import (
"log"
"os"
"time"
"github.com/gofiber/fiber/v2"
"github.com/gofiber/fiber/v2/middleware/logger"
"github.com/gofiber/fiber/v2/middleware/recover"
"github.com/snapsharp/sdk-go"
)
func main() {
apiKey := os.Getenv("SNAPSHARP_API_KEY")
if apiKey == "" {
log.Fatal("SNAPSHARP_API_KEY is required")
}
client := snapsharp.NewClient(apiKey)
app := fiber.New(fiber.Config{
AppName: "snap-go",
DisableStartupMessage: false,
ReadTimeout: 30 * time.Second,
WriteTimeout: 120 * time.Second,
})
app.Use(recover.New())
app.Use(logger.New())
app.Get("/health", func(c *fiber.Ctx) error {
return c.JSON(fiber.Map{"ok": true})
})
app.Get("/screenshot", screenshotHandler(client))
port := os.Getenv("PORT")
if port == "" {
port = "8080"
}
log.Fatal(app.Listen(":" + port))
}
Don't forget the time import — Go's compiler will complain if it's missing.
Step 3: the screenshot handler
package main
import (
"fmt"
"log"
"net/url"
"strings"
"github.com/gofiber/fiber/v2"
"github.com/snapsharp/sdk-go"
)
func screenshotHandler(client *snapsharp.Client) fiber.Handler {
return func(c *fiber.Ctx) error {
target := c.Query("url")
if target == "" {
return fiber.NewError(fiber.StatusBadRequest, "url required")
}
if err := validatePublicURL(target); err != nil {
return fiber.NewError(fiber.StatusBadRequest, err.Error())
}
opts := snapsharp.ScreenshotOptions{
Width: c.QueryInt("width", 1280),
Height: c.QueryInt("height", 720),
Format: c.Query("format", "png"),
BlockAds: true,
}
image, err := client.Screenshot(c.Context(), target, opts)
if err != nil {
log.Printf("screenshot failed: %v", err)
return fiber.NewError(fiber.StatusBadGateway, "capture failed")
}
c.Set("Cache-Control", "public, max-age=86400, stale-while-revalidate=604800")
c.Set("Content-Type", "image/"+opts.Format)
return c.Send(image)
}
}
func validatePublicURL(s string) error {
u, err := url.Parse(s)
if err != nil {
return fmt.Errorf("invalid url")
}
if u.Scheme != "http" && u.Scheme != "https" {
return fmt.Errorf("only http(s) allowed")
}
host := u.Hostname()
if host == "" || host == "localhost" {
return fmt.Errorf("invalid host")
}
for _, prefix := range []string{"127.", "10.", "192.168.", "169.254."} {
if strings.HasPrefix(host, prefix) {
return fmt.Errorf("private network blocked")
}
}
return nil
}
c.Context() returns the per-request context. Pass it to the SDK so timeouts and cancellation propagate. If a client disconnects mid-request, the goroutine cleans up.
Run:
SNAPSHARP_API_KEY=sk_live_... go run .
Visit http://localhost:8080/screenshot?url=https://github.com and you'll see the image.
Step 4: in-memory caching with sync.Map
For a single-instance service, an in-memory TTL cache eliminates repeat calls to SnapSharp:
package main
import (
"crypto/sha256"
"encoding/hex"
"fmt"
"sync"
"time"
"github.com/snapsharp/sdk-go"
)
type cacheEntry struct {
data []byte
expires time.Time
}
type Cache struct {
store sync.Map
ttl time.Duration
}
func NewCache(ttl time.Duration) *Cache {
c := &Cache{ttl: ttl}
go c.cleanupLoop()
return c
}
func (c *Cache) cleanupLoop() {
t := time.NewTicker(5 * time.Minute)
defer t.Stop()
for range t.C {
now := time.Now()
c.store.Range(func(k, v any) bool {
if entry, ok := v.(*cacheEntry); ok && entry.expires.Before(now) {
c.store.Delete(k)
}
return true
})
}
}
func (c *Cache) Get(key string) ([]byte, bool) {
v, ok := c.store.Load(key)
if !ok {
return nil, false
}
entry := v.(*cacheEntry)
if entry.expires.Before(time.Now()) {
c.store.Delete(key)
return nil, false
}
return entry.data, true
}
func (c *Cache) Set(key string, data []byte) {
c.store.Store(key, &cacheEntry{
data: data,
expires: time.Now().Add(c.ttl),
})
}
func cacheKey(url string, opts snapsharp.ScreenshotOptions) string {
h := sha256.Sum256([]byte(fmt.Sprintf("%s|%d|%d|%s", url, opts.Width, opts.Height, opts.Format)))
return hex.EncodeToString(h[:8])
}
Wire it into the handler:
func screenshotHandler(client *snapsharp.Client, cache *Cache) fiber.Handler {
return func(c *fiber.Ctx) error {
// ... validation ...
key := cacheKey(target, opts)
if data, ok := cache.Get(key); ok {
c.Set("X-Cache", "HIT")
c.Set("Content-Type", "image/"+opts.Format)
return c.Send(data)
}
image, err := client.Screenshot(c.Context(), target, opts)
if err != nil {
return fiber.NewError(fiber.StatusBadGateway, "capture failed")
}
cache.Set(key, image)
c.Set("X-Cache", "MISS")
c.Set("Content-Type", "image/"+opts.Format)
return c.Send(image)
}
}
In main:
cache := NewCache(24 * time.Hour)
app.Get("/screenshot", screenshotHandler(client, cache))
For multi-instance deployments, swap the in-memory cache for Redis (next step).
Step 5: Redis-backed cache for multi-instance
package main
import (
"context"
"time"
"github.com/redis/go-redis/v9"
)
type RedisCache struct {
client *redis.Client
ttl time.Duration
}
func NewRedisCache(url string, ttl time.Duration) (*RedisCache, error) {
opts, err := redis.ParseURL(url)
if err != nil {
return nil, err
}
return &RedisCache{client: redis.NewClient(opts), ttl: ttl}, nil
}
func (c *RedisCache) Get(ctx context.Context, key string) ([]byte, bool) {
val, err := c.client.Get(ctx, key).Bytes()
if err != nil {
return nil, false
}
return val, true
}
func (c *RedisCache) Set(ctx context.Context, key string, data []byte) error {
return c.client.Set(ctx, key, data, c.ttl).Err()
}
Swap the in-memory cache for Redis if REDIS_URL is set:
var cache interface {
Get(ctx context.Context, key string) ([]byte, bool)
Set(ctx context.Context, key string, data []byte) error
}
if rurl := os.Getenv("REDIS_URL"); rurl != "" {
rc, err := NewRedisCache(rurl, 24*time.Hour)
if err != nil {
log.Fatalf("redis: %v", err)
}
cache = rc
} else {
// fall back to the in-memory implementation via the adapter shown below
}
Now multiple instances behind a load balancer share the same cache state.
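The adapter in that else branch takes only a few lines. A minimal sketch that wraps the Cache from Step 4 so it satisfies the context-aware interface (the context is accepted and ignored, since sync.Map operations never block):

// memCacheAdapter wraps the in-memory Cache from Step 4 so it satisfies
// the context-aware cache interface used above.
type memCacheAdapter struct {
    inner *Cache
}

func (a *memCacheAdapter) Get(ctx context.Context, key string) ([]byte, bool) {
    return a.inner.Get(key) // ctx unused: sync.Map lookups don't block
}

func (a *memCacheAdapter) Set(ctx context.Context, key string, data []byte) error {
    a.inner.Set(key, data)
    return nil
}

In the else branch, assign cache = &memCacheAdapter{inner: NewCache(24 * time.Hour)}. The handler then calls cache.Get(c.Context(), key) and cache.Set(c.Context(), key, image) regardless of which backend is active.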
Step 6: graceful shutdown
Without graceful shutdown, an in-flight screenshot dies when you redeploy. With it, Fiber waits for handlers to complete:
import (
"context"
"os/signal"
"syscall"
)
func main() {
// ... setup ...
go func() {
if err := app.Listen(":" + port); err != nil {
log.Printf("server error: %v", err)
}
}()
ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
defer stop()
<-ctx.Done()
log.Println("shutdown signal received")
shutdownCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()
if err := app.ShutdownWithContext(shutdownCtx); err != nil {
log.Printf("shutdown error: %v", err)
}
log.Println("shutdown complete")
}
ShutdownWithContext with a 30-second context gives in-flight handlers up to 30 seconds to complete. After that, they're dropped. SIGTERM from Kubernetes or Docker triggers this gracefully.
Step 7: rate limiting
Fiber has a built-in rate limiter that uses an in-memory store by default:
import "github.com/gofiber/fiber/v2/middleware/limiter"
app.Use(limiter.New(limiter.Config{
Max: 30,
Expiration: 1 * time.Minute,
KeyGenerator: func(c *fiber.Ctx) string {
return c.IP()
},
LimitReached: func(c *fiber.Ctx) error {
return c.Status(429).JSON(fiber.Map{"error": "rate limit exceeded"})
},
}))
For multi-instance, use the Redis storage adapter from github.com/gofiber/storage/redis, as sketched below.
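A sketch of that swap, assuming the storage adapter's Config accepts a Redis connection URL (field names vary between releases, so check the module's README for the version you install):

import (
    "github.com/gofiber/fiber/v2/middleware/limiter"
    redisstorage "github.com/gofiber/storage/redis"
)

// Back the limiter with Redis so every instance counts requests against the
// same budget. The URL field is an assumption; verify it against the storage
// adapter version you install.
store := redisstorage.New(redisstorage.Config{
    URL: os.Getenv("REDIS_URL"),
})

app.Use(limiter.New(limiter.Config{
    Max:        30,
    Expiration: 1 * time.Minute,
    Storage:    store,
}))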
Step 8: docker deployment
Dockerfile:
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -trimpath -ldflags="-s -w" -o /out/snap-go .
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/snap-go /snap-go
EXPOSE 8080
ENTRYPOINT ["/snap-go"]
The final image is ~10 MB on top of distroless. No glibc, no shell, no attack surface.
docker-compose.yml:
services:
  app:
    build: .
    ports:
      - "8080:8080"
    environment:
      - SNAPSHARP_API_KEY=${SNAPSHARP_API_KEY}
      - REDIS_URL=redis://redis:6379/0
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
Run with docker compose up -d. Add nginx in front for TLS.
Step 9: deployment to Fly.io
fly.toml:
app = "snap-go"
primary_region = "fra"
[build]
dockerfile = "Dockerfile"
[http_service]
internal_port = 8080
force_https = true
auto_stop_machines = true
auto_start_machines = true
min_machines_running = 0
[[vm]]
cpu_kind = "shared"
cpus = 1
memory_mb = 256
256 MB RAM is plenty — Go's runtime is tiny and the SDK is mostly waiting on HTTP. Total cost for low-traffic services is sub-$5/month.
fly secrets set SNAPSHARP_API_KEY=sk_live_...
fly redis create
fly deploy
Common pitfalls
Pitfall 1: not using c.Context(). If you pass context.Background() to the SDK, request cancellation doesn't propagate. Always use the request context.
Pitfall 2: a WriteTimeout that's too tight. Full-page screenshots can take 60+ seconds, so if you set timeouts in the Fiber config (as in Step 2), give the write side headroom: WriteTimeout: 120 * time.Second.
Pitfall 3: in-memory cache without TTL. Without periodic cleanup, the sync.Map grows forever. The cleanup loop in Step 4 handles this; don't skip it.
Pitfall 4: leaking goroutines on errors. If you launch a goroutine for async work, ensure it returns when the context is canceled. select { case <-ctx.Done(): return ... } is your friend.
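A minimal sketch of the pattern; startWorker and processJob are hypothetical names, not part of this service:

// startWorker launches a background goroutine that exits as soon as the
// context is canceled instead of leaking. processJob is a hypothetical
// helper standing in for whatever async work you kick off.
func startWorker(ctx context.Context, jobs <-chan string) {
    go func() {
        for {
            select {
            case <-ctx.Done():
                return // request canceled or server shutting down
            case job, ok := <-jobs:
                if !ok {
                    return // channel closed, nothing left to do
                }
                processJob(ctx, job)
            }
        }
    }()
}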
Pitfall 5: Redis client without context. go-redis v9 requires a context on every call. Don't pass context.Background() from a request handler — use c.Context().
Final code
Three files:
- main.go — server setup, routing, graceful shutdown.
- cache.go — in-memory and Redis cache implementations.
- handler.go — screenshot handler with validation.
Around 250 lines total. Compiles to a 10 MB static binary that runs anywhere.
Conclusion
Go + Fiber is the highest-throughput shape for a screenshot service. Goroutines handle the I/O concurrency for free. The standard library's HTTP client is rock-solid. Distroless containers are small, secure, and quick to start. Pair it with SnapSharp and you have an edge-deployable service that costs pennies per million requests.
Next steps: explore the screenshot API reference, read the Rust + Axum tutorial, or compare with the Node.js patterns.
Related: Pricing · Why headless Chrome crashes · Webhooks for real-time notifications