A Rust screenshot service is overkill for most teams — until you need 10,000 req/sec on a single small box, predictable tail latencies, and no garbage-collector pauses. Axum + Tokio is the modern Rust web stack, and SnapSharp's HTTP API plays well with reqwest. The result is a typed, async service that compiles to a single self-contained binary.
This tutorial walks through the full integration. By the end you'll have a Rust service with validated routes, shared state, graceful shutdown, and Docker deployment.
Prerequisites
- Rust 1.75+ (use rustup).
- Familiarity with async/await, Result, and Cargo.
- A free SnapSharp API key from snapsharp.dev/sign-up.
Step 1: scaffold
cargo new snap-rs --bin
cd snap-rs

Add dependencies to Cargo.toml:
[package]
name = "snap-rs"
version = "0.1.0"
edition = "2021"
[dependencies]
axum = "0.7"
tokio = { version = "1", features = ["full"] }
tower = "0.4"
tower-http = { version = "0.5", features = ["trace", "cors", "limit"] }
reqwest = { version = "0.12", features = ["json", "rustls-tls"], default-features = false }
serde = { version = "1", features = ["derive"] }
serde_json = "1"
url = "2"
thiserror = "1"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
anyhow = "1"

reqwest with rustls-tls avoids OpenSSL — the binary stays portable.
Step 2: a typed SnapSharp client
src/snapsharp.rs:
use reqwest::{Client, StatusCode};
use serde::Serialize;
use thiserror::Error;
use std::time::Duration;
const BASE_URL: &str = "https://api.snapsharp.dev/v1";
#[derive(Debug, Error)]
pub enum SnapsharpError {
#[error("HTTP error: {0}")]
Http(#[from] reqwest::Error),
#[error("API error {status}: {message}")]
Api { status: u16, message: String },
}
#[derive(Debug, Clone, Serialize)]
pub struct ScreenshotOptions {
pub url: String,
pub width: u32,
pub height: u32,
pub format: String,
pub block_ads: bool,
pub full_page: bool,
}
impl Default for ScreenshotOptions {
fn default() -> Self {
Self {
url: String::new(),
width: 1280,
height: 720,
format: "png".into(),
block_ads: true,
full_page: false,
}
}
}
#[derive(Clone)]
pub struct Snapsharp {
http: Client,
api_key: String,
}
impl Snapsharp {
pub fn new(api_key: impl Into<String>) -> Self {
let http = Client::builder()
.timeout(Duration::from_secs(120))
.pool_idle_timeout(Duration::from_secs(90))
.build()
.expect("reqwest builder");
Self { http, api_key: api_key.into() }
}
pub async fn screenshot(&self, opts: &ScreenshotOptions) -> Result<Vec<u8>, SnapsharpError> {
let url = format!("{}/screenshot", BASE_URL);
let resp = self.http.post(&url)
.bearer_auth(&self.api_key)
.json(opts)
.send()
.await?;
if !resp.status().is_success() {
let status = resp.status().as_u16();
let message = resp.text().await.unwrap_or_default();
return Err(SnapsharpError::Api { status, message });
}
let bytes = resp.bytes().await?.to_vec();
Ok(bytes)
}
}

thiserror gives us clean error variants. The Snapsharp struct is Clone because it wraps an Arc-internal reqwest::Client — cheap to clone, safe to share across handlers.
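That cheap-clone behavior is easy to demonstrate with plain std types. FakeClient below is a hypothetical stand-in for the wrapper, not SnapSharp code; its pool field plays the role of reqwest's internal connection pool, which lives behind an Arc:

```rust
use std::sync::Arc;

// Stand-in for the Snapsharp wrapper: `pool` models reqwest's
// Arc-wrapped internal connection pool.
#[derive(Clone)]
struct FakeClient {
    pool: Arc<String>,
}

fn main() {
    let a = FakeClient { pool: Arc::new(String::from("connection pool")) };
    let b = a.clone(); // copies a pointer and bumps a refcount; no new pool
    // Both handles point at the same allocation.
    assert!(Arc::ptr_eq(&a.pool, &b.pool));
    assert_eq!(Arc::strong_count(&a.pool), 2);
    println!("two handles, one pool");
}
```

This is why handing a clone to every handler is fine, while building a fresh Client per request is not.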
Step 3: shared state and the router
src/main.rs:
mod snapsharp;
use axum::{
extract::{Query, State},
http::{HeaderMap, HeaderValue, StatusCode},
response::{IntoResponse, Response},
routing::get,
Router,
};
use serde::Deserialize;
use std::{net::SocketAddr, sync::Arc};
use tower_http::trace::TraceLayer;
use tracing_subscriber::EnvFilter;
use url::Url;
use crate::snapsharp::{Snapsharp, ScreenshotOptions, SnapsharpError};
#[derive(Clone)]
struct AppState {
snap: Snapsharp,
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
tracing_subscriber::fmt()
.with_env_filter(EnvFilter::try_from_default_env().unwrap_or_else(|_| "info".into()))
.init();
let api_key = std::env::var("SNAPSHARP_API_KEY")
.map_err(|_| anyhow::anyhow!("SNAPSHARP_API_KEY is required"))?;
let state = AppState { snap: Snapsharp::new(api_key) };
let app = Router::new()
.route("/health", get(|| async { "ok" }))
.route("/screenshot", get(screenshot_handler))
.with_state(state)
.layer(TraceLayer::new_for_http());
let port: u16 = std::env::var("PORT").ok().and_then(|s| s.parse().ok()).unwrap_or(8080);
let addr: SocketAddr = ([0, 0, 0, 0], port).into();
tracing::info!("listening on {}", addr);
let listener = tokio::net::TcpListener::bind(addr).await?;
axum::serve(listener, app).await?;
Ok(())
}

with_state(state) makes the Snapsharp client available to every handler via the State extractor. Type-safe, no global variables.
Step 4: the screenshot handler
#[derive(Debug, Deserialize)]
struct ScreenshotQuery {
url: String,
#[serde(default = "default_width")]
width: u32,
#[serde(default = "default_height")]
height: u32,
#[serde(default = "default_format")]
format: String,
#[serde(default)]
full_page: bool,
}
fn default_width() -> u32 { 1280 }
fn default_height() -> u32 { 720 }
fn default_format() -> String { "png".to_string() }
async fn screenshot_handler(
State(state): State<AppState>,
Query(q): Query<ScreenshotQuery>,
) -> Result<Response, AppError> {
validate_url(&q.url)?;
if q.width < 320 || q.width > 3840 || q.height < 240 || q.height > 2160 {
return Err(AppError::BadRequest("dimension out of range".into()));
}
let opts = ScreenshotOptions {
url: q.url,
width: q.width,
height: q.height,
format: q.format.clone(),
block_ads: true,
full_page: q.full_page,
};
let image = state.snap.screenshot(&opts).await?;
let mut headers = HeaderMap::new();
// "jpg" is not a registered MIME subtype; normalize it.
let content_type = match q.format.as_str() {
"jpg" => "image/jpeg".to_string(),
f => format!("image/{}", f),
};
// q.format is user input, so don't unwrap a potentially invalid header value.
headers.insert(
"Content-Type",
HeaderValue::from_str(&content_type)
.unwrap_or_else(|_| HeaderValue::from_static("application/octet-stream")),
);
headers.insert(
"Cache-Control",
HeaderValue::from_static("public, max-age=86400, stale-while-revalidate=604800"),
);
Ok((StatusCode::OK, headers, image).into_response())
}
fn validate_url(s: &str) -> Result<(), AppError> {
let parsed = Url::parse(s).map_err(|_| AppError::BadRequest("invalid url".into()))?;
match parsed.scheme() {
"http" | "https" => {}
_ => return Err(AppError::BadRequest("only http(s) allowed".into())),
}
let host = parsed.host_str().unwrap_or("");
if host.is_empty() || host == "localhost" {
return Err(AppError::BadRequest("invalid host".into()));
}
for prefix in ["127.", "10.", "192.168.", "169.254."] {
if host.starts_with(prefix) {
return Err(AppError::BadRequest("private network blocked".into()));
}
}
Ok(())
}

Axum's Query extractor parses and validates the query string into our typed struct. If width is missing, default_width() fills it in. Bad input fails fast with a 400.
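One gap worth noting: the prefix list misses 172.16.0.0/12. When the host is a literal IP address, parsing it is more robust than string matching. A std-only sketch, where is_blocked_ip is a hypothetical helper rather than part of the handler above:

```rust
use std::net::Ipv4Addr;

// Hypothetical helper: parse literal IPv4 hosts and reject private,
// loopback, link-local, and unspecified ranges via std's classifiers.
fn is_blocked_ip(host: &str) -> bool {
    match host.parse::<Ipv4Addr>() {
        // is_private() covers 10/8, 172.16/12, and 192.168/16.
        Ok(ip) => ip.is_private() || ip.is_loopback() || ip.is_link_local() || ip.is_unspecified(),
        Err(_) => false, // not a literal IPv4; hostname rules apply instead
    }
}

fn main() {
    assert!(is_blocked_ip("127.0.0.1"));
    assert!(is_blocked_ip("172.16.0.1")); // caught here, missed by the prefix list
    assert!(!is_blocked_ip("93.184.216.34"));
    assert!(!is_blocked_ip("example.com"));
    println!("ip checks pass");
}
```

Hostnames that resolve to private addresses still slip through either approach; complete SSRF protection needs a check at DNS-resolution time.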
Step 5: typed error responses
#[derive(Debug, thiserror::Error)]
enum AppError {
#[error("bad request: {0}")]
BadRequest(String),
#[error("upstream: {0}")]
Upstream(#[from] SnapsharpError),
}
impl IntoResponse for AppError {
fn into_response(self) -> Response {
let (status, message) = match &self {
AppError::BadRequest(msg) => (StatusCode::BAD_REQUEST, msg.clone()),
AppError::Upstream(SnapsharpError::Api { status, message }) => {
let s = StatusCode::from_u16(*status).unwrap_or(StatusCode::BAD_GATEWAY);
(s, message.clone())
}
AppError::Upstream(_) => (StatusCode::BAD_GATEWAY, "capture failed".into()),
};
let body = serde_json::json!({ "error": message });
(status, axum::Json(body)).into_response()
}
}

The ? operator in handlers propagates errors automatically. Map every error variant to a sensible HTTP response in one place.
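Because the mapping is pure data, you can mirror it as a plain function and unit-test it without axum in scope. A sketch using bare u16 status codes (the names are illustrative, not the exact types above):

```rust
// Pure-function mirror of the IntoResponse mapping, using plain u16
// status codes so it runs without any web framework in scope.
enum AppError {
    BadRequest(String),
    UpstreamApi { status: u16, message: String },
    UpstreamTransport,
}

fn to_http(err: &AppError) -> (u16, String) {
    match err {
        AppError::BadRequest(m) => (400, m.clone()),
        AppError::UpstreamApi { status, message } => {
            // Pass a valid upstream status through; otherwise 502 Bad Gateway.
            let s = if (100u16..=599).contains(status) { *status } else { 502 };
            (s, message.clone())
        }
        AppError::UpstreamTransport => (502, "capture failed".to_string()),
    }
}

fn main() {
    assert_eq!(to_http(&AppError::BadRequest("invalid url".into())).0, 400);
    assert_eq!(to_http(&AppError::UpstreamApi { status: 429, message: "rate limited".into() }).0, 429);
    assert_eq!(to_http(&AppError::UpstreamTransport).0, 502);
    println!("error mapping holds");
}
```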
Step 6: rate limiting middleware
tower-governor is a widely used Tower middleware for per-IP rate limiting:
# Cargo.toml
tower_governor = "0.4"

use tower_governor::{governor::GovernorConfigBuilder, GovernorLayer};
let governor_conf = Arc::new(
GovernorConfigBuilder::default()
.per_second(2)
.burst_size(30)
.finish()
.unwrap(),
);
let app = Router::new()
// ... routes ...
.layer(GovernorLayer { config: governor_conf });

Per-IP, 30 burst, 2/sec sustained. Anyone spamming gets a 429 automatically.
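Behaviorally, those two numbers describe a token bucket: capacity 30, refilling at 2 tokens per second. A std-only sketch of the semantics (illustrative only — tower_governor actually uses the governor crate's GCRA algorithm):

```rust
// Token-bucket sketch: capacity 30 (burst), refill 2 tokens/second (sustained).
struct Bucket {
    tokens: f64,
    capacity: f64,
    refill_per_sec: f64,
}

impl Bucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { tokens: capacity, capacity, refill_per_sec }
    }

    // `elapsed_secs` is the time since the previous call: refill first,
    // then spend one token if available.
    fn try_acquire(&mut self, elapsed_secs: f64) -> bool {
        self.tokens = (self.tokens + elapsed_secs * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut b = Bucket::new(30.0, 2.0);
    // A burst of 30 requests at t=0 all pass; the 31st is rejected.
    assert!((0..30).all(|_| b.try_acquire(0.0)));
    assert!(!b.try_acquire(0.0));
    // After one quiet second, two tokens have refilled.
    assert!(b.try_acquire(1.0));
    assert!(b.try_acquire(0.0));
    assert!(!b.try_acquire(0.0));
    println!("token bucket semantics hold");
}
```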
Step 7: graceful shutdown
use tokio::signal;
async fn shutdown_signal() {
let ctrl_c = async {
signal::ctrl_c().await.expect("ctrl_c");
};
#[cfg(unix)]
let terminate = async {
signal::unix::signal(signal::unix::SignalKind::terminate())
.expect("signal")
.recv()
.await;
};
#[cfg(not(unix))]
let terminate = std::future::pending::<()>();
tokio::select! {
_ = ctrl_c => {},
_ = terminate => {},
}
tracing::info!("shutdown signal received");
}
// In main:
axum::serve(listener, app)
.with_graceful_shutdown(shutdown_signal())
.await?;

When SIGTERM arrives (Kubernetes rolling deploy, Docker stop), Axum stops accepting new connections and lets in-flight handlers finish. No half-captured screenshots.
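If you run this on Kubernetes, make the pod's grace period longer than the client timeout so an in-flight capture can actually finish before the kubelet escalates to SIGKILL. A hypothetical deployment fragment (the value is an assumption based on the 120-second reqwest timeout configured earlier):

```yaml
spec:
  # Longer than the 120 s reqwest timeout, so a capture in flight
  # when SIGTERM arrives can still complete before SIGKILL.
  terminationGracePeriodSeconds: 130
```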
Step 8: Docker deployment
Dockerfile:
FROM rust:1.75-slim AS build
WORKDIR /src
RUN apt-get update && apt-get install -y pkg-config && rm -rf /var/lib/apt/lists/*
COPY Cargo.toml Cargo.lock ./
COPY src ./src
RUN cargo build --release --locked
FROM gcr.io/distroless/cc-debian12
COPY --from=build /src/target/release/snap-rs /snap-rs
EXPOSE 8080
ENTRYPOINT ["/snap-rs"]

The release binary is ~5 MB. With distroless, the final image is ~25 MB. No package manager, no shell, minimal attack surface.
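A small companion file worth adding: a .dockerignore that keeps the often multi-gigabyte target/ directory and the git history out of the build context (assuming the default cargo layout):

```
# .dockerignore
target/
.git/
```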
docker-compose.yml:
services:
app:
build: .
ports:
- "8080:8080"
environment:
- SNAPSHARP_API_KEY=${SNAPSHARP_API_KEY}
- RUST_LOG=info,tower_http=debug
restart: unless-stopped

Step 9: deploying to Fly.io
fly.toml:
app = "snap-rs"
primary_region = "fra"
[build]
dockerfile = "Dockerfile"
[http_service]
internal_port = 8080
force_https = true
auto_stop_machines = true
auto_start_machines = true
min_machines_running = 0
[[vm]]
cpu_kind = "shared"
cpus = 1
memory_mb = 128

128 MB RAM is enough — the Tokio runtime overhead is tiny and the SDK is mostly waiting on HTTP. Cold starts are ~50ms.
fly secrets set SNAPSHARP_API_KEY=sk_live_...
fly deploy

Common pitfalls
Pitfall 1: blocking in async context. Don't call sync I/O (file system, blocking HTTP libs) from an async handler — it stalls the runtime. Use reqwest (async), tokio::fs, or wrap blocking work in tokio::task::spawn_blocking.
Pitfall 2: creating a new reqwest::Client per request. Each Client owns its own connection pool, so per-request construction churns connections and defeats keep-alive. The Arc-shared client in AppState is the correct pattern — cloning it is cheap.
Pitfall 3: short timeouts. reqwest defaults to no timeout, but our Client::builder().timeout(Duration::from_secs(120)) caps every request. Without a timeout, a hanging upstream pins a Tokio task forever.
Pitfall 4: missing with_graceful_shutdown. Without it, SIGTERM kills in-flight requests immediately. Always wire up shutdown for production deploys.
Pitfall 5: relying on serde for all validation. Axum's extractors don't crash on bad input — malformed JSON in a Json<T> body or an unparseable query string is rejected with a 4xx automatically. But semantic validation (URL scheme, dimension ranges) still needs explicit checks like validate_url above.
Final code
Two files:
- src/main.rs — server, handler, validation, errors.
- src/snapsharp.rs — typed SDK wrapper.
Around 250 lines total. Compiles to a single 5 MB static binary.
Conclusion
Rust + Axum is the right tool when you need predictable performance from a screenshot service. The borrow checker enforces correct concurrency, and the type system catches at compile time bugs that integration tests would miss in dynamic languages. The result is a service that sustains high throughput on cheap hardware, with no garbage collector and no leak-prone manual memory management.
Next steps: explore the Go + Fiber tutorial, read the screenshot API reference, or look at the Phoenix Elixir pattern for actor-model concurrency.
Related: Pricing · Why headless Chrome crashes · Webhooks for real-time notifications