I compress images almost every week for client sites and my own projects. Over time I’ve learned that “compress as much as possible” is a lazy answer — the real goal is to find the sweet spot where file size is minimized without a noticeable hit to perceived quality. In this post I’ll walk you through how I evaluate image compression tools so you can speed up your site while keeping visuals crisp and faithful to the original.

Start with a testing strategy, not a single click

Before you try a bunch of tools at random, define what “good” means for your project. For me that always includes a few concrete checkpoints:

  • Representative image set: hero photos, product images, screenshots, illustrations, and images with fine detail (text, hair, textures).
  • Target breakpoints and sizes: full-bleed hero (1920–2560px), content column (800–1200px), thumbnails (200–400px).
  • Perceptual thresholds: what level of degradation is acceptable at 100% and at 200% zoom.
  • Support matrix: which formats and fallbacks you can realistically deliver (WebP, AVIF, JPEG, PNG, SVG).

Having a consistent test set means you can compare tools apples-to-apples. I keep a small folder of 10–20 images that reflect typical content for the sites I work on. Writing the whole matrix down, as in the sketch below, keeps every run reproducible.
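
To make that concrete, here's a minimal sketch of how such a test matrix might be written down, in TypeScript. Every file name, category, and number below is illustrative, not a recommendation:

```typescript
// A hypothetical test matrix: every path, category, and number here is
// illustrative. The point is that the whole comparison is written down.
interface TestImage {
  file: string;                                  // path into the 10-20 image test folder
  kind: "hero" | "product" | "screenshot" | "illustration" | "fine-detail";
}

interface TestMatrix {
  images: TestImage[];
  widths: number[];                              // target breakpoints, px
  zoomChecks: number[];                          // zoom levels for the visual pass, %
  formats: ("avif" | "webp" | "jpeg" | "png")[]; // delivery formats under test
}

export const matrix: TestMatrix = {
  images: [
    { file: "testset/hero-beach.jpg", kind: "hero" },
    { file: "testset/product-watch.png", kind: "product" },
    { file: "testset/dashboard.png", kind: "screenshot" },
  ],
  widths: [2560, 1920, 1200, 800, 400, 200],
  zoomChecks: [100, 200],
  formats: ["avif", "webp", "jpeg"],
};
```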

What metrics actually matter

File size is obviously important — smaller = faster — but relying on bytes alone ignores visual quality. Here are the metrics and signals I use together:

  • File size: baseline metric for savings and bandwidth.
  • Dimensions & upscaling behavior: does the tool preserve the pixel dimensions, or does it resample by default?
  • SSIM / MS-SSIM: structural similarity metrics that correlate better with human perception than PSNR.
  • VMAF: Netflix’s perceptual quality metric — especially useful for video-like content.
  • Butteraugli: developed by Google, useful for comparing JPEG perceptual differences.
  • Lighthouse / WebPageTest: real-world page performance metrics that show the end-user benefit.

In practice I use a mix of automated metrics (SSIM, VMAF) and quick human checks: zoomed visual comparison, toggling original vs compressed, and a diff view to highlight subtle artifacts. The sketch below shows how I capture the cheap automated signals.
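
As a rough illustration, here's how those signals can be captured per variant with sharp (the Node binding for libvips). The SSIM hook is deliberately left as a commented placeholder, since I run perceptual metrics as a separate external step:

```typescript
import sharp from "sharp";

// Capture the cheap signals for one JPEG variant: output bytes and whether
// the encoder changed pixel dimensions. Perceptual metrics (SSIM, VMAF,
// Butteraugli) run as a separate step; the commented hook is hypothetical.
async function measureVariant(src: string, quality: number) {
  const original = await sharp(src).metadata();
  const { data, info } = await sharp(src)
    .jpeg({ quality, mozjpeg: true }) // mozjpeg-tuned encode via sharp
    .toBuffer({ resolveWithObject: true });

  return {
    quality,
    bytes: data.length,
    resampled: info.width !== original.width || info.height !== original.height,
    // ssim: await runSsim(src, data), // e.g. plug in ssim.js or an external CLI
  };
}
```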

Lossy vs lossless vs smart conversions

Not every image needs maximum fidelity, and not every conversion tool behaves the same. Here’s how I think about options:

  • Lossless — rare in production unless you need perfect fidelity (archival, client images). File size savings are limited.
  • Lossy — the default for photos. The trick is selecting a quality level that removes redundant data but preserves edges and skin tones.
  • Smart conversions — conversion to next-gen formats (AVIF, WebP) with tuned quality that often beats JPEG at the same size.

For most sites I use aggressive lossy for thumbnails, moderate lossy for in-content images, and cautious lossy or even lossless for product images where buyers expect pixel-perfect detail. The sketch below shows what those tiers look like in code.
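
A sketch of those tiers using sharp; the quality numbers and widths are starting points I'd tune per project, not fixed recommendations:

```typescript
import sharp from "sharp";

// One source image, three fidelity tiers, plus a "smart" AVIF conversion.
async function encodeTiers(src: string, base: string) {
  // Thumbnails: aggressive lossy; small dimensions hide most artifacts.
  await sharp(src).resize(400).jpeg({ quality: 60, mozjpeg: true })
    .toFile(`${base}-thumb.jpg`);

  // In-content images: moderate lossy.
  await sharp(src).resize(1200).jpeg({ quality: 78, mozjpeg: true })
    .toFile(`${base}-content.jpg`);

  // Product shots: true lossless where buyers expect pixel-perfect detail.
  await sharp(src).png({ compressionLevel: 9 }).toFile(`${base}-product.png`);

  // Smart conversion: AVIF often beats JPEG at the same byte budget.
  await sharp(src).resize(1200).avif({ quality: 50 })
    .toFile(`${base}-content.avif`);
}
```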

Tools and libraries I test

Testing should include both local tools and SaaS/CDN optimizers. Here are options I try and why:

  • Squoosh (GUI) — great for quick interactive comparisons and trying AVIF/WebP presets.
  • libvips (sharp, Vips CLI) — super-fast and great for server-side batch processing.
  • ImageMagick — flexible, but slower than libvips for large batches.
  • MozJPEG / mozjpeg-cjpeg — produces better-looking JPEGs at lower sizes than baseline encoders.
  • Guetzli — excellent quality but extremely slow; rarely practical.
  • TinyPNG / TinyJPG — nice balance for PNG & JPEG via API, good for designers who don’t want tooling work.
  • Cloudinary / Imgix / Fastly Image Optimizer — CDNs that handle format negotiation and responsive sizes on the fly.

My testing often pairs a local encoder (like mozjpeg or libvips+AVIF) with a CDN that will serve the best format based on client support. That combo gives me control over baseline quality and the benefits of runtime format negotiation; the sketch below shows that negotiation step in miniature.
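
To show what that negotiation amounts to, here's a stripped-down sketch as a bare Node handler; a real CDN does this (plus caching, edge logic, and on-the-fly encoding) for you, and the file paths here are assumptions:

```typescript
import { createServer } from "node:http";
import { readFile } from "node:fs/promises";

// Bare-bones format negotiation: pick the best pre-encoded sibling based
// on the Accept header. Assumes ./dist/hero.{avif,webp,jpg} already exist.
createServer(async (req, res) => {
  const accept = req.headers.accept ?? "";
  const ext = accept.includes("image/avif") ? "avif"
            : accept.includes("image/webp") ? "webp"
            : "jpg";
  const body = await readFile(`./dist/hero.${ext}`);
  res.writeHead(200, {
    "Content-Type": ext === "jpg" ? "image/jpeg" : `image/${ext}`,
    // Vary tells shared caches to key on Accept, so each client gets
    // the variant its browser actually negotiated.
    "Vary": "Accept",
  });
  res.end(body);
}).listen(8080);
```

The `Vary: Accept` header is the part people forget: without it, a shared cache can hand AVIF to a browser that never asked for it.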

How I run a practical comparison

Here’s the sequence I use when evaluating a new compressor or workflow:

  • Pick 8–12 representative images (various subjects and resolutions).
  • Export a baseline: uncompressed PNG or high-quality JPEG (baseline bytes and visual).
  • Run the tool at a few quality settings (e.g., Q=60, 75, 85 for JPEG; try comparable settings for WebP/AVIF).
  • Capture metrics: output bytes, SSIM, and a quick perceptual diff (butteraugli or visual diff).
  • Do a side-by-side inspection at 100% and 200% zoom and check for artifacts on faces, text, and textures.
  • Measure page impact with Lighthouse/WebPageTest using a simple page that includes each image variant.

I store results in a tiny CSV and screenshot comparisons so I can justify the settings I pick for production. This method also surfaces corner cases: maybe AVIF is tiny but creates banding on gradients at a particular quality setting — that’s useful to know. The sweep itself is easy to script, as the sketch below shows.
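
Here's a sketch of that sweep step for one image, again with sharp; the CSV columns are name, format, quality, and bytes:

```typescript
import sharp from "sharp";
import { appendFile } from "node:fs/promises";

// Encode one image at several quality settings per format and append a
// CSV row (name, format, quality, bytes) for each variant. Remember that
// a given quality number means different things to different encoders.
async function sweep(src: string, outDir: string) {
  const name = src.split("/").pop() ?? src;
  for (const q of [60, 75, 85]) {
    for (const fmt of ["jpeg", "webp", "avif"] as const) {
      const pipeline = sharp(src);
      const encoded =
        fmt === "jpeg" ? pipeline.jpeg({ quality: q, mozjpeg: true })
        : fmt === "webp" ? pipeline.webp({ quality: q })
        : pipeline.avif({ quality: q });
      const info = await encoded.toFile(`${outDir}/${name}-q${q}.${fmt}`);
      await appendFile("results.csv", `${name},${fmt},${q},${info.size}\n`);
    }
  }
}
```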

Practical tips that save time and headaches

  • Profile first, optimize second: Use Lighthouse or WebPageTest to find which images are actually in the critical path. Don’t blindly optimize tiny decorative graphics.
  • Prefer responsive images: Serve multiple sizes with srcset or use a responsive image CDN. Smaller resolutions for mobile often yield the biggest gains.
  • Format negotiation: Use AVIF/WebP where supported with a robust JPEG/PNG fallback.
  • Automate in CI/CD: Integrate libvips or a cloud optimizer in your build pipeline so images are optimized consistently.
  • Test at multiple quality declarations: “Quality 80” means different things in different encoders. Don’t assume parity.
  • Be mindful of color profiles: Strip unnecessary metadata and embed sRGB when the source is inconsistent — it avoids color shifts (see the sketch after this list).
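
The color-profile tip as a minimal sharp sketch; sharp drops EXIF/ICC on output by default (no `.withMetadata()` call is made here), so the main job is getting pixels into sRGB first:

```typescript
import sharp from "sharp";

// Normalize an inconsistent source: convert pixels to sRGB and write a
// clean JPEG. Because .withMetadata() is never called, sharp strips
// EXIF/ICC on output, so no stale or conflicting profile ships with it.
async function normalizeToSrgb(src: string, out: string) {
  await sharp(src)
    .toColourspace("srgb")
    .jpeg({ quality: 85, mozjpeg: true })
    .toFile(out);
}
```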

Quick comparison table

Tool               | Strengths                                    | Tradeoffs
libvips            | Fast, memory-efficient, great for pipelines | Less GUI-friendly; needs configuration
MozJPEG            | Better JPEG quality at low sizes             | Encoding slower than baseline cjpeg
AVIF               | Best size vs quality in many cases           | Browser support improving; encoding can be slow
Cloudinary / Imgix | On-the-fly resizing & format negotiation     | Cost and vendor lock-in

When visual tests beat numbers

I once reduced a portfolio hero from 1.6MB to 300KB using AVIF and mozjpeg. SSIM and byte savings looked great, but a quick client-side check revealed subtle color banding in the sky at 200% zoom that my metrics didn’t penalize enough. The fix was simple: slightly increase the quality for that specific image and re-run. The lesson: automated metrics point you in the right direction, but final judgement belongs to a visual check at the sizes people actually view the image.

Rollout strategy

When you’re confident in settings, roll changes incrementally. Start with non-critical images, monitor Lighthouse performance, and use real-user monitoring (RUM) for bandwidth and paint timings. If you use a CDN, test format negotiation and cache headers to avoid double-encoding costs.

If you want, I can share a small script I use (libvips + mozjpeg presets) that outputs AVIF, WebP, and JPEG fallbacks and logs SSIM scores for each image. It’s a handy starting point if you want reproducible comparisons across projects.
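
In the meantime, here's a minimal sketch of that script's shape using sharp; SSIM logging is left as a comment rather than real code, and the width and quality presets are illustrative:

```typescript
import sharp from "sharp";
import { mkdir, appendFile } from "node:fs/promises";

const WIDTH = 1200; // content-column target; run once per breakpoint in practice

// Emit AVIF, WebP, and a mozjpeg-tuned JPEG fallback for one source image
// and log bytes per variant. SSIM scoring is intentionally left out here;
// plug in ssim.js or an external tool on the written files.
async function emitVariants(src: string, outDir: string) {
  await mkdir(outDir, { recursive: true });
  const name = src.split("/").pop() ?? src;
  const stem = `${outDir}/${name.replace(/\.\w+$/, "")}`;

  const jobs = [
    sharp(src).resize(WIDTH).avif({ quality: 50 }).toFile(`${stem}.avif`),
    sharp(src).resize(WIDTH).webp({ quality: 75 }).toFile(`${stem}.webp`),
    sharp(src).resize(WIDTH).jpeg({ quality: 78, mozjpeg: true }).toFile(`${stem}.jpg`),
  ];
  for (const info of await Promise.all(jobs)) {
    await appendFile("variants.csv", `${name},${info.format},${info.size}\n`);
  }
}

emitVariants(process.argv[2] ?? "testset/hero.jpg", "dist").catch(console.error);
```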