I compress images almost every week for client sites and my own projects. Over time I’ve learned that “compress as much as possible” is a lazy answer — the real goal is to find the sweet spot where file size is minimized without a noticeable hit to perceived quality. In this post I’ll walk you through how I evaluate image compression tools so you can speed up your site while keeping visuals crisp and faithful to the original.
Start with a testing strategy, not a single click
Before you try a bunch of tools at random, define what “good” means for your project. For me that always includes a few concrete checkpoints: a file-size budget for each image type, a quality floor (no visible artifacts at the sizes people actually view the image), and the set of formats and fallbacks the site needs to serve.
Having a consistent test set means you can compare tools apples-to-apples. I keep a small folder of 10–20 images that reflect typical content for the sites I work on.
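If it helps to see what I mean, here’s a rough sketch of how such a test set could be organized and loaded; the folder layout and category names are just assumptions for illustration, not part of any particular tool.

```python
from pathlib import Path

# Hypothetical layout: one subfolder per content type I care about,
# e.g. test-images/photos, test-images/screenshots, test-images/gradients.
TEST_ROOT = Path("test-images")

def load_test_set(root: Path = TEST_ROOT) -> dict[str, list[Path]]:
    """Group test images by their parent folder so results can be
    compared per content type as well as overall."""
    groups: dict[str, list[Path]] = {}
    for path in sorted(root.rglob("*")):
        if path.suffix.lower() in {".jpg", ".jpeg", ".png", ".tif"}:
            groups.setdefault(path.parent.name, []).append(path)
    return groups

if __name__ == "__main__":
    for category, files in load_test_set().items():
        print(f"{category}: {len(files)} images")
```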
What metrics actually matter
File size is obviously important (smaller means faster), but relying on bytes alone ignores visual quality, so I weigh several metrics and signals together.
In practice I use a mix of automated metrics (SSIM, VMAF) and quick human checks — zoomed visual comparison, toggling original vs compressed, and a diff view to highlight subtle artifacts.
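For a quick structural-similarity number without a full pipeline, something like this minimal sketch works; it assumes Pillow and scikit-image are installed, which is a choice I’m making for the example rather than a requirement.

```python
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

def ssim_score(original_path: str, compressed_path: str) -> float:
    """Return SSIM between an original and its compressed version.

    Both images are decoded to RGB and must have the same dimensions;
    resize upstream if the compressor also rescaled the image.
    """
    original = np.asarray(Image.open(original_path).convert("RGB"))
    compressed = np.asarray(Image.open(compressed_path).convert("RGB"))
    return structural_similarity(original, compressed, channel_axis=-1)

# Paths are placeholders; closer to 1.0 means closer to the original.
print(ssim_score("hero.jpg", "hero-q60.webp"))
```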
Lossy vs lossless vs smart conversions
Not every image needs maximum fidelity, and not every conversion tool behaves the same. Lossy formats (JPEG, WebP, AVIF at a chosen quality) trade some fidelity for big savings and suit most photographic content; lossless (optimized PNG, lossless WebP) preserves every pixel at a larger file size and fits logos, screenshots, and images with transparency; smart conversions let a tool or CDN pick the best format per browser instead of committing to one.
For most sites I use aggressive lossy for thumbnails, moderate lossy for in-content images, and cautious lossy or even lossless for product images where buyers expect pixel-perfect detail.
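As a sketch of that tiering, here’s roughly how per-role quality settings could be wired up with pyvips; the Q values and role names are illustrative assumptions, not recommendations.

```python
import pyvips

# Illustrative quality tiers: aggressive for thumbnails, moderate for
# in-content images, cautious for product shots.
QUALITY_BY_ROLE = {
    "thumbnail": 45,
    "content": 70,
    "product": 90,
}

def encode_for_role(src: str, dest: str, role: str) -> None:
    """Encode src to dest at the quality tier for its role.

    The output format follows the dest extension (e.g. .webp or .jpg)."""
    image = pyvips.Image.new_from_file(src, access="sequential")
    image.write_to_file(dest, Q=QUALITY_BY_ROLE[role])

encode_for_role("camera-original.jpg", "thumb.webp", "thumbnail")
```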
Tools and libraries I test
Testing should include both local tools and SaaS/CDN optimizers. Locally I reach for libvips and MozJPEG, plus AVIF and WebP encoders; on the hosted side, services like Cloudinary or Imgix resize and negotiate formats at request time. The comparison table further down summarizes the tradeoffs.
My testing often pairs a local encoder (like mozjpeg or libvips+AVIF) with a CDN that will serve the best format based on client support. That combo gives me control over baseline quality and the benefits of runtime format negotiation.
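Format negotiation itself boils down to a decision on the request’s Accept header. Here’s a toy version of the choice a CDN makes; the priority order is my assumption of a typical setup, and real services also weigh size and cache state.

```python
def pick_format(accept_header: str) -> str:
    """Return the best image format this client advertises support for."""
    accept = accept_header.lower()
    if "image/avif" in accept:
        return "avif"
    if "image/webp" in accept:
        return "webp"
    return "jpeg"  # universal fallback

# A modern Chrome Accept header advertises both AVIF and WebP:
print(pick_format("image/avif,image/webp,image/apng,*/*;q=0.8"))  # -> avif
```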
How I run a practical comparison
Here’s the sequence I use when evaluating a new compressor or workflow:

1. Run the fixed test set through it at several quality settings.
2. Record output size and SSIM/VMAF for every result.
3. Do the human pass: zoomed comparison, toggling original vs compressed, and a diff view.
4. Flag corner cases (banding, blocking, smeared text) for a closer look before settling on defaults.
I store results in a tiny CSV and screenshot comparisons so I can justify the settings I pick for production. This method also surfaces corner cases: maybe AVIF is tiny but creates banding on gradients at a particular quality setting — that’s useful to know.
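A stripped-down version of that logging loop might look like the sketch below. It uses cwebp as a stand-in encoder and scikit-image for SSIM; both are assumptions about tooling, not a prescription, and the quality sweep is just an example.

```python
import csv
import subprocess
from pathlib import Path

import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity

TEST_IMAGES = sorted(Path("test-images").glob("*.png"))  # assumed test set
QUALITIES = (40, 60, 75, 85)

def ssim(a: Path, b: Path) -> float:
    x = np.asarray(Image.open(a).convert("RGB"))
    y = np.asarray(Image.open(b).convert("RGB"))
    return structural_similarity(x, y, channel_axis=-1)

with open("results.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["image", "quality", "bytes", "ssim"])
    for src in TEST_IMAGES:
        for q in QUALITIES:
            out = src.with_suffix(f".q{q}.webp")
            # cwebp is WebP's reference encoder; swap in your own tool here.
            subprocess.run(["cwebp", "-quiet", "-q", str(q), str(src), "-o", str(out)],
                           check=True)
            writer.writerow([src.name, q, out.stat().st_size, round(ssim(src, out), 4)])
```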
Practical tips that save time and headaches

A few habits pay off: keep the untouched originals so you can re-encode when settings or formats change; automate the pipeline so re-runs are cheap; check gradients, skies, and fine text first, since they show artifacts earliest; and always judge quality at the size the image is actually displayed, not at 100% of the source.
Quick comparison table
| Tool / format | Strengths | Tradeoffs |
|---|---|---|
| libvips | Fast, memory-efficient, great for pipelines | Less GUI-friendly; needs configuration |
| MozJPEG | Better JPEG quality at low sizes | Encoding slower than baseline cjpeg |
| AVIF | Best size vs quality in many cases | Browser support improving; encoding can be slow |
| Cloudinary / Imgix | On-the-fly resizing & format negotiation | Cost and vendor lock-in |
When visual tests beat numbers
I once reduced a portfolio hero from 1.6MB to 300KB using AVIF and mozjpeg. SSIM and byte savings looked great, but a quick client-side check revealed subtle color banding in the sky at 200% zoom that my metrics didn’t penalize enough. The fix was simple: slightly increase the quality for that specific image and re-run. The lesson: automated metrics point you in the right direction, but final judgement belongs to a visual check at the sizes people actually view the image.
Rollout strategy
When you’re confident in the settings, roll changes out incrementally. Start with non-critical images, monitor Lighthouse performance scores, and watch real-user monitoring (RUM) data for bandwidth and paint timings. If you use a CDN, test format negotiation and cache headers to avoid double-encoding costs.
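One quick way to sanity-check the negotiation and caching is to request the same URL with different Accept headers and inspect what comes back; this sketch uses the requests library and a placeholder URL.

```python
import requests

URL = "https://cdn.example.com/img/hero.jpg"  # placeholder image URL

# Pretend to be clients with and without AVIF/WebP support and confirm the
# CDN varies the response format and caches each variant separately.
CLIENTS = {
    "modern": "image/avif,image/webp,*/*",
    "legacy": "image/jpeg,*/*",
}

for name, accept in CLIENTS.items():
    resp = requests.get(URL, headers={"Accept": accept}, timeout=10)
    print(
        name,
        resp.headers.get("Content-Type"),
        resp.headers.get("Vary"),           # should include "Accept"
        resp.headers.get("Cache-Control"),
    )
```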
If you want, I can share a small script I use (libvips + mozjpeg presets) that outputs AVIF, WebP, and JPEG fallbacks and logs SSIM scores for each image. It’s a handy starting point if you want reproducible comparisons across projects.
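Until then, here’s a trimmed-down sketch of the same idea, assuming pyvips for the encodes (with an AVIF-capable libvips build) and scikit-image for the SSIM log; the quality presets are placeholders you’d tune per project.

```python
import csv
from pathlib import Path

import numpy as np
import pyvips
from skimage.metrics import structural_similarity

SOURCES = sorted(Path("originals").glob("*.jpg"))        # assumed input folder
OUT = Path("dist")
OUT.mkdir(exist_ok=True)
VARIANTS = [(".avif", 50), (".webp", 70), (".jpg", 75)]  # placeholder presets

def to_array(path: Path) -> np.ndarray:
    """Decode with libvips (needs heif support for AVIF) into an 8-bit array."""
    img = pyvips.Image.new_from_file(str(path))
    return np.ndarray(buffer=img.write_to_memory(), dtype=np.uint8,
                      shape=[img.height, img.width, img.bands])

with open("ssim-log.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["source", "variant", "bytes", "ssim"])
    for src in SOURCES:
        original = to_array(src)
        image = pyvips.Image.new_from_file(str(src))
        for ext, q in VARIANTS:
            dest = OUT / (src.stem + ext)
            image.write_to_file(str(dest), Q=q)  # output format follows the extension
            score = structural_similarity(original, to_array(dest), channel_axis=-1)
            writer.writerow([src.name, dest.name, dest.stat().st_size, round(score, 4)])
```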