Image Compression
NOTE: I no longer stand by this methodology and the results obtained. If I were to redo this, I would use hyperfine and include other compressors.
Image compression is an easy way to save storage space and bandwidth.
It makes webpages much smaller and faster, and is so easy there really isn’t a reason not to compress.
On this blog, I’ve been compressing my images with optipng. I haven’t been keeping track, but I typically see a size reduction of about 20-25%. This can add up fast for users on slow connections or mobile data.
At the Quaternion Institute, we have an automated build system that takes branding-related vector images, renders them into PNGs for different purposes (such as social media avatars, favicons, and general-purpose logos), then compresses them with optipng. It works really nicely.
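A pipeline like that can be sketched in a few lines. This is a hypothetical sketch, not our actual build system: the tool names (`rsvg-convert` as the rasterizer, optipng at its maximum level `-o7`) and the size table are assumptions for illustration.

```python
# Hypothetical sketch of a render-then-compress pipeline.
# Assumes rsvg-convert for rasterizing (any SVG renderer would do)
# and optipng -o7 for compression.
from pathlib import Path

SIZES = {"avatar": 512, "favicon": 32, "logo": 1024}  # purpose -> pixel size

def build_commands(svg: str) -> list[list[str]]:
    """Build the render + compress command lines for one vector image."""
    stem = Path(svg).stem
    cmds = []
    for purpose, px in SIZES.items():
        png = f"{stem}-{purpose}.png"
        # Rasterize the SVG at the target size, then losslessly compress it.
        cmds.append(["rsvg-convert", "-w", str(px), "-h", str(px), "-o", png, svg])
        cmds.append(["optipng", "-o7", png])
    return cmds
```

Each pair of commands could then be handed to a task runner or just executed in sequence with `subprocess.run`.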
The Small Question
Lately, though, I’ve been wondering whether optipng is really the best option.
I decided to do some yak-shaving and figure it out.
Searching through the Void Linux software repositories, I found five pieces of software that I want to compare and chart the performance of:

- optipng
- oxipng
- pngcrush
- pngquant
- AdvanceCOMP
I’m going to only test lossless compression here. Testing lossy compression would create many more variables and would produce different-looking images that could only be compared subjectively.
Notes
I did not test AdvanceCOMP 2.1 because it bundles several algorithms, which would make a one-to-one comparison difficult.
Criteria and Methodology
I am comparing output size and compression time.
The whole reason I’m compressing images is to make them smaller, but if it takes a while then it might not actually be worth it.
I will create two graphs: one with the output size of each program, and another with the execution time of each, averaged over 6 compression runs on a mostly idle system to reduce fluctuation.
I will use `time $program` to get the run time and `du -b output.png` to get the file size after compression, recording each in a CSV with the format `software,image,bytes,time`.
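The measurement loop is simple enough to sketch. This is a minimal Python equivalent of the `time` + `du -b` approach described above, not the script I actually used; the function name and CSV path are illustrative. It assumes each tool writes (or rewrites in place) the file whose size we measure afterwards.

```python
import csv
import os
import subprocess
import time

def benchmark(software: str, image: str, cmd: list[str],
              out_path: str, csv_path: str) -> tuple[int, float]:
    """Run one compression command, recording output size and wall-clock time.

    `cmd` is the full command line; `out_path` is the file whose size we
    measure afterwards (for in-place tools, this is the input image itself).
    """
    start = time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    elapsed = time.perf_counter() - start
    size = os.path.getsize(out_path)  # same byte count `du -b` reports
    # Append one row in the software,image,bytes,time format.
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow([software, image, size, f"{elapsed:.3f}"])
    return size, elapsed
```

Calling this six times per tool and image produces the raw data the graphs are built from.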
System Specs
- Void Linux
- AMD FX 6300 CPU
- 8GB RAM
- AMD R9 280 GPU
Test Images
I mostly want to compress screenshots, so I will be testing using the following screenshots captured with flameshot:
Hypothesis
I predict either OptiPNG or Oxipng will do the best. OptiPNG will likely do well because of its popularity and maturity, and Oxipng because software written in Rust has a track record of outperforming established tools.
While on the topic of image efficiency, I’d like to mention FLIF, an interesting image format that beats most others but is not supported in many places.
Results
I’ve collected the data and made some nice graphs using Plotly Express. Click either graph to see an interactive HTML version with the full data. The data, images, and graph-generating scripts are on my Gitlab.
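Before plotting, the six runs per tool and image need to be averaged. A small sketch of that step, assuming the `software,image,bytes,time` CSV format described above (the actual graph scripts are on my Gitlab):

```python
import csv
from collections import defaultdict
from statistics import mean

def average_times(csv_path: str) -> dict[tuple[str, str], float]:
    """Average recorded times over repeated runs of each (software, image) pair."""
    runs = defaultdict(list)
    with open(csv_path, newline="") as f:
        for software, image, _bytes, seconds in csv.reader(f):
            runs[(software, image)].append(float(seconds))
    return {key: mean(times) for key, times in runs.items()}
```

The resulting per-pair averages feed straight into a Plotly Express bar chart.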
Output File Size
We want small file sizes, so lower is better here. In this test, optipng beat oxipng by a very slight margin. Both do much better than the others, so execution time will be the real tie-breaker.
Execution Time
Pngquant was the fastest, followed by oxipng, while optipng and pngcrush fell behind by a decent margin. Oxipng is written in Rust, which is famous at this point for enabling developers to write efficient multithreaded code, so I’m not that surprised it did so well.
Conclusion
Overall, I’m going to say Oxipng is the winner. Optipng produced slightly smaller files, but was quite a bit slower. If you really need to spare every byte possible, use Optipng; otherwise, use Oxipng.
Further Testing
One issue I became aware of is that Flameshot (which I took the screenshots with) appears to be a frontend for other tools on the system; on mine, I believe it may have used scrot.
Using images created in a different way may provide more accurate testing.
For example, Kodak actually has a standard set of images used for testing compression.