
Google reduces JPEG file size by 35%

New algorithm is based on human psychovisual system. Images look better, too.


Google has developed and open-sourced a new JPEG algorithm that reduces file size by about 35 percent; alternatively, image quality can be significantly improved while file size stays constant. Importantly, and unlike some of its other compression efforts (WebP for images, WebM for video), Google's new JPEGs are completely compatible with existing browsers, devices, photo editing apps, and the JPEG standard.

The new JPEG encoder is called Guetzli, which is Swiss German for cookie (the project was led by Google Research's Zurich office). Don't pay too much attention to the name: after extensive analysis, I can't find anything in the GitHub repository related to cookies or indeed any other baked good.

There are numerous ways of tweaking JPEG image quality and file size, but Guetzli focuses on the quantization stage of compression. Put simply, quantization is a process that reduces a large amount of disordered data, which is hard to compress, to ordered data, which is very easy to compress. In JPEG encoding, each 8×8 block of pixels is transformed into frequency coefficients, which are then divided by a quantization table and rounded, discarding the fine, high-frequency information first. This process usually reduces gentle colour gradients to single blocks of colour and often obliterates small details entirely.
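
To make that concrete, here is a minimal Python sketch of the quantization step in a baseline JPEG pipeline. This is a toy illustration of the standard scheme, not Guetzli's actual code:

```python
# Toy illustration of JPEG-style quantization (not Guetzli's code).
# An 8x8 pixel block is transformed to frequency coefficients with a DCT,
# then each coefficient is divided by a quantization table entry and
# rounded. Large divisors towards the bottom-right of the table crush
# high-frequency detail to zero, which makes the data easy to compress.
import numpy as np
from scipy.fftpack import dct

# The standard JPEG (Annex K) luminance quantization table.
QTABLE = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99],
])

def quantize_block(block):
    """Quantize one 8x8 block of pixel values (0-255)."""
    shifted = block.astype(np.float64) - 128           # centre around zero
    coeffs = dct(dct(shifted, axis=0, norm='ortho'),   # 2-D DCT
                 axis=1, norm='ortho')
    return np.round(coeffs / QTABLE).astype(int)       # the lossy step

# A gentle horizontal gradient: after quantization almost every
# coefficient is zero, so it compresses to nearly nothing.
gradient = np.tile(np.arange(100, 108), (8, 1))
print(quantize_block(gradient))
```

Run on the gradient block above, nearly every output coefficient rounds to zero; that near-empty grid of integers is exactly the "ordered data" the later entropy-coding stage compresses so well.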

The difficult bit is striking a balance between preserving detail and keeping file size down. Every lossy encoder (libjpeg for images, x264 for video, LAME for audio) does it differently.

Guetzli, according to Google Research, uses a new psychovisual model—called Butteraugli, if you must know—to work out which colours and details to keep, and which to throw away. "Psychovisual" in this case means it's based on the human visual processing system. The exact details of Butteraugli are buried within hundreds of high-precision constants, which produce a model that "approximates colour perception and visual masking in a more thorough and detailed way" than other encoders.
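
Google hasn't published the full recipe, but the general shape of a metric-guided encoder is easy to sketch. The Python below searches candidate quality settings and keeps the smallest file whose perceptual distance from the original stays under a budget. Plain mean squared error stands in for Butteraugli here (it is nowhere near as sophisticated), and the file names are hypothetical:

```python
# Sketch of metric-guided encoding: try candidate quality settings,
# score each result against the original, and keep the smallest file
# whose perceptual distance stays under a threshold. MSE is a crude
# stand-in for Butteraugli, which models colour perception and visual
# masking; this illustrates the search, not Guetzli itself.
import io
import numpy as np
from PIL import Image

def perceptual_distance(a, b):
    # Stand-in metric; Guetzli would consult Butteraugli here.
    return float(np.mean((np.asarray(a, np.float64) -
                          np.asarray(b, np.float64)) ** 2))

def encode_under_budget(original, max_distance=20.0):
    """Return the smallest JPEG whose distance from `original` is acceptable."""
    best = None
    for quality in range(95, 50, -5):      # search from high to low quality
        buf = io.BytesIO()
        original.save(buf, format="JPEG", quality=quality)
        decoded = Image.open(io.BytesIO(buf.getvalue())).convert("RGB")
        if perceptual_distance(original, decoded) <= max_distance:
            best = buf.getvalue()          # smaller file, still acceptable
        else:
            break                          # visible damage; stop searching
    return best

img = Image.open("photo.png").convert("RGB")   # hypothetical input file
jpeg_bytes = encode_under_budget(img)
if jpeg_bytes:
    open("photo.jpg", "wb").write(jpeg_bytes)
```

The better the metric agrees with human eyes, the more aggressively the encoder can cut file size without anyone noticing, which is presumably where those hundreds of high-precision constants earn their keep.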

What we don't know, however, is how Google Research worked out those high-precision constants. They seem to be computer-generated, or at least computer-optimised. Google Research has a thing for neural networks and machine learning: perhaps a huge corpus of images was pushed through a neural net and a more nuanced and accurate psychovisual model came out the other end?

Original image on the left, libjpeg in the middle, Guetzli on the right. The Guetzli example shows fewer artifacts, and the file size is smaller.

While the primary use case of Guetzli will be reducing file size, Google Research reckons it can also be used to increase the perceived quality of JPEGs while keeping the file size the same. When comparing Guetzli-encoded images against libjpeg (a popular open-source encoder), "75 percent of ratings are in favour of Guetzli. This implies the Butteraugli psychovisual image similarity metric which guides Guetzli is reasonably close to human perception at high quality levels."

In any case, the proof of a new algorithm is in the eating. Guetzli is freely available to download from GitHub. Webmasters, graphic designers, and photographers are free to give it a go (or not). It's also worth noting that, thanks to its more involved quantization search, Guetzli is considerably slower at encoding than libjpeg. Unlike so many other previous attempts at shaking up image compression, though, at least Guetzli should be compatible with existing browsers and devices.
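
For anyone wanting to wire it into a build pipeline, invoking the encoder from a script is straightforward. A minimal sketch, assuming the guetzli binary has been compiled from the GitHub repository and is on the PATH; the --quality flag is as documented in the project's README, and the file names are placeholders:

```python
# Minimal sketch: shell out to the guetzli binary from Python.
# Assumes guetzli is built and on PATH; --quality follows the
# project's README. File names are placeholders.
import subprocess

subprocess.run(
    ["guetzli", "--quality", "95", "input.png", "output.jpg"],
    check=True,  # raise CalledProcessError if encoding fails
)
```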

Now read about Google's "zoom, enhance!" algorithm that creates detailed images from tiny, pixelated source images...

Listing image by Harry Langdon/Getty Images
