Lepton Image Compression

(Pat David) #1

Hi all!

Dropbox recently blogged about a new lossless compression algorithm they’ve developed for JPEG files, and they’re showing some impressive results:


Wondering if there’s any possibility of implementing something like this? As the owner of a photography-centric forum, I’m (not surprisingly) greatly interested… :wink:

The code is available under an Apache license on GitHub:

(Jeff Atwood) #2

It is very cool, but it’s unclear how we can use it at the moment.

(Kevin P. Fleming) #3

There’s not really anything Discourse can do here unless it can recompress images after upload. Still, until browsers are capable of decompressing Lepton-compressed images, there wouldn’t be any point in offering them to users. There’s also the complication of having to store multiple versions of each image and serve the right one to capable browsers… not trivial.

(Pat David) #4

The indication from Dropbox is that this decodes very quickly (faster than the bandwidth of the wire to the client). In other words, the storage format is Lepton, and the images are decoded as they are sent from the server to the client. FTA:

Lepton can decompress significantly faster than line-speed for typical consumer and business connections. Lepton is a fully streamable format, meaning the decompression can be applied to any file as that file is being transferred over the network. Hence, streaming overlaps the computational work of the decompression with the file transfer itself, hiding latency from the user.

The compression for storage is transparent to the user: they only see a JPG, as requested. On the server side the storage is Lepton. My thought was a possible task to compress images on upload and store them as Lepton, plus logic to decompress them as clients request them.
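A minimal sketch of that flow, using zlib as a stand-in for the actual Lepton codec (which ships as a standalone binary, so a real implementation would shell out to it instead) — just to illustrate the store-compressed/serve-JPG pattern:

```python
import zlib

# zlib stands in for the Lepton codec here; the flow would be the
# same with a subprocess call to the lepton tool on upload/download.

def store_upload(jpeg_bytes: bytes) -> bytes:
    """On upload: losslessly recompress the JPEG bytes for storage."""
    return zlib.compress(jpeg_bytes, level=9)

def serve_download(stored_bytes: bytes) -> bytes:
    """On request: decompress back to the byte-identical JPEG."""
    return zlib.decompress(stored_bytes)

original = b"\xff\xd8\xff\xe0" + b"fake jpeg payload" * 100 + b"\xff\xd9"
stored = store_upload(original)
assert serve_download(stored) == original  # lossless round trip
```

The key property is that the round trip is byte-exact, so clients never need to know the on-disk format changed.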

In fact, this appears to be the use-case for Dropbox themselves. They are storing the files on-disk in Lepton, and streaming the decompressed JPG as they are requested.
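To illustrate the streaming property they describe: a decoder can emit output as compressed chunks arrive off the network, rather than waiting for the whole file, so the decode work overlaps the transfer. A rough sketch of that pattern (again with zlib as a stand-in codec, since it also supports incremental decompression):

```python
import zlib

def stream_decode(chunks):
    """Yield decompressed data incrementally as compressed chunks arrive.

    zlib stands in for the Lepton codec; the point is the pattern:
    decompression overlaps the network transfer instead of waiting
    for the complete file.
    """
    decoder = zlib.decompressobj()
    for chunk in chunks:
        out = decoder.decompress(chunk)
        if out:
            yield out
    tail = decoder.flush()
    if tail:
        yield tail

# Simulate a file arriving over the network in small pieces.
payload = b"example jpeg bytes " * 500
compressed = zlib.compress(payload)
pieces = [compressed[i:i + 64] for i in range(0, len(compressed), 64)]
assert b"".join(stream_decode(pieces)) == payload
```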

(Kevin P. Fleming) #5

So they are compressing the JPG into Lepton? That seems crazy :slight_smile: If they are decompressing the JPG and then compressing with Lepton, then both steps would need to be reversed to send a JPG to a client. JPG compression is not fast.

(Pat David) #6

Yep. The reason is that they’re seeing ~22% reduction in size (lossless). And again, I can only point to Dropbox’s claim that their Lepton → JPG decompression is faster than you can send the JPG down the wire to the client, so it’s transparent to the end user.

You’re assuming that they need to decompress the JPG to compress with Lepton. (I’m not sure that they do, but I’ll have to read the steps more carefully and try out their supplied code.)

Maybe, but it’s irrelevant here as there’s no need to compress with JPG again.

The only question I have is how much horsepower is required to decompress.

(Eli the Bearded) #7

The description makes it sound like, in the common case, the recompressor only needs a row of 8×8 blocks. But for progressive JPEG, it will need to buffer the whole uncompressed image. The decompression from Lepton back to JPEG does not (apparently) need as much memory, even for progressive JPEG.
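For a back-of-the-envelope sense of that difference, here’s a hypothetical 4000×3000, 3-component JPEG, assuming one byte per decoded sample (real implementations may hold larger DCT coefficients, so treat these as order-of-magnitude figures):

```python
# Hypothetical image dimensions, for illustration only.
width, height, components = 4000, 3000, 3

# Baseline case: buffer one row of 8x8 blocks (8 pixel rows).
row_of_blocks = width * 8 * components      # 96000 bytes, ~94 KiB

# Progressive case: buffer the whole uncompressed image.
whole_image = width * height * components   # 36000000 bytes, ~34 MiB

print(whole_image // row_of_blocks)         # 375x more memory
```

So on these assumptions, progressive recompression needs hundreds of times more working memory than the baseline row-at-a-time path.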