I implemented Floyd-Steinberg dithering in the past, but it was not worth it.
I think we need some compression-friendly dithering. Do you know anybody who could help us?
pngquant uses Floyd-Steinberg modified for better color handling.
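For reference, classic Floyd-Steinberg works by diffusing each pixel's quantization error to its not-yet-visited neighbors with fixed weights. A minimal grayscale sketch (the plain textbook version, not pngquant's modified one):

```python
# Classic Floyd-Steinberg dithering on a grayscale image, quantizing to
# pure black/white. The 7/16, 3/16, 5/16, 1/16 weights are the standard
# error-diffusion kernel; pngquant's variant (not shown) adapts the idea
# to palette colors.

def floyd_steinberg(pixels, width, height):
    """pixels: flat list of 0..255 grayscale values; returns a dithered copy."""
    px = [float(v) for v in pixels]
    for y in range(height):
        for x in range(width):
            i = y * width + x
            old = px[i]
            new = 255.0 if old >= 128 else 0.0
            px[i] = new
            err = old - new
            # Diffuse the quantization error to neighbors not yet visited.
            if x + 1 < width:
                px[i + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    px[i + width - 1] += err * 3 / 16
                px[i + width] += err * 5 / 16
                if x + 1 < width:
                    px[i + width + 1] += err * 1 / 16
    return [int(v) for v in px]
```

Note the error propagation is what makes the output noise aperiodic, which is exactly why it tends to hurt Deflate compression.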
I believe that dithering will always increase the file size because of its random nature.
The only purpose of this feature is to please our eyes.
Dithering can be hidden behind a flag, just like in Ps. Users will decide.
I think we could ask @kornelski.
I mean, I made three versions of the image:
B looked as nice as C, but was slightly larger, so I thought that allowing more colors is better than dithering (both increase the file size).
I think we need dithering that consists of repetitive patterns, i.e. it should be "friendly" to the Deflate algorithm, so that B is only 20 kB (still as nice as C, but smaller).
BTW, I also think that pngquant performs better Deflate compression (which also takes about 100× more time than UPNG.js, e.g. 30 ms vs. 3000 ms), so it could make B only 20 kB while using the same dithering as I did.
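One candidate for such "compression-friendly" dithering is ordered (Bayer) dithering: the threshold matrix tiles the image, so flat regions dither into a periodic texture that Deflate's LZ77 back-reference matching handles well. A minimal grayscale sketch (an assumption about the approach, not what UPNG.js or pngquant actually do):

```python
# Ordered (Bayer) dithering with a 4x4 threshold matrix. Because the
# pattern repeats every 4 pixels, flat regions dither to a tiled texture
# that Deflate's LZ77 matching compresses well, unlike error diffusion,
# which produces aperiodic noise.

BAYER_4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def ordered_dither(pixels, width, height):
    """pixels: flat list of 0..255 grayscale values; returns a black/white copy."""
    out = []
    for y in range(height):
        for x in range(width):
            # Scale the 0..15 matrix entry into a 0..255 threshold.
            threshold = (BAYER_4[y % 4][x % 4] + 0.5) * 16
            out.append(255 if pixels[y * width + x] > threshold else 0)
    return out
```

The trade-off: ordered dithering compresses better but shows a visible crosshatch pattern that error diffusion avoids.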
Oh, I see.
I don't know a dithering algorithm that can handle this case.
pngquant computes the MSE, has min and max quality settings, and doesn't write the file if its size is too big or the quality degrades dramatically.
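That gating logic could be sketched roughly like this (function names and the MSE-to-quality mapping are illustrative guesses, not pngquant's actual formulas):

```python
# Hypothetical sketch of pngquant-style quality gating: convert MSE to a
# 0..100 quality score and refuse to keep the result if it falls below
# the user's minimum. The mapping below is illustrative only.

def mse(original, quantized):
    """Mean squared error between two flat lists of 0..255 values."""
    return sum((a - b) ** 2 for a, b in zip(original, quantized)) / len(original)

def quality_from_mse(error, worst=65025.0):
    # 65025 = 255^2, the worst possible per-pixel squared error.
    return max(0.0, 100.0 * (1.0 - error / worst))

def should_write(original, quantized, min_quality):
    """Only write the output if the quantized image meets the quality floor."""
    return quality_from_mse(mse(original, quantized)) >= min_quality
```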
Maybe you'll find this thread useful:
https://encode.ru/threads/1757-Lossy-DEFLATE-lossy-PNG
And this project in particular:
https://github.com/foobaz/lossypng
Yes, pngquant calculates mean square error, and applies dithering only in areas with high error. This way areas that don't need dithering don't get the extra noise.
pngquant also does edge detection (similar to the Prewitt operator) and disables dithering on the edges. This prevents anti-aliased edges from looking like fur.
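A rough sketch of that selective-dithering idea: high local error enables dithering, while a Prewitt-style edge response disables it. The thresholds and helper names are illustrative assumptions, not pngquant's actual values.

```python
# Sketch of "dither only where it helps": measure per-pixel remapping
# error and a Prewitt-style edge response, then enable dithering only
# where the error is high and the pixel is not on an edge.

def prewitt_magnitude(img, width, height, x, y):
    """Approximate gradient magnitude at (x, y) on a flat grayscale image."""
    def p(px, py):
        px = min(max(px, 0), width - 1)    # clamp at image borders
        py = min(max(py, 0), height - 1)
        return img[py * width + px]
    gx = sum(p(x + 1, y + d) - p(x - 1, y + d) for d in (-1, 0, 1))
    gy = sum(p(x + d, y + 1) - p(x + d, y - 1) for d in (-1, 0, 1))
    return abs(gx) + abs(gy)

def dither_mask(original, remapped, width, height,
                err_threshold=100.0, edge_threshold=120.0):
    """Returns a flat list of booleans: True where dithering should apply."""
    mask = []
    for y in range(height):
        for x in range(width):
            i = y * width + x
            err = (original[i] - remapped[i]) ** 2
            edge = prewitt_magnitude(original, width, height, x, y)
            mask.append(err > err_threshold and edge < edge_threshold)
    return mask
```

With this mask, flat areas that remap poorly get dithered, while sharp edges stay clean.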
In pngquant, 90% of the time is spent on extra runs of K-means. If you use `--speed 10`,
the whole recompression (on an i7 at 2.3 GHz) takes ~80 ms dithered, 50 ms undithered.
(BTW, TinyPNG doesn't have its own algorithm. It's just a GUI for pngquant).