The Art of PNG Glitch (ucnv.github.io)
222 points by erikschoster on Sept 15, 2015 | 45 comments



I was always interested in glitching images, but was frustrated by the checksum/read errors. I took a slightly different approach with JPGCrunk (http://www.mrspeaker.net/dev/jpgcrunk/): instead of modifying the image and then trying to display it, randomly mess with the internals of the encoder (I used a JavaScript implementation, so it was easy to modify: https://github.com/mrspeaker/jpgcrunk/blob/master/scripts/en...)

This way the "glitching" happens inside the encoder algorithm, so there's nothing to repair afterwards!
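
For anyone curious what that looks like in practice, here is a rough Python analogue of the idea (not the JPGCrunk code, which is JavaScript): run a toy JPEG-style pipeline yourself and perturb the DCT coefficients before reconstruction, so the damage is baked into valid pixel data and there is never a broken file to repair. It assumes numpy, scipy and Pillow; the filenames are placeholders.

    import numpy as np
    from PIL import Image
    from scipy.fft import dctn, idctn

    def glitch_encode(path_in, path_out, block=8, drop_prob=0.02, seed=0):
        rng = np.random.default_rng(seed)
        img = np.asarray(Image.open(path_in).convert("L"), dtype=np.float32)
        h, w = (d - d % block for d in img.shape)        # crop to a multiple of the block size
        out = np.empty((h, w), dtype=np.float32)
        for y in range(0, h, block):
            for x in range(0, w, block):
                coeffs = dctn(img[y:y + block, x:x + block], norm="ortho")
                # the "encoder glitch": randomly rescale some coefficients before reconstruction
                mask = rng.random(coeffs.shape) < drop_prob
                coeffs[mask] *= rng.uniform(-4, 4)
                out[y:y + block, x:x + block] = idctn(coeffs, norm="ortho")
        Image.fromarray(np.clip(out, 0, 255).astype(np.uint8)).save(path_out)

    glitch_encode("in.png", "glitched.png")

Raising drop_prob or widening the scaling range makes the block artifacts progressively more violent.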


Whoa, cool.

Reminds me a bit of a video codec technique that exaggerates the motion frames and compression, a la "datamoshing".

Keyframes are removed, which leaves only the previous frame's information to be updated by motion frames. Then the update/motion frames are repeated sequentially, which creates an effect that just further saturates the previous frame and looks like fluid melting.

In use: https://www.youtube.com/watch?v=mvqakws0CeU

How-to via ffmpeg: https://www.youtube.com/watch?v=tYytVzbPky8


Gah, that just makes me uncomfortable on so many levels. It's not helped by the content of the video, either...


Nice, very appropriate design for the web app!


I made an animated PNG glitching demo a while ago. http://codepen.io/lbebber/pen/EjVPao

The approach is simple: just mess with the base64 string.
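
For the curious, a rough Python equivalent of the trick (the pen itself is JavaScript): base64-encode the PNG, randomly swap characters past the header region, and hand the browser a data URI to decode. The filename and numbers are placeholders.

    import base64, random

    B64_ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

    def glitch_data_uri(path, n_glitches=10, skip=100, seed=None):
        rng = random.Random(seed)
        with open(path, "rb") as f:
            b64 = list(base64.b64encode(f.read()).decode("ascii"))
        for _ in range(n_glitches):
            i = rng.randrange(skip, len(b64))   # leave the first ~100 chars (signature/IHDR area) alone
            b64[i] = rng.choice(B64_ALPHABET)
        return "data:image/png;base64," + "".join(b64)

    print(glitch_data_uri("in.png"))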


Chrome seems to load the examples really slowly, and some of them not at all.


They're all (except the detail thumbnails) initially loaded as a blank.png and then substituted in with JavaScript for absolutely no reason. That kind of thing gets me to close the tab real fast.


May have been a connectivity issue. Seemed to work OK in Chrome for me.


No connectivity issue here, same (slow) in Chrome on OS X.


Same here.



Messing with those faders feels weirdly good.


I didn't know PNG was so simple. Encode scanlines in terms of each other, then pass the whole thing to DEFLATE... and it is effective. That's very elegant.
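
That two-step pipeline is short enough to sketch in Python. A minimal version, assuming 8-bit grayscale rows and using only the Sub filter (a real encoder picks a filter per scanline and handles multi-byte pixels):

    import zlib

    def png_style_compress(rows):
        """rows: list of equal-length bytes objects (one 8-bit grayscale scanline each)."""
        filtered = bytearray()
        for row in rows:
            filtered.append(1)                  # filter type byte: 1 = Sub (delta from left neighbour)
            prev = 0
            for byte in row:
                filtered.append((byte - prev) & 0xFF)
                prev = byte
        return zlib.compress(bytes(filtered))   # the filtered stream is what DEFLATE actually sees

    rows = [bytes(range(i, i + 64)) for i in range(16)]   # a smooth gradient: the deltas are almost all 1s
    print(len(png_style_compress(rows)), "compressed bytes for", 16 * 64, "raw bytes")

DEFLATE only ever sees the filtered bytes, which is the layer the article's glitches play with.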


PNG is one of my favourite file formats. It also has a chunking system which lets you store extra data, which isn't really covered here. Definitely worth looking into if you're interested in binary format design.

The other image format that's worth knowing is TIFF, because it's an insanely simple and flexible format to write out if you don't have access to a library. It lets you re-order the image data by tiles or scan lines, and lets you put various tables almost anywhere in the file, which makes it great for outputting large images from a parallel renderer: you can chop it up however is suitable for the algorithm, write out the parts to disk as you get them, and then write a table at the end of the file describing the order, without having to seek.
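
The chunking system mentioned above is simple enough that a walker fits in a few lines of Python: after the 8-byte signature, every chunk is a big-endian 4-byte length, a 4-byte ASCII type, the data, and a CRC-32 over type + data. A sketch (filename is a placeholder):

    import struct, zlib

    def list_chunks(path):
        with open(path, "rb") as f:
            assert f.read(8) == b"\x89PNG\r\n\x1a\n", "not a PNG"
            while True:
                header = f.read(8)
                if len(header) < 8:
                    break
                length, ctype = struct.unpack(">I4s", header)
                data = f.read(length)
                (crc,) = struct.unpack(">I", f.read(4))
                yield ctype.decode("ascii"), length, crc == zlib.crc32(ctype + data)

    for ctype, length, crc_ok in list_chunks("in.png"):
        print(ctype, length, "ok" if crc_ok else "BAD CRC")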


In particular, PNG may have a chunk that stores the file's gamma information, which is important for correcting the differences between the standard configurations of PC and Mac monitors. This value is hard to find, and some programs ignore it while others use it. I had a few nightmare cases trying to make a webpage look right until I realized that the PNG had a hidden gamma value.

More details of a similar case: http://morris-photographics.com/photoshop/articles/png-gamma...
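
For illustration, pulling that value out is easy once you know the chunk layout: the gAMA payload is a 4-byte big-endian integer holding gamma × 100000 (so 45455 means roughly 1/2.2). A naive Python sketch (it just searches for the chunk name, which is good enough for a quick check; the filename is a placeholder):

    import struct

    def read_gamma(path):
        with open(path, "rb") as f:
            data = f.read()
        i = data.find(b"gAMA")                # naive: assumes the marker only appears as a chunk type
        if i == -1:
            return None                       # no gamma chunk present
        (raw,) = struct.unpack(">I", data[i + 4:i + 8])
        return raw / 100000.0

    print(read_gamma("in.png"))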


The gamma chunk can also be used for some interesting tricks:

https://news.ycombinator.com/item?id=10192413


TIFF is a nightmare to read, because there are too many options; it tries to be a format for many other formats.


a.k.a. Thousands of Incompatible File Formats.

These days it's basically "what libtiff supports".


Actually it is (or at least used to be) more like "what Photoshop supports". Libtiff implementers often look to what Photoshop does to see how to implement something. But libtiff has a very large influence too. (I don't think Photoshop supports BigTIFF yet.)


Writing TIFF is trivial. Reading it is a nightmare if you want to account for its myriad options.


It's effective, but it's not /very/ effective. A 1D byte-level compression like deflate is an inappropriate tool for images.

As an experiment I just opened a screenshot in Preview.app and exported it to PNG with and without an alpha channel; adding alpha makes it 335KB vs 300KB. Since the alpha contains no information, there's no excuse for it to add 35KB after compression!

ffv1 or your favorite video codec with a lossless mode will produce much smaller files and decode much faster than png.


> 1D byte-level compression

Now you've got me curious what the compression ratio would be if you just re-ordered an image's pixels using a Hilbert curve (so that in e.g. a 100x100 image, the first 100px scanline contains the first 100px of the Hilbert path through the original image) before passing it to PNG for compression.
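
For reference, one way to do that reordering in Python (just a sketch of the idea, not what the reply below used; it assumes Pillow and numpy and a square image whose side is a power of two):

    import numpy as np
    from PIL import Image

    def d2xy(n, d):
        """Map distance d along the Hilbert curve to (x, y) on an n x n grid (n a power of two)."""
        x = y = 0
        t = d
        s = 1
        while s < n:
            rx = 1 & (t // 2)
            ry = 1 & (t ^ rx)
            if ry == 0:                          # rotate the quadrant if needed
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            t //= 4
            s *= 2
        return x, y

    def hilbertize(path_in, path_out):
        img = np.asarray(Image.open(path_in))
        n = img.shape[0]                         # assumes a square, power-of-two-sized image
        flat = np.array([img[y, x] for x, y in (d2xy(n, d) for d in range(n * n))])
        Image.fromarray(flat.reshape(img.shape)).save(path_out)

    hilbertize("Lenna.png", "Lenna.Hilbertized.png")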


Replying to myself because I just tested it.

• Lenna.png (http://i.imgur.com/pPNbiyG.png): 475KB.

• Lenna.Hilbertized.png (http://i.imgur.com/kU8C3yG.png): 709KB.

Clearly, not a good idea, at least for photographic sources.


That idea isn't too bad; it probably works in most places but fails very badly in a few spots. So it might be good for lossy compression; I actually tried this once, but lost the code a while back...

In this case, PNG's filtering makes it kind of 2D-aware, so that probably works better on the original. But if you tried splitting it into 8x8 blocks, then predicting each one from its left/upper-left/upper neighbors, well, you'd have the basics of modern DCT codecs.


It interferes with the filtering. You should try filtering first, then "Hilbertizing", and then deflating.


Paeth filter helps. This PNG is only 251 bytes:

https://upload.wikimedia.org/wikipedia/commons/8/89/PNG-Grad...
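
For reference, the Paeth predictor from the PNG spec is tiny: it predicts each byte as whichever of the left, above, or upper-left neighbour is closest to left + above - upper-left, which is why a smooth gradient like that one filters down to almost nothing.

    def paeth(a, b, c):
        """PNG's Paeth predictor: a = left, b = above, c = upper-left."""
        p = a + b - c
        pa, pb, pc = abs(p - a), abs(p - b), abs(p - c)
        if pa <= pb and pa <= pc:
            return a
        if pb <= pc:
            return b
        return c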


In my own experiments compressing pixel art, I found that gzipped .bmp files were smaller than the output of pngcrush just as often as they were larger. On average the pngs were slightly smaller, but you can tar together a whole bunch of similar bmps and gzip the lot, and then gzip really blows png out of the water. For a compressed file format specialised for images, I found png's performance very disappointing and decided not to use it.


While it's definitely not my cup of tea, Glitch Art certainly has a long and twisted history ;)

http://kernelmag.dailydot.com/issue-sections/features-issue-...


Related - online tool to play with JPG glitching: https://snorpey.github.io/jpg-glitch/


I am reminded of the story behind the cover art for the soundtrack to "The Social Network": http://www.rob-sheridan.com/TSN/


My favorite is Figure 17, the alpha glitched image with "PNG" written four times.


I also stopped at that figure and thought “I really like this one — I get it”, which surprised me because the other examples looked too trivial to appreciate or too messy for me to understand.


How was the header image made?


Whhhyyyyyy?


Ironically, the big header image is a JPG:

http://ucnv.github.io/pnglitch/files/header.jpg


I think the very last one is my favorite. It has an interesting mix of modified and unmodified components.


[flagged]


The problem isn't lazy-loading. The problem is webpage authors who don't specify the width and height attributes on <img> tags, which would let the browser lay them out correctly before they load.


Just not until you've scrolled down to their obituary.


Then you read like 2 words of it and the screen jumps as another image trickles in.


This web page is pretty image-heavy though. Lazy loading optimizes for the reader who reads partway through and abandons reading the rest of the page.


Optimises what exactly?


Data usage. Generally worth optimising, to be honest (for both the client and the server).


The interesting part about PNG is that, since it uses the DEFLATE algorithm and only applies compression per row/line, with no awareness of relationships between lines, in most cases the effective compression is nearly the same as if you took each row of uncompressed pixels apart as an individual image and put all those separate images in a zip file.

Disregarding the cruft of headers and other file format overhead, there would be a direct relationship between the size of a PNG image and the raw, uncompressed row-level data zipped up and handled in a similar way.


I don't think that's true.

So first of all, this whole article is essentially about the "filters" that are used to encode the data more efficiently based on the surrounding data before it's passed to deflate. Basically, you compute some kind of delta between each pixel and the pixel above it, to its left, or a combination of the two, with the deltas generally compressing better than the raw data would have. That's already a big difference from compressing each row separately.

Second, the deflate compression does not happen individually for each row. It happens individually for each IDAT block, but those blocks can be of fairly arbitrary size. Using a separate IDAT block for each row would seem very odd. (Can you even use filters in that case, or does the state reset with each new block?)


That's wrong: PNG uses a predictor that potentially depends on the previous row.

https://en.wikipedia.org/wiki/Portable_Network_Graphics#Filt...



