Demoscene accepted as UNESCO cultural heritage in The Netherlands (demoscene-the-art-of-coding.net)
872 points by Vinnl on July 5, 2023 | 162 comments



I guess the site is hugged to death, http://archive.today/Z5RHj


This is great to hear. Demoscene is one of the most influential things I have come across in my entire life, and it changed how I code forever.

I remember watching Farbrausch's "fr-08: .the .produkt" [0] when it came out and telling myself "If a computer can do this with 64KB of data, at this speed, my programs should be able to do the same, or at least come close". I was forever poisoned at this point, and this simple sentence shaped my whole academic life and career.

[0]: https://www.pouet.net/prod.php?which=1221

P.S.: Rewatching it, again, for the nth time. Hats off to chaos, fiver2, kb, doj, ryg & yoda.

P.P.S.: I show people the YouTube version of Elevated (https://www.pouet.net/prod.php?which=52938), and ask them to guess the size of the binary rendering this thing in real time. The answer blows everyone's mind, every time.


> If a computer can do this with 64KB of data

If you like that kind of "compression", I hope you know Linus "lft" Åkesson's "A Mind Is Born": 256 bytes.

https://linusakesson.net/scene/a-mind-is-born/


I know this. It's also very impressive, and coded on a C64 IIRC (checked the page, yes).

This demo is more on the side of "do magic and observe the output", rather than "let's make something normal-looking in an impossible way" (e.g. Elevated) or "let's make something impossible possible" (e.g. 8088 MPH [0]).

All three kinds are equally impressive in my book, yet I prefer the latter two.

My main takeaway from Demoscene is not "compression", but the possibility of writing code which performs very well, and what I found is that it's not very hard to do at the end of the day. You just have to be mindful of what you're doing, and of how that thing called a computer works under the hood.

[0]: https://www.pouet.net/prod.php?which=65371


It is at the same time "knowing your instrument quite well", "being able to produce quality [at the structural level]", and "being able to produce quality [at the final output level]".


> My main takeaway from Demoscene is not "compression"

Compression is still a big part of it, just not in the "take the original data, minimize it and reproduce it later" sense of the word. The procedural generation employed by these demos is still a form of compression: the highly-specialized decompressor needs only a few bytes of information to create the intended effect (texture or shape).

If anything, it shows how much compression can be achieved if the decompressor can be tuned to the data domain, as opposed to general-purpose compression algorithms.
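
A toy sketch of that idea in Python (made up for illustration, not any real group's generator): a four-byte "compressed file" plus a domain-specific decoder yields a 64x64 grayscale texture, a ratio of about 1000:1.

    import math

    def decode_texture(params: bytes, size: int = 64):
        # The entire "compressed file" is four bytes: two frequencies
        # and two phases driving a sine plasma.
        fx, fy, px, py = (b / 40.0 for b in params)
        texture = []
        for y in range(size):
            row = []
            for x in range(size):
                v = (math.sin(fx * x + px)
                     + math.sin(fy * y + py)
                     + math.sin((fx * x + fy * y) / 2))
                row.append(int((v + 3) / 6 * 255))  # map [-3, 3] to [0, 255]
            texture.append(row)
        return texture

    # 4 bytes in, 64*64 = 4096 bytes of texture out.
    tex = decode_texture(bytes([7, 13, 101, 42]))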


It is not just a matter of information science, in the sense of "how concisely can one describe the object" - that would be its strict informational content - but also of clever procedural methods that generate an output starting from the specifications of a machine.

In the case of the Farbrausch procedural generation, a chaining of parametrized modules creates the textures etc. - as those who have played with their generation system, ".werkkzeug", know directly ( .werkkzeug1, 2004: https://www.pouet.net/prod.php?which=12511 ; .werkkzeug4, 2019? : https://www.pouet.net/prod_nfo.php?which=91144 )

In the case of lft's "A Mind Is Born", several efficiency techniques are used - overlapping of palette and SID registers etc. - but especially notable is that the melody is generated by a sequence-after-seed process (a Linear-Feedback Shift Register), where the seed is chosen so that a good melody is returned (full explanation of the code: https://linusakesson.net/scene/a-mind-is-born/ )
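
For the curious, the LFSR mechanism itself is only a few lines. A generic 8-bit Galois LFSR sketch in Python (not lft's actual SID routine; the tap mask 0xB8 is a standard maximal-length choice), where successive states could be mapped to note indices and "composing" amounts to auditioning seeds:

    def lfsr_melody(seed: int, steps: int = 16, taps: int = 0xB8):
        # Galois LFSR: shift right, XOR in the tap mask when a 1 falls out.
        state = seed & 0xFF  # seed must be nonzero, or the state locks at 0
        notes = []
        for _ in range(steps):
            notes.append(state & 0x0F)  # low nibble as a note index
            low_bit = state & 1
            state >>= 1
            if low_bit:
                state ^= taps
        return notes

    # Different seeds give different, fully deterministic note sequences.
    print(lfsr_melody(0x4D), lfsr_melody(0x37))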


> My main takeaway from Demoscene is not "compression", but the possibility of writing code which performs very well

This 100%. It's not exclusive to demoscene, but demoscene is certainly yet another inspiration to write lean apps. (The performance of Electron is another :) )

I'll never ever be able to work for an AAA game company because I couldn't stomach dumping 100GB of raw texture data onto someone's hard drive.


Why waste clock cycles decompressing those textures instead, especially considering the rise of DirectStorage and similar technologies?


Why waste clock cycles and storage and bandwidth carrying around 100GB of textures that you don't need most of the time?


"compression" is sort of a weird way of describing it, though, when the primary goal is to make something extremely small (not to take something extant and compress it).

Not a knock on demos at all! (the originals handed around on floppy disks also inspired me to become a coder in the early 90s). Just saying that it's different to try to make the most amazing thing you can in a wide open 4k than it is to start with something 16k and try to reduce it.


> a weird way of describing it

Hence the bunny quotes. It's not really «mak[ing] something extremely small»; the idea is to pack as much wonder value as achievable into as few resources as possible.

You can use 'compress', as long as the term is interpreted loosely - it is legitimate in the sense of "pressing together" what would normally be the contents of a wardrobe into a suitcase.


Will Wright Discusses the Demoscene:

https://www.youtube.com/watch?v=m7iuFVmTJus

>You can take any piece of content in the game, and imagine an algorithmic solution to it. Or also, you know, a way that the player could customize that object or thing.

>There's this group in Europe called the Demoscene that make these very elaborate demos for a computer that fit into very tiny little memory blocks, you know like 64K of memory, and you run the thing, and in fact it algorithmically generates about 100 megabytes worth of data, you know these rich 3D environments, generated music, generated wave files, generated animation.

>And they're developing techniques to generate, you know, huge amounts of interesting data, with very very simple, elegant, compression algorithms.

>And this is a skill that game developers used to have, back in the 8-bit days. That was the only way to do a game like Karateka(?): finding all these little tips and tricks to compress things and generate them algorithmically.

>But since the CD-ROM came out, and very cheap hard drives, storage is cheap, so basically we've lost that skill set, and now we attack all those problems with brute force. I think we've lost something by dropping that skill set.

The Future of Content — Will Wright’s Spore Demo at GDC 3/11/2005: What I learned about content from the Sims. …and why it’s driven me to procedural methods. …And what I now plan to do with them. Talk by Will Wright, Game Developers Conference, 3/11/2005.

https://donhopkins.medium.com/the-future-of-content-will-wri...

>Algorithmic compression.

>Games consist of a mix of code and data. Computers use code to compress data. The ratio of code to data has changed over time.

>Games used to be mostly code and very little content, so compression was important.

>CDROM is the medium that was the death knell for the algorithm.

>Myst was a very elaborate and beautiful slide show, with a vast amount of data. It looked like they had a great time building this world. Building the world is a fun game in itself.

>At the other end of the spectrum from CDROMs: The Demo Scene. Algorithmic compression of graphics and music.

Will Wright GDC 2005 Spore (The Future of Content) Remastered: 6:18

https://youtu.be/ofA6YWVTURU?t=378


CD-ROM didn't make Myst a no-brainer, as Rand Miller discusses in this excellent Ars Technica piece: https://www.youtube.com/watch?v=EWX5B6cD4_4

More info in the extended cut starting at this timestamp: https://youtu.be/5qxg0ykOcgM?t=3465


Interesting quotes. I worship Will Wright and his work - and I've written basically nothing but procedural games with the kinds of micro-controls he describes, which I find so interesting. But I have to disagree with the contention that this is fundamentally a form of compression, as opposed to a paradigm centered around the joy of building things for synthesis rather than replay. The compression view is of course bound up in the origins of floppy-based demos, but the scene continues today.

The costs now run the other way, in fact. If you viewed the main benefit of procedural systems as compression, you're much better off saving the compute cost and just pre-generating and streaming terabytes worth of game assets. (Arguably, Spore could have even benefited from such an approach). The lasting benefit of the procedural is that whoever plays it gets a truly unique experience that no one has seen before, or will see again.


Just want to add that the first HUD for Star Citizen, which I wrote, was essentially 100% programmatically drawn and programmatically animated, such that every element and vector shape down to the needles and dots on the targeting reticles were defined in ways meant to be completely skinnable in form, in how they behaved over time, how they might behave differently given "ship damage", and so on.

This turned out to impose a hefty 20% CPU drain on the entire game, dragging down the frame rate. It was my own design decision and I argued that 20% was worth it (and that by the time the game was released, CPU speeds would have at least quadrupled), but digging my heels in on the issue of not pre-rendering assets was most of how I ended up leaving the project.

So that's why I would (emotionally, maybe) argue that compression has little to do with decisions to generate things procedurally, since compression was not a goal whatsoever, although the speed penalty was as if a large amount of compression needed to be overcome.


Thank you, that's an interesting response to his quotes, too! Like the proverbial plate of shrimp, this stuff has been coming up a lot recently, so I've been researching and recombining ideas from Will's old talks about Spore and his talks with Brian Eno, from David MacKay's Dasher text input system, and I've also dug up some even older unpublished thoughts from Ed Fredkin and John Cocke about a Theory of Dreams, and how they relate to LLMs and compression.

Being able to fit a program that generates lots of content on a small floppy disk or cdrom is one goal, less important today, but the important part is that thinking of procedural content generation as decompression of noise (random, user generated, contextual, or environmental) is a useful technique even with today's virtually unlimited storage and high speed delivery.

Will speaks about compression in information theoretic and arithmetic coding terms, not just referring to standard compression algorithms like "jpeg" or "mp3", but to information encoding and decoding theory, using compression techniques for procedural generation, like LLMs.

Here's a great video of Will Wright and Brian Eno discussing generative systems and demonstrating cellular automata with Mirek's Cellebration, set to Brian Eno's generative music, at a talk at the Long Now Foundation:

Will Wright and Brian Eno - Generative Systems (excerpts from talk):

https://www.youtube.com/watch?v=UqzVSvqXJYg

>Game designer Will Wright and musician Brian Eno discuss the generative systems used in their respective creative works. This clip features original music by Brian Eno. Will Wright and Brian Eno on "Playing with Time." In a dazzling duet Will Wright and Brian Eno give an intense clinic on the joys and techniques of "generative" creation.

Playing with Time | Brian Eno and Will Wright (entire talk):

https://www.youtube.com/watch?v=Dfc-DQorohc

>Will Wright, creator of the video games "Sim City," "The Sims," and the forthcoming "Spore," will speak on playing with time. "Playing with Time" was given on June 26, 02006 as part of Long Now's Seminar series.

Generative Music – Brian Eno (1996) (inmotionmagazine.com):

https://inmotionmagazine.com/eno1.html

https://news.ycombinator.com/item?id=24702201

Sandspiel Studio, Brian Eno, Wave Function Collapse, visual programming, cellular automata, etc:

https://news.ycombinator.com/item?id=34561910

Here's a simple low-tech pre-LLM example that shows the equivalence of compression and procedural content generation:

Take a huge text file of HN postings, and compress it with gzip or compress or some other robust compression algorithm. The better the algorithm, the more the output will look like random noise. Then slice the compressed file in half, and replace the second half with random numbers. Then uncompress it. You'll find that at the point you sliced it, it keeps on writing out almost plausible text for a while, consisting of highly probable snippets of commonly encountered words and phrases, then goes downhill towards incoherence. It's not as coherent or confident as an LLM, but the point is to show how low the bar is for using compression for procedural content generation.
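
Roughly, in Python (assuming a local hn_corpus.txt dump; deflate tends to error out somewhere in the random tail, so the output has to be collected incrementally):

    import os, zlib

    raw = open("hn_corpus.txt", "rb").read()  # hypothetical text dump
    packed = zlib.compress(raw, 9)

    # Keep the first half, replace the second half with random noise.
    half = len(packed) // 2
    garbled = packed[:half] + os.urandom(len(packed) - half)

    d = zlib.decompressobj()
    out = bytearray()
    for i in range(0, len(garbled), 64):  # feed small chunks, keep what decodes
        try:
            out += d.decompress(garbled[i:i + 64])
        except zlib.error:
            break  # the random tail eventually hits an invalid code
    print(out[-400:].decode("utf-8", "replace"))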

LLMs are essentially a form of compression of the world's knowledge or whatever they're trained on, not just word frequencies or pixel patterns, but also concepts and ideas.

Ed Fredkin described John Cocke's Theory of Dreams, which describes dreams as a kind of procedural content generation based on running your brain's decoder over random noise inputs:

https://news.ycombinator.com/item?id=36597206

DonHopkins 1 day ago | on: No one cares about your dreams unless you’re a fam...

ON THE SOUL: Ed Fredkin, Unpublished Manuscript

http://www.digitalphilosophy.org/wp-content/uploads/2015/07/

>The John Cocke Theory of Dreams was told to me, on the phone, late one night back in the early 1960’s. John’s complete description was contained in a very short conversation approximately as follows:

>“Hey Ed. You know about optimal encoding, right?”

>“Yup.”

>“Say the way we remember things is using a lossy optimal encoding scheme; you’d get efficient use of memory, huh?”

>“Uh huh.”

>“Well the decoding could take into account recent memories and sensory inputs, like sounds being heard, right?”

>“Sure!”

>“Well, if when you’re asleep, the decoder is decoding random bits (digital noise) mixed in with a few sensory inputs and taking into account recent memories and stuff like that, the output of the decoder would be a dream; huh?”

>I was stunned.

I found a video of an excellent interview with Ed Fredkin from 1990 in which he explains John Cocke's Theory of Dreams in detail, beginning at 18:39, which I'll transcribe because it's so interesting and hasn't been published elsewhere (so now people and LLMs will be able to find it and learn from it too, and dream on, or even compress it, slice it in half, add noise, and decompress it to see what happens):

Ed Fredkin Talks About John Cocke - 4 May 1990: Theory of Dreams

https://youtu.be/DLCb1UV5bzU?t=1119

>I remember one time John called me up to tell me his theory of dreams. And when he first told me this, I thought, boy there's a strange theory if I ever heard one. But I have been interested in what he told me ever since. And this has to be -- it's hard for me to tell you how long ago, but it's 20, 25 years ago.

>And I'm now convinced that his theory of dreams is correct, and it's the only correct theory of dreams. And as near as I know, I don't know anyone who knows it, other than him and whoever else he's told, like me.

>But it's a beautiful theory, and it takes into account, it's really a theory based on what might be called, I wouldn't really call it information theory, but sort of information science. The knowledge we have about information, and how things are coded, and how things can be interpreted, I mean interpreted in a sort of technical computer sense.

>What his theory was, as is typical, I believe this theory was told to me in the middle of the night. But John described the following concept. Imagine the way our memories work is that they're efficient. What's known from information theory is that if you have taken some set of information and attempted to encode it in the most efficient way, then if you have succeeded, then the bits you get from your encoding scheme are indistinguishable from a random sequence of bits.

>And the reason for that is a very simple thing: if the bits came out all like this: 1 1 1 1 1 0 0 0 0 0 1 1 1 1 1, then obviously it could have been encoded much more efficiently. You could say there's five 1's in a row, then five... You know, in other words, if there's all kinds of patterns to it, then it can be reduced in size by being further encoded.

>So this is a hallmark from information science of something that has been well encoded, compressed, or condensed. If the brain worked efficiently, then the following thing is true: That the things that go into our memory, if you could look into them with some kind of magic magnifying glass, like the developer they put on magnetic tape to see the actual magnetic signals, would look random.

>Ok, so that's an interesting thought. That's just applying the ideas from information theory and computer science to what might go into your brain.

>But then what John did is he took sort of an amazing leap, and asked the reverse question, which was: If you took a truly random sequence, and fed it into the decoder, what would you get?

>This is a very interesting question, because what that says is say I remember an experience I had. The experience was that I went somewhere, I went on a vacation, I went to a lake, I got a sailboat, I sailed around, there was a thunderstorm, I came back. Say I have some kind of thing like that.

>This is all compacted into these random bits. When they're interpreted, the mind has a decoding scheme, is the idea. So to be efficient, it would have to refer to a logical sequence of things. By that I mean, if it says "I went to a lake, and then I did..." Well what should happen is, the choices are: you went swimming, you went boating, you went sailing, you know.

>There's only a small number of choices, it's not an infinite number. So if you only have a small number like 5, 1 through 5, then 1 might mean I went sailing, 2 might... so on.

>So those things have to do with what is possible, and what's possible for you, and has to do with your other memories, and so on.

>So suddenly John asks the question: What if I fed in a truly random sequence of bits to the decoder? What would you get?

>Well the answer is: you would get a completely plausible sequence of events, since each choice in interpreting them is only selected from plausible events that have to do with you, but it wouldn't correspond to anything in the big story.

>And in fact, it would manufacture a dream!

>So the idea is that if the source of the information is random noise, but it's fed into the memory decoding mechanism that works efficiently, then what you get exactly matches what a dream is!

>I think this is a very significant discovery of his, and in other words, to me it's about as important as any psychological theory I've ever seen. It makes all of the Freudian analysis of dreams fall into a totally different perspective, where what you learn is what were the categories that might have been selected that have to do with personality. But which ones were selected ends up just being some random noise, or something like that.

>So to me it fits in with every aspect of dreams. For instance, when you're dreaming, and something happens in the real world, like the telephone is ringing. This stimulus is often encoded in your dream. You dream that there's a phone ringing, and so on and so forth.

>Well, that works perfectly with this scheme, because if there's a phone ringing and your ears are hearing it, then it becomes one of the logical things to be tapped by whatever number comes up. The only question is which context it is.

>If there is a phone ringing, the next thought has to be there's a phone ringing. But the random number tells you there's a phone ringing, and you're going to ignore it because someone else is going to answer, or various things like that.

>So my own -- having thought about this for the maybe 20 years since I heard about it, I've concluded that this is the best work that's been done on the subject so far by anyone. And I know it's not published or anything, and I'm convinced that it's a great thing. Someone ought to write it up. Or John should. That's the theory of dreams.

I posted more about Ed Fredkin recently in the discussion of his recent passing:

https://news.ycombinator.com/item?id=36429420

https://en.wikipedia.org/wiki/Edward_Fredkin

Here's another great example of applying information theory and compression techniques to efficient text entry, called "Dasher", invented by David MacKay -- think of Dasher as extremely efficiently decompressing cursor motion (or other device inputs) into text:

Dasher: information-efficient text entry

https://www.youtube.com/watch?v=ie9Se7FneXE

>Google Tech Talks, April 19, 2007

>ABSTRACT: Keyboards are inefficient for two reasons: they do not exploit the redundancy in normal language; and they waste the fine analogue capabilities of the user's motor system (fingers and eyes, for example). I describe a system intended to rectify both these inefficiencies. Dasher is a text-entry system in which a language model plays an integral role, and it's driven by continuous gestures. Users can achieve single-finger writing speeds of 35 words per minute and hands-free writing speeds of 25 words per minute. Dasher is free software, and it works in all languages, and on many platforms. Dasher is part of Debian, and there's even a little java version for your web-browser.

http://www.dasher.org.uk/

Finally, here's an interesting HN discussion about LLMs as data compression, and some interesting replies:

Ask HN: What are the data compression characteristics of LLMs?

https://news.ycombinator.com/item?id=35130027

DietaryNonsense 3 months ago | 3 comments

Disclaimer: I have only shallow knowledge of LLMs and machine learning algorithms and architecture in general.

Once a model has been trained, the totality of its knowledge is presumably encoded in its weights, architecture, hyper-parameters, and so on. The size of all of this is presumably measurable in terms of a number of bits. Accepting that the total "useful information" encoded may come with caveats about how to effectively query the model, in principle it seems like we can measure the amount of useful information that's encoded in and retrievable from the model.

I do sense a challenge in equating the "raw" and "useful" forms of information in this context. An English, text-only Wikipedia article about "Shiitake Mushrooms" may be 30kb, but we could imagine that not all of that needs to be encoded in an LLM that accurately encodes the "useful information" about Shiitake mushrooms. The LLM might be able to reproduce all the facts about Shiitakes that the article contained but not be able to reproduce the article itself. So in some ontologically sensitive way, the LLM performs a lossy transformation during the learning and encoding process.

I'm wondering what we know about the data storage characteristics of the useful information encoded by a given model. Is there a way in which we can measure or estimate the amount of useful information encoded by an LLM? If some LLM is trained on Wikipedia, what is the relationship between the amount of useful information it can reliably reproduce versus the size of the model relative to the source material?

In the case of the model being substantially larger than the source, can I feel metaphorically justified in likening the model to being both "tables and indices"? If the model is smaller than the source, can I feel justified in wrapping the whole operation in a "this is fancy compression" metaphor?

jeremysalwen 3 months ago

Generative models (like LLMs) that assign probabilities to pieces of data are equivalent to compression algorithms.

To convert a generative model into a compression algorithm, you just use arithmetic coding: https://en.wikipedia.org/wiki/Arithmetic_coding.

To convert a compression algorithm into a generative model, you assign a probability to each piece of data according to the size of its compressed representation.
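
In code, the equivalence is just Shannon's bound: under arithmetic coding, a symbol with model probability p costs about -log2(p) bits. A toy sketch (the prob(context, token) interface is made up for illustration):

    import math

    def ideal_size_bits(tokens, prob):
        # prob(context, t) is the model's probability of the next token;
        # an arithmetic coder approaches this -log2 bound.
        bits, context = 0.0, []
        for t in tokens:
            bits += -math.log2(prob(context, t))
            context.append(t)
        return bits

    # Toy model: uniform over 256 byte values -> exactly 8 bits/byte.
    uniform = lambda context, t: 1 / 256
    print(ideal_size_bits(b"hello", uniform))  # 40.0 bits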

See also the Hutter Prize and associated FAQ: http://prize.hutter1.net/

If you wanted to specifically measure the "useful" information, you would need to have some way of sampling from the set of possible articles that contain the same "useful" information, but vary in the "useless" information, and vice versa. I think you would find that it would be difficult for you to define what the boundary is, but if you made some arbitrary choice, you could measure what you are looking for through the LLM probabilities.

PaulHoule 3 months ago

See

https://en.wikipedia.org/wiki/Hutter_Prize

GPT-3 is said to have 175 billion parameters; if those are float32s (I bet they could get away with less than that) it would be 700 GB of data. It's also said in Wikipedia that "60% of the weighted pre-training dataset for GPT-3 comes from a filtered version of Common Crawl consisting of 410 billion byte-pair-encoded tokens"

That would be about 680B tokens, say the average token is 5 characters, that is 3400B characters of text, such that the output is "compressed" to 20% of the input, which state-of-the-art text compressors can accomplish.
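
Spelled out (a quick Python back-of-the-envelope using the comment's own assumptions):

    params = 175e9
    model_bytes = params * 4       # float32 weights -> ~700 GB
    tokens = 410e9 / 0.60          # 410B tokens is 60% -> ~683B tokens total
    chars = tokens * 5             # ~5 chars/token -> ~3.4T characters
    print(model_bytes / chars)     # ~0.20, i.e. "compressed" to 20% of input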

Now my figures could be off, namely they might be coding the parameters more efficiently and the average token could be longer. But it seems to make sense that if you trained a model to capture as much information as you could possibly capture out of the text it would be that size. Given that that kind of model seems to be able to spit out what it was trained on (though sometimes garbled) that might be about right.

wmf 3 months ago

NNCP: Lossless Data Compression with Neural Networks:

https://bellard.org/nncp/

----

Procedural Content Generation: An Overview: Gillian Smith:

http://www.gameaipro.com/GameAIPro2/GameAIPro2_Chapter40_Pro...

>One of the first examples of PCG was in the game Elite [Braben 84], where entire galaxies were generated by the computer so that there could be an expansive universe for players to explore without running afoul of memory requirements. However, unlike most modern games that incorporate PCG, Elite’s content generation was entirely deterministic, allowing the designers to have complete control over the resulting experience. In other words, Elite is really a game where PCG is used as a form of data compression. This tradition is continued in demoscenes, such as .kkrieger [.theprodukkt 04], which have the goal of maximizing the complexity of interactive scenes with a minimal code footprint. However, this is no longer the major goal for PCG systems.

>Regardless of whether creation is deterministic, one of the main tensions when creating a game with PCG is retaining some amount of control over the final product. It can be tempting to use PCG in a game because of a desire to reduce the authoring burden or make up for missing expertise—for example, a small indie team wanting to make a game with a massive world may choose to use PCG to avoid needing to painstakingly hand-author that world. But while it is relatively simple to create a system that can generate highly varied content, the challenge comes in ensuring the content’s quality and ability to meet the needs of the game.
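
To make the Elite point concrete: with deterministic PCG you ship a seed instead of data, and every player expands the same universe. A toy Python sketch (not Elite's actual generator, which is a specific bit-twiddling scheme, just the principle):

    import random

    def star_system(galaxy_seed: int, index: int) -> dict:
        # All "content" is derived from the seed; nothing is stored per system.
        rng = random.Random(galaxy_seed * 100003 + index)
        return {
            "economy": rng.randrange(8),
            "government": rng.randrange(8),
            "tech_level": rng.randrange(15),
            "population": rng.randrange(64),
        }

    # Deterministic: the same seed yields the same system on every machine.
    assert star_system(0x5A4A, 7) == star_system(0x5A4A, 7)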


I want to add this one line that tattooed itself onto my brain at an early age, when I read a book about the screenwriting trade by William Goldman, of Princess Bride fame.

He cited it as the line that likewise instigated an irreversible tectonic shift in his own approach to the craft of writing. It was simply this: the realization that "poetry is compression."

I think this insight has stood up to the rise of technological leverage and may generalize extraordinarily well into some sort of philosophical ground truth, on the suspicion that given the mismatch between the scale of the universe and the two-odd kilogram lump of tissue in our skull, all of human knowledge is essentially an exercise in curated compression.


For me, it was Second Reality by Future Crew. Highly recommend it. Great soundtrack, awesome scenes for the time.


Worth mentioning that the members of Future Crew later founded Remedy, makers of Alan Wake, Max Payne, and Control.


And I think they created Futuremark (the company behind 3DMark), and some other companies.


Wow, I had no idea, but am not entirely surprised.

Anecdotally, my Pentium 3 machine (forgot what GPU it had, sadly) back in 2004 was struggling with most modern (for the time period) games that had even a hint of graphical fidelity. Max Payne 2, on the other hand, looked mindblowing and ran like butter.


So many great game studios were founded by demosceners: Rockstar North, EA Dice, Starbreeze Studios to name a few.


Second Reality will always be peak demoscene for me.


That one seemed really widely distributed; iirc we got our copy off of a CD from a computer magazine at the time. Really cool, and it ran on most people's PCs. I wouldn't be surprised if that was the first time I saw 3D rendering on a PC, after Wolfenstein anyway. Actually, Wolfenstein was 1992, Second Reality was 1993; Wolfenstein had just the 3d-looking environments, Second Reality had 3D models / polygons and the like.


Not just the 3D: the other effects were pretty impressive too with Second Reality. It looked like an Amiga demo but ran on plain old VGA graphics hardware, which didn't have any of the dedicated graphics effects (sprites, zooming, color bars, etc.) that Amiga machines were capable of creating in hardware.


Prepare to be amazed and watch "Copper" by Surprise!Productions (1992) [1]. It has hardware zooming, vertical and horizontal copper bars, and even a horizontal wobble all on a regular VGA card (although not many cards or emulators support it)!

One interesting VGA hardware effect in this context, which I have not seen in any demo, is to change the character width from 8 to 9 pixels _during_ a horizontal scanline. The trick is to use RDTSC to estimate the pixel position, with which you can create a nice wobble effect in text mode. Obviously, this requires a Pentium or higher, and by the time this became possible, text mode was rather outdated.

[1] https://www.pouet.net/prod.php?which=2048



Aaaah yes, that was incredible. Along with Panic and Unreal, they were really head and shoulders above the competition.

And don't forget Scream Tracker 3! (which is actually how I discovered the demoscene)


The pre-GPU era and retro computer competitions like Amiga have many jewels.

The GPU powered demos are impressive in a different way.


RSI Megademo. Lovely.


Second Reality by Future Crew, Timeless by Tran, so many good ones from this era.


I often repeat to myself "Chrono Trigger fit into 8M. Chrono Trigger is a better piece of software than anything I have ever made. Why should my code need any more?"


You may need to cut back, Chrono Trigger was a 4M game, your code needs to fit in half the space that you thought :p


That thumbnail for Elevated is an order of magnitude bigger than the demo.


Put that 64kb exe file into a zip file, and it becomes 62 kilobytes...

Clearly there was a little more space they could have eked out!


Just enough space to fit some zip-decompression code? (I have no idea how much space that would actually take tbh)


Yeah I think that’s the reason it didn’t happen ;) not saying it’s impossible to make in less than 2kb, but it would have to be around 1kb to make it really worthwhile, so they could squeeze in more content.


They already have a compressor/packer - so this 2kb represents space 'left on the table' by that packer (either due to the algorithm being bad, or the packer's decompressor being big).

ZIP normally uses deflate, which is LZ77 + Huffman. LZ77 is super simple and can be implemented in ~30 bytes of code. Huffman tables are fairly simple, but I can't easily visualize the assembly instructions to implement them; however, arithmetic coding in all cases exceeds Huffman's performance, and can be implemented in about 60 bytes of code. Total = 90 bytes to save 2 kilobytes. Seems worth it.

Note that both these decompressors will be awfully slow, but for just 64 kilobytes of input data I don't think that'll be an issue.
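
For scale, here's what a complete LZ77 decoder amounts to - a Python sketch rather than the ~30 bytes of machine code mentioned above, but with the same structure: every token is either a literal byte or a (distance, length) copy from the output produced so far.

    def lz77_decode(tokens):
        out = bytearray()
        for tok in tokens:
            if isinstance(tok, int):        # literal byte
                out.append(tok)
            else:                           # (distance, length) back-reference
                dist, length = tok
                for _ in range(length):
                    out.append(out[-dist])  # byte-wise copy handles overlaps
        return bytes(out)

    # Three literals plus one overlapping match reproduce "abcabcabc".
    assert lz77_decode([97, 98, 99, (3, 6)]) == b"abcabcabc"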


It's old and it still blows me away.

I've seen movie CG that looks way, way shittier and is ten years newer.


It was also hugely influential to me. I think seeing what is possible with extreme achievements like this can really be motivating. It's probably what spurred me to learn C and assembly during high school.



fr-08 has sat in a prized place in my storage for two decades. It's one of the most incredible demos ever.


Even John Carmack these days says the goal of a programmer is to deliver value, not write the tightest leanest code possible. If the fastest, cheapest way to deliver value is with Electron, you fucking use Electron.

Demoscene stuff is fun, and cool, but it's a hobby. It doesn't reflect how software is developed and deployed in the real world.


You might be holding my comments wrong.

You're right, when developing a product, the goal is to deliver the product and value, however a programmer who likes their job doesn't develop products all the time, they also improve themselves.

What I found is, studying "The Machine" and its details, and trying to write exceptional code, improves the "daily, mundane" code, too. Because you know what to do, and how to do it better.

In other words, if Demoscene is F1 racing, and you have the chops to design vehicles which can compete there, the technology you develop trickles down to your road-going designs, and improve them, too.

While I was writing my Ph.D. code, knowing the peculiarities of the machine allowed me to write very efficient and fast code in the first go, and the code can saturate the machine it's running on extremely well. To put it in context, I was able to evaluate 1.7 million adaptive Gaussian integrations per second, per (3rd generation Intel i7) core. That code scales almost linearly until the memory controller chokes because of the traffic (incidentally, this happens after I saturate all the cores).

This code has no special optimizations. A single, naive, multithreaded implementation which I spent half a day designing, that's all. There are all kinds of optimizations I could do to speed this up a bit further, but since it's fast enough, I didn't bother.

At the end of the day, I'm an HPC sysadmin and developer. I inhale hardware and exhale performance. While there's a value to be delivered, it's extreme speed in my case. Because that speed saves days in research time, not milliseconds.


The triad is "cheap, good and fast" - and the motto is "choose two".


I think "good" might better be replaced with "useful"


No, the point is in the intrinsic quality - being well made. Usefulness is implied in being a product.

A rubbish hammer may be e.g. «useful» for some dozen bangs in a few weeks, then crumble. A rubbish item will have compromises. A "good" product will be masterful and optimal for function, less compromising for the demanding user.


The YouTube video in question: https://youtu.be/eGdUDGo2Gxw

The title says 4kb, but I assume it's 4KB, which is impressive but not mind blowing for a procedurally generated landscape.


Since you're a stickler for capitalization, do note that the SI prefix for kilo is a lowercase "k". And if you're talking about 2^10, the recommended prefix is "Ki".

So it's either 4kB (4000 bytes) or 4KiB (4096 bytes).


I think some people use kb for kilobits and KB for kilobytes, which is maybe a bad convention, taken from kb/s

why throughput is bits and storage is bytes I might never understand, probably has something to do with baud


Considering you have the music, lighting, texture changes and everything else, it's still indeed impressive for a procedurally generated landscape.

It might not be the most impressive demo tech-wise, but it's one of the best executed ones, and it matters.

Also, it's the second most loved prod on pouet.net, which means something.


When I download it, it's indeed 4KB not 4kb. I still think it's really impressive for the entire demo though. It's not just a procedurally generated landscape, it's the camera and scene changes, it's the lighting, the spotlights and music.



Those Finns are incredible. I'd like to think that I might have been a great programmer if only I lived in a place where there is no sun for half the year! /s


it does naturally encourage you to sit at the computer...


The article mentions Germany and Poland as well, and the potential for a joint international application.


Wonderful memories being part of the C64/Amiga Demoscene in the 1980s, an era of creativity, camaraderie, and global connections.

This marked the genesis of my coding journey, igniting a lifelong passion for programming that endures still, spanning several decades in the tech landscape.

A highlight, to this day, involved creating and releasing our Amiga Demo-Creator in May 1987:

https://coding-and-computers.blogspot.com/2022/05/first-amig...


For me it was also a strong sense and feeling of pioneering - new demos/intros with new records, new effects, new hardware, new disks in the mail, new releases on the BBSes, new "feuds" to read about in the diskmags. A very warm and pleasant memory of my childhood and adolescence. There are few chances like it in life.


Well deserved! Demos are a marvel of art and engineering, and their authors are wizards, AFAIC.

I've always been amazed at the sheer skill and creativity of the programmers in this subculture, to the point where I consider them to be in an entirely different league from the "regular" programmer building web, desktop, mobile and enterprise apps using some high-level language, of which I'm humbly a part. Demo programmers can seemingly control electrons at their will, and mostly do so for fun and pleasure, whereas we work with layers and layers of abstractions and mostly for compensation. I'm not saying that the world doesn't need both, just that one is more aligned to that child-like curiosity and wonder at making lights blink, taken to an extreme level, which is IMO a purer reason to love computers.


Er, we once won a "scene award" with a demo. You are right that there were plenty of talented programmers there (and the level has only gone up), but this is also an _art_ scene, and to be able to work so long for free on something that won't make any money (and that you can only present at a single event), in the end you need to either have a job developing 3D software, and/or be independently affluent. I was happy to leave the scene quickly as I felt it wasn't very respectful: when you release something, you can see your "competitors" send harsh comments and tell you your thing sucks, unlike any other scene (?). This doesn't make sense; when people are in music, they don't go around telling other people their music sucks. Also, the demoscene is pure greenfield; you can learn more working in maintenance programming.


I'm not gonna say you're wrong in your experiences, because there's much damning about how people have behaved, but it needs to be taken in stride, and things have changed a bit.

The demoscene always did sit in a weird intersection between technology and art, and the roots and association with piracy (groups that in the beginning were often teenage, gang-like crews) did add a distinctly harsh competitive edge you don't see in other art, perhaps with the exception of hip-hop.

However, two factors are mellowing things out. One is that people have grown older, and there's probably a realization for many who still participate about our mortality, as people literally start to die off and few fresh faces appear.

The other is that the workload to make impressive code has risen, while hardware keeps getting faster, so pre-made engines like Unity and Unreal (or tools like Notch, which, while demoscene-rooted, is a commercial product today) match the pure rendering quality sceners can achieve without much effort. SDFs gave scene productions an edge in achieving cool stuff for a bunch of years (and are still beneficial for size), but their usage is spreading quickly in games today. This might be part of why many seem to have taken "refuge" in working on retro demos, since squeezing cool stuff out of old machines isn't directly comparable to what people working full time on games etc. can achieve.

As for what you can learn, yes, it's plenty of greenfield programming but "maintenance programming" gets boring quite quickly as well unless you work for a hyperscaler and/or a startup.


Fwiw I loved the adinpsz demos, sad to hear you left on such a sour note.


Because I'm a sour individual :) I did reuse demoscene teachings in homemade post-processing (on the CPU! thanks to all the scene talk about raytracers being fast on CPUs, and, well, drivers).


An evolution of the demoscene has emerged in the form of a social network where 140 characters of JavaScript code are used to create incredibly impressive and compact demos. Check out Dwitter: https://www.dwitter.net/top/all


Not even close. With 140 chars of JS you are tied to a whole browser, which is almost an OS.


Which is no different from being tied to a particular computer. Why do you think it's not analogous?


Because the browser is on top of the computer. Show me a browser implemented in hardware (not on top of a general-purpose computer) and I might change my mind.


Modern computers are VMs all the way down. x86 is also not implemented in hardware. The x86 machine is actually a VM running on top of the microarchitecture.

Also, browsers are no less general purpose computers than physical computers are. There's nothing you can compute in one that you can't compute on the other.


Why aren't we seeing browser computers then, if they are analogous as you argue? My point is that the reasons why we are not seeing them point to where they are actually not analogous.


There are several reasons.

1. There's no software specifically developed for webassembly, unlike for x86, ARM, and the other architectures.

2. Webassembly itself was designed by software people, not hardware people. As such, it probably doesn't make as much sense to implement directly in hardware. Doubly so when combined with reason #1.

3. Webassembly doesn't belong to any hardware manufacturer, and therefore nobody has any incentive to introduce a line of native webassembly machines.


The more generalized aim here is not "run code with the lowest overhead" but rather "do cool stuff within surprisingly strict limitations." It's the same concept whether you're trying to write 256 bytes of 6502 assembly, or 140 bytes of Javascript.


My favourite demo of all time https://www.pouet.net/prod.php?which=56112

It features programming in multiple languages: cobol, fortran, lua, pascal, ruby, vala, ada, d, java, objc, scheme, visualbasic, asm, javascript, ocaml, python, shell

Big shame no videos of it are up on the internet anymore, I've looked quite hard, but nothing. The code is available here however: https://launchpad.net/binaryofbabel/+download


I would have expected them to at least be on archive.org's software area. Given that they are now cultural heritage, we should do better at preserving them, especially the videos that could inspire the next generation of hackers.


Someone needs to capture this one...


Not just the Demoscene! Also the Twentse Krentewegge, the best sweet bread of the world.

My mom (krentewegge) and myself (demoscene) are equally thrilled.


The Twentse Demoscene was awesome, but very small. As far as I knew, I was the "only coder in the village" :)

I remember going to copy parties from The Raven (1992?), cycling 10 km with ~100 floppy discs, wondering whether some bit would shake off whenever I hit a bump in the road.

I still get goosebumps when I think of the time when some guy from a nearby city demoed his own Adlib player and 60fps flat shading 3d renderer!

Meeting people in person to share code (or to keep it secret), reading paper magazines (Dr Dobbs) buying books on holidays (PC Intern anyone?, Ferraro, Foley), ah those were the days :)


> PC Intern anyone?, Ferraro

Still on my bookshelf. Also Messmer (The Indispensable PC Hardware Book) and of course the Abrash books.

The late 80s & early 90s were a great time growing up in The Netherlands if you were a computer/pc nerd. :)


> the best sweet bread of the world

Yeah, right, let's just ignore panettone exists for a moment.


This is basically the FT2 vs IT debate all over again. (FastTracker is better though)


Every tracker is a good tracker. Now go make a game in ZZT.


(and Krentewegge)


st3 before


They compete in different leagues, panettone is a seasonal sweet bread, krentenwegge is a daily sweet bread


So, where should one eat the best krentenwegge in the world, presumably in (which place in) the Netherlands?

I have good information that the best panettone could be at the Pasticceria Roletti, in San Giorgio Canavese, in northern Piedmont.


Bakkerij Nollen in Enschede is somewhat famous, but to be honest it's going to be "lekker" at most bakeries… enjoy with a cup of coffee and spread a bit of butter on it for the perfect experience


Any bakery in Twente


IMHO, demo coders should be on the same level as other artists. In the same way that a painter uses a canvas, brushes and colors, or a sculptor uses a block of marble/wood and chisels, the same is true for a coder who uses a programming language and some computer hardware in order to create something pleasing to our senses without fulfilling an immediate practical need (i.e. art).

So many demos, groups, parties and people to mention, so head over to any of these sites:

https://www.scene.org/

https://www.pouet.net/

https://demozoo.org/

https://www.hornet.org/


Loved being part of the demo scene in the 80s and 90s. Learned so much, still proud of what we squeezed out of 8/16 bit computers.

Luckily never got caught ;-)


Thinking about demos brought me right back to probably one of the first things I downloaded, the 'controllable' Mars demo [1,2]. I zoomed around that landscape in the way a child can pour hours into what anyone older would view as painfully repetitive.

Based on the YT comments it's been done at shadertoy too: https://www.shadertoy.com/view/XdsGWH

[1]: http://pouet.net/prod.php?which=4662 [2]: https://youtu.be/_zSjpIyMt0k


So what does this mean in practice? Financial supports for demoscene events?


It's just recognition by a UN body. If you'll indulge some cynicism, all it really means is that a bunch of people spent your tax money (which funds the UN through member-state contributions) to sit around (and expense a bunch of pointless travel) pontificating on what's worthy of recognition.


I hate that you're right


Embrace the cynicism. Things become so much clearer. Just be sure to find the humor in it all or you'll be miserable.


I once asked the people behind this effort and my understanding is that they don’t really know either.

It’s just internet points I guess, except not with points and not on the internet.


It probably helps whenever you deal with officials. Doing graphical things with a computer (computer games?) vs. participating in something recognized by UNESCO as cultural heritage. That should open some doors if you are planning to do an event or course around it.


Yeah but which officials? The demoscene doesn't really need permission for anything. That’s almost a cultural value.

I organized a few demoparties back in the day and it’s really just, rent a place where loud music is ok, get people to come. Lots to plan and arrange ofc, but nothing that needs permission or money from bobos.

Maybe the bigger parties like Revision can use this to get permits etc but I think all things considered it won’t have a major impact on anything tangible. It’s just cool, recognition.


Well first, kudos to you, to just organize it. That's the spirit, I know.

"Yeah but which officials?"

School rectors for example. Local mayors. The types who can give access to big rooms, you can rent for free, if what you do is considered "cultural".

(but yes, there is the danger of doing it all very tame then)


It doesn't hurt to have something like UNESCO legitimizing the medium. I've attended a few demo parties (US), some were hosted at unis, all were organized by youths, and it's never been trivial to get permission to use the facilities. Especially when the intent is inviting dozens to hundreds of total strangers for a long-weekend sleepover ostensibly to program for two-three days straight.


It would likely help them get funding much more easily than earlier.



It means demos are now protected by law and thus a lot of limitations and bureaucracy around the development of new demos.

Just kidding, I have no idea :)


No, it means it's only a demo if it comes from the Uusimaa region of Finland; otherwise it's just sparkling graphical effects.


No, it just means recognition and helps with awareness. Any support needs to come from the community.


Congratulations. This is great news. I hope that the website recovers from the onslaught of traffic.

I've loved the demoscene on the Amiga and Atari ST since the 1980s and have been attending demoparties since last year. I have great respect for the efforts of the folks at Art of Coding and their friends in the different countries. To achieve this sort of UNESCO legitimation is an amazing achievement that takes a lot of serious work and persuasion.

My most sincere thanks and admiration to the Art of Coding folks and to DigitalKultur.


> To achieve this sort of UNESCO legitimation is an amazing achievement that takes a lot of serious work and persuasion.

What does it actually achieve, concretely? I get listing a historical building to preserve and maybe drive some tourism, but these sorts of intangible things... what is the real world result?


A former boss of mine said that he aspired to build the "Sagrada Família of software", referring to the intricately beautiful basilica whose architect didn't live to see its completion: https://en.m.wikipedia.org/wiki/Sagrada_Fam%C3%ADlia

He meant that he wanted to build something that took time and care, but it would last and amaze all who saw it with its beauty and impressive construction.

I showed him 8088MPH, which took seven years to write: https://m.youtube.com/watch?v=yHXx3orN35Y

He said "OK, that's the Sagrada Família of software."

Thinking on the Matrix sequels which, I know, do not exist, I came to a stunning realization: The Machines have a demoscene. Neo encounters a pair of programs who are trying to smuggle their daughter, Sati, into the Matrix. The Oracle told Neo that every program in the Matrix has a purpose, but Sati does not and that's why she is at risk of deletion. Her parents created her simply to experience the joy of loving her.

Sati is a demo. Like any demo she exists simply to delight her creators and anyone who observes her. We even get to see a demoeffect of hers: the colorful sunrise she creates for Neo to enjoy, wherever he may be. This demonstrates that the Machines have developed art, empathy, and love, quite possibly due to Neo's influence.


I've yet to find any non-engineer creatives in America who are familiar with the MindCandy DVD releases. I saw the first volume playing in a bar in SF once as background visuals, but no one around knew its significance.

Cracktros and their music are also worth preserving. Razor and such. There are many vintage ones for DOS, Windows, Amiga, Mac, and probably others. I'm sure they're difficult to find, and some may be lost.


Since the Art of Coding site is experiencing some downtime right now, here is the official announcement on the Dutch side (in English): https://www.immaterieelerfgoed.nl/en/215e-bijschrijving-in-i...

I couldn't find it on unesco.nl because everything is in Dutch.


World news and some aspects of where humanity is headed can be really overwhelming and depressing sometimes. But one thing that keeps amazing me is how we use pretty much everything under the sun to express ourselves, our ideas, and to create art. No matter how obscure and niche, we find a way to turn it into beauty.


DVD releases of demoscene “stuff”: http://www.mindcandydvd.com/

Well cared for, in that they captured from original hardware and took scanlines, refresh rate, etc. into consideration.


I own two of them, they’re great.


Anyone into WebGL/3D programming would have their mind blown by the demoscene.

I had read about perlin noise but never truly appreciated it until I saw some of the demoscene demos.

Everything - the music, the landscape, trees, cameras, buildings - all procedurally generated. Code is maximally reused.

Made me think: perhaps the brain is also doing that. Just storing symbols and rough descriptions, enough to recognize things again and to reason about and imagine the world. A highly compressed, lossy model of the world. Perhaps the brain needs just a few gigabytes of memory to restore someone's consciousness. Maybe less.

demoscene does that to you.


This is fantastic news, especially for an ex-scener like myself.


Meanwhile, I am still trying to figure out how to really get started with making my first demos. I can write a basic shader with ShaderToy, I can write a raymarching algorithm and do a few cute tricks with it...and now what? How do I turn this into a "proper" demo?

Even just guidelines on the entire compilation step would be appreciated, or knowledge on how some fairly standard examples of demos were compiled, and which tools were used.


You don't have to do a size-limited demo. You can focus on writing some kind of cool animation and/or music that is only or mostly code (as opposed to a video file or huge animation tables) and that's your first demo.

Some demos I liked watching are tens of megabytes, e.g. https://www.pouet.net/prod.php?which=57446 . I'm not sure why that still counts as a demo, but I suppose as long as the sequence of things is mostly driven by code, then it does. Perhaps it's like 500kB of custom stuff glued onto a 42MB off-the-shelf game engine like Unity, which I think should be totally okay to count as a demo. Obviously this won't be competing in any size category.


I used https://github.com/naavis/4k-Intro-Template/ to get started.

The main ingredient of the secret sauce is a special compressing linker, called crinkler.


Ah, that Linux user feeling. Feels like a lot of people in the scene use Windows, which still surprises me considering how tech-y it all is.


Windows used to be the only platform where even half-baked graphics code could be assumed to run properly in most places, due to its position as the primary OS for gaming. All the graphics cards had drivers, Direct3D came bundled in, etc. It was just easier.

Don't forget that most democoders want to make stuff. Something like Visual Studio, which you can (pirate and) install, and then you type code and press a button with a play icon and see the code run - that's super nice, and it's had that UX for 20 years or so.

Compared to Linux at the time, which drew the kinds of people who enjoyed building up their perfect Gentoo from scratch and setting up their Vim just right, and making automake detect dependencies on all the distros. To your average demoscener, all that stuff gets in the way of getting graphics on the screen.

These days the difference is pretty much gone though, so it’s just momentum. You can install Ubuntu and a random IDE and code some opengl in an SDL window and things will Just Work just like they would on Windows. And I assume even Linux these days has a way to deal with dependencies that’s as easy for devs as Windows’s “just add the DLLs you need to the zip file”.


Not to mention that it's only recently that Blender has become bearable. Even if 3ds Max etc. are ancient, those kinds of tools were often used in the past, and Linux was a kind of non-starter.

Also, since intros were size-limited, you kinda needed control of your pipeline to achieve the smallest size of the _binary_. For a 4k intro, different compilers could produce different-sized binaries, and the .exe compressor Crinkler mentioned above does some "function-order randomization" to improve compression by a few bytes by finding the best ordering of data/functions, so just running a slightly different version and/or not running it long enough could fail to produce something below 4k.


Actual demosceners would use POV-Ray, not 3ds Max, or they'd choose Borland IDEs over a turd like Visual C++.


Linux as a demo dev platform has the issues you mentioned, but as a competition platform it suffers because there has never been any good baseline/platform to set as a standard. For Windows it's easy to define in compo rules that the compo machine is a clean Windows X installation with the latest patches and drivers as of day Y.

For demos the social aspect is important, and one thing Windows is good at is that you have a good chance of being able to run a random exe you downloaded, which is far less of a given with Linux, especially with 3D graphics and audio.

Flatpak might now represent the sort of platform that could be good enough, but really the best would be something similar yet more minimal and specifically designed for demos: some kind of sandbox that runs a single ELF executable and exposes just enough primitives (Mesa, Wayland, PipeWire, glibc) to make democoding feasible, giving a fair, even ground for sizecoding.
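Something like bubblewrap could probably get most of the way there already. A rough, untested sketch (paths, library locations, and the compositor socket name will vary per system):

    bwrap --ro-bind /usr /usr --symlink usr/lib64 /lib64 \
          --proc /proc --dev /dev --dev-bind /dev/dri /dev/dri \
          --ro-bind "$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY" \
                    "$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY" \
          --unshare-all --die-with-parent ./intro.elf

For a compo you'd pin the versions of everything bound into the sandbox, which is basically the "clean Windows X installation" baseline, just for Linux.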


lots of things came before windows. c64 and amiga were legendary.


Agreed, those were the roots. But this thread is a bit about tooling/distribution on Windows vs Linux, and the C64 and Amiga are more like Windows than Linux in that sense. If prods had been open-sourced or compile-it-yourself, you'd have lost control of packaging, and stuff like track ordering for disk loading would've been impossible to control properly.


yeah i get you. to me it seemed they left amiga for windows. hardly much linux really. but i think it depends on what you were looking at.


It was just easier

I got out of the demo scene after the Commodore 64, but isn't the whole point that it's supposed to be hard, not easy? That's what makes it amazing.


No. There's a huge difference between "hard to put a hugely detailed 3d landscape into 4kb" and "hard to get a pixel on the screen because the damn dependencies can't be found and why do I have to edit a Makefile at all, can't it just compile everything and then run it?".

At the risk of sounding dismissive, Linux was for the kinds of people who enjoyed configuring stuff, digging through man pages, editing text files and then recompiling their kernels. It was for people who loved tuning their system 100% perfectly to their needs. Not for people who just wanted to code cool computer art.

Actually I think most democoders, even the insanely good ones, have very little patience. The attractive thing about coding graphics is that the dopamine cycle is short. It's code, run, whoa cool!, code, run, whoa cool!


The demoscene was also not so much into "open source".

Code was often kept secret, to add to the mysticism of the amazing algorithms (which often turned out to be very clever tricks, and not academic breakthroughs), or to simply hide the fact that the code was a terrible mess.

With regards to Linux -- Avoozl and I wrote a 4k intro for it in 1999 [1]. It was quite a lot of effort, with custom ELF headers to shave off a few bytes. There were but few people willing to run the thing as root, though. (This was a requirement for svgalib, and we demosceners couldn't understand why unix sysadmins took offence at that :)

[1] https://www.pouet.net/prod.php?which=1318


Hah - I remember this prod. Super Grover ftw! I had done some svgalib work so I had no problem running it as root. In those days I had a Cirrus Logic or a Matrox dual head.


Even though I think the intro is quite impressive, the screenshot is fake -- there is no Grover to be seen in the entire intro. Somehow we got away with that, because it is quite hard to get the thing working nowadays :)

If anyone manages to turn it into a YouTube video, I will, ehm, applaud them for it!


haha, wow svgalib is really ages ago :D


Self-plug, but demoscene on Linux is very possible. I got into the scene around 2015 and submitted my first entry to an IRL demoparty in 2018. All of my prods are Linux and open source.

Most are 4k exegfx (executable graphics, i.e. a still image rendered by a size-limited binary), but "scalemark" is an animated 4k intro with music.

https://suricrasia.online/demoscene/


It isn't quite as classic(?) as some of the others, but I remember being blown away by Exceed's Heaven7. The fact that you could do all that raytracing in real-time just amazed me.

https://www.pouet.net/prod.php?which=5

(I have to admit I probably turned the text off though...)


Great news! Best art form to ever spontaneously appear; if only it had been a thing in South Africa in the 90s :/


Denthor, author of a famous series of democoding tutorials from the nineties [0], was from South Africa.

[0] http://www.textfiles.com/programming/astrainer.txt

(key quote:)

    [  There they sit, the preschooler class encircling their
      mentor, the substitute teacher.
   "Now class, today we will talk about what you want to be
      when you grow up. Isn't that fun?" The teacher looks
      around and spots the child, silent, apart from the others
      and deep in thought. "Jonny, why don't you start?" she
      encourages him.
   Jonny looks around, confused, his train of thought
      disrupted. He collects himself, and stares at the teacher
      with a steady eye. "I want to code demos," he says,
      his words becoming stronger and more confident as he
      speaks. "I want to write something that will change
      people's perception of reality. I want them to walk
      away from the computer dazed, unsure of their footing
      and eyesight. I want to write something that will
      reach out of the screen and grab them, making
      heartbeats and breathing slow to almost a halt. I want
      to write something that, when it is finished, they
      are reluctant to leave, knowing that nothing they
      experience that day will be quite as real, as
      insightful, as good. I want to write demos."
   Silence. The class and the teacher stare at Jonny, stunned. It
      is the teacher's turn to be confused. Jonny blushes,
      feeling that something more is required.  "Either that
      or I want to be a fireman."
                                                         ]


First thing I did was search for Denthor in the comments. I'll always remember going online for the first time in 1996 and stumbling on Denthor's tutorials and Fravia's site. Both of those were hugely influential for me. I never ended up getting involved in the demoscene, just played around, but it was a lot of fun. (I did however use Fravia's tutorials to crack some game I didn't want to pay for; I remember spending a week figuring it out and maybe 30 minutes playing it once I cracked it :))


Yep! There were also some fledgling groups, but it never amounted to much; the demoparties that were attempted were very small, and it basically never hit critical mass like it did in Europe.


Perhaps it was? This list on Pouet shows a few demo groups and releases from South Africa:

https://www.pouet.net/lists.php?which=221


Tangential question: is there a way to participate in the demoscene as a beginner, or in serious competitions? Looks like the popular meetings happen in Europe, and it looks like they are in person.


The first 3 minutes of this is pure magic. https://www.youtube.com/watch?v=iRkZcTg1JWU


Aww yiss. Nederkunst.

That's amazing and also kinda wtf, because drunken parties are the norm, so cultural alcoholism is a bonus.


Funny, to my mind it's much harder to build a beautiful, performant demo than to keep a website afloat...


Excellent! I love the scene and feel that recognizing it this way is appropriate.


Seems like it’s been hugged.


So does that mean we’re officially old school?


We took down the site, it seems.


this is so cool i can't even!!! when is hakken getting accepted as a traditional dance?


For real though, Gabber and its culture should definitely get recognized as cultural heritage. To this day, what it spawned has a tremendous influence on the rave scene, and Hardcore is still relatively huge. I attended Thunderdome last year for its 30th anniversary. It was incredible.


You might like Gabberhammer by JCO, which is at the exact intersection of both subcultures.

https://www.pouet.net/prod.php?which=15102


That's awesome news!


This means that as an art form, it's dead. Which seems about right.


"The scene is dead" meme has been around for quite a while.

So much so that now it is most often used sarcastically, like when a new group releases something cool.


I joined the scene in 1997 and in 1998 I wrote my first "Is the scene dead?" diskmag article.


There are dozens of high-quality demos released every year.

The state of the art has also been progressing. Some of the things coded on the C64 these days are things that I found too difficult to do on the Amiga back in the '90s when I was active.


I'm aware. It is also a fact that the average age of the participants is rising. The foundation of the subculture and its cultural cachet was always the warez scene where it originated, revolving around tight-knit groups of young dudes who participated in a forbidden pleasure and formed tribes that competed with each other.

That energy is dead and gone. Once something hangs in a museum, it's still art, but it is no longer alive, no longer evolving and becoming.


Except that we are still alive and kicking and dreaming and (when possible) producing, and the museum is outside far away and out of mind.


Too bad you're downvoted. I was a pretty active member of the scene back then (Imphobia group), but nowadays everyone's getting older. I'd say it's a fact.

But don't underestimate the energy of nostalgia. It's still there and, AFAIC, the scene and I: until death do us part!


So when the hell is Imphobia #13 coming out? Been waiting a while here.


ahhhhh :-) good old joke :-) Thanks for reminding me of it, it's been a long time since I've heard it. Good memories!


cool, GLSL is lyrics then.


you're late to the party... Assembler is the 5h!t !!!


great to hear this :)



