FYI I downloaded the Fermi Paradox intro from that website (the actual executable one), and Windows Defender quarantined it, claiming it contains a Win32/Tiggre!rfn trojan.
I don't know enough about this stuff to figure out if it's a false positive, but there you go.
Some of the strategies that demoscene authors use to shrink their executable size (think packing, self-modifying code, etc...) are also incredibly common in malware.
Alternatively, stop using antivirus. The amount of stuff that gets flagged as false positives is incredible once you go outside of the applications written by big corporations and the like. It's almost as if they use a whitelist...
Also, seeing that some AV will alert on even the presence of an empty folder that happens to share the same name as actual malware[1] or a completely innocuous Hello World[2][3], it's hard to recommend any. That and the detection of cracks and keygens (which goes beyond "antimalware", IMHO) further strengthens my opposition to what is essentially censorware.
At first I thought the demo was broken entirely or was designed as some stress test, because Chrome churned between 0.6 and 0.9 fps. It never cracked 1.0 (yes, never passing one-point-zero).
Then I opened Firefox, and it not only ran, but never dropped below 30 fps, mostly hovering between 40 and 50.
If you have an Nvidia GPU, then it may well be the fault of the latest drivers (390.25), which have been buggy; it's especially noticeable under Chrome/Chromium (video/audio stuttering, problems with vsync, general slowness, high CPU usage, etc.).
I got a solid 60 FPS using Firefox Developer Edition on Arch; it would be interesting to see what the issue is for you (might be hardware; I'm running a Vega).
Re: Vega — I think it's pretty clearly some sort of software issue — other people are having no trouble with mobile/embedded/iGPU hardware. This machine has an Nvidia GT 610 in it, which isn't high end gaming equipment. But it should be adequate for this demo.
You might wanna try activating the #ignore-gpu-blacklist in chrome://flags
But be aware that it can cause some collateral damage like broken websites (if that happens and it bothers you, you can simply turn the flag off again). For example, with my Haswell Intel GPU I never had any problems, but with my Radeon it results in some strange textures on some websites. FF has no problems with either of them.
Same here: 3 to 5 fps on Chrome and a consistent 60 fps on Firefox 58. There's definitely something wrong with Chrome for Linux. Try the Stripe docs page https://stripe.com/docs/api#error_handling , same issue: buttery smooth scrolling on Firefox but unusable on Chrome.
Seriously? I'm getting 32-36 on the iPhone X. I don't think it's resolution dependent, so I'm pretty interested in why the performance is so much less.
Fragment shaders (which this is) are quite resolution dependent: they are programs which execute at each pixel, so the input size to the algorithm contained in the shader is essentially equal to the resolution (not strictly, since other data can be passed in which the program iterates over, but in my experience it's fairly uncommon for shaders to make heavy use of loops or recursion).
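To make that concrete, here's a minimal Shadertoy-style sketch (purely illustrative, not the demo's code): mainImage runs once for every pixel in iResolution, so the per-frame cost scales directly with how many pixels you're driving.

    // Purely illustrative, not the demo's code. Shadertoy calls mainImage
    // once per pixel per frame, so total work per frame is roughly
    // iResolution.x * iResolution.y * (cost of this function).
    void mainImage(out vec4 fragColor, in vec2 fragCoord)
    {
        vec2 uv = fragCoord / iResolution.xy;          // normalize pixel coords to 0..1
        float v = 0.0;
        for (int i = 0; i < 32; i++) {                 // fixed cost per pixel...
            v += sin(uv.x * float(i) + iTime) * cos(uv.y * float(i));
        }
        // ...so doubling the resolution on each axis roughly quadruples the total work.
        fragColor = vec4(vec3(0.5 + 0.5 * v / 32.0), 1.0);
    }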
I understand that, I'm quite acquainted with 3D programming and shaders. But I was under the impression that they were dependent on the resolution of the renderview, not the device.
It's definitely resolution dependent. On my Galaxy S7, the frame rate is 40-45 when the screen is set to 1080p, and drops to 20fps if I increase the resolution to 1440p.
I understood it to be dependent on the resolution of the renderview, not the device. Perhaps I'm incorrect, but the problem in this case was low power mode.
Not sure if it’s still the case, but there were limitations around JS optimisations in webviews outside Safari, so the question might actually be relevant.
Right. My point was that, given that Apple have historically limited the optimisations available to webviews, it's not obvious that WebGL would be exempt from similar limitations.
I really admire the value this piece has as a composition. That is, the clouds, rings, and water terrain individually work as cool demoscene shaders, but flipping them on/off depending on where the camera is gives it a great sense of scale.
ShaderToy is the new demoscene, in my opinion. It's cool to figure out what you can do running the exact same shader code for every fragment on the screen exactly once per frame.
The demoscene thinking is - I've been led to believe - that since most of the coding is done at the assembly level, anyone with the binary already has the code.
Apart from old-school productions, there's very little assembly in the modern demoscene. 64kB intros mostly use C++ (or Rust). Even for 4kB intros, assembly is not used very often (and the interesting code is in the shaders anyway).
Did this cause noise in anyone else's audio output? I have a GTX1070, running on Arch w/ Chromium. I am using the onboard sound card, not even the GTX for that.
If you're looking for a high-res video of this awesome shader, look at the intro of my recent Shadertoy Best Of video on YouTube: https://www.youtube.com/watch?v=7BB8TkY4Aeg
The democratization of non-gaming 3D content and the availability of powerful, cheap consumer electronics are long-term driving forces.
I am both biased and hopeful since I am working on something very related, but my take on this is that the 3D part of the web will grow much faster than the non-3D part [1]. This growth will mainly be driven by non-gaming 3D content that does not need high-quality graphics to be relevant or entertaining.
VR and AR are better understood when we realize that they are just means to the end of consuming a wider variety of 3D content in a more engaging fashion.
The perceived lack of interest in VR is absolutely not about technical limitations (such as HMD weight, resolution, controllers, wires or whatever)[2]; it is clearly about the layman having no 3D content as relevant to his daily life as, say, Facebook, YouTube, LinkedIn, Amazon, etc. The first big VR company will launch a product that is useful both in and out of VR, and the web is the platform most likely to host it.
[1] Most (say over 70%) of the web will stay 2D for a VERY long time though
[2] Most people who try VR HMDs enjoy the experience just fine (at the very least). They just have no reason to try it again, and even less reason to pay money for it.
I was already sufficiently impressed... and then I realized that by default it's in "VERY_LOW_QUALITY" mode. Select "HIGH_QUALITY" using the #defines at the top of the file if your machine can handle it :)
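For anyone who hasn't opened the source: those presets are just preprocessor switches near the top. A rough sketch of how such a quality toggle typically looks (only the VERY_LOW_QUALITY / HIGH_QUALITY names come from the actual file; the constants below are made up for illustration):

    //#define VERY_LOW_QUALITY
    #define HIGH_QUALITY

    // Illustrative constants only -- the real shader gates different things.
    #ifdef HIGH_QUALITY
    const int   MARCH_STEPS = 200;     // more ray-march steps = more detail
    const float HIT_EPSILON = 0.0005;  // finer surface threshold
    #else
    const int   MARCH_STEPS = 60;
    const float HIT_EPSILON = 0.005;
    #endif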
The best way to do it would be with Emscripten. You just need to make sure it only uses at most GLES3 GL calls, and also replace the sound/input/controller APIs with Emscripten/web ones.
Remember that WebGL is also a security nightmare. Shaders are fed to the GPU driver. The driver contains a compiler and compiles the shaders into the GPU-specific ISA. The GPU that runs that code is a PCIe device with full DMA access.
What could possibly go wrong?
(I'm aware that at least Chrome does some syntactic checks on the shaders)
GPUs can only access pinned memory that is intentionally mapped into their address space. Also, each context gets its own virtual address space on the GPU, isolated from other contexts.
There can still be issues, but it isn't quite as much of a free-for-all as the above comment makes it sound.
Fun fact, the Raspberry Pi's GPU can access everything. And to deal with that, the Mesa VC4 driver validates every shader to prevent reading other processes' stuff.
Isn't that because that SoC uses unified memory, where GPU and CPU memory are the same? This does not apply to most desktop computers or mobile phones...
I suppose this is more about reading a texture other than the one you are supposed to use. GPU memory is flat, and there is no concept of process memory up there.
In the early days of WebGL some browsers leaked information via uninitialized GPU memory, so an attacker could potentially read texture data left behind by other processes.
Today's WebGL implementations take care to wipe new memory allocations with zeros before letting the untrusted script do anything with them, though.
That's the same as starting a process and mallocing some memory; you will get the garbage of the previous process... Because you have no idea who owned that memory and what it was used for, it would be hard to build something on top of that. That being said, it's not a bad idea to zero things when you start to use them.
Marginally related: is there a native (Linux/Unix) equivalent of Shadertoy? When debugging a shader it's quite handy, but the web interface is just too laggy for me.
I'm currently using my own simple test rig, but I'd like something more refined.
Books might not be the best resource for Shadertoy-type stuff. Almost all of Shadertoy's 3D shaders use a technique called ray-marching with signed distance functions (a bare-bones sketch of the idea is at the end of this comment). If you Google it, you should find good resources. Also, someone on Shadertoy made a very good tutorial using Shadertoy itself, which I think is kind of amazing... https://www.shadertoy.com/view/4dSfRc
There are other tutorial shaders on Shadertoy and I always try to make mine readable and heavily commented...
https://www.shadertoy.com/user/otaviogood
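And here is that bare-bones ray-marching sketch (a toy written for illustration, not taken from any particular shader): step a ray forward by the signed distance to the nearest surface until you hit something or give up.

    // Toy example of ray marching (sphere tracing) with a signed distance function.
    // Not from any particular Shadertoy; it just shows the core loop.
    float sdScene(vec3 p)
    {
        return length(p - vec3(0.0, 0.0, 3.0)) - 1.0;   // unit sphere at z = 3
    }

    void mainImage(out vec4 fragColor, in vec2 fragCoord)
    {
        vec2 uv = (2.0 * fragCoord - iResolution.xy) / iResolution.y; // centered coords
        vec3 ro = vec3(0.0);                       // ray origin (camera)
        vec3 rd = normalize(vec3(uv, 1.5));        // ray direction through this pixel

        float t = 0.0;
        for (int i = 0; i < 100; i++) {
            float d = sdScene(ro + rd * t);        // distance to nearest surface
            if (d < 0.001) break;                  // close enough: call it a hit
            t += d;                                // safe to advance by that distance
            if (t > 20.0) break;                   // ray escaped the scene
        }

        vec3 col = (t < 20.0) ? vec3(1.0 - t * 0.2) : vec3(0.0); // crude depth shading
        fragColor = vec4(col, 1.0);
    }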
Someone else already mentioned the book of shaders (which is the single best introductory resource IMO) - aside from that, I've found that reverse-engineering existing shaders and reapplying the learnings to my own shaders has been very helpful. With time, you start developing an eye for which shaders are just one or two steps beyond your understanding. You'll also start noticing that certain users (such as @Shane) on the site are really good about commenting their code, while others treat it like a game of code golf.
When studying existing shaders, it's best to focus on the well-documented shaders that are a few steps beyond your current capabilities, rather than the ones that consist of hundreds of lines of single letter variables and incomprehensible math. As far as open source repositories go, it doesn't get much better than shadertoy (in terms of pedagogy) since you can easily tweak values and comment out pieces of code right there in the browser if you're trying to figure out what a certain line of code does. The in-browser editor makes reverse-engineering very efficient and reduces friction as much as possible, which is really helpful for this kind of dense mathematical code.
Once you get used to the whole process of reverse-engineering shaders, you'll quickly come to see shadertoy as the perfect place to learn how different visual effects and graphics techniques are achieved. I don't know of anywhere else on the web (except perhaps codepen) where you can so immediately go from viewing a visual effect in a gallery to messing around with the code in nearly the exact environment that it was created in.
One of the co-founders of Shadertoy is a legend in the field and has the most relevant resources for procedural generation and distance-field ray marching: http://iquilezles.org/www/index.htm
I'm honestly not sure if that's an attempt at sarcasm or whether you're really serious.
Isn't it just rendering two triangles with a fragment (pixel) shader executing purely on the GPU? I mean, why would that be any slower in a web browser than anywhere else? (Unless the shader compiler is pretty bad?)
Are pure WebGL fragment shader demos really significantly slower in a web browser? If so, why?
Viewed through the lens of someone who lived through PCs requiring programs written in x86 assembly, running as the only thing on the bare metal, to achieve anything close to 60 FPS for full-screen faux-3D at 320x200 in 256 colors... well, yes, it is absolutely incredible that a damn web browser can do this stuff in a tab - and it's entirely due to how fast processors (including GPUs) have become.
I'm aware the shader is being thrown at the GPU and that's where most of the complexity lies, but the GPU is part of the incredible progress consumer hardware has made.
The browser being in the loop just furthers the impressiveness; there's a bunch of other software running on the computer while this is going on in a damn tab.
This[0] is a great book for people who do not know the history of gaming and the struggles devs had in early PC game dev, when you had no GPUs and very crappy graphics cards.
Yes, coming from the days of 6502 8-bit assembly with 64k of memory total for the operating system, the video memory, and whatever was left over for a program to run on the Apple ][+, this IS amazing.
Stable 60 FPS on high setting here with Firefox 58, Linux, open source drivers (mesa-17.3.5) and an AMD Radeon RX 460 (passive).
So it's neither a Linux nor an open source driver problem. Maybe your hardware is in fact not up to the job or you need to install some updates ;-)
Btw, I also tested Chrome, and as long as I run it with default settings I get around 3-5 fps, but when I activate #ignore-gpu-blacklist in chrome://flags it reaches 60 fps there too.
It's also ray traced/marched, right? Reducing the resolution where possible might help with performance. I'm not sure if there's an easy way to do that via ShaderToy though.
So am I - Chromium on Arch. i7 Kaby Lake. Out of the box wrt graphics, I have made no changes at all. #ignore-gpu-blacklist is still disabled. I suppose I could get the Nvidia GPU fired up but it isn't needed for this.
That’s so incredibly impressive, and the frame rate is pretty stable too! I’d love to see a similarly detailed dive into a black hole with these tools.
It reminds me of two other recent demoscene productions, also shader-based:
Waillee, in 4kB: http://www.pouet.net/prod.php?which=71873
Fermi Paradox, in 64kB: http://www.pouet.net/prod.php?which=67113