Sorry if this is a stupid question, but if I understand correctly, the nonce is just as secret as the private key. But the nonce is not needed for the signature check? So it can be chosen randomly and then simply forgotten after signature generation?
After a quick Google search, it looks like the term nonce is also used here, but there it is deterministic and just counted up, which does not seem to fit with the article: https://developpaper.com/the-nonce-of-ethereum/
Am I missing something?
The nonce used for ECDSA signatures is not the same as Ethereum's account nonce. "Nonce" is a general term used in any number of systems to mean a value which should only be used once, and it might apply to any number of layers or protocols in a given system.
There are various techniques people use for them: a careful counter, a precise timestamp, a hash of the rest of the data, or a random number. Often you, as the user, can choose... as long as you don't use the same value twice; alternatively, your choice of nonce might be verified by your counter-party (as with Ethereum's account nonces).
The consequences of using the same value twice will also differ: your request might be rejected/ignored, you might be penalized or cause an error, it might expose your identity to a system where you were otherwise anonymous, or it might allow someone to calculate your private key. The high-level idea is what matters, not the specifics.
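To make the last consequence concrete: in ECDSA, two signatures produced with the same nonce let anyone recover the private key with a bit of modular arithmetic. Here's a minimal sketch of just that algebra in Python (toy values, no actual elliptic-curve points; only the group order is real):

    # ECDSA signing equation: s = k^-1 * (z + r*d) mod n, where z is the message hash,
    # d the private key, k the nonce, and r is derived from k. If two signatures share
    # the same k (and therefore the same r), anyone can solve for k and then for d.
    # Toy values only -- r is made up here; no curve math is performed.

    n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order
    d = 0x1234567890ABCDEF   # "private key" (toy value)
    k = 0xDEADBEEF           # the nonce, wrongly reused for two signatures
    r = 0xCAFEBABE           # in real ECDSA this is the x-coordinate of k*G

    def sign(z):
        return (pow(k, -1, n) * (z + r * d)) % n   # s = k^-1 * (z + r*d) mod n

    z1, z2 = 111111, 222222       # hashes of two different messages
    s1, s2 = sign(z1), sign(z2)   # both signed with the same (k, r)

    # From the public values (r, s1, z1) and (r, s2, z2) alone:
    k_rec = ((z1 - z2) * pow((s1 - s2) % n, -1, n)) % n   # since s1 - s2 = k^-1 * (z1 - z2)
    d_rec = ((s1 * k_rec - z1) * pow(r, -1, n)) % n       # since s1 * k = z1 + r*d
    assert (k_rec, d_rec) == (k, d)
    print("recovered private key:", hex(d_rec))

(pow(x, -1, n) needs Python 3.8+.) That's why a random ECDSA nonce must be generated fresh, kept secret during signing, and can then be thrown away - verification never needs it.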
If you are interested in rendering, don't miss out on John's talk at 5pm Dallas time today: "Principles of Lighting and Rendering with John Carmack"
http://www.quakecon.org/event-schedule/
It should also be streamed on the QuakeCon Twitch stream.
According to the keynote video, it is supposed to be a talk he had already given internally at id and was urged to redo at QuakeCon.
"John will present a lecture-style presentation on the physics of light transport and rendering. He will discuss how light behaves in the real world, and the approximations and compromises that are involved in simulating the behavior with computers. Note: not for the technically faint at heart."
Hahaha, pretty much agreed - it was right next to the Multitasking and Siri icons on their promotion page. That's what happens when you combine the "straight from the tube" green with pure black and tack on a simply resized Siri icon.
The devices with colored edges also say "iPod" instead of the carrier name (which is now * in all new Apple stuff, probably to avoid showing carrier favoritism).
Does anybody know how other companies like Universal's animation department (Ice Age, Despicable Me, ...) stand technology-wise? From the visuals, I always assumed Pixar was setting the standard, but now that I know they are only just starting to use unified raytracing, the gap might not be that big...
Pixar's generally ahead in terms of story, animation (they hand-animate everything) and look (shading and lighting), but in terms of pure tech, other companies like Weta, ILM and SPI are ahead of them, as they work on multiple shows at once and several per year.
SPI have been using full GI pathtracing with the Arnold renderer for the last 5 years, and Blue Sky (the studio which did Ice Age) has their own GI raytracer as well.
Also, Pixar don't actually have that big a renderfarm - they don't need it. Other places like Weta and ILM have renderfarms that are much bigger, but are used for multiple productions at once, and for doing things like compositing and fluid/cloth/physics sims.
Pixar have a lot of technology, but they come from an animation tradition rather than from studios concerned with photoreal CGI. RenderMan has always been a system for "painting" the scene you want - it's an artist's toolbox.
It's only recently that advances in processing power have made physically-based ray tracing practical for film production - particularly with the take-up of the Arnold renderer by various other studios. Suddenly lighting becomes a matter of placing lights and letting the computer do the work, rather than needing to carefully set up the correct impression of light in the way a painter might. So it requires quite a change of approach from the artist, and you can imagine why there'd be a bit of a cultural problem introducing this.
Why does it matter that much what technology is in the backend? You lead the industry with results, not with the means to get to those results.
If I can make my webapp better (however you define "better") than my competitors' using PHP and MySQL, while they're making theirs using Ruby on Rails, MongoDB, etc., does the tech stack in the background matter, aside from making a nice article?
There's the obvious render time, but actually render time isn't that important - studios are happy to wait up to 30 hours for a 4k frame on the farm if that's what it takes for a shot. But they don't want artists waiting around, so they want very quick iterations and previews of what the artists are doing, as it's the artists who cost money.
This is why global illumination has taken off over the last 5-6 years (thanks largely to SPI and Blue Sky showing it could be done): although the render times are slower, lighting the scene (with physically-based shading) is much quicker and you don't need as many hacks as you did with PRMan (light maps, shadow maps, reflection maps, point caches, etc).
You can literally model scenes with lights as they are in the real world.
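(To pin down what "full GI" means here: the renderer is actually estimating the rendering equation at each shading point, usually with Monte Carlo path tracing, instead of approximating its terms with the baked maps above. For reference, in its standard LaTeX form:

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

Once the renderer is solving that integral for you, placing lights with real-world intensities and letting it run really is enough.)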
On top of this, there's how easy it is to do very complex shots and change just bits of it - tools like Katana allow hugely complex scenes to be managed and rendered very efficiently, with very little work from artists.
Studios who don't have similar tools often duplicate work and waste a lot of time doing things that should be easy.
For example, Weta on Iron Man 3 wasted a lot of time doing all the different suits, as they didn't have a decent asset-based pipeline that would have allowed them to re-use a lot of shaders and assets across the suits.
I think it does, because the tech stack in the background allows for things that might not be possible with other tech stacks.
You can duplicate somebody else's webapp in your backend of choice, but you can't have true GI if your rendering engine doesn't support it, and while you can fake some of the effects, they ultimately won't look as good as the real thing (unless you're aiming for a different 'good').
Pretty bad analogy. In the realm of making "pretty pictures that look eerily realistic", the "realistic" part is pretty significant... and it's usually Pixar itself that puts out the promotional pieces about whatever new technology they have whenever a big new movie is coming out...
Better tech could theoretically allow you to develop more movies at once by reducing the amount of specialization required, or simply develop movies faster or on a cheaper budget for the same results. If you are getting results that match your competitor but have to spend 10x as much time rendering it because you are doing it "the old way" then you are at a disadvantage even if your movies both do well.
Yes, when you go this route, it's very complicated. But good, user-friendly frameworks have been built on top of these foundations, and those should be used.