38leinad's comments

Sorry for maybe asking a stupid question, but if I understand correctly, the nonce is just as secret as the private key. But the nonce is not needed for the signature check? So it can be chosen at random and then simply forgotten after signature generation? After a first quick Google search, it looks like the term "nonce" is also used in Ethereum, but there it is deterministic and just counted up, which does not seem to fit with the article: https://developpaper.com/the-nonce-of-ethereum/ Am I missing something?


The nonce used for ECDSA signatures is not the same as the nonce used for Ethereum transactions. "Nonce" is a general term used across many systems to mean a value which should only be used once, and it might apply to any number of layers or protocols in a given system.

There are various techniques people use for them: a careful counter, a precise timestamp, a hash of the rest of the data, or a random number. Often you can choose as the user... as long as you don't use the same value twice; alternatively, your choice of nonce might be verified by your counter-party (as with Ethereum's account nonces).
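To make those strategies concrete, here is a rough sketch in Python (the function names and the choice of SHA-256 are just illustration, not any particular protocol):

    import os, time, hashlib

    counter = 0
    def counter_nonce():
        # careful counter, as with Ethereum account nonces: the
        # counter-party verifies it increases by one per use
        global counter
        counter += 1
        return counter

    def timestamp_nonce():
        # precise timestamp: unique as long as no two requests
        # ever share a nanosecond
        return time.time_ns()

    def derived_nonce(secret: bytes, message: bytes):
        # hash of the rest of the data plus a secret; this is the
        # idea behind deterministic ECDSA nonces (RFC 6979)
        return hashlib.sha256(secret + message).digest()

    def random_nonce():
        # random value from a CSPRNG; for ECDSA this must never
        # repeat for the same key (see below)
        return int.from_bytes(os.urandom(32), "big")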

The consequences of using the same value twice will also differ: your request might be rejected/ignored, you might be penalized or cause an error, it might expose your identity to a system where you were otherwise anonymous, or it might allow someone to calculate your private key. The high-level idea is what matters, not the specifics.
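For the ECDSA case specifically, the key-recovery attack is simple enough to show in a few lines. Here is a minimal sketch over secp256k1 (toy code for illustration only: no hashing, not constant-time, never use it for real signing):

    p  = 2**256 - 2**32 - 977   # secp256k1 field prime
    n  = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
    G  = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
          0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

    def add(P, Q):  # affine point addition on y^2 = x^3 + 7
        if P is None: return Q
        if Q is None: return P
        if P[0] == Q[0] and (P[1] + Q[1]) % p == 0: return None
        if P == Q: lam = 3 * P[0] * P[0] * pow(2 * P[1], -1, p) % p
        else:      lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
        x = (lam * lam - P[0] - Q[0]) % p
        return (x, (lam * (P[0] - x) - P[1]) % p)

    def mul(k, P):  # double-and-add scalar multiplication
        R = None
        while k:
            if k & 1: R = add(R, P)
            P, k = add(P, P), k >> 1
        return R

    def sign(z, priv, k):  # textbook ECDSA, z = message hash
        r = mul(k, G)[0] % n
        return r, pow(k, -1, n) * (z + r * priv) % n

    priv   = 123456789          # the victim's secret key
    z1, z2 = 1111, 2222         # hashes of two different messages
    k      = 424242             # the SAME nonce used twice -- the bug

    r1, s1 = sign(z1, priv, k)
    r2, s2 = sign(z2, priv, k)
    assert r1 == r2             # same nonce => same r, publicly visible

    # From the two public signatures alone, solve for k, then the key:
    k_rec    = (z1 - z2) * pow(s1 - s2, -1, n) % n
    priv_rec = (s1 * k_rec - z1) * pow(r1, -1, n) % n
    assert (k_rec, priv_rec) == (k, priv)

This is essentially how the PS3 firmware signing key was extracted: Sony reused the same k across signatures.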


If you are interested in rendering, don't miss John Carmack's talk at 5pm Dallas time today: "Principles of Lighting and Rendering with John Carmack" http://www.quakecon.org/event-schedule/ It should also be streamed on the QuakeCon Twitch stream. According to the keynote video, it is a talk he has given internally at id before and was urged to redo at QuakeCon.

"John will present a lecture-style presentation on the physics of light transport and rendering. He will discuss how light behaves in the real world, and the approximations and compromises that are involved in simulating the behavior with computers. Note: not for the technically faint at heart."


Have you tried Paint.NET? I really like it for doing pixel art.


Has anyone seen the "iOS in the Car" icon? http://cdn.iflowreader.com/wp-content/uploads/2013/06/apples... To me, this is the ugliest icon I have ever seen. Ever!


Hahaha pretty much agreed, it was right next to the Multitasking and Siri icons on their promotion page. That's what happens when you combine the "straight from the tube" green with pure black and tack on a purely resized Siri icon.


It looks like a dev did this one, not a designer :))).


Can anybody describe what makes this better than Spotify?


Maybe a new iPod touch? It does not have a speaker, only a camera at the usual location.


Ah, I keep forgetting those still exist.


The devices with colored edges also say "iPod" instead of the carrier name (which is now just "*" in all new Apple material, probably to avoid showing carrier favoritism).


Does anybody know where other companies, like Universal's animation department (Ice Age, Despicable Me, ...), stand technology-wise? From the visuals, I always assumed Pixar was setting the standard, but now, knowing that they are only just starting to use unified raytracing, the gap might not be that big...


Pixar's generally ahead in terms of story, animation (they hand-animate everything) and look (shading and lighting), but in terms of pure tech, other companies like Weta, ILM and SPI are generally ahead of them, as they work on multiple shows at once and several per year.

SPI have been using full GI path tracing with the Arnold renderer for the last 5 years, and Blue Sky (the studio which did Ice Age) has its own GI raytracer as well.

Also, Pixar don't actually have that big a renderfarm - they don't need it. Other places like Weta and ILM have renderfarms that are much bigger, but are used for multiple productions at once, and for doing things like compositing and fluid/cloth/physics sims.


Pixar have a lot of technology, but they come from an animation tradition rather than from photoreal CGI. RenderMan has always been a system for "painting" the scene you want - it's an artist's toolbox.

It's only recently that advances in processing power have made physically-based ray tracing practical for film production - particularly with the take-up of the Arnold renderer by various other studios. Suddenly lighting becomes a matter of placing lights and letting the computer do the work, rather than carefully setting up the correct impression of light the way a painter might. That requires quite a change of approach from the artist, and you can imagine why there'd be a bit of a cultural problem introducing it.
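Concretely, the "work" the computer now does is numerically estimating the rendering equation (standard form; L_o is outgoing and L_i incoming radiance at surface point x):

    L_o(x, \omega_o) = L_e(x, \omega_o)
        + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (n \cdot \omega_i) \, d\omega_i

A path tracer estimates that integral with random samples instead of painter-style approximations, which is exactly why it costs so much more compute but so much less artist setup.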


Why does it matter that much what technology is in the backend? You lead the industry with results, not with the means to get to those results.

If I can make my webapp better (however you define "better") than my competitors' using PHP and MySQL, while they're making theirs using Ruby on Rails, MongoDB, etc., does the tech stack in the background matter, aside from making a nice article?


Yes, it makes a lot of difference.

There's the obvious render time, but actually render time isn't that important - studios are happy to wait up to 30 hours for a 4k frame on the farm if that's what it takes for a shot. But they don't want artists waiting around, so they want very quick iterations and previews of what the artists are doing, as it's the artists who cost money.

This is why global illumination has taken off over the last 5/6 years (thanks largely to SPI and Blue Sky showing it could be done): although the render times are slower, lighting the scene (by physically-based shading) is much quicker and you don't need as many hacks as you did with PRMan (light maps, shadow maps, reflection maps, point caches, etc). You can literally model scenes with lights as they are in the real world.
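As a toy illustration of that shift, here is a Monte Carlo estimate of the light reflected off a diffuse surface in Python (the scene and light are hypothetical stand-ins; a real path tracer recurses into the scene here instead of calling a fixed function):

    import math, random

    def incoming_radiance(d):
        # stand-in for tracing a ray into the scene; hypothetical:
        # a bright overhead light plus dim ambient sky
        return 5.0 if d[2] > 0.9 else 0.1

    def reflected_light(albedo=0.7, samples=256):
        # estimate L_o = (albedo/pi) * integral of L_i * cos(theta)
        # over the hemisphere above a surface whose normal is +Z --
        # no light maps, shadow maps or reflection maps, just
        # averaging random samples of the actual incoming light
        total = 0.0
        for _ in range(samples):
            z   = random.random()              # uniform hemisphere sample
            phi = 2 * math.pi * random.random()
            r   = math.sqrt(1.0 - z * z)
            d   = (r * math.cos(phi), r * math.sin(phi), z)
            pdf = 1.0 / (2.0 * math.pi)        # uniform hemisphere pdf
            total += incoming_radiance(d) * (albedo / math.pi) * z / pdf
        return total / samples

    print(reflected_light())  # noisy but unbiased; more samples = less noise

The render-time cost is in the sampling; the artist-time saving is that incoming_radiance is just "what the scene actually looks like from here", with no per-shot cache baking.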

On top of this, there's how easy it is to do very complex shots and change just parts of them - tools like Katana allow hugely complex scenes to be managed and rendered very efficiently, with very little work from artists. Studios that don't have similar tools often duplicate work and waste a lot of time doing things that should be easy.

For example, Weta on Iron Man 3 wasted a lot of time doing all the different suits, as they didn't have a decent asset-based pipeline that would have allowed them to re-use a lot of the shaders and assets for each suit.


> Does the tech stack in the background matter

I think it does, because the tech stack in the background allows for things that might not be possible with other tech stacks.

You can duplicate somebody else's webapp in your backend of choice, but you can't have true GI if your rendering engine doesn't support it, and while you can fake some of the effects, they ultimately won't look as good as the real thing (unless you're aiming for a different 'good').


When the output of the tech stack is the product, the means that achieve it and the level of accuracy reached matter a great deal.


Pretty bad analogy. In the realm of making 'pretty pictures that look eerily realistic', the 'realistic' part is pretty significant... and usually it is Pixar itself that starts the promotional pieces about what new technology they have whenever they have a big new movie coming out...


Better tech could theoretically allow you to develop more movies at once by reducing the amount of specialization required, or simply develop movies faster or on a cheaper budget for the same results. If you are getting results that match your competitor but have to spend 10x as much time rendering it because you are doing it "the old way" then you are at a disadvantage even if your movies both do well.


Many of us are here because we like knowing how things work, not just their end result.


Sorry, but the comparison is incredibly flawed. Your PHP or Rails code will still generate HTML in the end.


Pixar's focus has always been on renderer efficiency for cinematic storytelling, not really on accuracy or 'simulation'.

So while I'd say they are still way ahead in efficiency, they are at par or behind with respect to light simulation.


At least historically, Pixar wasn't just a studio but also sold their renderer to others as software.

http://en.wikipedia.org/wiki/RenderMan


At least super-historically, Pixar wasn't just a software company but a hardware company as well:

http://en.wikipedia.org/wiki/Pixar_Image_Computer


Ice Age / Rio / Epic are Blue Sky, not Universal.


I think it's James Cameron that sets the standards.


I am not anti-Java, not at all. But to be honest, creating this simple web service really shows how complicated some things are in Java.


Yes, when you go this route, it's very complicated. But good, user-friendly frameworks have been built on top of these foundations, and those should be used.


I also found that it's definitely the best guide. A good companion, to look at some things from another angle, is this set of tutorials: http://duriansoftware.com/joe/An-intro-to-modern-OpenGL.-Tab...


Sometimes I really get annoyed over here because of stupid comments like yours. Keep it to yourself!

