
The Git book does a perfectly good job of this: https://git-scm.com/book/en/v2

I always recommend reading chapters 1-3 and skimming chapter 7 to people who are just getting started with Git.


The author of the post is aware of this. They explicitly say that is not the point of the question.


“Nobody [does thing] anymore” is common shorthand for “very few people [do thing] now, compared to in the past”. The author is correct about this with respect to cursive, at least in the US. I think you know this as well, as you acknowledge your outlier status.


My favorite response to “How are you?” comes from a Russian former coworker: “Average. Worse than yesterday, better than tomorrow”.


I heard this phrase described as "Russian Optimism".


Except it’s not like a flea market in the parking lot. It’s more like Walmart had a flea market, but mixed all of the items in with their own, inside of the store, and made it difficult for a casual buyer to tell the difference.


I liked the article, but, as a game developer who does not specialize in graphics, I really liked one of the comments:

Joe Kilner - One extra issue with games is that you are outputting an image sampled from a single point in time, whereas a frame of film / TV footage is typically an integration of a set of images over some non-infinitesimal time.

This is something that, once stated, is blatantly obvious to me, but it's something I simply never thought deeply about. What it's saying is that when you render a frame in a game, say the frame at t=1.0 in a game running at 60 FPS, what you're doing is capturing and displaying the visual state of the world at a discrete point in time (i.e. t=1.0). Doing the analogous operation with a physical video camera means you are capturing and compositing the "set of images" between t=1.0 and t=1.016667, because the physical camera doesn't capture a discrete point in time, but rather opens its shutter for 1/60th of a second (0.016667 seconds) and captures for that entire interval. This is why physical cameras have motion blur, but virtual cameras do not (without additional processing, anyway).

This is obvious to anyone with knowledge of 3D graphics or real-world cameras, but it was a cool little revelation for me. In fact, it's sparked my interest enough to start getting more familiar with the subject. I love it when that happens!
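
To make that concrete, here's a rough sketch of the difference (NumPy, with a toy stand-in for a real renderer; the function names and numbers are just for illustration): a game-style frame samples one instant, while a camera-style frame approximates the shutter integral by averaging snapshots spread across the frame interval.

    import numpy as np

    def render_scene(t, size=64):
        # Toy stand-in for a renderer: a bright 8x8 square moving left to right.
        img = np.zeros((size, size))
        x = int((t * 300) % (size - 8))       # where the square is at instant t
        img[28:36, x:x + 8] = 1.0
        return img

    def game_frame(t):
        # Game-style frame: the world sampled at one discrete instant.
        return render_scene(t)

    def camera_frame(t, shutter=1/60, samples=16):
        # Camera-style frame: average snapshots spread over the shutter
        # interval [t, t + shutter] to approximate the exposure integral.
        times = np.linspace(t, t + shutter, samples)
        return np.mean([render_scene(s) for s in times], axis=0)

    sharp = game_frame(1.0)        # crisp square, no blur
    blurred = camera_frame(1.0)    # the square smeared along its motion path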


Game renderers do sample time too, for per-object motion blur[1] and sometimes full-scene blur or AA. To push the idea further, research has been done around 'frameless' renderers, where you never render a complete frame but sample ~randomly at successive times and accumulate into the frame(sic)buffer. At low resolution it feels weird but also very natural, a kind of computed persistence of vision: https://www.youtube.com/watch?v=ycSpSSt-yVs . I love how, even at low res, you get valuable perception.

[1] Some renderers even take advantage of that to increase performance, since you get a more human-oriented feel by rendering less precisely.
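
To sketch the 'frameless' idea (a toy NumPy version; the names and the 10% figure are arbitrary): each refresh, only a random subset of pixels is re-sampled at the current time and written into a persistent buffer, so the displayed image mixes samples from many different moments.

    import numpy as np

    H, W = 64, 64
    framebuffer = np.zeros((H, W))        # persists across display refreshes

    def shade(t):
        # Toy per-pixel "shading": a bright square moving left to right.
        img = np.zeros((H, W))
        x = int((t * 300) % (W - 8))
        img[28:36, x:x + 8] = 1.0
        return img

    def frameless_update(t, fraction=0.1):
        # Re-sample only a random ~10% of pixels at the current time t;
        # untouched pixels keep values from earlier times, giving a kind of
        # computed persistence of vision instead of discrete whole frames.
        mask = np.random.rand(H, W) < fraction
        framebuffer[mask] = shade(t)[mask]
        return framebuffer

    for i in range(120):                  # simulate two seconds of refreshes
        frameless_update(i / 60)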


Speaking of "frameless" rendering, I noticed during Carmack's Oculus keynote (https://www.youtube.com/watch?v=gn8m5d74fk8#t=764), he talks about trying to persuade Samsung to integrate programmable interlacing into their displays in order to give dynamic per-frame control over which lines are being scanned.

This would give you the same "adaptive per-pixel updating" seen in your link, though primarily to tackle the problems with HMDs (low-persistence at high frame-rates).


This AnandTech overview of nVidia's G-Sync is worth reading (meshes a bit with what Carmack mentioned about CRT/LCD refresh rates in that talk): http://www.anandtech.com/show/7582/nvidia-gsync-review

It's a proprietary nVidia technology that essentially does reverse V-Sync. Instead of having the video card render a frame and wait for the monitor to be ready to draw it like normal V-Sync, the monitor waits for the video card to hand it a finished frame before drawing, keeping the old frame on-screen as long as needed. The article goes into a little more detail; they take advantage of the VBLANK interval (legacy from the CRT days) to get the display to act like this.


Weird, I missed this part. Vaguely reminds me of E. Sutherland's fully lazy, streamed computer graphics generation, since they had no framebuffer at the time.


Fantastic technique. Can’t believe it’s been almost 10 years since this video. Do you know if there is any follow-up research being done?


I did search for related research a while back with no results. Tried to leverage reddit too; someone asking the very same question was told this: http://www.reddit.com/r/computergraphics/comments/12gs2a/ada...


I always use this fact as a kind of analogue to explain position-momentum uncertainty in physics. From a blurry photo of an object, you can easily measure the speed, but the position is uncertain due to the blur. From a very crisp photo, you can tell exactly where it is, but you can't tell how fast it is moving because it is a still photo.

It's a good way to start building an intuition about state dependencies.
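
As a back-of-the-envelope illustration of the "measure the speed from the blur" half (the blur length, scale, and exposure here are made up for the example):

    # Speed estimate from motion blur: streak length divided by exposure time.
    blur_length_px = 24        # length of the streak in the photo (assumed)
    metres_per_px = 0.01       # spatial scale of the scene (assumed)
    exposure_s = 1 / 60        # shutter speed the photo was taken with

    speed = blur_length_px * metres_per_px / exposure_s
    print(speed)               # 14.4 m/s -- but the position is smeared over 0.24 m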


Welcome to Heisenberg's uncertainty principle[0] in the macroscopic world!

[0] http://en.m.wikipedia.org/wiki/Uncertainty_principle


Would it not be the observer effect instead?


True that pointing a camera at someone can change their behavior.


I hear it can even add ten pounds...


Another way of saying this is that a drum beat has no definite pitch because it's too short. It's exactly the same property of Fourier transforms behind the uncertainty principle.
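
A quick way to see this numerically (NumPy; the tone frequency and the 5 ms cutoff are arbitrary): the same oscillation, truncated to a short burst, spreads its energy over a much wider band.

    import numpy as np

    fs = 44100                                  # sample rate, Hz
    t = np.arange(fs) / fs                      # one second of samples
    tone = np.sin(2 * np.pi * 440 * t)          # sustained 440 Hz tone

    hit = tone.copy()
    hit[int(0.005 * fs):] = 0                   # same oscillation, cut to 5 ms

    def spectral_width(x):
        # Width of the band holding the strong components, as a rough proxy.
        mag = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        strong = freqs[mag > 0.1 * mag.max()]
        return strong.max() - strong.min()

    print(spectral_width(tone))   # ~0 Hz: a single sharp peak, a clear pitch
    print(spectral_width(hit))    # hundreds of Hz wide: no definite pitch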


There's also a broader picture -- that this effect arises because we use discrete, independent words to describe inter-dependent phenomena. The concepts of momentum and position are not good 'factorizations' of reality, so trying to talk about it in those terms leads to structural problems when you increase precision.

It's the same kind of problem you get when you try to talk about objective/subjective morality, or whether science can prove things.

Another example I give:

The Sun revolves around the Earth -- True from Earth's perspective. The Earth revolves around the Sun -- True from the Sun's perspective. They both revolve around their center of mass -- True in the Newtonian model. They both follow straight geodesics in spacetime -- True in the Einsteinian model.

Everything is true in some approximation. The idea is to increase your precision so that you can find truths that incorporate more information and thus provide greater insight and generality. If your language doesn't have enough 'bandwidth' to carry the information of -- that is, mimic the structure of -- your experiences, you have to develop more structured abstractions, or you'll lose clarity and expressive power.

So, for instance, you wouldn't have to prove that unicorns don't exist: you just have to show that providing more specific information doesn't result in a lack of clarity in a general model. A theory of three-legged unicorns offers no advantages over a model of n-legged unicorns because unicorns don't exist.

But now I've gone off the deep end...


I don't think position or momentum are bad descriptions of a system. Also, it's not "factorizing" the system into two parts, since they are related by a transformation (i.e. they are interchangeable, which doesn't happen in a factorization). It's important to have a good knowledge of the subject before you try to interpret its meaning, and especially before you try to extrapolate from it.

I'm saying this also because the initial analogy wasn't so strong to begin with. A bunch of incorrect analogies can build a wrong picture of a theory that is hard to get out of people's minds, which is why it's good to reason from principles and not from analogies (analogies are useful for other purposes -- building bridges between the understanding of two different fields, I believe -- and only when they're made precise and clear in their shortcomings).


Position and momentum are bad descriptions of a quantum system precisely because they are not independent. This is not really a statement about physics, but about language. It just happens to apply to physics because the purpose of physics is to develop useful language. And we make a better description in this case by saying things like 'position-momentum', using an actual equation, or by using a different concept like energy or wave-vector. All of which are better than position and momentum alone.

The extrapolation here really has little to do with quantum physics. I am also using QM as an analogy for linguistics. The theme is about how language captures information and how information-processing systems select output based upon linguistic structure. It's a very young and poorly understood subject, so I don't think it's very likely to get good theoretical work in a forum like this. These are primitive times, after all.


If you know the direction of motion of a blurry object, isn't the location of the object at one of the leading edges? I thought the problem was more that you have no idea of the features of the object?


Indeed, motion blurring preserves almost all* information of a picture, given some assumptions (e.g. brightness stays constant, path is predictable).

*A linear blur of a 2D picture acts like a 1D sinc filter: information is completely lost only at spatial frequencies that are multiples of 1/d in the direction of motion, where d is the linear displacement, and is otherwise attenuated.
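
A quick numerical check of that footnote (NumPy; the displacement d = 16 px and the FFT size are arbitrary): the frequency response of a length-d box blur has nulls exactly at multiples of 1/d and only attenuates everything else.

    import numpy as np

    d, n = 16, 1024                      # blur displacement (px) and FFT size
    kernel = np.zeros(n)
    kernel[:d] = 1.0 / d                 # linear motion blur = length-d box filter

    H = np.abs(np.fft.rfft(kernel))      # magnitude response
    freqs = np.fft.rfftfreq(n)           # spatial frequency, cycles per pixel

    for k in (1, 2, 3):
        idx = np.argmin(np.abs(freqs - k / d))
        print(freqs[idx], H[idx])        # ~0 at 1/16, 2/16, 3/16 cycles/px

    mid = np.argmin(np.abs(freqs - 0.5 / d))
    print(H[mid])                        # ~0.64 halfway to the first null: attenuated, not erased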


Most movies use a 180-degree shutter angle, which means the shutter is open half the time. http://www.red.com/learn/red-101/shutter-angle-tutorial So you get motion blur for half the frame time, and no light on the film for the other half of the time. The Hobbit movies (at least the first one) used a 270-degree shutter angle, so even at half the frame time, they got 3/4 as much motion blur in each frame as a normal movie. That might contribute to the odd feeling viewers had. http://www.fxguide.com/featured/the-hobbit-weta/
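
The arithmetic behind those figures, in case anyone wants to check it (the exposure-per-frame formula is the standard one; the cases are just the ones discussed here):

    def exposure_time(shutter_angle_deg, fps):
        # Exposure per frame = (fraction of the rotation the shutter is open) / fps
        return (shutter_angle_deg / 360) / fps

    print(exposure_time(180, 24))   # 1/48 s: conventional cinema blur
    print(exposure_time(270, 48))   # 1/64 s: The Hobbit, 3/4 the blur of 1/48
    print(exposure_time(180, 48))   # 1/96 s: what a plain 180-degree shutter at 48fps would give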


I believe this was done by PJ for a couple of reasons. 1) Practical: increasing the shutter speed means either increasing the amount of light for scenes, using faster film stock (or a higher ISO on your RED camera), or some combination of both. 2) This shutter speed was a compromise between the exposure times of a 180-degree shutter shooting at 24 vs. 48 fps, and would still retain some of the blur so that 24fps screenings would appear relatively 'normal'.


The RED camera is quite noisy, especially with the 5K sensor.


Indeed. I had to laugh though, because at first I read it as a sound person would and wondered why you had made that comment here. The camera uses internal fans to cool down the sensor between takes, and it is really loud, like a hair dryer. So when you start shooting, that often means stopping because of some sound that wasn't audible over the background noise of the camera cooling itself down. Not cool.


This.

The article wanders on and on, but is simply grasping at the much more learned aesthetic repulsion of motion blur.

24 and 25 fps (i.e. 1/48th- and 1/50th-second exposures with a 180-degree shutter) motion blur has defined the cinematic world for over a century.

Video? 1/60th. Why the aesthetic revulsion? While I am certain this is a complex sociological construct, there certainly is an overlap with lower budget video soap operas of the early 80's. Much like oak veneer, the aesthetic becomes imbued with greater meaning.

The Hobbit made a bit of a curious choice for their 1/48th presentation in choosing a 270° shutter. An electronic shutter can operate at 360°, which would have delivered the historical 1/48th shutter motion blur.

Instead, the shutter ended up being 1/64th, triggering those all-too-unfortunate cultural aesthetic associations with the dreaded world of low-budget video.

It should be noted that there are some significant minds that believe in HFR motion pictures, such as Pixar's Rick Sayre. However, a disproportionate number of DPs have been against it, almost exclusively due to the motion blur aesthetic it brings, and the technical challenges of delivering to the established aesthetic within the constraints of HFR shooting.


Not sure about how movies are filmed, but you don't have to shoot video frames at 1/FPS. That's just the slowest you can shoot. If you're shooting in broad daylight, each frame could be as quick as 1/8000, for example.

Shooting at the slowest shutter speed possible should make the most fluid video.


Worth noting that directors use high speed film to portray a feeling of confusion. The lack of motion blur gives that sense to the scene. E.g. the opening scene of Saving Private Ryan uses this effect.


The effects in the battle sequences of SPR are not a result of high-speed film per se, but the result of altering the effective shutter speed of the camera to reduce motion blur (switching from 180 degree shutter to 90 or 45 situationally). The scenes were still shot at 24fps. If you have a digital video camera with manual shutter control, set the framerate to 24fps and set the shutter to 1/200 and you have instant SPR.

This effect is now very common for action scenes in movies and also tons of music videos. Very easy to spot once you are aware of it.


You are actually speaking of the same thing: "high-speed film", like a "high-speed lens", doesn't actually affect the framerate, but rather how quickly it can produce an image from a given light source. It's simply more sensitive.

Now, you're correct that the actual effect is achieved by changing the shutter speed, but the loss of light is often compensated for by using a faster film, since using a larger aperture has a more significant effect on the scene in the form of DOF.


They dropped frames too, right? Isn't it a combination of high speed frames at maybe 20fps?


I don't think so. That effect is the extreme sudden movements of the camera. It makes it look stuttery.


As far as I know, the "gold standard" is shooting at half the inverse of the FPS (e.g. 1/60s exposures for 30 frames per second). This is how film cameras traditionally work, the so-called 180-degree shutter: http://luispower2013.wordpress.com/2013/03/12/the-180-degree...


I'm pretty sure that 24 FPS footage with a 1/24 second shutter speed would be completely unusable except as an extreme blur effect.


On the contrary. That's desired. It makes the motion smoother. Lots of photographers with high-end DSLRs have been asking for 1/24 when shooting at that framerate.


The corollary is that it should be possible to produce a movie-like quality in games, by over-framing and compositing a blur between frames. The result would have actual motion blur and update at say 30 fps, but without the jerkiness we normally associate with that frame rate.


Sure, if you can render at, say, 120 Hz or more and composite the 4 frames together into your single output frame, you will get a single improved frame. But even at 4 renders per frame you'll still get artifacts; I imagine you'd need at least double that to make it worthwhile. But even that only gives you ~8ms to time-step and render the entire scene, minus compositing and any other full-frame post effects. And hitting the ~16ms required for 60fps is already pretty difficult.

Now, in video games we do have methods to help simulate inter-frame motion blur. The most commonly used is to build a buffer of inter-frame motion vectors; this is often generated from game state, but can also be re-derived by analysing the current and previous frames, to some effect. Then you do a per-pixel blur based on the direction and magnitude of the motion vectors, which often works well.
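
For reference, a heavily simplified CPU-side sketch of that motion-vector blur (NumPy; real engines do this in a pixel shader, and the sample count and names here are arbitrary):

    import numpy as np

    def motion_blur(frame, motion, samples=8):
        # frame:  HxW image (grayscale for brevity)
        # motion: HxWx2 per-pixel motion vectors in pixels, e.g. from game state
        h, w = frame.shape
        ys, xs = np.mgrid[0:h, 0:w]
        out = np.zeros_like(frame, dtype=float)
        for i in range(samples):
            t = i / (samples - 1) - 0.5                 # step along the vector
            sx = np.clip(xs + motion[..., 0] * t, 0, w - 1).astype(int)
            sy = np.clip(ys + motion[..., 1] * t, 0, h - 1).astype(int)
            out += frame[sy, sx]                        # gather along the motion path
        return out / samples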


You wouldn't need to render whole frames. You'd get even better results from randomly sampling over a number of frames to contribute to nearby pixels. In effect you're trading less work for more noise, and noise is kind of what you want here (so long as it accurately reflects what is happening).

So, you could render 16 samples, but only do 1/4 of the pixels on each so you'd get better motion blur for only a small increase in work over doing 4 temporal samples. The extra work corresponds to a bit of bookkeeping and the tweening of 16 frames instead of 4 and you'd still be rendering the same number of samples overall.


Each game frame is a snapshot taken with an infinitely small shutter duration but displayed for 1/30s or 1/60s (vs. one movie frame, which has a shutter duration of, e.g., 1/48s and is displayed for 1/24s).

So over-framing game frames will not produce motion blur; it'll simply merge two still images together. You need to simulate motion blur (usually as a post-process). This of course takes more time to render, potentially lengthening the frame times.


I don't see any theoretical difference. Provided our sampling rate is sufficiently high, merging together "snapshot" images should give exactly the same effect as motion blur.

Though in practice it would be difficult to render more than a few snapshot frames between the display's refreshes, and with a low sampling rate there would be noticeable errors, particularly if you take a screenshot of a fast-moving object.


That's not entirely accurate. It's true that a camera will capture the image during a certain interval of time instead of at a definite point in time (obviously), but the length of that exposure time is not necessarily tied to the framerate.

For instance, if you have a digital camera where you can select the framerate (pretty common these days), and if the exposure time were simply the frame period, it would mean that the image at 30fps would be exposed twice as long as at 60fps, and the resulting picture would look very different.

Of course you can mitigate that by changing the aperture and other parameters, but in my experience you can in practice select the exposure time and framerate independently on digital cameras. With a very sensitive sensor and/or good lighting you can achieve very short exposure times, much shorter than the frame period. If you're filming something that moves, and unless you want blur on purpose, you probably want to reduce the exposure time as much as possible in order to get a clean image, just like in video games.


...and it will look very awkward if there is motion in the frame and you're not shooting very close to 1/(2 * framerate). There is a very small tolerance window, outside of which the picture will look mushy (if your camera lets you shoot very close to 1/framerate) or jerky (< 1/(3 * framerate)). Controlling exposure, if you want to maintain a constant aperture for depth-of-field reasons, is done using neutral density filters (including "variable ND" crossed polarizers) and adjusting the sensitivity/gain/ISO, not the shutter speed.


> So if you manage to keep tabs on the industry through your colleagues and you're excited about what you're working on at work, then the personal projects are much less important, but you need to demonstrate that.

I disagree. If passion is an important quality to you, then, as the interviewer, you need to ask questions that reveal whether or not the candidate has that quality. It's absurd to ask about personal projects as a proxy for asking "are you passionate about programming" and then to expect the candidate to guess your true intentions and answer accordingly.


I think you hit on a much better question! Just ask "are you passionate about programming?" and then ask them to defend their answer. If they code Arduino robots in their spare time or simply love solving people's problems at work, they have to convince you either way. That's so much better than trying to gauge their passion with a presumptive question.


The problem with phrasing the question that way is that it's way too open-ended (and likely to make the candidate regurgitate the same BS from their cover letter, which is not the point). An interviewer's job is to make it as easy as possible for the candidate to prove themselves. If they don't have side projects, that's fine--there are other questions that will hopefully provide similar information (i.e. "What project have you worked on that was the most satisfying for you personally?"). But the side project question has a high signal/noise ratio.


1. There's no indication that account belongs to the Bill Nguyen in question. Nguyen is one of the most common surnames in the world, and Bill is an exceedingly common first name.



The best part of the article is the response from Avalanche, the studio that developed the original game. Rather than try to shut the project down, they praised the authors and invited them over for a visit. Awesome! When companies act like that, it makes me want to support them.

