In case anyone wonders about that reverse-for loop in updateCircles():
Iterating over an array in reverse allows you to remove items in the same iteration (at the very end of the loop). If you delete an item, the items that follow shift down by one, which means their indices also change.
However, this isn't a problem if you iterate in reverse, because you already took care of those items in previous iterations. The item you'll handle next will stay where it is.
This also works nicely with unordered lists (or "bags").
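A minimal sketch of the pattern (the array name and the removal condition here are made up for illustration):

    // Iterate backwards so that splicing out an element never shifts
    // the indices of the elements we haven't visited yet.
    for (var i = circles.length - 1; i >= 0; i--) {
      if (isOffScreen(circles[i])) {    // hypothetical removal condition
        circles.splice(i, 1);           // only indices > i shift, and those are already handled
      }
    }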
I don't know if the OP reads this, but one very minor note about the core of the update loop. From the POV of the circles, it is effectively:
    for each circle:
        if circle is penetrating:
            apply collision impulse
            remove penetration
        integrate forces (gravity) to update velocity
        integrate velocity to update position

    for each circle:
        render circle
Because interpenetration is resolved before integration, the integration step can cause further interpenetration before rendering.
In this case, the effect is probably invisible (I couldn't see it). With slower moving objects, higher gravity, or less elastic collisions, this can cause objects to 'sag' or appear springy. A simple change of order helps:
    for each circle:
        integrate forces (gravity) to update velocity
        integrate velocity to update position
        if circle is penetrating:
            apply collision impulse
            remove penetration

    for each circle:
        render circle
A very minor thing, but something to remember when doing physics animations.
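For concreteness, a rough JavaScript sketch of the reordered loop (the function names here are placeholders, not the article's actual API):

    function update(circles, lines) {
      for (var i = circles.length - 1; i >= 0; i--) {
        var circle = circles[i];

        // Move first...
        applyGravity(circle);          // integrate forces to update velocity
        moveByVelocity(circle);        // integrate velocity to update position

        // ...then resolve any penetration the move just caused,
        // so nothing is drawn while still overlapping a line.
        for (var j = 0; j < lines.length; j++) {
          if (isPenetrating(circle, lines[j])) {
            applyCollisionImpulse(circle, lines[j]);
            removePenetration(circle, lines[j]);
          }
        }
      }
      circles.forEach(drawCircle);     // render only after everything is resolved
    }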
I do not understand. Since this is executed in a loop, for the circle it looks like ...-penetration-integration-penetration-integration-penetration-integration-... in either version. Why does it matter between which steps the circle is rendered?
In the original version it's possible that a penetrating object gets drawn; in the modified version, penetrating objects are always resolved before the drawing happens.
So the move might cause an overlap, which we'd draw on screen. Having objects routinely interpenetrate a little (by a frame's worth) can make them appear springy, as if they aren't made of rigid material.
Trying to keep the narrative of the explanation in the same linear order as the code makes it surprisingly difficult to follow. It would be neat to see more "traditional" literate programming techniques applied here.
All of that said, really cool, and seriously thank you for sharing!!
I wonder if the weaving in literate code is one of the things that could only have been invented to deal with Pascal. It is useful to be able to rearrange the presentation even in JavaScript, but I doubt it would be enough of a pain point for people to invent weaving in an alternate history where literate programming was born on more flexible languages.
I wrote a literate-programming compiler to deal with JavaScript programming (any language works, but JS is the one I use the most); it uses markdown as the language in which the code is embedded.
For me, prototype setup and async stuff naturally led me to want the order in which I write to differ from the order in which the program is laid out. I find it quite liberating not to care what order I write the code in, dashing off a substitution section to deal with later.
I know functions can be used to chunk out code in different orders, but I like the interplay of having it ordered in the writing in a sensible fashion while still getting the compiled view of it in the programmatic ordering.
Plus my version allows for a lot of finely tuned hacking on the language. That is, one can reduce a lot of boiler-plate setup with processing small chunks of code.
Having said all that, I would not use Knuth's original version for js. I think what was particularly attractive was a simple language like markdown for embedding code in a natural and simple way.
The weaving step seems especially useful for web programming, because of the different languages. For the narrative, you could show HTML, JavaScript, CSS, and Python backend code mixed together. The weaving puts them all into separate files for execution.
Thanks for the experience report! I'll check out your repos.
To be honest, I was mostly thinking of Haskell when I wrote my comment. (Semi-) Literate programming is quite popular in their world, and Haskell gives you great flexibility out of the box to order your code. (And even there, having a more flexible order can help, but it's not as urgent.)
qznc makes a great argument for weaving different languages.
This looks glitchier than the original for some reason, probably just sheer numbers. Balls bouncing off the bottom of lines that are moving away appear to jump.
It would be interesting to plot the probability of a ball going to the right against the number of lines, when the lines are moving clockwise or counter-clockwise.
Most modern game engines use an entity system [1], which is a form of database (global data, I guess, inasmuch as any database is).
The main loop then steps through each component (of which physics would be one) and lets it update the entity data for all entities.
This pattern took over from class/inheritance based architectures about 10 years ago.
There have been some suggestions that Functional Reactive Programming is a better approach. I've used it in some prototypes, but I couldn't scale it to engine-size, and I'm not aware of it having been used successfully for a full game engine.
So, this approach is pretty much it. You keep a database of state, then you run a system (the physics system, say) to update that database.
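A toy sketch of that shape, with made-up component and system names:

    // Entities are just ids; all state lives in per-component stores.
    var world = {
      position: {},    // entityId -> {x, y}
      velocity: {},    // entityId -> {x, y}
      sprite:   {}     // entityId -> drawing info
    };

    // Each system is a plain function over the "database".
    function physicsSystem(world, dt) {
      for (var id in world.velocity) {
        world.position[id].x += world.velocity[id].x * dt;
        world.position[id].y += world.velocity[id].y * dt;
      }
    }

    function renderSystem(world, ctx) {
      for (var id in world.sprite) {
        drawSprite(ctx, world.sprite[id], world.position[id]);   // hypothetical draw helper
      }
    }

    // The main loop just steps each system in turn.
    function tick(dt, ctx) {
      physicsSystem(world, dt);
      renderSystem(world, ctx);
    }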
Reminds me of a simple physics system I wrote a couple of years ago [0]. After writing a cloth simulation [1] I figured that all I needed to make them into collidable boxes was to have each dot collide with each line. I went in very very small steps, iterating each thing to ensure I understood it. The code was super verbose and I learned a lot from this about linear algebra, cross products, vector projection etc. I always planned to straighten the code out and do a proper writeup.
I think this is awesome, but it's a little too spaced out. I like the side-by-side display (it gives the code an approachable narrative), but I wish I had to scroll up and down less to remind myself of what's going on.
Nice code. Thank you. Are those ball-to-line collisions energy preserving? I believe they are...
However, it seems there might be an edge case where moving a colliding circle until it no longer intersects the line could cause it to skip past another line. Because of this, the lines should not intersect each other.
Sorry to ramble but I love this sort of stuff. Collision detection is covered well in many game development texts.
Yeah, he should be getting the starting and ending positions of the ball during the tick and determining whether the segment between them intersects the line. If it does, the intersection point should be found, the part of the segment that goes past the line should be reflected over the normal, and the ball placed at the new endpoint of the reflected segment. And, of course, the velocity vector needs to be changed as well.
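Roughly, the reflection step looks something like this (a sketch, assuming you already have the intersection point and the line's unit normal; the names are invented, not the article's):

    // Reflect the part of the motion that crossed the line, and reflect
    // the velocity too. `n` must be a unit normal of the line.
    function resolveSweptCollision(circle, hitPoint, n) {
      // Reflect a vector v about the line: v' = v - 2 (v . n) n
      function reflect(v) {
        var d = v.x * n.x + v.y * n.y;
        return { x: v.x - 2 * d * n.x, y: v.y - 2 * d * n.y };
      }

      // Overshoot: how far past the intersection the tentative position went.
      var over = {
        x: circle.position.x - hitPoint.x,
        y: circle.position.y - hitPoint.y
      };

      var bounced = reflect(over);
      circle.position.x = hitPoint.x + bounced.x;
      circle.position.y = hitPoint.y + bounced.y;
      circle.velocity = reflect(circle.velocity);
    }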
But... I guess this won't be the first time someone posts bad physics code on the internet and poses it as a good example.
The challenge of the more accurate method you describe is balancing how much CPU is used to find the point in game time where a collision occurs, because it is almost certainly at some point between your tick times. You have to perform the calculation at many points in time across the tick delta, to whatever significance level you care about, with more calculations necessary as the velocities of the bodies increase.
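A crude way to picture that cost, sketched with invented helpers: split the tick into substeps and test each one, where faster bodies need more substeps for the same accuracy.

    // Look for the first substep of the tick in which the circle hits the line.
    function findCollisionTime(circle, line, dt, substeps) {
      for (var i = 1; i <= substeps; i++) {
        var t = (dt * i) / substeps;
        var p = {
          x: circle.position.x + circle.velocity.x * t,
          y: circle.position.y + circle.velocity.y * t
        };
        if (circleAtPointIntersectsLine(p, circle.radius, line)) {   // hypothetical test
          return t;    // approximate time of impact within the tick
        }
      }
      return null;     // no collision during this tick
    }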
Of course, this is what physics engines do, and why you should probably use one instead of reinventing the wheel.
In terms of showing how a game loop and really basic game physics works, however, this is a pretty clean and understandable example.
Hmmm, you appear to have a high opinion of your ability to determine 'good physics code'. Unfortunately, just a high opinion, not a high ability.
The OP's code is fine. And it was written by a she, not a he.
Your suggestion is what is sometimes called 'continuous' collision detection. It is needed in some cases, particularly when the OP's approach might miss collisions (less often when knock-on collisions are missed, since those are rarer). But that approach is not 'better': it is far more time-consuming (vastly more so for 3D rigid bodies with angular velocity), and has a visual benefit in some cases (not in the OP's case, or at least you wouldn't see the difference).
But, of course, both approaches are very approximate and only intended to give the right feel. If you wanted to be even more accurate you would process your collisions by subdividing updates around the collision (a good pool-ball simulation does this, to model proper breaks). And you'd have to use different integrators than the 1st degree Newton update.
This process can keep going as far as you like. For physics systems I've worked on for modelling the behavior of rigid components in MotoGP bikes, the kinds of approximations you allow are very different from those in the physics engines I've built for commercial games. Neither is 'better' or 'worse': a good programmer understands the requirements of their domain. To that extent, the OP's approach is fine, and it powers the vast majority of the physics in current-generation video games (most engines do support continuous collision detection, but it will be switched off for most rigid bodies in a scene, for performance).
You're right, there's a gradation of compromise, and I wasn't fully polite in my comment (nor did I think to attempt to determine the OP's sex... then again, I'm a guy with a "girl's name", so who's to say "Mary" is necessarily a girl? aaaanyway...). I'm not suggesting that all software must compute integrals in order to have convincing physics. However, when I see code like this I involuntarily get itchy:
    while (trig.isLineIntersectingCircle(circle, line)) {
        physics.moveCircle(circle);
    }
Thanks for that, and sorry for my impoliteness in response. HN is sometimes frustrating in its density of people with Dunning-Kruger problems. I am sometimes one of them.
You bring up a slightly different issue now; I'm not sure whether you're suggesting the problem is the unbounded loop or the serial strategy for resolution.
On the former, I agree in the narrow sense (interpenetration resolution isn't usually guaranteed to converge), but taking
    while (has_interpenetration()) {
        resolve_interpenetration();
    }
and adding a limit
    for (int i = relaxation_steps; i > 0 && has_interpenetration(); --i) {
        resolve_interpenetration();
    }
gives you code that is in most game physics engines, in some form.
On the latter, modern game engines do some steps globally, rather than just running through each rigid body (often collision detection, interpenetration resolution, resting contact resolution, but not normally collision resolution or integration). But the OP approach of running everything in series wasn't unusual 15 years ago when game physics systems started to become ubiquitous. Before companies like Mathengine and Havok perfected LCP approaches, folks like Ipion and FastCar were doing impulse-based calculations that were heavily serial. So for a very simple JS experiment, it didn't strike me as a problem. I'd have written it that way too, if it were me.
The collisions currently reverse the velocity of the circle relative to the line by subtracting to cancel and then subtracting again to negate.
I found that a slightly nicer-looking effect can be gained by making the factor ~1.8, which simulates a 20% loss of speed along the normal in collisions.
Obviously, any magic numbers in the code are not relevant for its purpose of demonstrating a simple mathematical concept in a literary style. I'd love to see more examples of this kind around HN as it's a quick way to learn/refresh the odd concept.
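For concreteness, here's a sketch of what such a bounce factor looks like (the names are invented, not the article's):

    // v' = v - factor * (v . n) * n, where n is the line's unit normal.
    // factor = 2.0 is a perfectly elastic bounce; ~1.8 leaves the circle
    // with 80% of its velocity along the normal after the collision.
    function bounce(velocity, n, factor) {
      var d = velocity.x * n.x + velocity.y * n.y;
      return {
        x: velocity.x - factor * d * n.x,
        y: velocity.y - factor * d * n.y
      };
    }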
I could have really done with this a few months ago. I was trying to make animations of electrons flowing through wires of variable thickness, and really struggled to find examples online. I ended up having a three-hour maths lesson with my brother over IRC to get my head around vector maths.
One of the "not good" explanations of this was probably mine (and I agree!) written many years ago [1]. I wish I could remember why my code ended up so much more complex.
Same person that wrote gitlet[1], as seen on HN a few weeks ago. Personally I'm extremely happy to see HN so supportive of well commented and documented code.
I find it far more readable. Anyway, next to where you clicked Annotated Code there's a link for Raw Code which has the comments in-line if you prefer that. You can use an IDE or web editor to get the font colors nice.
Do you also prefer footnotes and sidebars inline when reading articles?
These are standard typographical practices. I do set my comment color to super light so comments don't distract me too much when reading code, but I wish there was a better way.
Not the parent, but I prefer footnotes/sidenotes when reading physical texts, but prefer inline commentary when reading on the computer screen. Jumping up and down to read a footnote (either manually scrolling or with a hyperlink) or having the sidenotes take up a large percentage of the screen is disorienting for me.
It helps if the inline commentary is differently colored or in a separate box.
I also use light colors for comments when reading source code, but I personally prefer this over having them as footnotes or sidenotes.
That is why side notes are so nice: you don't have to scroll down, you just look at the side bar. And if you just want to read code...hide the side bar! Horizontal real estate is not a big deal with 80 column code widths (you could even have two or three screens of code on one monitor still). Heck, you could render the comments with a proportional font to save even more H-space (fixed-width fonts are extremely archaic).
Programming should move out the typographical stone age into something a bit more modern.
I see where you're coming from regarding ease of hiding/unhiding comments, but I think it'd be quite hard to do correctly in user configurable environments.
In the article, I'm assuming the left column takes up a bit less space on the author's browser viewport, but it takes up about 45% of mine, while a bit of the code trails off the page.
The reason I think it works so well with physical books is that the dimensions are known and not user-configurable, so it's easier to guarantee that the sidenotes look appropriate. With free-form comments that could range from a one-line comment to large paragraphs to explain one line of code, I don't think there's a way to make sure they would look reasonable in all cases. (I could be wrong)
A good example of easy-to-read in-text notes (in my opinion) are those in Fred Hebert's LYSE [1]. They could be sidenotes or footnotes, but instead they're in-text, which makes them easier for me to read in a browser, regardless of screen size, but also pretty easy to skip over since they're color-coded and in their own box.
I think it's more of a medium difference than a book-typography-is-more-modern one. When scrolling through a web page or code, I mostly find the single-column in-text notes/comments easier to use, though I see why some people might prefer sidenotes.
Inline comments are a part of the code's core layout, so you get into trouble when you make them disappear (as is the trouble with most folding systems).
It is like parenthetical statements in prose: you can't really make them disappear, they are right there in your face. It is considered bad style if they aren't extremely brief; otherwise you should use a footnote or side note.
My biggest pet peeve is docblocked-to-death code, like
    class Widget
    {
        /**
         * @access public
         * The name of the widget
         */
        public $name;

        /**
         * @access private
         * The widget's material composition
         */
        private $composition;

        // ... yada, yada ...
    }
Even with syntax highlighting on, it can be a pain in the ass to see the code through all that comment noise. I guess I should take the time to configure my folding more.
I love the look of the annotated code. I think it would be particularly useful for documentation.
Are there any tools that can generate this from comments, or perhaps an Eclipse or Visual Studio plugin that shows this type of documentation along with the code?
Very cool. Interesting bit: The outcomes of the falling balls seem to differ every time, which corroborates the fact that any minor change in initial state results in vastly different outcomes.
~~The simulation clock is based on the actual frame rate achieved via requestAnimationFrame().~~
Edit: Actually, it looks like the physics portion takes constant (but implicit) time steps each frame, but the time that a circle enters the simulation is based on the actual frame rate, which means when a given circle appears, the lines could be in a different location from previous runs.
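A sketch of the distinction (made-up structure, not the article's actual code):

    var SPAWN_INTERVAL_MS = 250;    // made-up spawn rate
    var lastSpawn = 0;

    function frame(timestamp) {     // timestamp comes from requestAnimationFrame
      // Spawning depends on wall-clock time, so it varies with frame rate...
      if (timestamp - lastSpawn > SPAWN_INTERVAL_MS) {
        spawnCircle();
        lastSpawn = timestamp;
      }

      // ...but each physics update advances by a fixed, implicit "one frame",
      // so the lines can be in a different place whenever a circle happens to spawn.
      updatePhysics();              // no dt argument: one constant step per frame

      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);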
It's generated using http://jashkenas.github.io/docco/. I think it's possible to use comments, but often it's "literate" source code, e.g. markup & code blocks.
I believe the OP is using Docco.js (which has been ported to many languages, e.g. Rocco for Ruby, Shocco for Shell, Pycco for Python, Gocco for Go, etc.)
There are different ways to configure the output and layout... the Docco homepage uses a single-column format, whereas the OP uses the double-column format.
The manual function signatures are a bit annoying. In particular, why don't the author's comments include the parameters?
I wonder if docco has an extension to create these automatically.
I much prefer the docstrings, and the new type annotations in python, which make this much nicer.
Mostly useful; however, a lot of the comments are unnecessary noise, and some could be eliminated with better naming, intermediate variables, and refactoring.
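For instance (a hypothetical example, not taken from the article), a comment explaining a condition can often become a well-named variable:

    // Before: the comment carries the meaning.
    // Remove the circle once it has fallen below the bottom of the canvas.
    if (circle.position.y - circle.radius > canvas.height) {
      removeCircle(circle);
    }

    // After: an intermediate variable says the same thing without a comment.
    var circleIsBelowCanvas = circle.position.y - circle.radius > canvas.height;
    if (circleIsBelowCanvas) {
      removeCircle(circle);
    }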