
You saw them a lot in old games (the kind likely to be written in assembly), because not even C, let alone a general-purpose scripting language, was acceptable for performance reasons for quite some time, yet there has always been a need for greater abstraction and expressive power. Look at any Japanese RPG of the 80s and 90s, for instance, and there's probably a simple bytecode VM (or state machine, if you prefer) for scripts that move characters around during story sequences, one for displaying text in menus and dialogs, and so on. Look in an arcade shooter, and there's probably one for defining enemy movement patterns, etc. Tiny bytecode DSLs not only boosted expressive power for programmers, they also allowed non-programmers to script behavior. 6502 assembly might be out of reach for non-programmers, but `SETANIM PLAYER1 PLAYER_WALK_TICK; WAIT 20; SETANIM PLAYER1 PLAYER_WALK_TOCK; WAIT 20; LOOP;` is not. If you only have to remember a few simple rules to get the behavior you want, even learning to hand-assemble bytecode is not out of the question for a dedicated professional.
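For illustration, here's a minimal sketch (in C++, with an invented Actor type and opcode encoding) of the kind of interpreter that could run a script like that:

    #include <cstdint>
    #include <vector>

    enum Op : uint8_t { SETANIM, WAIT, LOOP };

    struct Actor {
        void setAnim(uint8_t anim) { /* point the sprite at a new frame set */ }
    };

    struct Script {
        std::vector<uint8_t> code;  // e.g. {SETANIM, 0, WALK_TICK, WAIT, 20, ...}
        size_t pc = 0;              // program counter
        int wait = 0;               // frames left to sleep
    };

    // Called once per frame; runs opcodes until the script sleeps or loops.
    // Assumes every script ends in a WAIT or LOOP.
    void tick(Script& s, Actor* actors) {
        if (s.wait > 0) { --s.wait; return; }
        for (;;) {
            switch (s.code[s.pc]) {
            case SETANIM:                                 // SETANIM actor anim
                actors[s.code[s.pc + 1]].setAnim(s.code[s.pc + 2]);
                s.pc += 3;
                break;
            case WAIT:                                    // WAIT frames
                s.wait = s.code[s.pc + 1];
                s.pc += 2;
                return;                                   // yield to the game loop
            case LOOP:                                    // jump back to the start
                s.pc = 0;
                return;
            }
        }
    }

The whole thing fits on a page, and the script itself is just an array of bytes that a designer's tool could spit out.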

Nowadays, you'd probably be better served by embedding a general-purpose dynamic language in most of these scenarios, but hand-rolled bytecode can still win in certain situations that need high performance and/or low latency. Imagine, for instance, one of those fancy "danmaku" shooters with hundreds or even thousands of bullets on screen moving in intricate patterns. You just might want a language more dynamically manipulable than C to script the behavior of those bullets, but using a dynamic language that furiously generates garbage and demands collection pauses for something that must happen 60 times per second on so many objects will eventually cause hitches that get your player killed (in the game, at least). You'll then be subject to a very harsh but valid criticism: "I played games like this on my PlayStation back in the day with no slowdown; this guy must be a crappy programmer if his 2D game stutters on my monster PC." Sure, it's great that you don't need to break out the optimized assembly to do a Breakout clone anymore, but at the same time, you should feel bad if a 1 MHz 6502 provides a considerably more reliable experience than your game on a modern x86.

A certain popular 2D shooter series made by an ex-Taito arcade game programmer uses a bytecode DSL for its intricate bullet patterns for this reason. It was probably an easier decision to make when the series started in the late 90s, but today it lets the author be productive without adding wildly variable latency into the mix that would kill the gameplay. People should not forget that running smoothly even on poor hardware is a real feature. Even in 2014, I'm sure there are plenty of people out there still using Pentium 4s and Windows XP. If you make the only new games/apps/whatever that run well on these computers, you have a captive audience.

At the same time, I think you could extract 99% of the performance benefits of the bytecode approach if your language had a well-defined heapless, stackless "tasklet/microthread" subset that centered around mutating existing objects and making certain limited kinds of allocations (allocating from a memory pool and manually "freeing" by marking the object as unused later). Knowing that a piece of code will never generate garbage and that a garbage collection cycle will never happen during its execution makes a huge difference.
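Something like this, say (names invented; the point is that alloc and "free" are just flag flips on storage reserved up front, so allocation never touches the heap and no GC can run mid-frame):

    #include <cstddef>

    template <typename T, size_t N>
    struct Pool {
        T items[N];
        bool inUse[N] = {};

        T* alloc() {                       // O(N) scan; fine for small pools
            for (size_t i = 0; i < N; ++i)
                if (!inUse[i]) { inUse[i] = true; return &items[i]; }
            return nullptr;                // pool exhausted; caller must cope
        }
        void release(T* p) { inUse[p - items] = false; }  // "free" = mark unused
    };

    struct Bullet { float x, y, dx, dy; };
    Pool<Bullet, 2048> bullets;            // one static block, sized at build time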




Thank you for your answer. I do realise why it was done in the past; I was wondering about the present. Also, what you're describing in the last paragraph sounds a bit like Andy Gavin's GOOL - http://all-things-andy-gavin.com/2011/03/12/making-crash-ban...


"At the same time, I think you could extract 99% of the performance benefits of the bytecode approach if your language had a well-defined heapless, stackless "tasklet/microthread" subset that centered around mutating existing objects"

Oh hey, I think you just described asm.js


> Even in 2014, I'm sure there are plenty of people out there still using Pentium 4s and Windows XP.

Don't forget low-end mobile devices.


Low-end mobile devices these days have 80+ MB of RAM available to an application, and the downsampled assets that I assume you're using on them ('cause you're a pro) leave you with a pretty large chunk of RAM to work with.

I sorta doubt there are many games targeting these devices (and, more particularly, the people who still use these devices in 2014) in the first place, and even fewer are pushing their headroom so hard that, if they decide they need what amounts to a scripting language, they can't fit one in off the rack.


What's your market? Is it the U.S. or… India? If you put the population of each next to the other, which would you describe as "most people"?


If you can point me towards a smartphone, with significant market share and usage numbers in 2014, that doesn't have 128MB of RAM and, like, an 800MHz Adreno or something, I would really like to see it. Because that's my definition of "low end" and I've literally never seen crappier than that. (I'm targeting phones with at least two cores and 768MB of RAM and up, because I'm lazy and I don't think anybody with less would be buying my game; it doesn't fit the profile of titles that make money on such devices. But I do have a couple of paperweights around for idle testing.)

My market research has shown that when you get below those specs you start getting into dumbphone/J2ME platforms, where you have other constraining factors.


It is not always about whether you can at all; sometimes it's about power budget, i.e., battery drain. Run your Lua engine and you have less budget for particle effects. Run all the things and your game drains the battery in 10 minutes.

It's perfectly legitimate not to care. Writing a from-scratch bytecode VM is a fairly extreme, last-resort solution.

But on the other hand, a bytecode VM, appropriately scoped, could end up being easier than embedding Lua. That is, if what you're doing is only barely more than loading and using data, not even quite crossing the boundary into Turing completeness.
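At that end of the spectrum, the whole "VM" can be a loop over fixed records with no jump opcodes at all, so it can't even loop, let alone go Turing-complete. A sketch (record layout and handlers invented):

    #include <cstdint>
    #include <vector>

    // A fixed-size record: trivially loaded straight from a file.
    struct Cmd { uint8_t op; int32_t arg; };

    void run(const std::vector<Cmd>& cmds) {
        for (const Cmd& c : cmds) {      // straight-line: each command runs
            switch (c.op) {              // exactly once, in order, then we stop
            case 0: /* spawnEnemy(c.arg) */ break;
            case 1: /* playSound(c.arg)  */ break;
            }
        }
    }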


The power delta between AngelScript or Lua and your own interpreter is gonna be pretty small, though. It's a cost you pay once (in a modern implementation) to interpret into... a bytecode VM. And you totally might be a wizard beyond my ken here, but I strongly doubt it'll be easier to implement a bytecode VM of your own than calling, like, five functions. =) I'm not denying that there are cases where such a thing is valuable, don't get me wrong, but don't reinvent what you don't have to. It probably should not be the first thing you're reaching for in your toolbox. Or the second.
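For comparison, here's roughly the handful of calls I mean, using the stock Lua C API (the embedded script is a placeholder):

    #include <lua.hpp>

    int main() {
        lua_State* L = luaL_newstate();           // create the VM
        luaL_openlibs(L);                         // load the standard libraries
        if (luaL_dostring(L,                      // compile + run a chunk
                "print('pattern script would go here')") != 0) {
            // on failure, the error message is on top of the Lua stack
        }
        lua_close(L);                             // tear the VM down
        return 0;
    }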


You're probably right, and it's totally irrelevant in this day and age with cheap memory and CPU power.

I can't help but remember fondly the bytecode VM that "Out of This World" used to draw vector graphics, and think "That. is. neat."

Or the power and flexibility the SCUMM VM afforded story writers.

It's fetishistic and unrealistic, I suppose, to think it would be cool to do some amazing game on some extremely primitive, slow, low-power, cheap machine that anyone could buy for like $10. The whole fantasy usually ends when you realise that the cost of a display is well beyond the cost of any actual computer hardware you may consider.


Eh, Angry Birds runs on Lua. Your point was?


Angry Birds is not a terribly involved or demanding game, and the hard part, 2D physics simulation, is handled in C++ by the Box2D library anyway. It requires no real-time control: you just aim the bird and watch the reaction. You could halve or quarter the framerate and add enormous interframe jitter and the game would play exactly the same. If you tried to do the same thing with a demanding twitch action game, it would quickly become unplayable.


Lua's been fast enough to do what you describe for years on gear much slower than even bottom-end smartphones, but if that's too iffy for you, AngelScript is a sub-megabyte runtime when run on Android via the NDK. It's historically been performant enough for use on the PS2 (with its ballin' 300MHz MIPS CPU and 32MB of RAM). It's statically typed, has a very well-defined garbage collector you can pretty easily avoid ever invoking, and as a simple transform language generates very few (possibly even zero? I don't remember) heap allocations. It will also take literally ten minutes to set up for the sort of use case we're talking about here. (If you want to bind a big honking C++ library to it, then things get interesting, but we aren't talking about that.)
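To give a sense of it, a sketch of that minimal setup (error checks trimmed; SetBulletAngle is an invented hook into your own engine):

    #include <angelscript.h>

    void SetBulletAngle(int id, float radians) { /* engine-side C++ */ }

    int main() {
        asIScriptEngine* engine = asCreateScriptEngine(ANGELSCRIPT_VERSION);
        engine->RegisterGlobalFunction("void SetBulletAngle(int, float)",
                                       asFUNCTION(SetBulletAngle), asCALL_CDECL);

        // Compile a script module from source (embedded here for brevity).
        asIScriptModule* mod = engine->GetModule("bullets", asGM_ALWAYS_CREATE);
        mod->AddScriptSection("pattern",
            "void main() { SetBulletAngle(0, 1.57f); }");
        mod->Build();

        // Run the script's entry point.
        asIScriptContext* ctx = engine->CreateContext();
        ctx->Prepare(mod->GetFunctionByDecl("void main()"));
        ctx->Execute();

        ctx->Release();
        engine->Release();
        return 0;
    }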

You still write your perf-sensitive code in C++, of course. Just like if you embed Lua. Because you aren't dumb.

munificent hit on the one use case where this makes a ton of sense to me (when you're in a basically-embedded environment; he was talking about a platform with four megabytes of RAM), but that hasn't been a serious use case in games on popular platforms for a while now. Which is why I'm confused why it's in a book that seems to basically be for novices, at least without some "you should rethink doing this unless" asterisks.


AngelScript seems relevant to my interests, I'll have to check it out.

I'm sure Lua is "good enough" for a lot of tasks in most games on most hardware, I was just trying to hint at scenarios where the GC overhead could be painful enough to warrant a custom approach.

>but that hasn't been a serious use case in games on popular platforms for a while now.

The DS, with its 4 megabytes of RAM, was still a relevant platform not that long ago. The 3DS doesn't have a whole lot of breathing room either: 128 megabytes of RAM, and who knows how much of that is available to the game rather than reserved for the OS.

>Which is why I'm confused why it's in a book that seems to basically be for novices, at least without some "you should rethink doing this unless" asterisks.

I haven't read too much of this book, but I got the impression that it was for programmers of other disciplines that wanted the skinny on game development practices, not for total novices. Somewhere between "Game Programming for Teens" and "Game Engine Architecture." This chapter doesn't seem that out of place to me.


> 128 megabytes of RAM, who knows how much of which is available to the game and not reserved for the OS

The OS reserves 32MB for itself, which leaves you with a bigger working set than a bottom-shelf Android phone. =) I get what you're saying, but I don't think anybody's going back to the halcyon days of fewer megabytes than I've got fingers.

> I got the impression that it was for programmers of other disciplines

It felt like something I'd recommend to a CS student, tbh. Maybe that's just me.


>The OS reserves 32MB for itself, which leaves you with a bigger working set than a bottom-shelf Android phone. =) I get what you're saying, but I don't think anybody's going back to the halcyon days of fewer megabytes than I've got fingers.

Ok, 96 megabytes. When you set aside RAM for everything else in a 3D game (the main binary, sound/mesh/texture caches, etc.) there's not a whole lot of space left for a scripting language's heap, especially when you consider that most garbage collectors require that the heap be significantly larger than the working set in order to achieve reasonable performance. Again, not saying Lua wouldn't work just fine (any 3DS devs in the audience that would care to comment?), just that it's by no means impossible to bump against that limit. For comparison, here are the slides from a 2010 presentation demonstrating how devs were having real performance problems with (PUC-Rio, not Mike Pall) Lua on the 360 and PS3:

http://www.slideshare.net/hughreynolds/optimizing-lua-for-co...

>It felt like something I'd recommend to a CS student, tbh. Maybe that's just me.

Well, I wouldn't be surprised if the average programmer is actually working below the level of what we expect a CS student to know /me ducks


I think (not sure, I haven't tried) that I'd be a lot less worried about that RAM budget on that phone, actually. That phone almost certainly doesn't have the GPU for a game rolling with nontrivial models/textures. Like, just as an example, I don't think you're running even a downscaled, super-optimized, everything-in-C++ version of the Angry Bots demo that comes with Unity (it's on Google Play if you want to check it out); the GPU would give up the ghost unless you stripped out so much functionality that you'd miss the point of the game. So I feel like on this low-end hardware you're basically talking about fairly lightweight 2D games, because it can't push better, and I'm not sure I'd really worry about big honkin' meshes or textures. Is that unfair? (My own libgdx game doesn't use more than 60MB of textures at its peak, and I'm doing a lot more stuff in my JRPG than I'd think almost anybody would try to do on that sort of bottom-of-the-barrel hardware.)

Also, yeah, maybe I'm overshooting the average programmer.


I think the point is you don't generally need something with the extreme power and flexibility of Lua for a game like that. A simple bytecode with 5 or 6 instructions should do. And it'll be performant enough to run on a Casio calculator watch at full speed!



