> A: It’s probably boring for me to say, but I just can’t beat the drunken cat bug... That was the one where the cats were showing up dead all over the tavern floor, and it turned out they were ingesting spilled alcohol when they cleaned their paws.
I think that bug explains very well just how deeply complex Dwarf Fortress really is. Drinks can be spilled. Some drinks have alcohol. If cats step in something it sticks to their paws. Cats clean their paws, causing them to ingest what's on them. Enough alcohol will kill a cat. Put together: dead drunken cats.
Dwarfs trying to clean their inner organs (dwarf gets wounded, doctor closes the wound, dirt stays inside)
Undying children in the moat water (for years... just swimming there...)
Killer carps (for a long time carps were really overpowered, because constant swimming kept buffing them up, so dwarfs who got close to water sources were eaten by carps)
Catplosions (Tarn loves cats, cats reproduce, too many cats kills DF performance)
This was a particularly insidious one because the usual strategy of culling excess livestock doesn't work out when applied to dwarfs' pets (pets can't be designated for slaughter, and other means of making pets die will make their owners upset). With most animals, you can avoid the pet adoption issue by just not marking them as available for adoption, so you wouldn't have to worry about a dogsplosion, sheepsplosion, etc. Cats do not become pets through that system. Instead, a cat adopts a dwarf.
Note that the obvious strategy of not having any cats at all doesn't work well because cats are the most/only easily-available HUNTS_VERMIN creature, so you need some around to protect your food stockpiles.
A somewhat effective solution is to use male cats for stockpile protection, and keep female cats caged (which IIRC stops both adoption and breeding), letting only one out at a time (cage any resulting (female) kittens immediately). There's not (yet) any simulation of evolutionary pressure to adopt an owner quickly, so you can just prefer kittens that do so as the next breeder when the current one dies of old age.
Gelding (neutering) the male cats also works, but is less sustainable if you don't have a reliable source of replacement cats from migrants or trade caravans for when the current ones die of old age. Spaying is not supported AFAIK.
You can also just remove the ADOPTS_OWNER token via raws editing.
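For anyone curious what that looks like: creature definitions live in plain-text raw files, and the cat entry carries that token. Roughly, from memory (so the exact file name and neighboring tokens may differ):

    [CREATURE:CAT]
        ...
        [HUNTS_VERMIN]
        [ADOPTS_OWNER]    delete this line (or break the brackets) to disable adoption
        ...

IIRC the parser ignores anything outside square brackets, so mangling the brackets works as well as deleting the line, and for an existing fort you'd edit the copy of the raws inside the save folder rather than the install's raws.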
I. M.> John, the kind of control you're attempting simply is… it's not possible. If there is one thing the history of evolution has taught us it's that life will not be contained. Life breaks free, it expands to new territories and crashes through barriers, painfully, maybe even dangerously, but, uh… well, there it is.
H. W.> You're implying that a group composed entirely of female animals will… breed?
I. M.> No. I'm, I'm simply saying that life, uh… finds a way.
Verily, the dwarf would pick up a cat to take it to the slaughterhouse, the cat would adopt the nearest dwarf (the one carrying it) and then the dwarf would slaughter the cat.
And get sad because his pet died. And then eventually start tantruming and a tantrum spiral would begin.
It's so crazy when you try to make dwarves feel great all the time and then the simulation comes up with some completely insane causal chain that messes up everything.
I once tried to isolate a miasma problem with a wall. Which works fine, unless the dwarf tasked with building the wall decides he needs a break, sleeps on the floor and then gets grumpy because he just slept on a hard surface in a smelly area.
This is why I always spent an inordinate amount of effort on my dining hall - get that thing fancy enough and the dwarves could endure quite a bit before they flipped.
Since no one else has mentioned it, I feel compelled to mention thermonuclear catsplosions.
My understanding of the legend is that a player began experiencing a catsplosion, and wanted to find a way to get rid of the cats. So he started tinkering with the game a bit, and eventually tried setting the blood temperature to a very high number. This killed the cats, but also had the unfortunate side effect of setting them on fire. And the cats were multiplying faster than they were dying, so there was a massive, expanding fireball of burning cats.
If this legend is wrong, I'd love to hear another version.
TIL this was a bug and how it was a bug! Wow! Back when I played DF it really seemed like it was simply accepted folklore that the carps in DF were really strong and the common advice was don't build your base too close to rivers. Having not played in a long while, I had always thought this was intended.
I had a roommate in college who loved to play World of Warcraft.
At some point, we got into a discussion over art style and realism. He made an observation that still sticks with me.
"The style sets expectations. When something breaks a real world law of nature or logic because of a bug, it doesn't feel as jarring, because the style is already cartoon-y."
It made me think about just how malleable expectations are with regards to game systems. Train a player to expect realism, and breaches become infuriating. Train a player to expect the unexpected, and the unexpected becomes intriguing and fun.
I think it’s more about internal consistency. If a world hasn’t said something about it (or hasn’t shown a derivation of it, e.g. gravity presumably exists because all objects shown have been affected by it), then you’re free to do whatever (e.g. introduce magic as a mechanism). But once done, it must continue to hold true, or else the rules are bullshit and nothing can be expected to behave in a way that isn’t arbitrary.
Even Fortnite has a kind of logic to it, haphazard as it may be (though it’s also so loosely defined that I find it completely uninteresting: it’s a dumpster fire of cosmetic items with no real theme or nuance, because it’s more a modern shopping mall [an arbitrary context for social groups] than a game system).
It’s why The Simpsons can revert (nearly) all damage every episode (that’s simply the rule of the world), but if it tried to violate that rule and persist those changes in any meaningful way, it would feel like complete nonsense. (They do, however, do it for self-referential jokes and such, but these aren’t persistence so much as temporary anomalies.) At best, they’re allowed to forget something exists altogether.
> Dwarfs trying to clean their inner organs (dwarf wounded, doctor closes the wound, dirt stay inside)
I'm curious if the fix was to just remove the dirt from the simulation or if Tarn actually went ahead and simulated infections (or more realistically, used an existing infection mechanic).
I live in a city with a lot of stray cats and lemme tell you, "catsplosion" is pretty realistic. We narrowly averted one by getting the ones that live under our house fixed.
Reminds me a little bit of something strange I saw in Rimworld - all of my dogs were developing liver cirrhosis!
It turns out that my dogs weren't alcoholics - it just happened to be that beer was the only food source they had zoned access to, so they were drinking it out of hungry desperation, and while it gave them enough calories to live on, it also gave them cirrhosis.
One time a raider broke into my animal pen and started attacking my animals, but got shot down by a turret before they could cause any lasting damage. I didn't pay it any mind, but that raider had brought in a good chunk of Yayo (the game's cocaine analogue) that had scattered around in the pen when he died. By morning I had a herd of inebriated megasloths hunting down my colonists en masse.
Hops outside of beer are toxic to dogs; they cause malignant hyperthermia. It's a problem for brewers who leave their hops where the dog can get at them (usually spent hops in the trash or a compost heap).
Beer brewed from hops doesn't cause hyperthermia; it's harmful to dogs because they experience the same effects of ethanol poisoning that humans do (e.g. cirrhosis), just at much smaller doses.
Unfortunately it remains unexplained (by the article) why this is considered a bug. It would be unethical to test, but this seems like perfectly cromulent behavior one might actually see in real life.
When you look at an item listing and you see something like “Mead”, that is truly all the item is: it isn’t a cup of mead, it’s just a vague amount of the liquid mead itself, as if your hand was the only thing keeping it from hitting the ground.
But there are containers that can hold your liquid. You have mugs and goblets that hold one quantity of “Mead”, giving the impression that one count of mead is like a generic serving size. You also have barrels and pots that hold stacks of “Mead”.
Creatures are kind of like walking containers and have their own detailed inventories. Among the things you’d expect to find like armor, weapons, and books, you might also find a “coating of tears” on a crying dwarf, or perhaps a “spattering of blood” on a murderous elf.
They’re not just static inventories for the fun of a story, creatures do interact with them and use them. Dwarves covered in a vomit item will (hopefully) put any available soap in their inventory and use it to clean themselves in water, for example.
Cats are simple and just clean themselves with no water or soap needed. The catch with them is that they ingest whatever they have cleaned off of them.
So, putting all this together: The problem was that cats pick up a whole “serving size” of alcohol and proceed to clean themselves, ingesting the entire serving. The bug comes down to the vagueness of liquid quantities.
And it was fixed accordingly! Cats are still vulnerable to the effects of self-cleaned alcohol, but the strength is now proportional.
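A tiny sketch of the difference in Go (all names and numbers made up for illustration; this is not DF's actual code): the buggy path charges the cat a full drink-sized dose for any contaminant, while the fixed path scales the dose by how much is actually stuck to the paw.

    // Illustrative only, not Dwarf Fortress internals.
    const fullDrinkUnits = 150.0 // one whole "serving" of a drink

    type Spatter struct {
        IsAlcohol bool
        Units     float64 // how much liquid is actually on the paw
    }

    // Buggy: cleaning any alcoholic spatter ingests a whole serving.
    func ingestBuggy(dose *float64, s Spatter) {
        if s.IsAlcohol {
            *dose += fullDrinkUnits
        }
    }

    // Fixed: the ingested dose is proportional to the spatter itself,
    // so a paw's worth of residue is a tiny fraction of a tankard.
    func ingestFixed(dose *float64, s Spatter) {
        if s.IsAlcohol {
            *dose += s.Units
        }
    }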
My favorite feature in Dwarf Fortress is that all eyelids automatically clean their associated eyeball, just so that players don’t post about how their dwarves have vomit on their eyeballs.
IIRC the actual bug was that cats licking their paws made them consume an entire tankard's worth of beer with each lick, which made them drunk (and dead) with just a few licks.
My experience with cats (at least in my own life) is that they prefer not to walk into an area with a sticky floor if they can avoid it.
Clearly there's a bug here where the cat will keep wandering around on the sticky floor, and then keep consuming the alcohol off its paws.
The obvious fix is to add a feature wherein different creatures have preferences about where they go next, and then use that to have the cats avoid the bar floor if they can.
As a bonus you can include stuff like "Cats don't like hanging out in crowded areas" so that they'll also stay out of the main hall when the army is gathering to muster forth, etc.
Good suggestion. It would occasionally be nice if the _dwarves_ would avoid stepping in things and tracking it all over the place. The only reason that the cats were in the tavern in the first place is that pets follow their owner* around.
The bug was a numerical error that would cause cats to drink something like the equivalent of a pint of beer for every lick, and therefore die prematurely. He talks about it in this video:
The actual bug here was a mistake in the handling of liquid volume. The cats were ingesting a tankard’s worth with each lick, and they’re smaller than dwarves, so that killed them.
I read about a cat who used to visit a pub and learned to lick beer from under taps. The cat got banned from the pub for its own good but I don't believe it was anywhere close to poisoning.
That's not even (necessarily) a bug: whether convicts can win/hold elected positions is an aspect of applicable law (albeit a weird one), vampires (IIRC) have high social stats and skills and so are well-positioned to win elections in general, and dwarves (and for that matter humans) don't necessarily update their opinion of someone just because they were convicted of a crime (especially if they don't hear about the conviction; knowledge transmission was, I recall, added to the game's modeling at some point). So this is like a headline "Bob Smith Convicted of Murder; Wins Mayoral Election Anyway", which seems weird-but-possible in real life.
Yeah exactly. Also, top tip: if you actually catch a vampire in DF, a known trick is to imprison them in a room containing all your switch levers plus a desk and chair, so you can make them your accountant. They don't kill anyone that way and can do useful work.
Space Station 13 is another game with a lot of serendipitous emergent gameplay but with a slightly less steep (but still insane) learning curve and multiplayer antagonist fun.
It sounds like the actual bug was about quantity. Walking on the tavern floor resulted in 1 entire serving of beer on the paws, and cleaning resulted in ingesting that serving.
Not sure what's in the code, but I think each cat (which is some sort of entity by itself) is composed of body parts, which at the lowest level own a bunch of attributes/components. Would that make sense?
The bug in this case was that the minimum alcohol dose a cat could lick from its paws was above the cat's LD100, so if they licked their feet when soaked in beer they would die.
That's what intrigues me. I haven't played the game, but there must be a list of things that could get a cat killed, right? So how come this turned out to be an unexpected bug?
The bug was the quantity. Cats that walk around in a place with spilled beer will indeed lick themselves clean, but the amount of beer ingested is tiny.
Imagine getting drunk by licking beer residue off your hands.
Making the item system polymorphic was ultimately a mistake, but that was a big one.
Q: Why was this a mistake?
A: When you declare a class that’s a kind of item, it locks you into that structure much more tightly than if you just have member elements. It’s nice to be able to use virtual functions and that kind of thing, but the tradeoffs are just too much. I started using a “tool” item in the hierarchy, which started to get various functionality, and can now support anything from a stepladder to a beehive to a mortar (and pestle, separately, ha ha), and it just feels more flexible, and I wish every crafted item in the game were under that umbrella.
We do a lot of procedural generation, and if we wanted to, say, generate an item that acts partially like one thing and partially like another, it’s just way harder to do that when you are locked down in a class hierarchy.
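Not DF's actual code, but a sketch (in Go, with invented names) of the tradeoff being described: when capability lives in optional member data rather than in the class, a procedurally generated item can mix behaviors freely.

    // Invented types for illustration; not Dwarf Fortress code.
    type ClimbData struct{ Height int }   // can be used like a stepladder
    type HiveData struct{ Capacity int }  // vermin can live in it
    type GrindData struct{ Fineness int } // works as a mortar (pestle sold separately)

    // One generic item type; the "kind" is just which members are filled in.
    type Item struct {
        Name  string
        Climb *ClimbData
        Hive  *HiveData
        Grind *GrindData
    }

    // A generated oddity that is part ladder, part beehive: awkward to place
    // in a class hierarchy, trivial as data.
    var artifact = Item{
        Name:  "Ladderhive of Likot",
        Climb: &ClimbData{Height: 2},
        Hive:  &HiveData{Capacity: 10},
    }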
"Tool" is still a class, with perhaps very generic polymorphic methods (e.g. do_default_action() ). The problem is not polymorphism per se, but rather about having a deep class hierarchy aka lasagna code.
My policy: OOP is like salt. Use little and that's great. I only allow a single inheritance layer, and ideally no inheritance at all.
I've never heard this as an acronym, but I use it in practice! For me, it's an extension of "three or more, use a for" that made sense for all repetition and not just loops.
That’s great, and jibes with my experience. You probably don’t know the problem well enough to come up with the right abstraction until at least example #3.
I love this. And I like that it's also consistent with the findings of Natural Semantic Metalanguage.
NSM is meant to be the minimal set of concepts shared by all human cultures, from which the rest of our concepts can be derived, and it's heavily based on anthropological field studies.
And it includes the numbers one, two, and many (also few, some, and all). So that suggests some special, meaningful distinction for us in the jump from two repetitions to "many", which is where we start abstracting away.
But the single responsibility principle, and the idea that you should divide your code into components where the lower components look abstract from the point of view of the higher ones, are almost never wrong.
Not by coincidence, those last two are consequences of how people think. While KISS is kind of a physics law.
Nah, the problem is OOP polymorphism that conflates too many separate things. If you have a language that can actually express "implements this interface", "has this member", and "delegates this interface to this member", then you don't need traditional "extends" inheritance at all; sometimes you want to do the exact equivalent (and you can), but most of the time you want to do something narrower.
Kinda - an instance of an OO subclass is a decorator around an instance of its superclass, so if you're using a language which has a good way of expressing decoration then that's usually better than using inheritance.
But as the other reply said, patterns are a language smell - the language needs to support that style and make it idiomatic. Otherwise, if you give people the choice between doing it right but verbosely and doing it wrong but more quickly, guess which one they'll choose. https://www.haskellforall.com/2016/04/worst-practices-should...
I'd say that that's an example of structural or duck typing (as opposed to nominal typing).
In my experience it is nigh impossible to design-pattern your way out of a language simply being unable to express what you need it to express without incurring a severe performance penalty (anathema for game development) or resorting to code generation (which is itself prone to ending up as a Gordian knot).
Another good rule for OOP: objects are fine, object graphs are a killer. Graphs are what get you into "all I wanted was a banana" territory. Objects should either be atomic, be aggregated by simpler structures like maps and arrays, or, if you absolutely must have objects pointing to objects, form a tree rather than a graph, so that each object need only know about the things below it, which can be encapsulated at a single point.
Even a single object gets you into "all I wanted was a banana" territory.
I have a gorilla class and a banana method on the class; can I ever use banana without the gorilla? No. The simple act of having a class screws it all up.
Make the banana a pure function, or one that is defined in terms of a polymorphic parameter that it mutates.
func banana(x: animal){...}
banana(gorilla);
The way to do it via OOP is base-class methods, importing the method into all children via inheritance, which is just bad and universally reviled even among practitioners of OOP.
class Animal()
{
banana()
}
class Gorilla() : Animal {
}
Technically there are some minor advantages and disadvantages to the OOP way when you don't account for inheritance and polymorphism, but all of that is gone once you do account for them.
You still don't necessarily need to model the graph at the object level though. You could model your graph as a set of pairs, ie edges. If performance was more of a concern you could do it as a map where each object was a key and the values were a list of the connected objects. You could also separate out the value from the graph behavior and have a GraphNode object that wraps each value object.
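For instance, a minimal sketch (Go, hypothetical types) of keeping the objects flat and modeling the relationships off to the side:

    // Objects stay plain values; relationships live in a separate structure.
    type ObjectID int

    // Edges as a list of pairs...
    type Edge struct{ From, To ObjectID }

    // ...or, for faster lookups, an adjacency map from object to neighbors.
    type Graph map[ObjectID][]ObjectID

    func (g Graph) Connect(a, b ObjectID) {
        g[a] = append(g[a], b)
    }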
The bigger point though is that when you have networks of objects relying on each other to perform functions, that's where the problem is, and where you start to rely on mocks and stubs for testing, which really should be considered a smell in its own right. You're better off making the interface between objects values rather than having hard references between objects, as described in this talk: https://www.destroyallsoftware.com/talks/boundaries. Once the interfaces between your objects are values instead of references you're free to connect your objects together in any way you need and are not stuck with the fixed graph as originally designed.
> Once the interfaces between your objects are values instead of references you're free to connect your objects together in any way you need and are not stuck with the fixed graph as originally designed.
Absolutely, same can be said about relational data representation for example.
Funny thing: it's really the particular object model in play that causes the problem. Something more similar to Smalltalk (or Objective-C) works pretty well for things this dynamic, though you do still need to factor out the properties and messages reasonably well.
The trouble is C++ and similar object models' poor support for composition, and the way they implicitly penalize uniform object structure.
(E.g. calling multiple base methods in a subclass: pain ensues. Add virtual calls to the mix and it gets really iffy.)
Even Python and Ruby, which have explicit mixins but keep the old model, get hairy.
This is one of the things I like about Go. Just structs with methods. You can embed them but it seems to hedge against deep nesting and creates simple code
Then how would you model composing behaviors in Go? Say for example I have a representation of Food and a representation of Animal. I then want an ability that will "animate" things, so I can animate Food to give it the behaviors of Animal.
I'm not being critical either, I'm seriously curious how somebody would implement this behavior in Go. Like you say, just struct with methods is a very appealing mental model since you have fewer moving parts, but how does it deal with a scenario like this which is very common in game dev?
We can make both Food and Animal satisfy the "Animateable" interface.
Go also has a syntactic shortcut which is similar to inheritance, but more explicit. If you include a field in a struct without any name, it is considered "embedded", and you can call methods on the field directly, without referencing its name, which ends up looking the same, at the call site, as inheritance.
But everything is still explicitly laid out in the struct -- it's just a simple struct, no inheritance shenanigans.
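A quick sketch of both ideas together (hypothetical types, just to show the shape, not a recommendation for your actual game model):

    // Anything that can be brought to life.
    type Animateable interface {
        Animate()
    }

    // Shared behavior lives in a plain struct...
    type LifeForce struct{ Awake bool }

    func (l *LifeForce) Animate() { l.Awake = true }

    // ...and gets embedded rather than inherited. Both types pick up
    // Animate() through the embedded field and so satisfy Animateable.
    type Food struct {
        LifeForce
        Calories int
    }

    type Animal struct {
        LifeForce
        Legs int
    }

    func animateAll(things []Animateable) {
        for _, t := range things {
            t.Animate()
        }
    }

Calling animateAll([]Animateable{&stew, &ox}) then works on both, and nothing stops you from embedding the same behavior into a dozen otherwise unrelated structs.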
I think inheritance is a mistake, but interfaces are the bee's knees. Even in C# (which is my main language these days, 'cuz that's what you use if you're making a game in Unity), I just use interfaces, not implementation inheritance.
(And even interfaces I don't use much -- but when you need it, you need it!)
Thanks very much for the reply. Here's more of a clarifying example:
The idea I was going for with Animateable was not animation (poor wording on my part), but effectively giving the properties of a creature to a thing, i.e. magically animating it. So I have an ability in-game that lets me turn anything into a creature, thus merging the Creature interface with whatever other interfaces the thing has. From what I'm getting from you, that means every single thing in the game must implement the Animateable interface.
Now let's do another one: I want a Metallic property. Say these are things that give off lightning damage when hit by lightning. Then there's a curse called Midas that lets me turn anything metallic.
So I have a Wooden Hammer object which I Midas-curse, mixing in Metallic; then a wizard animates it so that it has the properties of a creature, and now I have a creature that gives off sparks when hit by lightning and that I can also use to hammer nails.
Point being, in this world you're basically requiring every single thing to implement every single interface which sounds like a nightmare, where with more of an ECS style system, you just merge in the new behavior and call it a day.
So, caveat: I've never actually used an ECS system, but I've read about the pattern, and I would say it definitely seems better suited to the kind of situation you're describing than interfaces (or inheritance).
(I would say interfaces/inheritance are useful for simple high-level abstraction of basic concepts with multiple implementations (here's an interface for a compression algorithm, or an RNG, or whatever), not so useful for trying to model complex game-world relationships. And to me, the useful part is the interface, whereas inheritance mixes in two unrelated concepts: implementation-sharing and interface.)
But you don't need either of those for ECS, right?
If I understand it correctly, ECS at its core is basically an optimization of the struct-of-arrays datastructure. E.g., something like this is a super-basic ECS:
struct World {
    Comp1[] Comp1;
    bool[]  HasComp1;
    Comp2[] Comp2;
    bool[]  HasComp2;
    // ...
    int NumEntities; // all arrays have this length
}
I'm a big fan in general of struct-of-arrays vs collection-disparate-structs-with-pointers. Besides being way faster, it tends to make code clearer, too. ECS seems like a more efficient and convenient evolution of that pattern.
At least from what I've read. No real experience. Take with 13 grains of salt. :-)
P.S. If you feel like it, I'd be happy to explore some concrete code (you first!). It's always very interesting to me to try to explore all possible ways to express some concept in code.
Go uses structural ("duck") typing. Interfaces, such as Food or Animal, declare sets of methods that must be implemented in order to qualify as that type. So a "Food" thing might need to implement "BeIngested()" and "Spoil()", and an Animal "Run()" and "Die()". If the type "Chicken" has methods "BeIngested()", "Spoil()", "Run()", and "Die()", then a Chicken can be used anywhere a Food or Animal is wanted.
edit: You cannot, however, explicitly declare that a "Chicken" is supposed to be a Food and/or an Animal. And you can't directly inherit implementations; to make Chicken reuse a generic implementation you need to do something like write a "baseSpoil(Food)" function and have Chicken's "Spoil()" method explicitly call it.
As mentioned in another reply, there is a shortcut. You can declare that the Chicken struct includes (anonymous) embedded Food and Animal fields. Then, if you call "myChicken.Spoil()", Go will automatically forward it to "myChicken.Food.Spoil()".
I am not a Go programmer but I would assume that each behaviour of Animal would be a struct with one or more methods. Those behaviours are added to the agent to make it an animal. Composition. If you want your Food item to walk like a duck and quack like a duck you just add Walk and Quack to it. Presumably some other system would take care of identifying those behaviours and calling them as needed.
Yeah that's the one downside of not doing OOP. OOP is the only way to do GUI stuff.
Nobody on the face of the earth has ever used Functional Reactive Programming. It's a made up concept. In fact it's also definitely not one of the concepts that inspired the most popular pattern in React.
I am unaware of a single functional reactive UI library or framework that does not rely on other people's, most often object-oriented (or even *gasp* procedural) libraries to actually put anything on the screen.
Of course it's relatively easy to build shiny clean wrappers around the dirty work that others have had to do on your behalf.
Actually there is NuclearJS, a Flux (React) implementation in JS designed around immutability, FP and reactive programming. As with anything pure FP, it's ridiculously difficult/awkward to get started with, but it really shines in some areas: namely complex UIs, state transitions, and interrelated data "objects". The first two can be solved with proper OOP (it's obviously not nice, but it solves the problem), but I've never seen a framework iron out the nastiness of relational data so well. The idea is to combine multiple data objects using lambdas. It's incredibly elegant, and I suppose the general concept is the "functional lens".
Unfortunately it's discontinued, probably rightly so, because it's definitely not for teamwork. But really, it doesn't depend on anything OOP in the strict sense.
You're right you are unaware. This is not an insult and I'm not trying to offend you. This is a factual statement. You truly are unaware as you stated yourself, and I can prove it to you.
First off FRP is actually a term. Functional Reactive Programming. It is 100% a functional paradigm that gasp by DEFINITION cannot be procedural. So you truly didn't know what you were talking about here: https://en.wikipedia.org/wiki/Functional_reactive_programmin...
Second there are many languages/frameworks that use this paradigm. React partially uses this paradigm. But I will list two popular ones that strictly follow it... just note that there are more. Much more.
Take a look at Elm. Elm is a fully functional UI library and language that is 100% pure, functional, and has no OOP. Believe it or not, Elm is not some toy; it is production-ready and actually measurably faster than React. This language is what popularized the FRP pattern that is partially used by React today. See here: https://elm-lang.org/examples/mario
Note the load time and latency on the mario game. React doing the same thing will be slower.
Another language is ReasonML. ReasonML essentially compiles to JavaScript for the browser and has a one-to-one correspondence with another functional language: OCaml.
ReasonML was created by the creator of React, Jordan Walke. Coupled with React as a framework, ReasonML is essentially the ideal GUI paradigm that Jordan would recommend everyone use in an ideal world. However, because everyone is used to JavaScript, Jordan instead, as a first step, ported FRP concepts over to JSX (essentially JavaScript mixed with HTML) and is slowly nudging the world in the direction of GUI programming in the functional style: https://reasonml.github.io/
Make no mistake, the creator of the most popular GUI framework in the world is a supporter of the pure functional paradigm, and he is heavily and successfully pushing functional-style GUI programming into the mainstream.
OOP being the dominant paradigm for GUIs has, in the past five years, become a false statement.
You're right on the wrapper part though. Assembly language is essentially a procedural language so every functional thing that exists on top of it, is a wrapper. But I mean this is a pointless observation.
You mention React and Elm, which - as I said previously - both rely on other people's actual GUI work (the various browsers' rendering engines + DOM, the various platform-specific UI SDKs, etc) to actually put anything on the screen.
I can't speak to ReasonML, as I have not used it, but my impression also is that it takes advantage of other people's GUI libraries considering that I've heard Revery compared to Flutter (and Flutter outsources its rendering to Skia as well as relies on rather imperative RenderObjects beneath the functional reactive layers). Please correct me if I'm wrong.
I would be very interested indeed to see a GUI framework built from the actual ground level up in an entirely functional reactive manner. Do you know of any?
React is a remarkably well-focused library that can work just fine without DOM, itself having no relationship with or dependency on DOM or the browser whatsoever.
If you want to render into DOM, you’ll need to use another library, ReactDOM (notably not a dependency of React).
However, rendering to DOM is not the only way to use React. Ink[0], for example, allows you to create command-line program interfaces out of React functional components and JSX. It depends on React, but not ReactDOM.
Whether React internals rely on OOP and to what extent I don’t know, but I can attest that after embracing functional components in a somewhat complex app I’m working on I haven’t written a single class.
You don’t have to use map() if you don’t like it; and you’re free to pick a functional implementation from Ramda or somewhere else and use that instead, React really couldn’t care less.
Could you try writing a few sentences instead of three words to make a proper counterpoint? My point was that React core can be used (and sometimes forces you to use itself) in a functional manner without DOM or the browser, counter to what you have claimed; the ball’s in your court.
This is something I've struggled with in the past as well.
I have a player, NPCs, enemies, and wild animals. Should they have any class hierarchy at all? If my code matches the game world, what happens when I want to add golems and let mages turn castles into entities that act just like an NPC would? Should my class design be so tightly coupled with the elements of my world? I don't think so, but what alternative works best?
Can you create a base class for all living things and derive from there? The base class owns a bunch of common components such as HP, Animation, Sound, etc, and each subclass has components that are specific for itself (for example, you cannot "Use" a player, but should be able to "Use" a wild animal, so here goes a component for that action).
That's very interesting, because I had observed exactly the same thing when I tried to implement a rogue-like in Java some years ago. For example, I had to decide whether there should be different subclasses for spell books, the different weapon types (e.g. bows vs swords), drinks, etc., or just one big Item class. Closely related to that, another decision I had to make was whether object or character properties should be implemented as class members or as entries in a hashmap (where the hashmap is a class member). In the end, I had the feeling that I was implementing my own class/object system on top of Java's. I guess in other languages, like Lisp, this is not really an issue.
IIRC I already wrote that here on HN, but I think implementing a simple rogue-like is an excellent exercise to get familiar with any programming language.
Definitely components are the way to go. These have gotten very well known recently in the game dev world thanks to entity component systems, which is what I would call "heavy components," but you can also do "light components." Meaning, you don't need to organize your entire system around an ECS in order to take advantage of some of their benefits.
They make changing behavior dynamically extremely simple. Instead of needing to hardcode classes for each different type of thing players may want to create in your world, you just assign and unassign components. Give a rock the Moveable component and now it moves. Remove the PlayerControl component from the player and put it on, I don't know, an orc -- now you've implemented body swapping. They're even more useful in a game like traditional roguelikes, where you don't have to worry about animating all these dynamic states.
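A bare-bones sketch of what "light components" can look like (made-up, engine-agnostic Go; not any particular framework's API): the entity is just a bag of components, and behavior changes by adding or removing entries.

    // Made-up sketch, not any particular engine's API.
    type Entity struct {
        Components map[string]any
    }

    func (e *Entity) Add(name string, c any) { e.Components[name] = c }
    func (e *Entity) Remove(name string)     { delete(e.Components, name) }
    func (e *Entity) Has(name string) bool   { _, ok := e.Components[name]; return ok }

    // A rock starts moving the moment it gains the component...
    func makeMoveable(rock *Entity) { rock.Add("Moveable", struct{}{}) }

    // ...and "body swapping" is just moving PlayerControl to another entity.
    func swapBodies(player, orc *Entity) {
        ctrl := player.Components["PlayerControl"]
        player.Remove("PlayerControl")
        orc.Add("PlayerControl", ctrl)
    }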
I've actually never thought about how a component system might fit into a more traditional business application; it's an interesting thought experiment but I'm not quite sure it would be a strong benefit.
Mattias Johansson uses basically this example in his argument for composition over inheritance. You have humans, robots, and dogs all fitting into your nice class hierarchy. But now you need a robot dog that breaks everything.
I think inheritance makes the most sense when your problem domain has been known for decades - like airline reservations. But for new blue sky projects - you always seem to wind up in these snafus that break your rigid class hierarchy.
is a good read on that... or rather, it's a spot to start thinking from.
It ends with:
> A nun of the temple whispered to Jinyu: “This problem has several solutions, but I dislike all of them.”
> “Therein lies its value,” whispered the abbess in reply. “For we are all of us doomed in this profession: our designs may aspire to celestial purity, yet all requirements are born in the muck of a pig-sty. I trust that this monk can succeed when the stars align in his favor, but when they do not, how will he choose to fail? By cowardly surrender? By costly victory? By erroneous compromise? For it is not he alone but the temple that must bear the consequences.”
There are also a set of "topics" hidden with a mouseover at the bottom that can waste a day away without difficulty.
> You have humans, robots, and dogs all fitting into your nice class hierarchy. But now you need a robot dog that breaks everything.
I like the way you put that. Though humans and dogs are 85% similar at a DNA level, so it makes sense for them to inherit from a common parent. And my guess is robots and robot dogs would be ~85% similar at a building-block level, so the robot dog would inherit from robot.
I think: go easy on levels of inheritance, but have really strong root classes that most things inherit from.
Heh, I actually had that wrong. I probably should have watched the video instead of going by my recollection. He has a mix of robots and animals. Same concept.
In a multiple-inheritance world, yes. But I think most C++ devs decided that led to a big mess in large systems. Supposedly one of the best things about Java was no multiple inheritance.
> We no longer have the problem of trying to fit “a wizard can only use a staff or dagger” into the type system of the C# language. We have no reason to believe that the C# type system was designed to have sufficient generality to encode the rules of Dungeons & Dragons, so why are we even trying?
Good point! I skimmed the blog posts; they seemed to have a useful enumeration of techniques with considerations.
Hickey nails this one in his talks. When you make something a class, that's NOT an abstraction, that's a concretion. You make a Tool class, you haven't abstracted what a tool is, you've made a fixed decision about what it is. For games that want this level of complex interaction between components, entity component systems are the way to go.
I would argue that if you have an "Item" class then it is the set of all concretions/specializations.
By creating a Tool class you haven't extended the set, you've instantiated a subset of the set of items: the items that are tools, but nothing else. It's obvious why it is so difficult to represent a tool that is also, e.g., a consumable item. The set of items that are both tools and consumables is a subset of both the set of tools and the set of consumables. You can't represent it through a second layer of inheritance under Tool alone, because that only lets you create a subset of the items that are tools; it says nothing about them also being consumables. You have to use multiple inheritance for something like this, because it lets you form the set of items that are both tools and consumables. Of course, multiple inheritance is very messy, so you should try to avoid it.
> For games that want this level of complex interaction between components, entity component systems are the way to go.
Entity component systems are still in a half-baked state, with everyone rolling their own slightly different conceptual model. They're solving a problem, but not solving it well.
One promising direction is getting rid of entities. Components need to link to entities and other components anyway, so one may as well treat entities as zero-size components. Old writeup (not mine): https://github.com/kvark/froggy/wiki/Component-Graph-System
> One promising direction is getting rid of entities.
>
> Components need to link to entities and other components anyway, so one may as well treat entities as zero-size components
You are essentially describing ECS here. In ECS, entities are nothing but a "handle" without any additional data, normally just an integer used to link common components together.
What the OP linked actually describes a different issue than what is in your post.
What I got from the article, instead, is the following:
Entities can either be IDs to which components are connected, or containers holding components. Either case has disadvantages.
If entities hold components, then iterating over components means iterating over entities, which slows down all systems; a system for a rare type of component still degrades as you add entities.
If instead components link to entities, and you want multiple components to interact, then these interactions slow down the system because of the multiple lookups that need to be done.
The author proposes a graph structure between components, getting rid of the entity-component duality, and proposes using pointers, which allows calling into any component and then processing all the required components.
I am not entirely sure this would solve all problems, but that seems to be the proposal. It also seems that setting up the graph dynamically would be quite a task; for example, I don't immediately see how to guard against cyclic behavior, too many calls into a single component, etc.
> The author proposes a graph structure between components, getting rid of the entity-component duality, and proposes using pointers, which allows calling into any component and then processing all the required components.
This is far from ideal, and is equivalent to a very naive implementation of ECS. Having the user deal with pointers is not ergonomic, and the random access of secondary components via those pointers destroys the caching benefits of data-oriented design. Having a memory pool doesn't automatically give optimal cache usage unless you have very little data.
Most modern ECS frameworks solve the issues you mention by allowing systems that "query" more than one component at a time, not requiring a lookup. For example:
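(A made-up sketch of the shape, not any specific library's API; a real ECS keeps the joined components in contiguous arrays per archetype rather than in maps, the maps below are only there to keep the snippet self-contained:)

    type Entity uint32

    type Position struct{ X, Y float64 }
    type Velocity struct{ DX, DY float64 }

    type World struct {
        Positions  map[Entity]*Position
        Velocities map[Entity]*Velocity
    }

    // QueryMove hands the system only entities that have BOTH components,
    // already paired up, so the system body itself does no lookups.
    func (w *World) QueryMove(f func(Entity, *Position, *Velocity)) {
        for e, pos := range w.Positions {
            if vel, ok := w.Velocities[e]; ok {
                f(e, pos, vel)
            }
        }
    }

    // The "system" is just a function over the query.
    func movementSystem(w *World) {
        w.QueryMove(func(_ Entity, pos *Position, vel *Velocity) {
            pos.X += vel.DX
            pos.Y += vel.DY
        })
    }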
This is supported by lots of mainstream ECS libraries already. It doesn't require lookups inside the System's hot loop, doesn't require random memory access via pointers, and also doesn't require managing/cleaning up those pointers.
The nice thing about keeping entities as opaque handles is that a "system iterator" is able to abstract away the issues mentioned, and the implementation becomes a mere detail. An ECS library could even use the implementation the GitHub wiki link suggested, since ECS is currently abstract enough to allow it, precisely because of the opaque handles.
Good to know. Just to be clear, I was not endorsing the article. As mentioned, I have doubts about using a graph structure of pointers for related reasons, most of all because I think handling relational paths will amount to more overhead than any ECS structure.
Nevertheless, I wanted to highlight that the quoted article clearly states that the proposal is not an ECS and is in fact superior to it.
> Good to know. Just to be clear, I was not endorsing the article. As mentioned, I have doubts about using a graph structure of pointers for related reasons, most of all because I think handling relational paths will amount to more overhead than any ECS structure.
I think this is more of a language-paradigm problem. Game programming is very much tied up in C++ and C# for historical and performance reasons; ECS is a way to try and graft what is entirely natural in a more value-based language onto OOP mechanics and the C++/C# type systems.
ECS is so common and easy in lisps that it doesn't even warrant its own name.
The whole talk is worth watching. It's basically about the higher level problems of business app development, how to address them, and how Clojure does address them.
Prefer delegation over inheritance was one of the first OOP lessons I learnt. I read it in a C++ blog around 2004; it was explained very well with geometric shape examples.
The only time I used inheritance was while implementing a job execution framework. It fit the pattern nicely.
this is a pretty good series of posts but I was rather surprised at where the series ended, with more (hypothetical) OOP abstraction instead of less. trying to fit game rules into a language's type system is a common thing for novice programmers to attempt, because it seems like the perfect logical application of these type system rules you just learned about when learning the language you're using. I kept expecting the article series to get around to reducing the system down to something like a Player class with a PlayerClass enum and member (and ditto for Weapon), then branching logic based on that, instead of trying to pack it into the type system.
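For what it's worth, the alternative described above might look something like this (a hypothetical Go sketch with invented names; the wizard-and-weapon rule is the one quoted elsewhere in the thread):

    // Hypothetical sketch of "enum member plus branching" instead of subclassing.
    type PlayerClass int

    const (
        Warrior PlayerClass = iota
        Wizard
    )

    type WeaponKind int

    const (
        Sword WeaponKind = iota
        Staff
        Dagger
    )

    type Player struct {
        Class  PlayerClass
        Weapon WeaponKind
    }

    // "A wizard can only use a staff or dagger" is ordinary game-rule logic,
    // not something the type system has to be contorted into expressing.
    func canWield(c PlayerClass, w WeaponKind) bool {
        if c == Wizard {
            return w == Staff || w == Dagger
        }
        return true
    }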
It's the fragility of large inheritance hierarchies. They work well for very rigidly defined structures, but not so well in most real-world usages.
I came to the comments to say exactly this. I think like any ideological stance, it's extreme and there are exceptions to the rule, but for games it's great for a lot of reasons.
Tarn was wrong about this though:
> Using a single object with different allocated subobjects is almost certainly worse for cache misses
It depends a little on the context, but having object pools is exactly the right thing to do when you're trying to scale. It allows you to group together similar data, reducing cache misses when batch processing. You can even cache-align your primitives, or even run SIMD and multi-core jobs to batch update an entire set of attributes that belong to multiple objects.
Thanks for that great series! It reminded me a lot of this demo for O'Doyle Rules, a Clojure rules engine library, wherein the author demos a dungeon crawler style video game written top to bottom using only the rules engine for logic https://youtu.be/XONRaJJAhpA?t=1577
He also goes on to show a text editor written entirely in the rules engine (which he uses to develop the game in), really cool stuff!
The Baader-Meinhof phenomenon has hit me hard. I read an excellent blog post yesterday on this exact subject. Well worth a read, even if Rust isn't your thing:
In the LPMud LPC language objects could inherit from multiple other objects, so they could combine behaviors. I haven’t really seen that in other languages.
Sorry, but a lot of the complaints from the article and what you linked are... well, weird. Look, I'm no grand ninja guru wizard programmer, but after a decade of programming on and off as a job... wtf are you all smoking? There's nothing to preach except to check your hubris. A majority of problems stem not from OOP or whatever language is being used; they stem from over-abstracting. This is mostly due to trying to pre-build for a scale that will 99.8% never happen, or to account for some wild potential esoteric function in the ether that'll never happen either. There's some weird dick-measuring contest out there on the internet that I wasn't invited to, where everyone is trying to out-over-complicate each other. They never stopped to properly learn any real design patterns, so their classes end up all over the place. "It's OOP's fault!" And hell, sometimes you used a hammer when a screwdriver was more appropriate. No big deal, we all make mistakes in lines of design logic. It ain't OOP's fault you made an oops.
Exactly! Apart from the "mental model" and the (superior?) organisation of your "objects, erm, entities", there is a definite CPU-execution advantage as well. Having the different systems execute over items that are basically just data helps keep the data local and in the cache.
IMO that's one of the best ways a single programmer can spend his career. No weird requirements, no deadlines, no nothing, nada. Just one's passion and a product. Whether it is successful is irrelevant.
Kudos to Mr. Adams for the achievement, and for making a piece of gaming history.
Going back to the interview, I found this line (and the logic attached) interesting:
>Making the item system polymorphic was ultimately a mistake, but that was a big one.
>When you declare a class that’s a kind of item, it locks you into that structure much more tightly than if you just have member elements.
I guess when the game becomes moderately complex, ECS or something similar suddenly makes a lot of sense.
I have a passion for programming but I need something to drive me. I'm useless when I try and make stuff on my own. But if I've got someone telling me "I need a system that does X", I just get highly motivated in delivering it. Like I need requirements.
Damn, you sound exactly like me... when I work on my own projects they usually die off quickly once I figure out how to do it (without actually implementing it, just being pretty sure it can be implemented in this or that way).
But if it comes from a friend, or a colleague then I'm super focused on it until it's done.
It's almost as if I do projects to show off to other people or I like to serve other people.
Part of the issue is that it is difficult to come up with problems worth solving.
I think Joel and Fog Creek are a great example. He started out with the premise that you don't need an idea for a successful software company; you just need to find great devs and create the best working conditions, and then success would come. This was a novel idea at the time, since Apple was the only FAANG that even existed and this was their "beleaguered" period. As should come as no surprise to anybody, they ended up creating... bug tracking tools and a kanban board, albeit quite good ones.
Another thing that helps: since the problem is not your own but somebody else's, you don't get bogged down trying to figure out the best way to solve it for yourself; you just solve it for somebody else, without all the emotional attachment to the problem. This lets you instead be emotionally attached to the process.
Point being, I think you're selling yourself short.
Neither. It's a sign of time spent navigating an information gradient. Nothing more, nothing less. Encyclopedic knowledge of obscure topics isn't good or bad, just a data point as to what is capable of holding your interest.
Yes if you do it for yourself you rarely thank yourself for doing it. In a way that would be absurd, because giving thanks to someone means you are giving something to someone. But you can't really give anything to yourself because whatever you give you already have.
So it is quite natural that we prefer to do things for others. It makes you feel great when others thank you for it.
Same here. I'm motivated by being on a team that I don't want to let down, and getting a pat on the back when I deliver something great.
I think it's because the early stages of a new project, where you're doing a ton of googling and head-scratching, are kind of painful for me. I need that team or client or business owner counting on me in order to push through that part.
Then I can roll on my own when I get to the fun stuff (for me) - creating features with well-understood tools, and refining and organizing code.
Yeah, I'm the same. I got into programming because I like solving people's problems. The hardest projects for me are the ones where I'm stuck in a cave for months working on a greenfield project that no one is using yet.
I feel similarly, but I think working on a product that is used like this game is might be enough for folks like us. There are probably lots of features requests and bugs to focus motivation and the human usage driving emotional meaning of the work.
It can happen in the jobs with weird requirements and deadlines though.
I do think there is a huge hazard in working for oneself, though: not getting paid, and possibly literally starving. The guy who made HolyC comes to mind, but he also had mental health issues.
Everything about it is wonderful (I really mean that) except the fact that he's sponging off his mother to do it. I can't help but feel that it's a drop of vinegar that spoils the whole cup of milk
Why would this spoil the milk? Why is it called "sponging"? The middle class safety net (something I grew up without) is one of the greatest sources of freedom and investment generally available. Multi generational homes and properties are historically pretty normal and a net family positive for holding onto wealth.
Edit: I was wrong about how much he is relying on his family - didn't realize how much Patreon support the project now has compared to when I last looked. My bad, should have checked. I still agree with the below in the abstract but I retract my criticism in this case
I can get with that take when it comes to Patreon but it's harder to demonstrate ok-ness-with-it with family. Let's say she's really not all that jazzed about it, actually. What is she gonna do, throw her son out on the street? A lot of parents don't have it in them to do that even if they think they would be justified and it would be the best thing for their child. An adult shouldn't put their parents in that position. He should be making sure his mom is provided for, not the other way around
Dwarf Fortress consumed hundreds of hours of my life in high school, I have so many fond memories of it. Every year or so I come back to it and I'm always surprised that they've managed to add another mechanic or feature that just makes the game feel even more like its own little universe. After enough time in the game there really is a moment like that scene in the Matrix - "I don't even see the ASCII anymore. All I see is dwarf, plump helmet, magnetite ore."
That said, I've always wondered if Dwarf Fortress would be a smoother experience if it had more developers or was just open source (understandable that it's not, though, since it's basically Tarn's passion project). The biggest headache was always the lack of multithreading, since your fortress really starts to chug once you pass maybe 150 dwarves or do anything exciting with fluids. Regardless, it's amazing what one developer with a great idea and an enthusiastic community has been able to do with the game.
> I don't even see the ASCII anymore. All I see is dwarf, plump helmet, magnetite ore
It's even more than that - I can spend ridiculous time in legends mode just reading facts and events going from one personality to another trying to "feel" the world. It's like from these pieces of trivial information a bigger picture emerges, partially consisting of the facts and partially of random connections my brain made. It's an amazing experience.
Even if it were open source I doubt there would be enough impetus to implement multithreading.
It would literally be easier to completely make a new game from scratch with async and threading designs taken into account, instead of trying to adapt an existing monolith.
Async and multithreading are complex and introduce many subtle bugs. It's not so easy to just move to that from a single-thread event loop.
I would hope there would be low-hanging fruit. Pathfinding, for example, causes huge, huge issues late in the game, same with the fluid calculations. Of course we have no idea how Tarn implemented these things, but I would have to guess that if you need to do pathing for 50 entities in a single frame, you could probably parallelize them?!
It happened to me with Nethack/Slashem and interactive fiction games.
Once you get absorbed at night by imagining your surroundings while reading the game's actions, the game feels scarier and more "real" than any current 3D adventure game.
I've been working on a project for a long time. It's not even yielding income, though I hope it will. A friend referred to it as a passion project. That hurt for some reason. I can't explain why.
Yeah I agree with this, I would feel hurt too. For some reason the word "passion" has this hidden connotation for me which makes me feel like the project isn't serious, isn't making money, isn't popular, and is just eating up all my time as it is something I constantly obsess over.
In the case of Dwarf Fortress, I would say for him it is "the sole project he's been working on for 20 years now, which takes up $large_number of hours per week, and which has been sustaining his life now thanks to a fan base which have proven themselves reliable contributors of financial support. And he really enjoys writing the code too"
> For some reason the word "passion" has this hidden connotation for me which makes me feel like the project isn't serious, isn't making money, isn't popular, and is just eating up all my time as it is something I constantly obsess over.
This is spot on. If only you would articulate my feelings for me all the time, that would be great!
People do their best work when they care about it. I'd measure success in quality of work rather than monetization. I'll never be impressed by someone with a boat but the person meticulously building what they love will get a beer from me.
Because, increasingly, doing your job is a mind-numbing, passionless activity? Most people retire as soon as they have an opportunity, and if they don't it's because they need more money. Imagine a world where people often take pride in their jobs and keep working past retirement age because they enjoy it. Is that fantasy or science-fiction?
I think you're interpreting his meaning in saying that incorrectly.
You're not an employee for somebody else (correct? It's your own project) - it's not a salaried or hourly position where somebody else is currently signing your paycheck. You obviously don't hate it (I hope? If so, not sure why you wouldn't have canned it yet)
Thus, this is what most people would refer to as a passion project. You may have a different meaning of that term in your own mind, but I think, on average, most people would refer to what you're doing as such, and not to be malicious in doing so. I wouldn't let it hurt you, I'm sure that's likely not a friend's intention.
Yeah for sure, he didn't mean anything by it. I try not to be super sensitive, that makes interacting with people impossible. But sometimes it's hard :).
I have a similar sensitivity around my primary vocation: I'm an artist with a day job, like many others, but since my day job is programming and I take it seriously, a lot of people think of my art as a "hobby." Which it very much is not.
FWIW I have found that three things help me deal with this sensitivity:
1) Take it as motivation to be more outwardly "professional" about my art. Improve the website, be more active on social media, try harder to exhibit, even (ugh) sell things.
2) Remember that most people are just saying that out of ignorance: they don't have a mental model for anything outside of "work for the Man" or "mess around in free time."
3) Have examples ready if someone needs an explanation, and also as inspiration for yourself. For instance: was William Carlos Williams[0] a "hobby" poet?
Yeah, I just found out this year that William Carlos Williams was also, by the way, Chief of Pediatrics at Passaic General Hospital. What if people referred to his poetry as a hobby?
I suspect that Thibault is making money from it because he is actually deploying a web service, and the value there is coming from the network effects of its multiplayer user base.
Even if I deployed his code tonight, there's no guarantee users would switch over to my LeeChess clone :)
In the case of Dwarf Fortress it's just "download my exe and please donate if you like it!"
Your comment is true in a literal sense, but there are other games with very similar mind-boggling complexity that exist and thrive thanks to being open source. SS13 and CDDA jump to mind.
I think those take a different kind of person as a leader - the “Tarn + bro” person would need to be better at delegating and management of others vs implementation of his design himself.
Often if you have an idea but aren’t doing the low level implementation it’s harder to understand why something should change vs doing it yourself - in the latter case you just change the implementation and the design at the same time.
I love the idea of Dwarf Fortress and I think the internet's purest mission is to disseminate works of passion such as this, not to sell me ads instead. That said, I can't get past the ASCII interface -- I'm a huge fan of IF games (which used to be called "text adventures" in the olden days) and I can deal with spartan UIs, but for real-time strategy/sandbox games, I absolutely need some sort of graphics. Tiles, at least. The same happens to me with Nethack, which fortunately does have graphical tilesets. I'm glad to read Toady One is working on such a UI!
Something I found insightful about TFA was this:
> Q: With your ~90 side projects, have you explored any other programming languages? If so, any favorites?
> A: Ha ha, nope! I’m more of a noodler over on the design side, rather than with the tech. I’m sure some things would really speed up the realization of my designs though, so I should probably at least learn some scripting and play around with threading more. People have even been kind enough to supply some libraries and things to help out there, but it’s just difficult to block side project time out for tech learning when my side project time is for relaxing.
This is interesting. I constantly feel the temptation to learn new tools, new languages, new stuff. I get sidetracked by the tech. But the key to successful games seems to be designing them and sticking to the work of making them work no matter the tech or language. If Toady had kept playing with programming languages and frameworks instead of sticking to his actual project -- creating a game -- maybe Dwarf Fortress wouldn't exist, or it wouldn't be as featureful.
> I love the idea of Dwarf Fortress and I think the internet's purest mission is to disseminate works of passion such as this, not to sell me ads instead. That said, I can't get past the ASCII interface -- I'm a huge fan of IF games (which used to be called "text adventures" in the olden days) and I can deal with spartan UIs, but for real-time strategy/sandbox games, I absolutely need some sort of graphics.
If you like the idea of DF but can't get past the interface, try Rimworld! Easily one of my favorite games of all time! It's like DF but with a Firefly theme and actual graphics.
I read everything about this game I can get my hands on. I don’t fully understand why I find dwarf fortress so intriguing. It’s such a pure passion project… that actually made it.
It's the programming equivalent of the people who turn their houses into model train worlds. People dabble in it, or make a few toys of their own, but it's rare to commit so hard.
I am the same. I also started to play a few times, but not knowing the mechanics and not having enough time/motivation to learn them in detail is frustrating. But there’s a nice alternative:
https://youtube.com/c/kruggsmash
Watching someone else play Dwarf Fortress can be surprisingly entertaining. Just start one of his series from the start.
Hah, this is like EVE Online for me. I love reading about the espionage and cloak and dagger and pure insanity, but other than a brief toe-dip...oh no I will not play it.
If DF is too daunting try RimWorld. It's based mostly on DF, but has graphics and is considerably more approachable. It's also heavily modded and with mods can get just as complicated as DF can.
I've put in probably over a thousand hours on it, and have played it over the years as new releases come out. I've gotten to the point where I usually hit FPS death (a fortress so large that it overloads the CPU) even on the harder starts and with dfhack to help.
The draw for me was the steep learning curve that rewards you with logical complexity when you finally understand it. The lore that your fortress generates, as well as the random stories, is just icing on the cake. It's definitely not for everyone though, since the UI still requires additional programs like Dwarf Therapist and DFHack to be manageable.
I love reading about it, but have played for maybe 30 minutes. The game doesn’t intrigue me, but the building of it does.
Same for Minecraft and similar. Figuring out the software and the cool way it came into existence is the problem to solve; actually playing it feels (perhaps incorrectly) like predictable detail, and so it gets boring.
It's worth the effort and likely easier now than before. The myriad ways that a fortress can die amaze me. It's easy to get a fort that can survive invaders, but ultimately your fortress will die. The number of times I have been nonplussed by a new fortress failure is amazing. Those crazy dwarves keep finding new ways to destroy themselves.
Your reply is somewhat self-contradictory. If games are interactive simulations by nature, why is DF so unusual? There is a trend to call many modern games "immersive sims" and somehow they don't produce such stories.
In my opinion game developers have mostly stopped trying to simulate things. The focus is on clothes, rigid crafting systems, skill trees, storyline, cutscenes, vehicles, item collection. It's sandboxes, not simulations. Does Witcher 3 have emergence? Does GTA?
Compare that to Bullfrog, a game company which made simulations almost exclusively. In the 90's "simulation" was a game genre.
> If games are interactive simulations by nature, why is DF so unusual?
It’s an honest attempt at making a thorough simulation. Most games attempt only shallow rules and simulations — nearly everything that happens in Witcher 3 is encoded precisely, with few knock-on effects (because there’s nothing underneath the immediate effect + visualization — what’s modeled is precisely what you see); leading to the lack of emergent behavior. You’re dealing with a fairly rudimentary and static system. GTA is more flexible, but ultimately nothing follows any particular logic that doesn’t revolve around the player. More notably, in both games, if the player doesn’t exist, the world can no longer reasonably operate.
Simulation is inherent to game design, but very few games actively work towards it, as you’ve seen.
The simulation games of the 90’s were in the right vein — they faltered for practical reasons. As a result, they often make for thorough simulations that are dishonest — the internal logic of the simulation is violated for reasons of hardware limitations, UX simplicity, fun, etc. Tornados happen because it’s fun. DF is honest in the sense that it is only compromised by Tarn’s inability to implement something (and practical impact — it’s not worth trying to model atoms when fluid dynamics will suffice).
I don’t mean that DF produces a “realistic” simulation (as a climate scientist might), but that he produces an uncompromisingly logically consistent one (as a fiction author might).
> There is a trend to call many modern games "immersive sims" and somehow they don't produce such stories.
For what it's worth, "immersive sim" is more a sub-genre of first-person action game (games with a heavy focus on world-building, player choice, and stealth, heavily inspired by the games Thief and Deus Ex) than a descriptor. It fit better when it was first introduced in the early aughts, when those games were notably more interactive than the more static worlds of fast-paced, action-first, twitchy shooters of the era (Quake, Half-Life, Unreal, etc).
I absolutely agree with this, but it's an unpopular opinion that draws fire. Actual ambitious simulations are rare which is why they stand out. Rain World is one of my favorites. Rimworld is ok but the creator openly admits he doesn't want to go as deep as DF.
So many projects in this game. I had one which involved trying to build a base, capture a dragon and start a breeding program. Which failed - so I ended up abandoning, starting a new base on top of a dragon lair, capturing said dragon, leaving the base, then starting an adventurer who carried the caged dragon in a mine cart across the map to the original base.
It did not work. The version didn’t allow for the new adventurer to become a resident of the original base.
Futzed around with dfhack to force residence permissions and so on, ended up breaking the game and changing the species or some other weird bug.
But I did learn that adventure mode mine carts are ridiculously fast. As in survival is highly unlikely fast. There was a lot of saving and reloads due to the difficulty of banking turns at speeds that outran cheetahs.
In case anyone is interested in trying out DF, I recommend using the "Lazy Newb Pack". It provides a nifty GUI that lets you use different texture packs and change some settings that can improve enjoyment (cap population, remove aquifers, etc.)
To me what was most surprising about Dwarf Fortress, given the complexity, is that Tarn didn't use git or any other code repository until fairly recently.
I mean, using SCM became the norm in maybe the last 10-15 years? When this was started, I don't think using SCM was as ubiquitous as it is today, not to mention on a solo project. If you've been hammering away since then, I can see how you might have missed it.
> I mean using SCM became the norm in maybe last 10-15 years
more like 30, at least in a lot of the industry. Even just looking at open-ish systems, people were excited about SVN in what, 2000?, because they had been working with CVS for years, if not a decade or more, at that point.
I've seen this post brought up as relevant regularly up till 2010.
You may have been lucky to avoid the dregs, but the SW industry in the 2000s was a copy-paste fest (not in the "copy from Stack Overflow" sense but as in "copy-paste instead of creating a function"): PHP with SQL in templates and string-concat queries all over the place, and MVC was a revolution. I'm not saying this was everywhere, just that it was very common.
I'm sure some wise guy will come up with a retort that MVC was used in the 70s and whatnot, but my point is that a lot of the industry discovered it with Rails and Django.
The way we teach programming these days, and modern languages/tools/frameworks, just make some bottom-tier mistakes from the past impossible (or very impractical). People often ignore this context when evaluating modern frameworks.
In the Windows world SCM was not generally available until SVN was released as a File Explorer extension. This was sometime in the 2000s. Git took several years to become viable on Windows.
I am absolutely stunned that people in this thread think source control only came around recently. CVS came out in 1990, RCS... 1982, SCCS... 1972. Visual SourceSafe was commonplace in the 1990s, as terrible as it was.
There may have been small teams that didn't use SC in the 1990s, but this has absolutely been a basic best practice for decades.
IIRC, Microsoft's TFS (Team Foundation Server) also came on the scene in the early 2000's. I first used SVN around that time, before TFS became the standard at Microsoft shops.
My first employer out of university kept their code in Visual Source Safe, Microsoft's first attempt at a VCS.
It was really, really bad.
My second employer used CVS. Tags were fun back then -- each file was versioned separately so committing multiple files while a tag was in progress might result in only some of your changes making it into the tag. My big innovation there was to add a validation step to our build machine to ensure that the tag matched the state of the branch once the tag had finished, and add the tag to the build version number. Presto: we can actually see which code was built into that day's full build :).
The first large project I contributed to was GIMP in the mid-1990s and that was under CVS. I think the oldest personal repositories I have are a little later under Subversion, so yeah, certainly my personal projects in the era where I first used CVS to work on GIMP did not have version control.
SCM was commonplace by the mid-1980s at the latest. My CS program used CVS to submit code in the mid-1990s, and when I worked as a network engineer we used RCS in 1990.
I could have misread, but going by a more recent interview, I think he has been using git within the last two years. The interview where he mentioned that he didn't use any source code repo was four or more years old.
Well, if you deeply understand the code and reasons behind it (or it's superbly documented esp. with tests), the tool does not bring much beyond being a convenient backup or checkpoint system.
And especially if you don't have to work with a team.
I feel that a free and open tool that allows you to travel through the 4th and 5th dimensions with your code is very useful, especially when everyone tends to forget what they've written several months ago. It's also a tool that forces you to document your changes.
Is it possible to climb a mountain without equipment? Sure, but it's a lot more dangerous and it'll take you longer to reach your goal.
You can create as many branches as you want and do all the experimentation you want.
When I stopped caring about committing and branching (something I’ve too long associated with pushing code, like reaching milestones) and started doing both as much as possible I really felt lots of freedom.
Instead of overthinking, I can create different approaches in parallel and see where they lead. It's amazing.
Don't underestimate all the useful information a complete git log contains, even for solo developers. You can look years back and find out exactly why a change was made.
I invite you to try using Git as much as possible for two weeks. After a week I started using Git for branching out of my main branch to test several different ideas and then compare them at once with other developers - colleagues or friends. I think that moved the meetings and discussions with my teammates up a level.
I have used Git when working with FOSS projects, and I tried using it for my own but it never took. The sort of workflow it is built for just isn't how I like to do things when I work on my own.
I do a lot of that in my solo project. I would add to your list "write a commit comment". It's like keeping a journal you can go back to. It gives me time to stop and think about what I have done and what needs to be done next.
I don't much read my git-commit-comments but I think it is useful to write them.
I don't usually branch, at least at this stage of the project. Probably yes when I go to production when there is a need to fix bugs in the released version(s) without having to put all the latest code into it.
Even without branching commits are a good way to save the latest known good state I can easily go back to if things didn't work out.
That's a lot of commandline I don't normally have to bother with. And if I'm not living in the commandline then I only end up running git when I reach some milestone I want to back up... which I can do just as easily with 7zip without installing anything I don't already have, and I can copy archives to my NAS without having to set up some remote repo.
You can try and convince me until you're blue in the face dude, but I've tried it and I just don't like it. End of story.
When I'm hacking away on a personal project a huge amount of value in git (probably moreso than any other feature) just comes from having "what's in the repo" and "what I've changed today" (uncommitted changes) visible as diffs.
The exact same thing happened to me, and I kicked myself hard for it. At the time I used source control at work (TFS and SVN; git wasn't yet ubiquitous), so I didn't really have an excuse.
I've used source control for all personal projects since, and it's in no way a burden. In fact, there have been another couple of instances since the first, where I would have lost a bunch of work without it.
Or any other is the point. I mean even locally I depend on commits as save points to just allow me to go crazy and experiment with the code. I mean you can just copy the codebase elsewhere as backup but using git or mercurial is easier than that.
Not saying you're wrong, but to explain "how is that possible over 20 years without ever blowing things up": he says he has used Visual Studio (the full-blown IDE) since the beginning.
Since he's working alone, he can easily use its local backup feature, which allows him to easily roll back to automatically made copies of files, diff and cherry-pick between the current files and backups, etc. It's basically done automatically for you.
As long as you save your project regularly and don't need to merge code with anyone, it can go a long way.
No it isn't! It's so easy and effective for keeping track of your work and backups.
If you want the absolute simplest way to use git, just setup an alias to do:
alias gcmp="git add -A && git commit && git push"
Now all you need to do is run gcmp every time you're ready to log the new state of your codebase. How could it be simpler?
git only becomes more complex as your needs become more complex. At which point I'd recommend using something like https://magit.vc/ which makes complex git operations far easier. The CLI can only take you so far effectively unfortunately.
I can't imagine working on any personal project without git. How often have you been working on something for a while, noticed that part of it that was working before was now broken, and had no idea why? If you'd been making regular commits to git, you could go through recent/pending changes and trivially work out what was different.
Git is also incredibly simple on single person personal projects, just git init, git add, git commit and some way to view the history and pending changes (either the command line, or most editors have a built in viewer).
Change history, if nothing else. That alone brings a ton of benefit, from being able to view earlier versions, to recovering from mistakes or lost code.
If you have a remote repo then, you can also have offsite backups. If you never branch, it's still massively worth it.
Basic git-fu (init, add, clean, commit, pull, push, branch, merge, stash, log) goes a long way to make your life easier. You can leave the fancy stuff for later.
There are even git repos of 99% done .gitignore files for most needs [1], which is most of the setup work for a solo project.
While Git is not without its complexities, as a developer who must use it with teams anyway, I find there are very simple workflows with almost no cognitive load that I'm able to use for my personal projects. Not only does this give me snapshots of my projects from any point in time, but backups too, since I use a free GitLab account.
Yeah sure, but for someone whose workflow started 20 years ago without Git, it shouldn't be at all surprising that they kept not using Git for a long time.
There's really no reason to be surprised at all about this unless you're the kind of person who never did any programming before, like, 2010 and lack the imagination or knowledge to understand how programming could be done without it.
I can't disagree with that. We must all make wise decisions with endless competing "priorities" and limited time.
As someone who is on the other side and believes in the value of continuously growing one's skills, I want to convince people that Git has value in learning and using. More tools in our toolbelt makes us more effective craftsmen. Which the DF developer did get around to doing...
I agree with this. I've been using cvs as my source control system of choice as a solo developer on my own projects for years and haven't switched to anything newer. I use git at work, but it's not worth the hassle for me at home. The workflow with cvs for me is basically "cvs update" (in case I need to know what changed) and "cvs commit". I rarely use branches. I religiously make one small-ish fix per commit and at least compile it, though I always run smaller things.
If I switched to working with a team I'd definitely use git. But for just me, cvs is all I really need. The most complicated thing I do is probably tag a release or revert to a previous revision of something, which is pretty rare.
For lone developers in 1998 it's more likely they would just occasionally make a copy of the code somewhere. That's still more or less how I work on my own stuff.
DF is just like OpenTTD. Both are like chess: easy to start, fun to play casually, but taking years to master. Great games, complex if you want them to be, and a time sink if you don't keep an eye on it. I've had many hundreds of fun hours in both games.
I agree with the spirit of your post, but Dwarf Fortress and "easy to start" do not fit in the same sentence in my opinion. I mean, the game is legendary for its arcane user interface and the vast number of things that can go wrong even for experts.
Dwarf Fortress is something like vim, where usually on the first interaction people don't even know how to start the game, let alone do anything in it.
Chess is easy to start, the rules fit on a post it note basically. Dwarf Fortress is difficult to start, and even more difficult to master.
I love Dwarf Fortress and play it quite actively. I've been using Vim as my primary IDE for many years. That said, I wouldn't compare Dwarf Fortress to Vim.
Vim is mainly difficult to get into because the model itself is so different from what people are used to and there's inherent complexity. But, quirks aside, it's mostly very consistent and learning a few patterns will get you very far.
In Dwarf Fortress you need to fight not only incredible complexity, but also a whole lot of bad UI/UX and inconsistencies between what should have been identical actions. A great example is searching a list of items. This is done with q for query. Or sometimes s for search. And in some cases f for filter. Or even better, the way k, v, t, and q are all just needless variants of "look at thing under cursor". And none of them works for some activity zones such as pastures; for those you need i.
As much as I love DF I'm very much looking forward to the Steam version, mainly for the UI overhaul, which is so far looking really promising.
When I first started playing DF it took me all day reading the wiki...
Next week was spent on trying to survive the first winter...
As a 15-year vim user, though: vim has a much more gradual but also much higher learning curve; I'm still learning new tricks in vim on a weekly basis.
I've played a fair amount of Dwarf Fortress and I'd never describe it as 'easy to start'? The learning curve is notorious and for most people involves watching a lot of YouTube tutorials and copying actions.
I'm hopeful that the upcoming Kitfox Games version makes it very accessible to lots more people.
This seems to be the opposite of what people have described in the past. With the start being incredibly hard as you basically need to follow a wiki page step by step to work it out. But after a bit you can find a method that pretty much makes the game unlosable so you have to start implementing your own restrictions and artificial difficulties.
I would second this and highly recommend using the Starter Pack[0] previously known as the Lazy Newb Pack. It contains DFHack, tilesets and just really streamlines the user experience.
You can use dfhack to run the game, which provides some niceties [0]. There are graphical packs; many people like Phoebus. You can also use an external program like Dwarf Therapist for dwarf management, which becomes necessary once you have a lot of dwarves.
This is a little bit of a hijack but if I wanted to start coding games as a side project, where would I start?
Which platform? Mobile, PC, console? Any good introductions on the subject of solo game development? I know I can google this, but I trust HN users more than the Google algo.
I started doing game dev in DOS after reading Tricks of the Game Programming Gurus by Andre LaMothe in the 90s, so keep that in mind for the following advice.
Depends on what you want to do. If you want to code games because you find the programming aspect interesting, start small by writing your own versions of some simple games. I personally wouldn't recommend any frameworks or libraries other than (maybe) SDL. Implement everything in the simplest way you can think of that will actually work and only go back and refactor if you need to, that's the time to look up how other people have solved that problem [0]. Resist the urge to over engineer. I might be biased but I say target PC first. Windows specifically, but Linux isn't much worse as long as you never plan to deploy the thing. This is because these platforms are incredibly open and there is a lot of information and tooling available. After you get a feel for it, and have a good idea of what you want to make next, start incrementally branching out in directions that interest you.
If you have a good idea of the kind of game you want to make and want to start making it with as little friction as possible, then your best bet is to find an engine that is already well suited to that kind of game and learn just the things you need to in order to make it happen. Again, you'll want to start small regardless of what it is you actually want to make, just ensure that you're always moving toward that goal. That is very much not my path, so I have little other advice.
[0] If you look it up first without trying it yourself, you won't have a good understanding of the problem space. You'll end up believing in the commonly accepted answer as dogma and severely limit yourself.
My father got me that book when I was getting more seriously into game programming and it's an absolutely wonderful tome. It has a lot of very good points about not just programming but also game design and how to achieve interesting and seemingly intelligent NPC behavior with a few simple tricks.
My parents also got me that book as a teenager. I learned C from it: I knew Pascal, which we were taught at school, and once I realised that you just substituted BEGIN and END with { and }, it was easy to pick up the rest.
I really regret that I don't have it any more for the sentimental value it held.
I see archive.org has it in their library [0], though it is marked as "not for borrow".
I'm not much of a game programmer, but lurk /r/gamedev. The common advice is to start small - think toys rather than MMOs. From there, it's said one should follow what they're interested in, and focus on follow-thru, not tacking on features to their dream game that is supposed to compete with Skyrim. Console development is always going to be more trouble than the more open platforms of computers and phones. Myself, I've always thought it'd be pretty fun to make a menu-heavy game with just web technology, which then of course CAN be played anywhere.
There are lots of engines out there that can take care of things for you, or act as full-fledged studios, like Game Maker. Some prefer to start from scratch of course. Again, the idea should be to follow what you're interested in so that you can actually get something done.
The very first game you should make is a Pong/Breakout/Asteroids/Galaga/Frogger clone. It seems simple, but you need a surprising number of systems: graphics, audio, controls, collisions, game states, menus, scoring, user interface (see the sketch below).
The engine and language does not matter for this first game. The only thing that matters is completing one small game.
After that, you'll have the level of knowledge to make somewhat informed choices about project #2, where you can expand and innovate. You can use hobbyist engines like Godot and have more control, or more professional engines like Unity or Unreal if you want access to those tools and asset libraries.
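To make the "surprising number of systems" point concrete, here is a bare-bones, hypothetical sketch in C++ with SDL2 (SDL was suggested upthread). It covers only a window, keyboard input, a ball, and a paddle, and it already hints at how much a "simple" clone still needs (audio, menus, scoring, proper game states and timing):

    #include <SDL2/SDL.h>   // assumes the SDL2 development package is installed

    int main(int, char**) {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window*   win = SDL_CreateWindow("Not Quite Pong",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, 0);
        SDL_Renderer* ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

        SDL_Rect paddle{20, 200, 10, 80};
        SDL_Rect ball{320, 240, 10, 10};
        int vx = 3, vy = 2;
        bool running = true;

        while (running) {
            // input
            SDL_Event ev;
            while (SDL_PollEvent(&ev))
                if (ev.type == SDL_QUIT) running = false;
            const Uint8* keys = SDL_GetKeyboardState(nullptr);
            if (keys[SDL_SCANCODE_UP]   && paddle.y > 0)              paddle.y -= 5;
            if (keys[SDL_SCANCODE_DOWN] && paddle.y < 480 - paddle.h) paddle.y += 5;

            // "physics" and collisions
            ball.x += vx; ball.y += vy;
            if (ball.y < 0 || ball.y > 480 - ball.h) vy = -vy;      // top/bottom walls
            if (ball.x > 640 - ball.w)               vx = -vx;      // right wall
            if (SDL_HasIntersection(&ball, &paddle)) vx = -vx;      // paddle hit
            if (ball.x < 0) { ball.x = 320; ball.y = 240; }         // "scoring"

            // rendering
            SDL_SetRenderDrawColor(ren, 0, 0, 0, 255);
            SDL_RenderClear(ren);
            SDL_SetRenderDrawColor(ren, 255, 255, 255, 255);
            SDL_RenderFillRect(ren, &paddle);
            SDL_RenderFillRect(ren, &ball);
            SDL_RenderPresent(ren);
            SDL_Delay(16);   // crude ~60 fps; a real game wants proper frame timing
        }

        SDL_DestroyRenderer(ren);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }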
Breakout is a great candidate for a toy game. As is snake. I often make Breakout clones to familiarise myself with new systems.
If you're looking to get things done fast I think Unity is an excellent choice. After coming from years of working with custom engines and Frostbite, Unity feels like such a breeze.
As for r/gamedev I'm quite active there myself and while I like it and it's gotten a lot better recently it still has an issue where it's largely hobbyists and inexperienced indie developers. Their uninformed opinion will often drown out more qualified voices. So take information there with a grain of salt.
Why don't you start with an idea and self-awareness of your strengths?
You might be good at puzzles. Or stories. Maybe visuals ain't your thing and you can write a text-based game - there are great engines for that. Maybe you're a great dev and can start hacking on your own Dwarf Fortress and keep at it for the rest of your life.
Gaming and the tech behind it are so varied that whoever you are, you'll find something that plays to your skills.
I second Godot. You can make cross platform/mobile games with it. The IDE itself is built with the engine which is rather cool.
Fair warning: the syntax is Python-inspired but not compatible, which will trip you up most of the time.
My second suggestion is to make your first project a top-down 2D game (instead of a classical side-scroller), using itch.io assets. To get a nice jump feel in a side-scrolling 2D game, you pretty much have to start with a state machine right away and fiddle a bunch; there's no hand-holding with that.
I'd go with Unity, targeting desktop for simplicity. There's an abundance of high quality tutorials (a respectable amount made by Unity) and the learning curve is gentle, imho.
I think the first thing to figure out is: do you want to make games or game engines? Or is there a specific part of game development that you are particularly interested in? Back in the year 2000 I was very into Level Design. Back then the FPS genre just took off and I happened to stumble into a "Worldcraft.exe" in a folder in the CD of Half-Life.
I would suggest you start by just coding a text-based game. Yeah, you heard me right. Maybe a text adventure or whatever. A D&D game printing out damage as logs. FTL clone, with text descriptions only. Anything. You can create a surprisingly engaging game just with basic standard input and output - even multiplayer games (the MUDs were basically this). Print a basic ASCII map, and now you can do Nethack.
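As a toy illustration of the "D&D game printing out damage as logs" idea (every name and number here is made up), a first version really can be this small:

    #include <cstdlib>
    #include <ctime>
    #include <iostream>
    #include <string>

    // A bare-bones combat log: two combatants trade blows until one drops.
    struct Fighter { std::string name; int hp; int max_damage; };

    int main() {
        std::srand(static_cast<unsigned>(std::time(nullptr)));
        Fighter hero{"Urist", 30, 8};
        Fighter foe{"Goblin", 20, 6};

        while (hero.hp > 0 && foe.hp > 0) {
            int dmg = 1 + std::rand() % hero.max_damage;
            foe.hp -= dmg;
            std::cout << hero.name << " hits " << foe.name << " for " << dmg
                      << " damage (" << foe.hp << " hp left)\n";
            if (foe.hp <= 0) break;

            dmg = 1 + std::rand() % foe.max_damage;
            hero.hp -= dmg;
            std::cout << foe.name << " hits " << hero.name << " for " << dmg
                      << " damage (" << hero.hp << " hp left)\n";
        }
        std::cout << (hero.hp > 0 ? hero.name : foe.name) << " wins!\n";
    }

All the interesting work from here on is game logic (turn order, items, a map), none of it graphics.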
The reason for this is: we all want to make beautiful AAA games. But if you have no clue where to begin, it means that you need to develop your intuition for the game logic first – otherwise, you would probably know what to look for :)
If you start by downloading Unity or similar, now you'll be bogged down trying to learn all its systems (without a clear understanding of _what_ you need to learn and what you can ignore, for now). You'll also be bogged down by the need for assets. Sure, you will have a full blown 3D engine, but it's still incredibly boring when all you have is a bunch of cubes or premade assets, so you are right back to square one. Only with more complexity. A lot more - the more visually complex the game, the more code you will have to write that's only concerned about visuals. Getting a character to, say, swing an axe and make it look right and that it is actually hitting something involves a surprising amount of work. Yeah, you could use existing sample games nowadays, but that isn't really teaching you much.
Then it depends on how much background you have. If you were, say, a front-end developer with any experience, you could use that to add some basic visuals. Think Tetris. You can do a lot with very rudimentary tools, as long as everything is kept simple.
At some point, you might _need_ to display graphics (maybe that's the whole point of your game idea). I say "might", because Dwarf Fortress, which is the subject of this thread, never really did. In which case you have some more decisions to make. Is it a 2D game? Maybe use something like Pygame, Löve (for Lua), etc.
At some point you'd be looking into Godot, Unity or similar. And guess what, you could take your basic text-based game, and re-use parts of it as the brains for your game.
Don't get into the game-engine building rabbit hole. It's very fun if you are into that, but know you are unlikely to release anything by going that route. Ask me how I know.
Platform: start with whatever machine you use for development. Presumably you know a lot about it; don't start a side-quest :) Especially avoid consoles for now. Cross-platform development is getting easier than ever - sometimes all you need to do to start running your game on mobile is to click a dropdown. But that simplicity is deceiving; there's a lot you'll have to learn about other platforms. Stick with what you know until you are comfortable.
I'll let others provide reference material, my sources are outdated as I'm past my Gamedev.net days (for the time being).
Dwarf Fortress was a staple of my childhood, even if I only understood maybe 5% of the mechanics going on at any given time (even back when I used to play it). That was part of the appeal though: if you could learn how to do something in Dwarf Fortress, it felt like an accomplishment. Learning to dig better bases is a several-hour research project, and simple tasks like brewing beer can proliferate into any number of different problems. This kind of trial-and-error problem solving is probably responsible for getting me into development.
Nowadays though, I mostly play Rimworld for my colony management fix. I love Dwarf Fortress, but I could never comfortably learn its mechanics in a lifetime (let alone several). Even still, the emergent, chaotic gameplay of Dwarf Fortress should be picked apart by any budding game developers. Even after 15 years of playing PC games, Dwarf Fortress still feels the most "next gen" out of them all.
What's surprising to me is Dwarf Fortress author uses Visual Studio Community, rather than the paid version of that IDE:
> "In enterprise organizations (meaning those with >250 PCs or >$1 Million US Dollars in annual revenue), no use is permitted beyond the open source, academic research, and classroom learning environment scenarios described above."
Surely there has been years where his revenue has exceeded $1 million.
Sometimes the rule is to not editorialize, sometimes you get slapped for it, other times the hall monitors step in to change the title after the fact because they don't like the article's actual title. They want you to remove clickbait numbers so maybe that's what happened here.
I always wanted to love DF but I could never get into it. Perhaps it was the ASCII.
I found Rimworld to scratch the same itch and have sunk many hours into it. I feel like it is spiritually very similar, even if the depth of simulation is not as deep as DF.
It's amazing that he's been doing the same thing for this long. I think that's the definition of passion. I really want to find some projects/startups that make me feel like that as a software engineer. That's the dream; hope he keeps it up!
> When you declare a class that’s a kind of item, it locks you into that structure much more tightly than if you just have member elements. It’s nice to be able to use virtual functions and that kind of thing, but the tradeoffs are just too much.
In other words, OOP can be a poor design choice even when you do take advantage of its virtual method system.
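A toy contrast of the two approaches the quote alludes to; this is not how Dwarf Fortress is actually structured, just an illustration of why a class per kind of item locks the structure in while plain member data keeps it flexible:

    #include <iostream>
    #include <string>
    #include <vector>

    // Route 1: a class per kind of item. An item that is both a weapon and a
    // container, or that changes kind at runtime, now fights the type system.
    struct Item   { virtual ~Item() = default; virtual int value() const = 0; };
    struct Weapon : Item { int damage = 5;    int value() const override { return damage * 10; } };
    struct Barrel : Item { int capacity = 60; int value() const override { return capacity; } };

    // Route 2: one item struct whose members describe what it can do.
    // "Kind" is just data, so items can gain or lose capabilities freely.
    struct FlatItem {
        std::string name;
        int damage   = 0;   // > 0 means it works as a weapon
        int capacity = 0;   // > 0 means it can hold things
        int value() const { return damage * 10 + capacity; }
    };

    int main() {
        std::vector<FlatItem> items = {
            {"copper sword",  5,  0},
            {"oak barrel",    0, 60},
            {"spiked barrel", 3, 60},   // awkward to express in the class hierarchy
        };
        for (const FlatItem& it : items)
            std::cout << it.name << " is worth " << it.value() << "\n";
    }

Route 2 is roughly the "member elements" style the quote describes; pushed further, it turns into an entity-component design.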
I really can't agree. Rimworld might be more accessible, but it lacks DF's depth of simulation. Even if you ignore details in DF that are generally unimportant (e.g. all the intricate details of a dwarf's personality and the precise genetics of their facial hair), Rimworld's lack of 3D and fluid simulation alone make it vastly simpler.
But what really annoyed me about Rimworld, and why I never really got into it, was how "gamey" it feels.
One element is the mechanics of random events, which feel extremely arbitrary. I'm not talking about random animals coming onto the map, that's little different from a legendary beast turning up in DF, but things like a solar event causing all your batteries to blow up for no reason. And random events seem to happen every few minutes. Whereas in DF, pretty much everything that happens, happens for a reason. There's randomness, but it's not completely random. If your fortress is wealthy, it will attract invasions. Forgotten beasts actually exist on the world map and have a history. That sort of thing.
In addition to that, crafting and combat are extremely simplistic compared to DF. In DF, bootstrapping an economy which can actually craft everything you need from scratch is actually fairly difficult and requires significant planning and investment. Many things require multiple stages of processing, and it can be very hard to find the particular raw material you need. In Rimworld, you pick up a few raw materials and can manufacture an assault rifle in a couple of minutes. And that assault rifle doesn't need ammunition, and is hopelessly inaccurate beyond 10m or so, for some reason. Compare that to DF where when a dwarf shoots a crossbow it actually does a 3D simulation of the bolt's trajectory. And in the background it's also simulating things like the bolt's temperature, and when it hits the target, it determines what happens based on the density and strength of the bolt's material and the material of whatever it hit.
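For flavor, here is a completely made-up toy version of that kind of material-driven check; the formula and the numbers are invented for illustration and are not DF's actual rules:

    #include <iostream>
    #include <string>

    struct Material { std::string name; double density; double strength; };

    // Toy rule: a projectile gets through if its density times speed beats the
    // target material's strength. The real game factors in far more than this.
    bool penetrates(const Material& bolt, const Material& target, double speed) {
        return bolt.density * speed > target.strength;
    }

    int main() {
        Material copper {"copper",  8.9, 120};
        Material steel  {"steel",   7.9, 250};
        Material leather{"leather", 0.9,  25};
        std::cout << std::boolalpha
                  << penetrates(copper, leather, 10.0) << "\n"   // true
                  << penetrates(copper, steel,   10.0) << "\n";  // false
    }

The point of the comparison stands either way: the outcome falls out of material properties rather than a hand-written per-weapon rule.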
Yeah, I've gone through phases of almost loving RimWorld, but it really lacks the depth and cohesion of Dwarf Fortress' simulation.
There's a sense of something missing, and RimWorld tries to fill that void with its random event generator, the "storyteller system". A better game would at least give you ways to interact with them - to predict solar flares or prevent electrical shorts - and indeed there are mods which help with this. But it just underlines the shallowness of the simulation, how these events are truly random and not connected to the game world in any meaningful way.
Rimworld is a better game but not really the deeper sim (I feel like it worked on distilling a lot of what made dwarf fortress compelling while making it much more accessible, and part of that is simplifying).
Rimworld is phenomenal… I absolutely love it, but I wish it had z-levels and the fluid sim of DF. Nothing is more fun than accidentally flooding your fortress with magma!