The way Sega approached its console hardware business was actually consistent from start to finish; you just have to look at the pre-Mega Drive consoles to see that the Mega Drive itself was the shining exception within a strategy that was otherwise very "spray and pray".
First there was the SG-1000 and the SC-3000 computer in 1983. Then there was a mostly cosmetic update, the SG-1000 II. Then there was the Sega Mark III and Master System; the Mark III had a variant release with an FM sound chip. All of these releases happened within a span of four years, 1983-1987. Throughout these releases there was a heavy focus on arcade ports, and Sega struggled with marketing the console as its own kind of experience.
When the Mega Drive came out in 1988 it was a big enough leap to be a stable target for a few years, and then Sega reverted to their previous ways. To the extent that Sega "got" their console business, it was a case of a few teams in various departments and subsidiaries that bucked the trends.
I think it helps to consider this a problem of "slow vs fast".
Society has displayed ways of sheltering and hibernating through tumultuous times and subsequently developing some kind of response.
Chief among these is the reuse of the old. Of course you can build new things quickly; that's what Andreessen calls for. And it's easy, as these things go: hand some money and labor to someone who wants to bark orders and throw their weight around and they'll get a thing made, like Ozymandias building his monument. History always provides such people.
But reusing the old successfully is the thing you need crafty witches and wizards for, and they usually only reveal themselves when a dragon shows up and needs a talking-to.
In this case, the dragon is the tendency of the state to push information towards a model of legibility, and of the populace to aim, in turn, to be inscrutable, a back and forth that has recurred throughout history. Sometimes this shapes spatial life, as with the old window taxes levied on the number of windows in a house. At other times it uses political theory and precedent to assert rights. Here we have the opportunity to be inscrutable by a rather direct escape from the norm: simply using some less popular alternative.
This is a crisis mostly in the sense that we still crave to have a popular, inclusive, fast-moving discussion while being inscrutable to power, and you can't square that circle so easily. Rather, you have to look towards gradual redefinitions of reality and possibility to counter normalization. This is necessarily a slower process than simple surveillance and seizure.
With respect to the Web, it's clear enough that it was built with holes in it, and much of the resulting stack was further distorted in turn. Why? Because it was a new thing - and evolved defenses as it went along.
But now it is an old thing, and as a popularizer of the concept it has succeeded wildly. The concept is what we'll probably keep using; the specific tech, only in parts.
What ad-hoc code generation lacks, as a rule, is type constraints and structured expression (syntax and semantics). If it has those, it is bordering on being a complete compiler.
But if your base environment doesn't have notions of types or structured expressions either, which is mostly true of assemblers, then you are free to use macros to program at a higher level. The expressive power you gain by leveraging macros doesn't have a real downside because the core language is already so limited that it won't be more legible or maintainable to do it by hand.
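To make this concrete, here's a minimal sketch in Python of the kind of ad-hoc generation I mean: the "macro" is just a function that expands one high-level intent into flat, untyped assembly-style text. The mnemonics and the field layout are invented for the example.

    # Hypothetical field layout: (name, byte offset) pairs.
    FIELDS = [("x", 0), ("y", 2), ("hp", 4)]

    def copy_struct(src_base, dst_base, fields):
        """Expand one high-level 'copy these fields' into flat load/store lines."""
        lines = []
        for name, offset in fields:
            lines.append(f"    ld  a, ({src_base}+{offset})   ; {name}")
            lines.append(f"    ld  ({dst_base}+{offset}), a")
        return "\n".join(lines)

    print(copy_struct("player", "saved_player", FIELDS))

Nothing here checks that the offsets or registers make sense, which is the point: the base language wouldn't have checked either, so the macro layer costs you nothing in legibility.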
I'm pretty sure there's room for smaller verticals these days. It's been demonstrated many times that if you have the best front-end to the problem space, and you add some services on top, everything under it can be totally commodified but you'll still get customers.
From there the strategy would depend on whether you want to stay small or not: to get bigger, you'd start going deeper into the open stack to scale things up and provide a wider array of services. If you stay small, your organization will necessarily be more focused on interfaces and compatibility while maintaining that top-end UX. In both instances there are plays for open source, but with different characters: the big company will tend to code-dump an enterprise toolchain, while the small one will primarily contribute to a foundation project or open some of its internal interfaces.
I'm coming up on 35 now and I had an experience that felt very similar to yours at my coastal magnet school. Lots of academics, lots of extracurricular stress, kids of immigrants who pushed for more, more, more. The guy who sat next to me in German committed suicide.
As the years have gone by I've realized two things:
1. The other behaviors are there, but they get filtered and are harder to notice. Many of the situations you see in dramas are just...less dramatic, and more ordinary, in real life. People don't want to bring it up, they want to stay in their routine, and when you have a ton of structure (as is the case in these schools) things move on too quickly to reflect on anything or become self-directed, so it gets repressed. The media version goes out of its way to highlight it, in contrast.
Many years after I graduated, the popular physics teacher at my high school was caught fooling around with the girls and doing favors for them. Apparently this had been going on for many years. You never would have known. He was a good teacher.
2. At a young age, even if you've encountered these things, you aren't necessarily sensitive enough to accurately judge what is happening to you or to others. Since teenagers struggle with this, they start looking for easy ways to provide themselves with an identity - and media is happy to supply a stock identity that is, in rough approximation, true to them. But of course, these identities are all a mismatch on some level. And when young people socialize they are often prone to projecting on each other in an unhealthy way, drawing boundaries and defining characters out of thin air.
If I were to turn that into advice, it would be: stay focused on the ordinary stuff. Keep a diary so that when you notice something, it gets recorded and you can reflect on it and challenge it. That's the one thing most missing when you get caught up in feelings of urgency.
Yes, this argument is one shared with some arguments for the minimum wage. When a sector of the economy can set wages freely, it often races to the bottom on price competitiveness, neglecting automation and training improvements; businesses that attempt to raise their quality get pummelled by the higher costs and risks relative to the competition. A rising minimum wage standard therefore encourages modernization of the work environment.
With UBI a similar effect is had on the demand side: if you assume a higher base of income, then consumer credit, payday loans, friends-and-family favors, etc. become less of a necessity for low-income workers. Higher-income workers with long-term debts like mortgages and student loans become free of their debt more quickly, and face fewer consequences if their income takes a hit. The workforce is therefore disentangled from a set of predatory financial interests that chain workers to their current job and to staying in the good graces of their financial backers. Even a very small amount of UBI would create a substantial reduction in poverty traps, domestic abuse, and labor mistreatment.
For similar reasons I have recently been entertaining the idea of government as employer of last resort. Essentially, have the government spend lots of money on creating businesses designed to employ the unemployed, with good working conditions and pay, and don't immediately focus on making the operation profitable. This would, I hope, create positive pressure on wages and conditions in low-skilled jobs: no stream of desperate people for employers to rely on, and an alternative employer raising the bar.
I wasn't thinking of COVID-19 when I contemplated it though. It's not a new idea of course.
What tends to be absent in the simple analysis that leads to these kinds of hammer-to-all-nails solutions is a full enumeration of the classes of errors or intractable problems in the domain, and how the proposal does or does not address each.
This blogpost does not even attempt such an enumeration, but instead appeals to aesthetics.
FWIW I went through a period of using map and filter a little bit, went, "that's nice", and then resumed using for loops, sometimes with comprehension syntax to sugar them up. Both methods basically cover ways to specify simple iteration and selection. But I know which one is going to port better across the majority of environments: the plain for-next, and, if a complex iteration is called for, the while loop. It puts the data allocation where I can see it, and it maps to the single-threaded computing model that is still the default today.
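As a small illustration of that equivalence, here is the same select-and-transform written three ways in Python, over made-up data:

    values = [3, -1, 4, -1, 5, -9, 2, 6]

    # map/filter style
    squares_fn = list(map(lambda v: v * v, filter(lambda v: v > 0, values)))

    # comprehension sugar
    squares_comp = [v * v for v in values if v > 0]

    # plain for loop: the allocation sits right where you can see it
    squares_loop = []
    for v in values:
        if v > 0:
            squares_loop.append(v * v)

    assert squares_fn == squares_comp == squares_loop == [9, 16, 25, 4, 36]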
If I want a more complex query, I am going to start wanting a more expressive query language than either imperative or functional selection and iteration can provide by themselves. Left outer joins do not come easily to either method. Neither does constraint logic programming. You obviously can implement those things, but it isn't blindingly obvious, and when your problem grows to need a very broad expression of selection and iteration, it is those kinds of tools that you really need to aim for.
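As a rough sketch of that point, here is a left outer join done by hand in Python over some invented customer/order records. It works, but you end up building the index and handling the no-match case yourself, where a query language would state the intent in one line.

    customers = [{"id": 1, "name": "Ann"}, {"id": 2, "name": "Bo"}]
    orders = [{"customer_id": 1, "total": 20}, {"customer_id": 1, "total": 5}]

    # Index the right-hand side by join key.
    by_customer = {}
    for o in orders:
        by_customer.setdefault(o["customer_id"], []).append(o)

    # Walk the left-hand side, emitting a row even when nothing matches.
    joined = []
    for c in customers:
        for o in by_customer.get(c["id"]) or [None]:
            joined.append((c["name"], o["total"] if o else None))

    print(joined)  # [('Ann', 20), ('Ann', 5), ('Bo', None)]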
Some time ago I realized, while standing in the midst of the Oakland Museum of Art and Digital Entertainment, surrounded by old game boxes, this:
"Most of these games are all marketing."
That is, if you look at the art, and look at the back of the box bullet points, you'll see something like "Go on an epic journey", or "Choose who lives or dies", or "Build an empire to stand the test of time".
To the extent that these games express these things, it's through clever rearrangements of stock tropes: your typical murdering-and-looting game protagonist, able only to communicate down the barrel of a gun, is now justified through the plot and given many new backdrops so as to make the journey "epic". A scripted choice is added here and there, but not everywhere, to make "choosing who lives or dies" feel consequential, but without ascribing particular meaning to the choice either (since all choices should be gratifying for marketing purposes). Empire-building is signalled through various reports of legible progress in gaining territory and developing cities and armies, but nothing resembling the actual political structure or dynamics of an empire - the fantasy is simply one of a "rise and further rise".
And so in playing these games, you get an aesthetic impression, but not something with a solid grounding to it that you would spend time thinking about afterwards or relating to your real-world experiences. When a speedrunner sets out to conquer these sorts of games they look for software vulnerabilities that short-circuit the impression of what is going on and attack the underlying data model and logic.
In that way, video games have been pushed through industrialized practice quite a ways away from the natural state of games as a tradition, which is to fully and honestly explore simple concepts. You can't speedrun basketball, because you're playing within the laws of nature and against opponents who do the same. But if you go to market a basketball video game, you are trading on the impression of basketball, not its reality: and so licenses for professional players, superlative simulation techniques, etc. come to the fore.
So as I see it, games like Candy Crush are further extensions of industrialization: The game concept is simply a tool for the marketing framework, which in this case has been designed towards metrics-optimized microtransactions and customer retention. If a particular level is failing to retain players or to induce a purchase, it gets reworked until the metrics line up.
Despite all this, good work in games does tend to shine through. Nintendo's franchises, for example, are all built on "honest explorations" of their basic themes, and the play concepts tend to have something intrinsically interesting going on. And the breakthrough indie hits usually have this quality, too. The games that get buried, in contrast, usually aren't achieving the same degree of cohesiveness and direction - even if they're huge AAA productions.
The distinction is in quantity and types of assets. In 2D you usually have some bitmap and vector assets, collision data, animations, and scene structures to tie it all together. In 3D you have all of those things plus the texture/model/material distinctions, lights, skybox, and all the other details that come in as you add more rendering capabilities.
However, you can absolutely make 2D games that need lots of kinds of assets. RPGs happen to be one of them, in fact: the incidental details of adding inventory, NPCs, abilities, dialogue, etc. do add up. Every little message and description, every item's properties. You can ship a game by hardcoding much of it, but that's not going to scale with any substantial team. You need real data management, build processes, etc. Where assets interact with each other you get incidental complexity of the Dwarf Fortress bug kind, so where you can, you add static checks. The engine is built in tandem with the asset pipeline, in effect.
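As a toy sketch of what those static checks can look like, here's a build-time check in Python that every item a dialogue line grants actually exists in the item table. The data shapes are invented for illustration; a real pipeline would load them from the game's own files.

    items = {"potion": {"price": 50}, "sword": {"price": 300}}
    dialogue = [
        {"id": "shopkeeper_1", "text": "A potion? That's 50 gold.", "grants": "potion"},
        {"id": "smith_2", "text": "Take this blade.", "grants": "swrod"},  # typo on purpose
    ]

    # Collect every reference to an item that the item table doesn't know about.
    errors = [
        f"{line['id']}: unknown item '{line['grants']}'"
        for line in dialogue
        if line.get("grants") and line["grants"] not in items
    ]
    if errors:
        raise SystemExit("asset check failed:\n" + "\n".join(errors))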
So - whether it's 2D or 3D, what actually matters is the assets. That's why even in the early 80's you had vector games with 3D effects in them; they just took approaches that simplified the assets and the resulting scenes.
In academia many end up going back to Huizinga's "magic circle" [0], which he was writing about in the 1930s, before any video games were around (although many immediate mechanical predecessors, like pinball and slot machines, were in evidence).
From the perspective of a game creator or player, what tends to be of primary importance is the information conveyed through the game, which can be primarily aesthetic (a pretty picture) or deal with specific themes and principles. In this light, introducing the magic circle is of some importance because of its clarifying property: the information is exploring these concepts, and not some others that you might be interested in.
And so it goes with many opinions about games, too: if a game doesn't cover the topics they want in the way they like, players reject it. Some players need to see violence and power struggle, others need cozy reassurance.