Carmack gives updating in a loop as the one exception:
> You should strive to never reassign or update a variable outside of true iterative calculations in loops.
If you want a completely immutable setup for this, you'd likely have to use a recursive function. This pattern is well supported and optimized in immutable languages like the ML family, but is not super practical in a standard imperative language. Something like
def sum(l):
    if not l: return 0
    return l[0] + sum(l[1:])
Of course this is also mostly insensitive to ordering guarantees (the compiler would be fine with the last line being `return l[-1] + sum(l[:-1])`), but immutability can remain useful in cases like this to ensure no concurrent mutation of a given object, for instance.
You don't have to use recursion, that is, you don't need language support for it. Having first class (named) functions is enough.
For example, you can modify sum so that it doesn't depend on itself but on a function it receives as an argument (which will be itself).
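A minimal sketch of that idea, reusing the earlier sum example (the helper name sum_step is my own, not from the comment):

def sum_step(self_fn, l):
    # no self-reference: the "recursive" call goes through the argument
    if not l:
        return 0
    return l[0] + self_fn(self_fn, l[1:])

def sum(l):
    return sum_step(sum_step, l)

print(sum([1, 2, 3, 4]))  # 10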
While your example of `sum` is a nice, pure function, it'll unfortunately blow up in Python on even moderately sized inputs (we're talking thousands of elements, not millions) due to the lack of tail calls in Python (currently) and the restrictions on recursion depth. The CPython interpreter as of 3.14 [0] is now capable of using tail calls in the interpreter itself, but it's not yet in Python proper.
Yeah, to actually use tail-recursive patterns (except for known-to-be-sharply-constrained problems) in Python (or, at least, CPython), you need to use a library like `tco`, because of the implementation limits. Of course the many common recursive patterns can be cast as map, filter, or reduce operations, and all three of those are available as functions in Python's core (the first two) or stdlib (reduce).
Updating one or more variables in a loop naturally maps to reduce, with the updated variable(s) becoming the accumulator object (or, when there's more than one, fields of it).
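As a small sketch of that correspondence (the variable names are my own illustration):

from functools import reduce

xs = [3, 1, 4, 1, 5]

# Loop version: two variables updated on each iteration.
total, count = 0, 0
for x in xs:
    total += x
    count += 1

# Same calculation as a reduce: both variables become fields of the
# accumulator (here just a tuple).
total2, count2 = reduce(lambda acc, x: (acc[0] + x, acc[1] + 1), xs, (0, 0))

assert (total, count) == (total2, count2)  # (14, 5)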
> It's all fun and games until you realise you can't run a consumer economy without consumers.
If the issue is that the AI can't code, then yes you shouldn't replace the programmers: not because they're good consumers, just because you still need programmers.
But if the AI can replace programmers, then it's strange to argue that programmers should still get employed just so they can get money to consume, even though they're obsolete. You seem to be arguing that jobs should never be eliminated due to technical advances, because that's removing a consumer from the market?
The natural conclusion I see is dropping the delusion that every human must work to live. If automation progresses to a point that machines and AI can do 99% of useful work, there's an argument to be made for letting humanity finally stop toiling, and letting the perhaps 10% of people who really want to do the work do the work.
The idea that "everybody must work" keeps harmful industries alive in the name of jobs. It keeps bullshit jobs alive in the name of jobs. It is a drain on progress, efficiency, and the economy as a whole. There are a ton of jobs that we'd be better off just paying everybody in them the same amount of money to simply not do them.
The problem is that such a conclusion is not stable
We could decide this one minute, and the next minute it will be UN-decided
There is no "global world order", no global authority -- it is a shifting balance of power
---
A more likely situation is that the things AI can't do will increase in value.
Put another way, the COMPLEMENTS to AI will increase in value.
One big example is things that exist in the physical world -- construction, repair, in-person service like restaurants and hotels, live events like sports and music (see all the ticket prices going up), mining and drilling, electric power, building data centers, manufacturing, etc.
Take self-driving cars vs. LLMs.
The thing people were surprised by is that the self-driving hype came first, and died first -- likely because it requires near perfect reliability in the physical world. AI isn't good at that
LLMs came later, but had more commercial appeal, because they don't have to deal with the physical world, or be reliable
So there are still going to be many domains of WORK that AI can't touch. But it just may not be the things that you or I are good at :)
---
The world changes -- there is never going to be some final decision of "humans don't have to work"
Work will still need to be done -- just different kinds of work. I would say that a lot of knowledge work is in the form of "bullshit jobs" [1]
In fact a reliable test of a "bullshit job" might be how much of it can be done by an LLM
So it might be time for the money and reward to shift back to people who accomplish things in the physical world!
Or maybe even the social world. I imagine that in-person sales will become more valuable too. The more people converse with LLMs, I think the more they will cherish the experience of conversing with a real person! Even if it's a sales call lol
To say that self-driving cars (a decade later, with several real products rolling out) have the same, or less, commercial appeal than LLMs do now (a year or two in, with mostly VC hype) is a bit incorrect.
Early on in the AV cycle there was enormous hype for AVs, akin to LLMs. We thought truck drivers were done for. We thought accidents were a thing of the past. It kicked off a similar panic among tangential fields. Small AV startups were everywhere, and folks were selling their company to go start a new one, then selling that one for enormous wealth gains. Yet five years later, none of the "level 5" promises they made had come true.
In hindsight, as you say, it was obvious. But it sure tarnished the CEO prediction record a bit, don't you think? It's just hard to believe that this time is different.
I would much rather work than not work. Many other people are the same. If I don't have a job, I will work in my free time. I enjoy it. I don't have to work for a living, but I have to work to be alive.
There are many people like me, and we will be the ones to work. It won't be choosing who has to work, it will be who chooses that they want to work.
It's our only conclusion unless/until countries start implementing UBI or similar forms of post scarcity services. And it's not you or me that's fighting against that future.
Looking at the source, it seems pretty easy to find Java programs that wouldn't compile correctly, but of course a fully general Java to C++ converter would be a huge undertaking. I guess with OK test coverage of the generated C++ code (which Firefox certainly has), editing the Java code remains doable.
Some laws are much easier to evade than others, and some laws have especially bad side effects when evaded. (A law about e.g. requiring a certain type of government certification before a large construction project can be undertaken, for instance, would be far harder to evade than this one, and "they can just do <x>" would probably be a bad objection)
Using non mainstream sites instead seems like a realistic way people are going to skirt this law, and those are very hard to regulate without other unpleasant side effects on Internet freedom. I think it's a reasonable concern to raise.
You can do high density without block housing! Many European cities are good examples of this. Paris, for instance, has very high density because it's packed with mid-rise (~7-8 floors) buildings[1]. Another good example is the section of Manhattan between Midtown and Lower Manhattan, which is also primarily mid-rise and very dense.
Apart from that, I do think it's true that block housing separated by large parks (as shown in your picture) doesn't work well; it's discussed in e.g. Jacobs's _The Death and Life of Great American Cities_.
I agree re: the European examples, but my argument there is that many of these cities developed over long time periods with stable growth in population, stable industry, etc., whereas now it seems like the idea is to simply explode urban housing capacity ASAP, which leads to the block style or high-rise glass and steel.
And of course we can't forget that many places are at an inflection point in population growth, so all of this capacity could be vacant in 15-20 years.
The issue is that people want to explode housing capacity ASAP in relatively limited geographic areas within cities. If mid rise was actually broadly incentivized, it wouldn’t be that challenging to rapidly increase that stock.
Strong towns [0] advocates a more bottom up approach to development, and is a good resource for anyone interested in how to add density in a sane manner.
How do you properly forecast housing need so your development curve (and the foundational urban planning necessary) aligns with the housing demand curve while keeping said housing affordable?
The CCP seems to have been very successful with their "build an entire city and move people into it" model (seeing that the cities have filled), but I'm unsure how it would work in a more capitalistic model.
Even mid-rise is kind of bad. You tend to share walls with a few neighbours still and there's no opportunity for any private garden space. It's not great for pet ownership, either. I'm not convinced it's a lot better than high rise/block housing as a space to actually live in. I say this presently living in a mid-rise in the UK. Edit: I should probably add that the leasehold system in the UK makes them a particularly bad proposition if you want to actually own your home as well.
Terraced housing has kind of okay density, at least it's not a total land crime like US suburban detached housing but affords many of the freedoms associated with it.
Right, I knew that not all SI units were base ten. But I also knew that lat/lon is in degrees, not radians. I also saw the Wikipedia section that knots are still used in industry rather heavily due to their relation to nautical miles. Also not SI.
I recall that astronomical units are also not SI.
Mentioning computers is just a cheap shot, I admit. Still, it is valid.
Cooking is an odd one. The old units that were largely defined in thirds are quite useful. Weight is, of course, more reliable for baking, but you can go very far at home quantities with cups and spoons.
To be fair, I'm a big believer that the units are arbitrary and whatever you learned will be good. So if you learned SI, it had advantages off the bat. But I am in less agreement that it has an intrinsic advantage.
Knots and nautical miles are interesting because the nautical mile was originally based on latitude: one minute (1/60 degree) of latitude was 1 nautical mile.
So an airplane traveling due north at 120 knots would cover 2 degrees of latitude per hour.
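Spelling out that arithmetic (a trivial sketch, just to make the units explicit):

speed_knots = 120          # nautical miles per hour
nm_per_degree_lat = 60     # 1 minute of latitude ~ 1 nautical mile
print(speed_knots / nm_per_degree_lat)  # 2.0 degrees of latitude per hour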
Most of the US Customary and British Imperial units actually have similar logical definitions or derivations, but they aren't regularly taught anymore.
Right. My point was more that in industries where there is some advantage to keeping a non-SI unit, they are wont to do so without major external pressure.
So, knots persist because lat/lon persists. Home cooks persist with imperial in some places because nobody cares to reprint all the recipes and measuring devices. Astronomical units because at that scale... nothing scales. And computers, because binary won. (Curious to consider if ternary had been the winner...)
I confess I am actually personally moved by some of the intuitive arguments for older measurements. Usually very physical based and very in tune with numbers actually used in an industry. It is odd to think of a sixteenth inch wrench, but it is just the natural result of dividing by two, four times, after all. (That is, you have a measuring rod, put a midpoint on there. Four times. Now, do the same for millimeters?). (granted, in the age of computers, any measurement is much easier to do at the machining level.)
A typical case of the governing body deciding one thing while in practice the "since-always" convention is still preferred, I guess? At least this is my experience with dealing with reqs from customers within aviation.
As a rule, yes, but not necessarily and not everywhere. The important part is to have a coherent set of units, which is usually going to be SI units, but not always. If the user's preferred unit is never going to be SI, does it make sense to base your program on SI units?
For example, aviation deals heavily in flight levels (multiples of 100 ft). If flight levels are a first-class concept in your program, then you're already using non-SI units below the presentation layer. At this point you've established that some altitudes in your program are expressed in feet, and it might be a better idea to use feet for altitudes everywhere rather than introducing a lot of unit conversions. Or it might not be.
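For illustration, a minimal sketch of what that might look like if altitudes are stored in feet below the presentation layer (the class and names here are my own, not from the comment):

from dataclasses import dataclass

FEET_PER_METRE = 3.28084  # conversion only happens at the boundaries

@dataclass(frozen=True)
class Altitude:
    feet: float  # stored in feet, since flight levels are defined in feet

    @classmethod
    def from_flight_level(cls, fl: int) -> "Altitude":
        return cls(feet=fl * 100)

    @property
    def flight_level(self) -> int:
        return round(self.feet / 100)

    @property
    def metres(self) -> float:
        return self.feet / FEET_PER_METRE

alt = Altitude.from_flight_level(350)
print(alt.feet, round(alt.metres))  # 35000 ft, ~10668 m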
I took it as in the presentation layer, but I can see where you're coming from. In that case I would agree with you, and I'm not sure why GP would complain about that.
What do you think the current text encoding footguns are?
In a different direction, I don't know what your problem domain is/was, but in general when I'm dealing with UTF8, I don't need to convert back to bytes very often. Was the need for conversion mostly due to the libraries that still expected strings instead of bytes?
It's been quite a few years since we went through the conversion, and at this point, working in Python 3 is natural to me, so I may not be able to recall all the footguns. I can say a lot of the difficulty was due to libraries, both third-party and standard, and that hasn't improved very much. I don't want to single anyone out here, because it's pervasive. In Python 2, str was the bag-of-bytes type. I think a lot of libraries didn't want to change to accepting bytes types instead, because it broke API compatibility, but it caused a lot of issues.
I should also say that we were working with files in tons of encodings, not just UTF-8. We had UTF-16 and UTF-32, both little and big endian, with and without BOMs, but we also had S-JIS and a bunch of legacy 8-bit encodings. Often we wouldn't know what encoding a file was in, so we'd have to use the chardet library, along with some home-grown heuristics to guess.
Off the top of my head, the two biggest footguns are:
- There should be no way to read or write the contents of a file into a str without specifying an encoding. locale.getpreferredencoding() is a mistake. File operations should be on bytes only, or require an explicit encoding.
- .encode() and .decode() are very poorly named for what they do, and it wasn't that uncommon for someone to get them backwards. Sometimes exceptions aren't even thrown for getting them wrong; you just get incorrect data.
Both of these were still issues in Python 2. There's a valid architectural argument to be had between the Python 2 way, where str was a bag of bytes and the unicode type was for decoded bytes, and the Python 3 way, where the bytes type is your bag of bytes and str holds your decoded string. I favor Python 3's way of doing it, but it's almost six of one, half a dozen of the other. The advantages of one over the other are slight, and given how many library functions relied on the old behavior, it was probably a mistake to change it like that, rather than continuing the Python 2 way and fixing issues like those above that caused problems.
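A small sketch illustrating the two footguns listed above (the file name and strings are my own, hypothetical examples):

# 1. Without an explicit encoding, open() in text mode falls back to
#    locale.getpreferredencoding(), so behaviour varies by machine.
#    Always pass encoding=, or open in binary mode and decode yourself.
with open("data.txt", "w", encoding="utf-8") as f:
    f.write("naïve café")

with open("data.txt", "rb") as f:
    text = f.read().decode("utf-8")  # explicit: bytes -> str

# 2. .encode() goes str -> bytes and .decode() goes bytes -> str;
#    the names make it easy to reach for the wrong one.
raw = text.encode("utf-8")           # str -> bytes
assert raw.decode("utf-8") == text   # bytes -> str, round-trips cleanly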
I haven't done a ton of work with Python recently, but the problems I remember encountering came from the fact that python doesn't try to have encodings in any other part of the basic type system. So like, if you have an int or a float, you can pass those to any interface that takes a 'number-y' value and it will mostly work like you expect. That's also how strings worked in P2 - you could pass them around and things would accept the values (though you might get gibberish out the other side). Now, in P3, things will blow up (which is helpful for finding where you went wrong ofc - I understand the utility), but it means that your code handling things-that-might-be-strings-or-bytes often needs to have a different structure than the rest of your code.
I think the P3 string/byte ecosystem was made substantially weaker by P3 deciding not to lean more into types (something I have complained about on here before!). Like...they are the only values where the stdlib is extremely specific about you passing a value that has the exact right type, but the standard tools for tracking that are pretty poor.
> but it means that your code handling things-that-might-be-strings-or-bytes often needs to have a different structure than the rest of your code.
Isn't that the point? Strings and bytes are different beasties. You can often encode strings to bytes, and just about anything accepting bytes will accept it, but the converse is not true. Bytes are more permissive in that any sequence of 0x00-0xff is acceptable, but str implies utf8 (not always guaranteed, I've seen some shit), meaning e.g. you can dump it to json without any escaping/base encoding.
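A quick sketch of that asymmetry (my own illustration, not from the thread):

import json

s = "héllo"
b = s.encode("utf-8")            # str -> bytes: always succeeds
assert b.decode("utf-8") == s    # bytes -> str: fine for valid UTF-8
json.dumps(s)                    # str drops straight into JSON

try:
    b"\xff\xfe\x00".decode("utf-8")  # arbitrary bytes may not decode
except UnicodeDecodeError as e:
    print("not valid UTF-8:", e)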
Letrec requires either laziness (which implies some sort of internal mutability, as thunks get replaced by values at some point), or "internal" mutability that may not be visible to the programmer (like in OCaml, where in a `let rec` some variables will initially be null pointers, and get updated once other definitions have run).
In languages with no mutability at all (not even internal mutability like with these two features), you can't create cycles and refcounting becomes possible, but I'm not aware of non-academic general purpose languages that actually fall in this category.
Esoteric functional languages like Unlambda do make that guarantee, and implementations can use a ref counter as their GC.
If data is immutable and you tie the lifetime of data to a single stack frame, you only need a single bit for reference counting: whether the current frame owns the reference. Calling a function just requires setting this bit low (some asterisks here), and all objects tied to a given frame can be collected when the frame returns.
Erlang does have some mutability (the process dictionary, and the mailbox), but the fact that the language is largely immutable is leveraged by the GC: if I remember correctly the private heap GC is a generational semi-space copying GC, but because the language is immutable it doesn't need write barriers; to perform a minor collection, only a scan of the stack (and process dict) is necessary to know the valid roots of the young heap.
I'm always interested in understanding animal neurology, especially in the context of trying to reach more general levels of AI. I find it interesting that modern ANNs perform near human level at some difficult tasks, and yet it seems there are some tasks that insects perform that we'd have no idea how to implement. Animal nervous systems certainly have interesting things to teach us.
It also seems like understanding simpler brains will help in progressively understanding the human brain, even if these brains are very different. Developing a less human-focused toolkit might be what we need; sometimes studying a more general problem is what you need to get past blockers in a more concrete problem.
On that topic I really enjoyed _Other Minds_ by Peter Godfrey-Smith, which talks about octopus behavior and some neurology, but I'm interested in recommendations for more technical readings in animal cognition / neurology.