HN has become per-thread view-enforced. It's pretty obvious now what the "correct" views are for any given thread; dissenting comments are downvoted to death. When the next thread comes along, the opposite view might be the "allowed" one. There's a particularly egregious amount of veiled partisanship behind a lot of the posting too.
This could be a groupthink phenomenon, or it could be botting. Hard to say. I'd say in at least a few cases it's someone with access to bot downvotes and an interest in landing them on a thread, using that to suppress dissenting views.
I think the title or the author of the posting tends to draw a certain crowd.
For example, I recognised the name because the author also has a famous guide on network programming. Thanks to his reputation, I was curious what he has to say about learning CS.
I agree. Thank you for pointing out a skeptical view.
It's unfortunate that, considering supposedly only "seasoned" HN participants can downvote, the problem runs deeper than a surface issue. You will be downvoted without any comments replying with counterarguments.
Scientific viewpoints often make way for hope-and-cope engineering here. This will only work as long as the people involved are insulated from the direct effects of their actions.
It's a shame. HN felt like one of the last bastions of the old internet where techies came to discuss tech, science, and occasionally important worldwide news, in a technical and objective way.
Nope, but mathematics research is one of the most rarefied fields -- extremely difficult to get into, hard to get money for, etc. (this is my understanding, at least). Progress here is made by people who, aged 10, are already showing signs of capability.
There's not much need for a large number of PhD places, and funding, for pure mathematics research.
Likewise, on the applied side, "calculus" as a pure subject has been dead a long time. Gradients are computed with algorithms and numerical approximations, which are better taught directly (see the sketch below) -- with the formal stuff maintained via intuition.
I'm much more open to the idea that the West has this wrong, and that we should be more focused on developing the applied side after spending the last century overly focused on the pure.
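For instance, a minimal sketch of the kind of numerical routine that replaces symbolic differentiation in practice (central differences; autodiff frameworks are the other common route):

    # Numerical gradient via central differences: f'(x) ~ (f(x+h) - f(x-h)) / 2h.
    # Step size h is a judgment call: too large biases, too small loses precision.
    def grad(f, x, h=1e-6):
        return (f(x + h) - f(x - h)) / (2 * h)

    print(grad(lambda x: x**2, 3.0))  # ~6.0, matching the analytic 2x = 6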
AI systems cannot be economic agents, in the sense of participating in a relevant way in economic transactions. An economic transaction is an exchange between people with needs (, preferences, etc.) who can die -- and so, fundamentally, are engaged in exchanges of (productive) time via promising and meeting one's promises. Time is the underlying variable of all economics, and it's what everything ends up in ratio to -- the marginal minute of life.
There isn't any sense in which an AI agent gives rise to economic value, because it wants nothing, promises nothing, and has nothing to exchange. An AI agent can only 'enable' economic transactions as a means of production (, etc.) -- the price of a good cannot derive from a system which has no subjective desire and no final ends.
Replace "AI system" with "corporation" in the above and reread it.
There's no fundamental reason why AI systems can't become corporate-type legal persons. With offshoring and multiple jurisdictions, it's probably legally possible now. There have been a few blockchain-based organizations where voting was anonymous and based on token ownership. If an AI was operating in that space, would anyone be able to stop it? Or even notice?
The paper starts to address this issue at "4.3 Rethinking the legal boundaries of the corporation.", but doesn't get very far.
Sooner or later, probably sooner, there will be a collision between the powers AIs can have, and the limited responsibilities corporations do have. Go re-read this famous op-ed from Milton Friedman, "The Social Responsibility of Business Is to Increase Its Profits".[1] This is the founding document of the modern conservative movement. Do AIs get to benefit from that interpretation?
I think you're mistaking the philosophical basis of the parent's comment. Maybe a more succinct way to illustrate what I believe was their point: "no matter how complex and productive the AI, it is still operating as a form of capital, not as a capitalist." Absent being tethered to a desire (for instance, via an owner), an AI has no function to optimize, and therefore the most optimal cost is simply shutting off.
Except they don't really "think" and they are not conscious. Expecting your toaster or car to never rise up against you is a good strategy. AI models have more in common with a toaster than with a human being. Which is why they cannot be economic agents. Even if corporations profit off them, the corporation will be the economic agent, not the AI models.
Is a classic worm or virus an economic agent? Its goal is to accrue compute resources in order to survive and spread.
If you added the ability for the programme to accrue money, and to spend that money to further the survival goal in an adaptive way, what could happen?
Would it do insider trading, market manipulation, drop shipping, click fraud, scamming, or become an opinion-for-hire 'think tank'?
> Time is the underlying variable of all economics
Not quite. It's scarcity, not time. Scarcity of economic inputs (land, labor, capital, and technology). By "time" you mean labor, and that's just one input.
Economics is like a constrained optimization problem: how to allocate scarce resources given unlimited desires.
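For concreteness, a minimal sketch of that framing as an actual optimization (toy goods and invented numbers, using scipy's linprog; real models are of course far richer):

    # Toy allocation: maximize the value of two goods subject to scarce
    # labor and land. All numbers are invented for illustration.
    from scipy.optimize import linprog

    value = [-3.0, -2.0]    # negated unit values: linprog minimizes
    labor = [2.0, 1.0]      # labor-hours per unit of good A, good B
    land = [1.0, 1.5]       # land per unit of good A, good B

    res = linprog(
        c=value,
        A_ub=[labor, land],              # resource use per unit
        b_ub=[100.0, 80.0],              # total labor and land (the scarcity)
        bounds=[(0, None), (0, None)],   # can't produce negative amounts
    )
    print(res.x, -res.fun)  # optimal mix (35, 30) and the value achieved (165)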
Depending on how you feel about various theories of development, an argument that all of these categories reduce to time. At the very least, the relationship between labor, capital, and time seems pretty fundamental: labor cannot be instantaneous, capital grows over time, etc.
They can all be related on a philosophical level but in practice economists treat them as separate factors of production. It's land, labor, and capital classically. Technology/entrepreneurship can be seen as another factor, distinctly separate from labor.
I agree that time isn’t an input in the economic system.
Although, one can use either discrete or continuous time to simulate a complex economic system.
Only simple closed-form models take time as an input, e.g. compound interest or Black-Scholes.
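Both of those take time T directly as an input; a minimal sketch of each (standard textbook formulas, toy parameters):

    # Two closed-form models where time T is an explicit input.
    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        # standard normal CDF via the error function
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def compound(principal, rate, T):
        # continuously compounded interest
        return principal * exp(rate * T)

    def bs_call(S, K, r, sigma, T):
        # Black-Scholes price of a European call option
        d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    print(compound(100, 0.05, 2.0))           # ~110.52
    print(bs_call(100, 100, 0.05, 0.2, 1.0))  # ~10.45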
Also, there is a wide range of hourly rates/salaries, and not everyone is compensated by time; some are paid cost-and-materials, others by value or performance (with or without risking their own funds/resources).
There are large scale agent-based model (ABM) simulations of the US economy, where you have an agent for every household and every firm.
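As a flavor of the mechanics, a drastically simplified toy (invented parameters; the real simulations have millions of heterogeneous agents and far richer behavior):

    # Toy agent-based sketch: households spend a random fraction of their
    # cash at a single firm, which pays its revenue back out as wages.
    import random

    random.seed(0)

    class Household:
        def __init__(self):
            self.cash = 100.0

        def spend(self):
            outlay = self.cash * random.uniform(0.1, 0.3)
            self.cash -= outlay
            return outlay

    households = [Household() for _ in range(1000)]

    for month in range(12):
        revenue = sum(h.spend() for h in households)  # the firm's takings
        wage = revenue / len(households)              # paid out evenly
        for h in households:
            h.cash += wage
        print(month, round(revenue, 2))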
> Although, one can use either discrete or continuous time to simulate a complex economic system.
A very bad model that lacks accuracy and precision, yes. Maybe if you're a PhD quant at Citadel you can create a very small statistical edge when gambling on an economic system. There's no analytic solution to complex economic systems in practice. It's just noise and various ways of validating the efficient-market hypothesis.
Also, because of heteroskedasticity and volatility clustering, using time-based bars (e.g. change over a fixed interval of time) is not ideal in modeling. Sampling with entropy bars like volume imbalance bars, instead of time bars, gives you superior statistical properties, since information arrives in the market at irregular times. Sampling by time is never the best way to simulate/gamble on a market. Information is the causal variable, not time. Some periods of time carry very little information relative to others. In modeling, you want to smooth out information independently of time.
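A minimal sketch of the sampling idea with plain volume bars (the simplest case; imbalance bars additionally trigger on signed order flow):

    # Resample a tick stream into volume bars rather than time bars:
    # a bar closes once ~bar_size units have traded, so bars arrive
    # faster exactly when trading (information) arrives faster.
    def volume_bars(ticks, bar_size):
        # ticks: iterable of (price, volume) pairs
        bars, vol, o = [], 0.0, None
        hi, lo = float("-inf"), float("inf")
        for price, volume in ticks:
            if o is None:
                o = price                        # open of the current bar
            hi, lo = max(hi, price), min(lo, price)
            vol += volume
            if vol >= bar_size:
                bars.append((o, hi, lo, price))  # OHLC of this bar
                vol, o = 0.0, None
                hi, lo = float("-inf"), float("inf")
        return bars

    # A busy stretch of tape yields bars; a quiet stretch yields almost none.
    busy = [(100 + i % 3, 10) for i in range(50)]
    quiet = [(101, 1)] * 20
    print(len(volume_bars(busy + quiet, 100)))  # 5 bars, all from 'busy'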
I think this is pretty in-the-weeds compared to the original thread:
> Economics is like a constrained optimization problem: how to allocate scarce resources given unlimited desires.
Understanding the general tendencies of economic systems over time (e.g. “the rate of profit tends to fall”) is much more abstract than attempting to win at economics using time-based analysis.
It started that way. I was responding to the assumption that time is the underlying variable of all economics. Then someone said everything reduces to time and they brought up Black-Scholes, a quant tool to price options. I didn't bring it up lol. My point is simply no, time is demonstrably not fundamental at all.
Edit: an LLM thinks I'm overly dismissive of:
- Standard economic modeling
- Dynamic macroeconomic theory
- Agent-based economics
- The legitimate uses of time in economics
This is true. I think causal inference in finance and economics is difficult. As Ludwig von Mises argued, mathematical models give spurious precision when applied to purposeful behavior. Academic ideas don't have a built-in feedback loop like in quant finance.
Granting your premise, I'd be forced to argue that economic value (as such) doesn't arise from their activity either -- which I think is a reasonable position.
I.e., the reason a cookie is 1 USD is never because some "merely legal entity" had a fictional (/merely legal) desire for cookies for some reason.
Instead, from this pov, it's that the workers each have their desire for something; likewise the customers; the owners; and so on. It all bottoms out in people doing things for other people -- legal fictions are just dispensable ways of talking about arrangements of people.
Incidentally, I think this is an open question. Maybe groups of people have unique desires, unique kinds of lives, a unique time limitation etc. that means a group of people really can give rise to different kinds of economic transactions and different economic values.
Consider a state: it issues debt. But why does that have value? Because we expect the population of the state to be stable or grow, and so this debt, in the future, has people who can honour it. Their time is being promised today. Could a state issue debt if this weren't true? I don't think so. I think in the end some people have to be around to exchange their time for this debt; if none are, or none want to, the debt has no value.
You can factor humans out with the same trick:
All economic activity is driven by the needs of micro-organisms. We mainly observe these desires and economic transactions through their conglomeration into joint entities (i.e., humans).
Corporate and state decision making is, I would argue, often completely distinct from the desires and needs of the individuals that make up the entity. As an example, no individual /needs/ a particular compliance check to pass, but the overall entity (corporation) does, and so allocates money and human effort to ensure the check passes. It's a need of the conglomerate entity, not the individuals within it.
There is no "trick", the relevant sufficient and necessary properties for an agent to give rise to economic transactions are likewise not possessed by a micro-organism.
Microorganism A is good at producing (say) insulin, but needs oxygen to do it. Microorganism B is good at ferrying oxygen around. Microorganism C is good at extracting oxygen from the air and attaching it to B.
Each of these cells has specialized tasks which they perform as part of the contract of getting to survive longer in the superorganism. Each has 'promises' (functions) to fulfill, and other organisms provide for their needs. The contract is set up on evolutionary timescales, rather than human timescales, but your requirements are met...
That you put quotes around 'promises' shows you already understand the issue with your comment. That you cannot disquote it, is the issue.
You seem like you're trying to find a way of strawmanning my position in order to score points, so I'm not going to reply further. If you have an interest in understanding the basis of economics, ask an LLM.
A transaction is an exchange between two parties of something they judge to be of equivalent value (and mostly, in ratio to a common medium of exchange).
You can program an AI with "market values" that arise from people; but absent that, how do these values arise naturally? I.e., why is it that I value anything at all, in order to exchange it?
Well, if I live forever, can labour forever, and so on -- then the value to me of anything is, if not always zero, almost always zero. I don't need anything from you: I can make everything myself. I don't need to exchange.
We engage in exchange because we are primarily time limited. We do not have the time, quite literally, to do for ourselves everything we need. We, today, cannot farm (etc.) on our own behalf.
Now there are a few caveats, and so on to add; and there's an argument to say that we are limited in other ways that can give rise to the need to exchange.
But why things have an exchange value at all -- why there are economic transactions -- is mostly down to the need to exchange time with each other, because we don't have enough of it.
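One toy way to see the point (invented numbers, the classic specialization story): two time-limited agents each gain from trading precisely because their hours are scarce.

    # Two agents, 10 hours each; output per hour as (bread, cloth).
    HOURS = 10
    a = (4, 1)   # agent A is better at bread
    b = (1, 4)   # agent B is better at cloth

    # Self-sufficient: each splits their hours between both goods.
    solo_a = (a[0] * HOURS / 2, a[1] * HOURS / 2)   # (20.0, 5.0)
    solo_b = (b[0] * HOURS / 2, b[1] * HOURS / 2)   # (5.0, 20.0)

    # Specialize fully, then swap half of each output.
    bread, cloth = a[0] * HOURS, b[1] * HOURS       # 40 bread, 40 cloth
    print(solo_a, solo_b)        # without trade
    print(bread / 2, cloth / 2)  # with trade: each ends up with (20, 20)

With unlimited hours neither agent would need the other; the scarcity of time is what makes the exchange worth anything.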
You’re asking all the right questions and give good answers, I just don’t see how any of these answers need to be limited to humans. If AI gets to the mirror life stage, the agents will have needs (e.g. if they become robots - electricity, fuel, coolants, lubricants, spare parts, etc.) - assuming they have a self-preservation instinct, either trained in on purpose or emergent (hopefully this is implied by ‘mirror life’.)
Yeah, that's well articulated and well reasoned. Unfortunately, so long as these agents are in some way able to make money for their owner, the argument is totally moot. You cannot expect capitalists to think of anything other than profit in the next quarter or the quarter after that.
Retained mode (for desktop apps) was/is primarily about creating an object hierarchy that lets you minimize full redraws and the like -- whereas for something immediate mode, you'd assume you're basically asking to push frames directly without any barriers.
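Schematically, the two shapes look something like this (made-up minimal APIs standing in for a toolkit, not any real framework):

    # Retained mode: build a persistent widget tree once; the toolkit
    # diffs it against screen state and repaints only damaged regions.
    class Button:
        def __init__(self, label, on_click):
            self.label, self.on_click = label, on_click

    widget_tree = [Button("OK", lambda: print("clicked"))]  # lives across frames

    # Immediate mode: no retained widgets; the whole UI is re-declared
    # every frame, i.e. you are effectively pushing full frames yourself.
    class UI:
        def button(self, label):
            # draws the button this frame; returns True if it was clicked
            return False  # stub, since there is no real input here

    def frame(ui):
        if ui.button("OK"):
            print("clicked")

    frame(UI())  # called once per frame, every frame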
'computer' is an ambiguous word. In a mathematical sense a computational process is just any which can be described as a function from the naturals to the naturals -- i.e., any discrete function. This includes a vast array of processes.
A programmable computer is a physical device which has input states that can be deterministically set, and which reliably produces output states.
A digital computer is one whose state transition is discrete. An analogue computer has continuous state transition -- but still, necessarily, discrete states (by def of computer).
An electronic digital programmable computer is an electronic computer whose voltage transitions count as discrete states (i.e., 0/1 V cutoffs, etc.); it's programmable because we can set those states causally and deterministically; and its output state arises causally and deterministically from its input state.
In any given context these 'hidden adjectives' will be inlined. The 'inlining' of these adjectives causes an apparent gatekeepery Lumper/Splitter debate -- but it isn't a real one. It's just ignorance about the objective structure of the domain, and so a mistaken understanding of which adjectives/properties are being inlined.
Ah well, that's true -- so we can be more specific: discrete, discrete computable, and so on.
But to the overall point, this kind of reply is exactly why I don't think this is a case of L vs. S -- your reply just forces a concession to my definition, because I am just wrong about the property I was purporting to capture.
With all the right joint-carving properties to hand, there is a very clear matrix and hierarchy of definitions.
Word definitions are arbitrary social constructs, so they can't really be correct or incorrect, just popular or unpopular. Your suggested definitions do not reflect current popular usage of the word "computer" anywhere I'm familiar with, which is roughly "Turing-complete digital device that isn't a cellphone, tablet, video game console, or pocket calculator". This is a definition with major ontological problems, including things such as automotive engine control units, UNIVAC 1, the Cray-1, a Commodore PET, and my laptop, which have nothing in common that they don't also share with my cellphone or an Xbox. Nevertheless, that seems to be the common usage.
> Word definitions are arbitrary social constructs, so they can't really be correct or incorrect, just popular or unpopular.
If you mean that classifications are a matter of convention and utility, then that can be the case, but it isn’t always and can’t be entirely. Classifications of utility presuppose objective features and thus the possibility of classification. How else could something be said to be useful?
Where paradigmatic artifacts are concerned, we are dealing with classifications that join human use with objective features. A computer understood as a physical device used for the purpose of computing presupposes a human use of that physical thing “computer-wise”; that is to say, objectively, no physical device per se is a computer, because nothing inherent in the thing is computing (what Searle called “observer relative”). But the physical machine is objectively something, which is to say ultimately a collection of physical elements of certain kinds operating on one another in a manner that affords a computational use.
We may compare paradigmatic artifacts with natural kinds, which do have an objective identity. For instance, human beings may be classified according to an ontological genus and an ontological specific difference such as “rational animal“.
Now, we may dispute certain definitions, but the point is that if reality is intelligible -- something presupposed by science and by our discussion here, at the risk of otherwise falling into incoherence -- that means concepts reflect reality, and since concepts are general, we already have the basis for classification.
No, I don't mean that classifications are a matter of convention and utility, just word definitions. I think that some classifications can be better or worse, precisely because concepts can reflect reality well or poorly. That's why I said that the currently popular definition of "computer" has ontological problems.
I'm not sure that your definition helps capture what people mean by "computer" or helps us approach a more ontologically coherent definition either. If, by words like "computing" and "computation", you mean things like "what computers do", it's almost entirely circular, except for your introduction of observer-relativity. (Which is an interesting question of its own—perhaps the turbulence at the base of Niagara Falls this morning could be correctly interpreted as finding a proof of the Riemann Hypothesis, if we knew what features to pay attention to.)
But, if you mean things like "numerical calculation", most of the time that people are using computers, they are not using them for numerical calculation or anything similar; they are using them to store, retrieve, transmit, and search data, and if anything the programmers think of as numerical is happening at all, it's entirely subordinate to that higher purpose, things like array indexing. (Which is again observer-relative—you can think of array indexing as integer arithmetic mod 2⁶⁴, but you can also model it purely in terms of propositional logic.)
And I think that's one of the biggest pitfalls in the "computer" terminology: it puts the focus on relatively minor applications like accounting, 3-D rendering, and LLM inference, rather than on either the machine's Protean or universal nature or the purposes to which it is normally put. (This is a separate pitfall from random and arbitrary exclusions like cellphones and game consoles.)
> That's why I said that the currently popular definition of "computer" has ontological problems.
Indeed. To elaborate a bit more on this...
Whether a definition is good or bad is at least partly determined by its purpose. Good as what kind of definition?
If the purpose is theoretical, then the common notion of "computer" suffers from epistemic inadequacy. (I'm not sure the common notion rises above mere association and family resemblance to the rank of "definition".)
If the purpose is practical, then under prevailing conditions, what people mean by "computer" in common speech is usually adequate: "this particular form factor of machine used for this extrinsic purpose". Most people would call desktop PCs "computers", but they wouldn't call their mobile phones computers, even though ontologically and even operationally, there is no essential difference. From the perspective of immediate utility as given, there is a difference.
I don't see the relevance of "social construction" here, though. Sure, people could agree on a definition of computer, and that definition may be theoretically correct or merely practically useful or perhaps neither, but this sounds like a distraction.
> I'm not sure that your definition helps capture what people mean by "computer" or helps us approach a more ontologically coherent definition either.
In common speech? No. But the common meaning is not scientific (in the broad sense of that term, which includes ontology) and inadequate for ontological definition, because it isn't a theoretical term. So while common speech can be a good starting point for analysis, it is often inadequate for theoretical purposes. Common meanings must be examined, clarified, and refined. Technical terminology exists for a reason.
> If, by words like "computing" and "computation", you mean things like "what computers do", it's almost entirely circular
I don't see how. Computation is something human beings do and have been doing forever. It preexists machines. All machines do is mechanize the formalizable part of the process, but the computer is never party to the semantic meaning of the observing human being. It merely stands in a relation of correspondence with human formalism, the same way five beads on an abacus or the squiggle "5" on a piece of paper denotes the number 5. The same is true of representations that denote something other than numbers (a denotation that is, btw, entirely conventional).
Machines do not possess intrinsic purpose. The parts are accidentally arranged in a manner that merely gives the ensemble certain affordances that can be parlayed into furthering various desired human ends. This may be difficult for many today to see, because science has - for practical purposes or for philosophical reasons - projected a mechanistic conceptual framework onto reality that recasts things like organisms in mechanistic terms. But while this can be practically useful, theoretically, this mechanistic mangling of reality has severe ontological problems.
I'm not convinced your L/S dichotomy applies. The concern there is that the natural world (or some objective target domain) has natural joints, and the job of the scientist (, philosopher, et al.) is to uncover those joints. You want to keep 'hair splitting' until the finest bones of reality are clear, then group hairs up into lumps, so their joints and connections are clear. The debate is whether the present categorisation objectively under/over-generates, and whether there is a fact of the matter. If it over-includes, then real structure is missing.
In the case of embeddings vs. vectors, classical vs. baroque, transpiler vs. compiler -- I think the apparent 'lumper' is just a person ignorant of the classification scheme offered, or at least, ignorant of what property it purports to capture.
In each case there is a real objective distinction beneath the broader category that one offers in reply, and that settles the matter. There is no debate: a transpiler is a specific kind of compiler; an embedding vector is a specific kind of vector; and so on.
There is nothing at stake here as far as whether the categorisation is tracking objective structure. There is only ignorance on the part of the lumper: the ignorant will, of course, always adopt more general categories ("thing" in the most zero-knowledge case).
A real splitter/lumper debate would be something like: how do we classify all possible programs which have programs as their input and output? Then a brainstorm which does not include present joint-carving terms, e.g., transformers = whole class, transformer-sourcers = whole class on source code, ...
> i think the apparent 'lumper' is just a person ignorant of classification scheme offered, or at least, ignorant of what property it purports to capture.
>In each case there is a real objective distinction
No, Lumper-vs-Splitter doesn't simply boil down to plain ignorance. The L/S debate in the most sophisticated sense involves participants who actually know the proposed classifications but _choose_ to discount them.
Here's another old example of a "transpiler" disagreement subthread where all 4 commenters actually know the distinctions of what that word is trying to capture but 3-out-of-4 still think that extra word is unnecessary: https://news.ycombinator.com/item?id=15160415
Lumping-vs-Splitting is more about emphasis vs de-emphasis via the UI of language. I.e. "I do actually see the extra distinctions you're making but I don't elevate that difference to require a separate word/category."
The _choice_ by different users of language to encode the difference into another distinct word is subjective not objective.
Another example could be the term "social media". There's the seemingly weekly thread where somebody proclaims, "I quit all social media" and then there's the reply of "Do you consider HN to be social media?". Both the "yes" and "no" sides already know and can enumerate how Facebook works differently than HN so "ignorance of differences" of each website is not the root of the L/S. It's subjective for the particular person to lump in HN with "social media" because the differences don't matter. Likewise, it's subjective for another person to split HN as separate from social media because the differences do matter.
> Here's another old example of a "transpiler" disagreement subthread where all 4 commenters actually know the distinctions of what that word is trying to capture but 3-out-of-4 still think that extra word is unnecessary
Ha. I see this same thing play out often where someone is arguing that “X is confusing” for some X, and their argument consists of explaining all relevant concepts accurately and clearly, thus demonstrating that they are not confused.
I agree there can be such debates; that's kinda my point.
I'm just saying, often there is no real debate; it's just that one side is ignorant of the distinctions being made.
Any debate in which one side makes distinctions and the other is ignorant of them will be an apparent L vs. S case -- to show "it's a real one" requires showing that answering the apparent L's question doesn't "settle the matter".
In the vast majority of such debates you can just say, e.g., "transpilers are compilers that maintain the language level across input/output langs; and sometimes that's useful to note -- e.g., that TypeScript has a JS target." If such a response answers the question, then it was a genuine question, not a debate position.
I think in the cases you list, most people offering L-apparent questions are asking a sincere learning question: why (because I don't know) are you making such a distinction? That might be delivered with some frustration at their misperception of "wasted cognitive effort" in such distinction-making -- but it isn't a technical position on the quality of one's classification scheme.
> it's just one side is ignorant of the distinctions being made.
> No, Lumper-vs-Splitter doesn't simply boil down to plain ignorance.
If I can boil it down to my own interpretation: when this argument occurs, both sides usually know exactly what each other are talking about, but one side is demanding that the distinction being drawn should not be important, while the other side is saying that it is important to them.
To me, it's "Lumpers" demanding that everyone share their value system, and "Splitters" saying that if you remove this terminology, you will make it more difficult to talk about the things that I want to talk about. My judgement about it all is that "Lumpers" are usually intentionally trying to make it more difficult to talk about things that they don't like or want to suppress, but pretending that they aren't as a rhetorical deceit.
All terminology that makes a useful distinction is helpful. Any distinction that people use is useful. "Lumpers" are demanding that people not find a particular distinction useful.
Your "apparent L's" are almost always feigning misunderstanding. It's the "why do you care?" argument, which is almost always coming from somebody who really, really cares and has had this same pretend argument with everybody who uses the word they don't like.
I mean, I agree. I think most L's are either engaged in a rhetorical performance of the kind you describe, or they're averse to cognitive effort, or ignorant in the literal sense.
There are a small number of highly technical cases where an L vs S debate makes sense, biological categorisation being one of them. But mostly, it's an illusion of disagreement.
Of course, the pathological-S case is a person inviting distinctions which are contextually inappropriate ("this isn't just an embedding vector, it's a 1580-dim EV!"). So there can be S-type pathologies, but I think those are rarer, and mostly people roll their eyes rather than mistake it for an actual "position".
All ontologies people claim to be ontologies are false in toto
All "ontologies" are false.
There is, to disquote, one ontology which is true -- and the game is to find it. The reason getting close to that one is useful, the explanation of its utility, is its singular truth.
Your first line assumes that `q` fails to refer to an objective property. The `e^q` space isn't quality, any more than `e^t` is temperature (holding the property we are talking about fixed). Thus the comment ends up being circular.
The issue was with the word "it". In the sentence, that word is acting as an indirection to both q and e^q instead of referring to a unitary thing. So yes, "it" does become linear/sublinear, but "it" is no longer the original subject of discussion.
I don't think you're wrong but I think I failed to convey the point I wanted to make.
What I was getting at is that, without an objective way of measuring, the whole idea of super- or sub-linearity becomes ill defined. You can make something sub-linear purely by definition, so the argument becomes tautological or indeed circular.
So an article that talks about perceived quality without any discussion of how people perceive quality, or importantly of differences in quality, can say pretty much anything, and it will be true for some definition of quality. You can't just silently assume perceived quality to be something objective; if you give no arguments, you should assume it to be subjective.
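A one-line worked example of the tautology, using an arbitrary monotone rescaling of the same quality ranking:

    q(t) = \log t \quad \text{(sublinear in } t\text{)}, \qquad u(t) = e^{q(t)} = t \quad \text{(linear in } t\text{)}

Both q and u induce the identical ordering of outcomes; only the unit differs. So "quality grows sublinearly" flips truth value depending on which rescaling you baptize as "quality", unless some measurement procedure pins the scale down.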
But what is 'tech'? When we say the EU isn't competitive in 'tech', what technology capacity do they lack?
I think 'tech' as a category doesn't make much sense any more. It's like saying 'road-based business'. Most companies are 'tech' companies.
Ignoring the technical element, what are the US megacorps that account for all the GDP growth of the last while?
Mostly ad market monopolies, then mostly massive-scale IP theft, etc.
I think the EU is 'behind' the US only in its inability to be well-positioned to build massive rent-seeking megacorps. I don't see where else this gap is supposed to be.
What, on 'tech', should the EU be doing differently? Just allowing megacorps to add 30% to every transaction?
If this US tech bubble bursts, it's not clear that the EU won't be better positioned to pick up the pieces.
Giving money to random tech schmucks, seeing the valuation of their thing get to 1B, then lobbying for laws to make it a monopoly. This kind of thing.
That, and selling the ice-cream flavor preferences of 1B people to guess who is more susceptible to brainwashing, so your friends can get reelected.
All very nice technology, including the kind that destroys an entire solar system to make a sacrifice big enough to summon Cthulhu.
> Mostly ad market monopolies, then mostly massive-scale IP theft, etc.
Good point. I think that the EU has an opportunity to really grow here by focusing on intellectual property enforcement, patents, copyright, and generally more strict enforcement on making sure returns for ideas go to the first person to think of them. The EU already has a pretty strong governance lead and should double down on where its strengths lie.
Define "innovation", it's such an overused umbrella term that it's lost any meaning, usually when I read it in comments like yours I simply have no idea what you mean by it.