The point of LLMs is to refute language: to turn anything remotely tied to a specific reality into full arbitrariness, essentially dislodging our relationship to reality.
The only remaining point of arbitrary language is to refute itself prior to automation, where it becomes nothing.
Either we shift to direct perception or we succumb to arbitrariness led by feudal primates.
The question is how we did not see the cultish idea of anthropomorphizing machines that use words. Words are nothing. The "space" between words, as arbitrary as the words themselves, is not meaningful in terms of actions. The images we take and automate in AI are arbitrary. There's nothing to automate in reality that doesn't require our action-syntax to participate in it.
AI is a completely buffoonish mistake. It's a road to nowhere that words and symbols began and that counting (binary) adds the illusion of thought to. That we lazily automated words instead of solving them is total self-deception.
Tech's problem is that it's trapped in the ancien régime of cog-sci: beliefs, intents, motivations, without recognizing that the words we use come beset by those initial misconceptions. We can't extract them from the arbitrariness, nor can we seem to grasp where belief, motivation, and intent connect seamlessly to hormones, the endocrine system, and neurotransmitters. We don't yet understand where we take control from our biology. William James saw this; how did Hinton and McCulloch not?
>The point of LLMs is to refute language: to turn anything remotely tied to a specific reality into full arbitrariness, essentially dislodging our relationship to reality.
That is not and has never been the point of LLMs. It has that effect mostly because the web and social media have already fractured consensus reality into an infinite fractal of hyperrealities where LLMs can fill the void of societal alienation, but correlation is not causation.
>The question is how we did not see the cultish idea of anthropomorphizing machines that use words.
We did. We saw this coming from miles away. Like everyone who criticized LLMs and AI, we were ridiculed as delusional Luddites standing in the way of progress. So it goes.
>Words are nothing. The "space" between words, as arbitrary as the words themselves, is not meaningful in terms of actions.
And yet here you are expressing your thoughts and opinions with words. Odd.
It's clear you believe you're on to something profound regarding the nature of cognition but your excessive verbosity combined with a lack of specific sources and concrete ideas makes you come off as a bit of a crank. It's telling that the one time an actual neuroscientist called you out, you dismissed their entire field as "folk psychology." Giving off strong "Here is my thesis on why Einstein was a fraud and free energy is real" vibes.
The point of high-dimensional space is to generate the illusion of specifics from arbitrary intermediaries, from what is thought to be specific.
(vectors and high-dimensional spaces; vocab space; embeddings)
That it is conceptually distinct from language does not erase the inherent arbitrariness that links both fatally.
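To make the object of dispute concrete, here is a minimal sketch of those intermediaries, assuming only a toy vocabulary and random, untrained vectors (not any real model's weights): each token is assigned an arbitrary point in a high-dimensional space, and "similarity" between tokens is read off the geometry of those points.

    import numpy as np

    # Toy illustration (hypothetical vocabulary, untrained random vectors):
    # each token gets an arbitrary coordinate in a high-dimensional space.
    rng = np.random.default_rng(0)
    vocab = ["river", "bank", "money", "water"]
    dim = 8  # real models use hundreds to thousands of dimensions

    embeddings = {tok: rng.standard_normal(dim) for tok in vocab}

    def cosine(u, v):
        # "Similarity" here is purely a geometric relation between vectors.
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    for a in vocab:
        for b in vocab:
            if a < b:
                print(f"{a:>6} ~ {b:<6} {cosine(embeddings[a], embeddings[b]):+.3f}")

Trained models learn these coordinates from co-occurrence statistics rather than drawing them at random, but the vectors remain conventional intermediaries; nothing in the geometry itself ties "river" to any river.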
Yes, words are nothing: the only reputable operation of language is to refute itself on the path to next-gen specifics (action-syntax or otherwise). Make sense? All these words are not for naught, but they only have one purpose.
If you need context or a citation for the notion that language is irrelevant: Aristotle on non-contradiction, and, if you need it expanded into a clear argument, Cassirer's Language and Myth, which eviscerates language in 1946.
That we remained blind to these disproofs is hair-raising.
You've certainly refuted my hope that you might have a coherent point to make. "I won't bother to back up any of my claims, but it's shocking how no one recognizes what a genius I am" is a hell of a red flag. Good day, mallowdram.
If you had any grasp of what signaling is vs. communication, you'd be asking questions rather than characterizing, which is the refuge of the inhibited and the limited.
You and most industries live under the conduit-metaphor paradox: that meaning can be extracted from anything other than an action-reaction specific. This eliminates words as meaning-bearing, and reiterates that an arbitrary system has little certain evolutionary effect. They're handicaps.
Language is meaningless, except as a status, domination, manipulation, or control factor.
“We refute (based on empirical evidence) claims that humans use linguistic representations to think.”
Ev Fedorenko, Language Lab, MIT, 2024
What Aristotle hypothesized about non-contradiction, and Cassirer then theorized for language, is revealed in Systemic Functional and Western functional linguistics, and then proven in aphasia studies beginning in 2016.
Language is done; it serves no purpose except to wither us in disinformation dark-matter biases or to replace itself.
If you knew humanity would anthropomorphize AI, you already knew the basics of the worthlessness of language.