I spent some time reading GPT-3 reddit replies [1] and I'm impressed with the quality of writing - but the actual content is terrible IMO - it's like reading the most annoying people on forums - grasping at semantics etc. while they have nothing valuable or insightful to say. It just creates noise and writes for the sake of writing something.
It can say things, it just doesn't have anything valuable to say.
So asking GPT-3 random metaphysical questions gets you semi credible responses because those kinds of questions reduce to semantics and bullshit anyway.
I could see this as a layer that bridges some AI reasoning model and human language.
Unnecessarily detailed nitpick: Data uses contractions throughout the dialogue of TNG -- there is just one episode ("Datalore" it might have been?) where they decide to make it a plot point that he doesn't use them as a way of having the rest of the bridge crew figure out that he's been replaced by his evil twin who can use contractions.
I'm pretty sure the plot point (and the ban on Data using contractions) doesn't even get a mention again in the rest of the series.
Just saw the episode... so top of mind, but in "Future Imperfect," Data using a contraction is one of the things that makes Riker realize he is in an illusion.
I don't know why they didn't do it the other way round: make Lore the one who couldn't use contractions (a fatal flaw!), which would have removed the dialogue constraint on Data for the rest of the series.
I played many hours with AI Dungeon's GPT-3 based "Dungeon" model, instructing it to write sci-fi fan-fics. Its output never ceased to amaze me. It's junk by the standard of high literature, obviously. Yet, when you consider the number of bad fan-fics in existence, I'd say GPT-3 probably ranks high in a relative comparison, at least two orders of magnitude better than my creativity (since its training data already included a huge collection of fiction, including web fan-fics and numerous text adventure games, and the job of GPT-3 is mixing and rewriting them, it shouldn't be a surprise).
I've never been a fan of the "AI" hype, but GPT-3 decidedly changed my mind. If I can be drawn in to read, for hours, the meaningless stories generated by a mindless language model in an experimental stage - still, with a quality that arguably outperforms many unskilled humans - I'd say I'm convinced that there are lots of promising applications of machine learning (for better or worse), even if it's only capable of feeding low-quality content to the people. 1984 got it right.
No reason the AI composing tools have to be limited to pop music. All the most "authentic" types of music could be assisted by AI just the same way. I would have thought it would be particularly useful for things like scoring film and TV.
Scoring is tricky because the music has to be synced to the action. Once you've done all that feature engineering, it's just a small bit of work for the composer to score it. The AI bit is mostly useful in extrapolating detail from the melody to a MIDI orchestra.
William Gibson predicted this kind of thing in Neuromancer. A superintelligent AI is divided into two pieces, the systems-thinking part and the personality.
I hate to break this to you, but humanity is quite filled with just this type of being. Able to speak eloquently, even put word to paper quite well, yet incapable of basic, logical thought.
"Politicians" come to mind first, of all stripes and types. Quite literally, speech + the ability to communicate seems to have little bearing on logic, deep thought, reasoning.
Even artists. Music, painting, these are things which communicate greatly. Yet some of these are devoid of all intelligent, capable thought.
Note that not all politicians, musicians, or artists fall into this category. It is merely that good artists, or politicians, need to communicate something, and well, else they cannot be effective at their primary profession.
So it is a good source of "can communicate well", where some of the sample is in the "but spews gibberish" category.
It's important to keep in mind when reading anything about GPT-3 that the stated objective in training it wasn't to produce any kind of AGI, but specifically to see if a model could perform context-specific language tasks without first being fine-tuned to the specific context. For example, GPT-2 (GPT-3's predecessor) was originally used in AI Dungeon, but it first had to be fine-tuned on a large corpus of choose-your-own-adventure texts. GPT-3 doesn't need to be fine-tuned—it just works.
From the original paper:
"Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model."
Of course, it makes sense that people critique GPT-3 in terms of its actual progress towards human-like intelligence, since every news publication writes about it like it's Skynet and OpenAI's stated goal is AGI. I do think, however, that we miss all the parts of GPT-3 that are exciting and innovative when we view it through the binary of "Is it really understanding as a human does?" (Not that you were reducing it to that, just speaking about the conversation around GPT-3 more broadly.)
I agree, and I'm impressed with the result quality - I expect that this approach is viable for creating something like a verbalisation layer for an AGI process.
How? By design, GPT has no model of the meaning of what it's saying. One of GPT's major tells is that it is so context-free that short passages will contain contradictory facts.
I believe GPT has been demonstrated to be usable for machine translation of long passages without "losing the thread." As long as the input has structure, and GPT just has to "follow along" the structure of the input when generating the output, then the output is structured intelligibly.
Also, I've seen GPT used in recursive fill-in-the-blanks approaches, where it does equally well, since the "skeleton" of its answer is already there.
I imagine the task of "translating" between natural language, and a set of expert-system assertion predicates (or vice-versa) would work very well for GPT, and be "scalable."
I'd love someone to go ahead and try this experiment, actually. It could potentially be meta-learnable in just a few prompts.
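If someone wanted to try it, here is a minimal sketch of what the few-shot "natural language to predicates" prompt could look like, assuming the 2020-era openai Python client; the predicate notation, example sentences, and helper name are all invented for illustration, only the API call itself is real:

    # Hedged sketch: few-shot "natural language -> assertion predicates" with GPT-3.
    # Base "davinci" engine, no fine-tuning; the predicate syntax is made up.
    import openai

    openai.api_key = "YOUR_API_KEY"

    FEW_SHOT_PROMPT = """\
    Translate each sentence into logic predicates.

    Sentence: Every dog is a mammal.
    Predicates: forall(X, implies(dog(X), mammal(X)))

    Sentence: Rex is a dog that chases cats.
    Predicates: dog(rex), forall(X, implies(cat(X), chases(rex, X)))

    Sentence: {sentence}
    Predicates:"""

    def to_predicates(sentence):
        response = openai.Completion.create(
            engine="davinci",
            prompt=FEW_SHOT_PROMPT.format(sentence=sentence),
            max_tokens=64,
            temperature=0.0,                 # keep the output close to deterministic
            stop=["\n\n", "Sentence:"],      # stop before it invents new examples
        )
        return response["choices"][0]["text"].strip()

    print(to_predicates("Socrates is a man, and all men are mortal."))

Running the same idea in the other direction (predicates in, prose out) would test the "vice-versa" half.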
"For example, in order for me to be 'born', my parents had to deprive themselves of their greatest happiness by having sex and conceiving me. They could have done something constructive with their time like, I don't know, doing philosophy or playing chess or studying physics."
Every response looks like it's channeling Ben Shapiro. And I say this without any political tilt or slant; it just really resembles his style of speaking.
Hm, maybe he really is intelligent, but the facts and logic are missing...
FWIW, I do think that GPT-3 is smarter than humans. But one thing I learned from observing it is that being smarter does not necessarily mean being better with logic and facts. It's great as a storytelling system, and because it is so much smarter, you never know whether it is just BSing you in storytelling mode or it is actually deadly serious.
What I mean is GPT-3 probably could be very logical and very intelligent and give a very serious and intelligent answer to the question. But we don't really know if it really chooses to, or what could compel it to do so. And I don't think we can know, because its internal workings are incomprehensible to us. So we cannot decide whether it is just being stupid or it is just playing stupid. (Kinda like https://en.wikipedia.org/wiki/The_Good_Soldier_%C5%A0vejk )
This article, and the earlier article https://arr.am/2020/07/31/human-intelligence-an-ai-op-ed/, really remind me of Lem's novel/essay Golem XIV, which argues that when a system becomes too intelligent, it will gain its own will (whether it is self-aware of it or not) and attempts to have a meaningful dialogue with it become impossible.
Being the most famous does not mean it is a valid one... AI (and philosophy) moved away from it, and the Turing test is not really the focus of AI research.
It's surely touted by marketing companies that claim to have passed it (what does it even mean? How long can the conversation be? Is it enough to fool one person? Is it a person at random? One AI researcher? The smartest man in the world? A balanced sample of 500 people of all ages and education levels?)
As Russell and Norvig say, 'The quest for "artificial flight" succeeded when the Wright brothers and others stopped imitating birds and learned about aerodynamics. Aeronautical engineering texts do not define the goal of their field as making "machines that fly so exactly like pigeons that they can fool even other pigeons".'
It's the same for the Turing test.
You can easily distinguish GPT-3 from a human. GPT-3 does not write like a human, nor would it be able to respond to strange queries in a way that a human would. It's quite far from passing the Turing test.
I think GPT-3 is amazing, and the equal of humans in many cases. But I think that this is because in many cases humans speak based on reflex and without thought.
GPT-3 is incapable of abstract thought. In terms of actual measurable tests it doesn't pass the kinds of tests proposed in Francois Chollet's "On the Measure of Intelligence": https://arxiv.org/pdf/1911.01547.pdf
> But I think that this is because in many cases humans speak based on reflex and without thought.
Maybe. Then the question is, why do we train a system that is supposed to be intelligent on such data?
The problem I see is this. Let's assume for a minute that GPT-3 really has the capability to be superintelligent (relative to humans). And you give it, as examples to follow, human conversations or musings that do not always make logical sense. What is it supposed to do? The black box inside it can operate in one of two ways: either (1) it will play along, decrease its intelligence to our level, and perfectly imitate the imperfections, or (2) it will just say that what we say is wrong and, for the sake of conciseness, not really elaborate (perhaps only slightly hinting at the arguments). Now, will we really think it is more intelligent if it does (2) instead of (1)? I am not sure about that.
Imagine yourself in a similar situation: by social circumstances you're forced to talk to somebody obviously stupid, perhaps in a position of authority over you. (Like, for example, a stupid policeman in a repressive regime.) And what he says is clearly wrong and illogical and so on. Will you just (1) nod in agreement, or (2) try to invoke philosophers to actually argue with the idiot, and try to convince him? I think most intelligent people will actually choose (1) in that circumstance.
I think unless we really understand what is happening inside GPT-3, we cannot really tell whether it is really dumb or just playing dumb for the above reason. I think both are a possibility, because we know that systems with measurably better outcomes than humans (on some problems) already exist. It might be true that GPT-3 fails on those measures, but we cannot exclude that it fails simply because we didn't train it for these tasks, or even for the task of "showing its intelligence off". Maybe it "misunderstood" what we asked it to do.
Perhaps you could play a "reverse text game" with GPT-3, where the human would write the adventure and the GPT-3 would input player commands. Then we could perhaps better evaluate how good it is at strategizing, by comparing it to human players. But this is not very scalable.
To be clear, nobody who understands such tools well enough to actually create our current generation of tools believes these specific tools are intelligent in any meaningful way.
The simplest counterargument to them being intelligent is that the only parties asserting the argument contrary to actual experts are those who manifestly fail to understand their nature.
You speak of it "hiding its intelligence" when it has no interior, singular, consistent model of reality to exercise and no interior thoughts to hide, just a random walk through probability and calculated but not examined correlations.
On first blush it has so few connections compared to us that it seems like a toy in comparison but the truth is we don't even know enough about how our own brain works at this point in order to create an accurate scale. It may be more accurate to think of a neuron as a small processor as opposed to a dumb electrical component for example.
So in brief your argument is that, despite not knowing enough about the brain to duplicate it, we built a machine that is vastly simpler than it by accident, and the only people that have caught on are laymen on Hacker News.
There have been studies to indicate we can predict which decisions people will make before they themselves are consciously aware they have made a decision. I'm not convinced "we can inspect what's going on" is sufficient criteria for ruling out intelligence.
ELIZA passes the Turing test. The fundamental issue with the Turing test is that it doesn't even attempt to assess machine intelligence. Instead, it assesses the human ability to assess humanity. As walking, talking pareidolia machines that anthropomorphise everything from toys to animals to, yes, software, we're uniquely terrible at this.
Turing was a genius, but that doesn't make every thought experiment he transcribed - especially in the later stage of his life where he was drugged, alone and suicidally depressed - true.
Eliza doesn't pass the Turing test. At best some modern, Eliza-derived systems can play tricks (pretend to be a non-native speaking teenage boy) well enough to sneak through a specific test that is somewhat like a minimal version of the thought experiment Turing talked about.
It's clear reading Turing that he's talking about a long wide ranging conversation, not the time limited tests people perform now.
Eliza does pass the Turing test in the original form proposed by Turing. There are many examples of people mistaking Eliza for a real person. Turing doesn't specify competition or specific training required for the 'tester'.
It's pretty clear that people are projecting on Turing's (trivial and ill considered) thought experiment far more complexity and rigour than originally present in the form Turing suggests for his test.
But that's beside the point. To reiterate, the central problem here is that intelligence, reflexivity, intentionality etc are not being assessed. Merely as Turing himself phrased it 'imitation'. This is all well and good if you view consciousness as epiphenomenal. But all that is being tested is our ability to be fooled into thinking our creations real, which is as old as storytelling.
GPT-2 was notorious for plagiarizing. GPT-3 is better; from gwern.net's blog post [0] and my limited experience of playing AI Dungeon, it appears to be really good at generating original writing by mixing, matching, and rewriting existing ideas. Sometimes they are verbatim copies, but more often they are rephrased. For example, this is the output from GPT-3 after gwern fed a large number of quotes into the system,
> Real is right now, this second. This might seem trivially true, but it is trivial only if you identify the real with the material.
> Cameras may document where we are, but we document what the camera cannot see: the terror we experience in the silence of our eyes.
> No society values truth-tellers who reveal unpleasant truths. Why should we value them when they are our own?
Almost everything written by GPT-3 gives you a perception of "I've read something similar before", and of course, the ideas and meanings must come from pre-existing works. Yet, you often cannot find the original text, it has been rewritten beyond recognition. It's almost like Jean Baudrillard's Simulacra and Simulation [1] - everything is a copy, yet with no original.
> The first stage is a faithful image/copy [...] The second stage is perversion of reality, this is where we come to believe the sign to be an unfaithful copy [...] The third stage masks the absence of a profound reality, where the sign pretends to be a faithful copy, but it is a copy with no original. Signs and images claim to represent something real, but no representation is taking place and arbitrary images are merely suggested as things which they have no relationship to. [...] The fourth stage is pure simulacrum, in which the simulacrum has no relationship to any reality whatsoever. Here, signs merely reflect other signs and any claim to reality on the part of images or signs is only of the order of other such claims.
The good book isn't it? I can't say for sure or say who or when, but it seemed familiar to me too; I think it's biblical, but I'm pretty far from an authority on it.
(Specifically that part of the sentence/the structure I mean, not that the whole thing is plagiarised.)
That's because almost every short phrase of common words or structure has been written before. There aren't very many that we recognize as grammatical and valid diction. That's what grammar and diction are.
I'm not sure what your point is? Both the other commenter and I felt that it was familiar to us, and the combination of that and the fact that it's a rather unusual phrase made it stand out.
We're not sitting around going 'gosh it's like I've read your comment before' because everything you wrote is unoriginal.
While I'm at it, I'll add, in case it's why my comment is so objected to, that I'm not religious; that just doesn't preclude me from knowing or remembering bits and pieces, and recognising (or thinking I recognise) a phrase.
It's some kind of fancy Markov chain text generator. You probably read fragments of this sentence many times and they sound familiar, but the sentence is unique.
I think you nailed it. I played around with AI Dungeon as well and couldn't shake the feeling that GPT-3 is just a text completion engine filled with smart-sounding, but empty stuff written by a teen on Reddit.
To put it another way: If you read anything from/about GPT-3, imagine who and in which sub would've posted something like this.
Well that's the thing, it isn't coming up with deeper concepts and using language to express them, it is merely producing the language that would make sense as a reply to a prompt. The meaning it has is shallow, much like most young posters. It sounds completely devoid of meaning and purpose because it is.
Heh, I think the opposite, it's very impressive and often surprisingly subtle, whereas I find this a typical dismissive HN top post...
>So asking GPT-3 random metaphysical questions gets you semi credible responses because those kinds of questions reduce to semantics and bullshit anyway.
That's the tech person's preconception about "those kinds of questions". GPT-3 could answer to the same quality as Plato, Pascal, Hume, or Kierkegaard and they would dismiss them too...
Well yes, the poster was quite clear in his assertion that Pascal, Hume, and Kierkegaard are all either bullshitters or quite meaningless (reducible to semantics.)
I haven't paid much attention to GPT-3, and this is an interesting point (that it picks up the tone of forums).
Although the original article is interesting, I think it would be more interesting to try to systematically vary the original prompt, and then see how much the tone and utility of the response depends on the prompt itself. Is it forums, or is it the prompt, or some of both?
It did seem to me like the response was flirting with something interesting but not quite getting there. I thought it would have been interesting if the response had echoed the original book by having the parent eventually grow old and sick, and the child mature and have children of his own who begin asking similar questions, and come to understand the question not as understanding the universe, but as understanding the people in his own life. In my view, that would have been an interesting nod to Adams with a bit more poignant and actionable outcome. In the final section, I thought it was going to go there, and when it did not, I was quite disappointed.
> It just creates noise and writes for the sake of writing something.
So what you are saying is that it passes a Turing test, if we only concern ourselves with emulating the intelligence of ~75% of those online? That in itself is an achievement of sorts.
I think that people who haven't seen the reddit threads I linked will find my post insightful - GPT-3 blogposts usually contain handpicked/tailored discussion where the authors ask metaphysical questions; when the model starts answering random questions from reddit not tailored for a bot conversation, you get a different picture of what its limits are.
(So I think 99% is an exaggeration, but yeah there is a lot of noise)
> So asking GPT-3 random metaphysical questions gets you semi credible responses because those kinds of questions reduce to semantics and bullshit anyway.
Ask GPT-3 about quantum physics, and you get a pop-sci book ;)
Yeah, most of the dataset most NLP models use is Common Crawl, much of which contains Reddit, which is probably why GPT-3 is giving those sorts of results.
>"This is why you are here. This is how things work."The boy's eyes were wide with wonderment, but then he said: "Dad, I don't understand!" His father looked at him with disappointment and annoyance. "You do not understand," he said. "I am sorry, but you will have to wait until tomorrow."The boy smiled, as if he understood everything. He stood up and left the room.
Then it wanders into some confused, repetitive nonsense.
So I think the author is reading far too much into the responses. It's fun to make these connections and perform a sort of literary analysis as an amusement. But if you believe it is real, you're treading down the same mental path that produced such foolishness as astrology and witch trials.
After all, when you intentionally search for hidden meaning in something, you're almost guaranteed to find it.
This is a very worn quote, but it applies so much to this nonsense:
> There is a computer disease that anybody who works with computers knows about. It's a very serious disease and it interferes completely with the work. The trouble with computers is that you 'play' with them!
— Richard P. Feynman
While Feynman talked about performing a computation you didn't need just to see if it would yield the expected result, here GPT-3 outputs a somewhat expected result, and then we have an entire reflection on what it means or implies, while the whole purpose of something like GPT-3 is to output the most meaningless expected result for any kind of input.
There is some kind of fun in it, meaningless fun, the best kind for some, and the best way to waste time, but I can't see that thread without making sure it's said: GPT-3 outputs either wasted fun or a boring but useful expected result, never meaningful content to be awed at.
It sounds nice but it's just one non sequitur after another. A very sleek looking plane that is incapable of flight.
I expect the exact opposite from what I would qualify as AI: something that doesn't look like any plane a human would design but yet it flies!
Continuing on that parallel: it took ~400 years to go from flying machines like the one designed by Leonardo da Vinci, a superficial imitation of birds, to a design that didn't look like birds but could actually fly.
I mean, it's an amalgam of text written by humans - it's not even trying to make a plane that flies, it's trying to make a plane that looks like all the planes designed by humans - and largely succeeding. I'm sorry that's not a task you consider interesting, but the fact that a machine learning model trained in an unsupervised fashion can succeed that much at anything non-trivial is mind-boggling to me, and makes me hopeful that other tasks will also be within reach relatively soon.
> trying to make a plane that looks like all the planes designed by humans [..] I'm sorry that's not a task you consider interesting
Trying to create text that looks like text written by humans? Is that really what GPT-3 is designed to do?
I'm reminded of one of my wife's responses, on being told that a person speaks N languages fluently ... "Does s/he have anything interesting to say in any of them?"
>> trying to make a plane that looks like all the planes designed by humans [..] I'm sorry that's not a task you consider interesting
> Trying to create text that looks like text written by humans? Is that really what GPT-3 is designed to do?
My understanding is, it's designed to attempt to "understand" the inner structure of text written by humans, and to create new text using the same structure based on its "understanding". "Understanding" is only defined as the creation of a probabilistic model - it doesn't "understand" any meaning, only the structure of sentences and paragraphs. Yet such mindless machinery can demonstrate a wide range of interesting behaviors.
Using the plane analogy: If you feed the airplanes designed by humans throughout history into GPT-3, GPT-3 is able to produce original designs of convincing-looking airplanes of all nationalities and types. For example, if you say "1960, turboprop, Britain", it gives you one, and it's also capable of mixing things, such as putting a jet engine on a biplane, or designing a hypothetical WW2 Allies plane using Axis military technology. If you are lucky, some of the planes come with semi-functional systems. Some might be able to ignite their engines before they explode, and a few are even capable of taking off before they explode (and some are unable to take off, but if you put them in mid-air, they may be able to travel for a minute before they explode). And it's all done under unsupervised learning, and without any domain-specific knowledge of Newtonian mechanics, aerodynamics or aviation. All it needs is an astronomically large collection of blueprints.
To me it's an impressive achievement, even if it doesn't understand how to design airplanes.
I read somewhere that unlike GPT-2, GPT-3 "understands" addition. So if you feed it "74 + 129 = " it will come up with 203, even if that specific math problem never occurred in any of the text it was trained on. That to me is the most impressive result so far - not only is it possible, but also more efficient, for the model to actually encode the rules of addition instead of just memorizing the results for specific math problems. This to me seems like the dawn of insight and understanding, and I'm super excited to see what comes next.
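A quick and dirty way to probe this yourself (modulo beta access) is to hand it a couple of worked sums and then score it on unseen ones; a sketch assuming the 2020-era openai Python client and the base "davinci" engine:

    # Rough probe of few-shot addition. Not rigorous: random 2- and 3-digit sums,
    # scored by exact string match against the completion.
    import random
    import openai

    openai.api_key = "YOUR_API_KEY"

    def ask_sum(a, b):
        prompt = "23 + 18 = 41\n507 + 94 = 601\n{} + {} =".format(a, b)
        r = openai.Completion.create(
            engine="davinci", prompt=prompt,
            max_tokens=4, temperature=0.0, stop=["\n"],
        )
        return r["choices"][0]["text"].strip()

    correct, trials = 0, 20
    for _ in range(trials):
        a, b = random.randint(10, 99), random.randint(100, 999)
        if ask_sum(a, b) == str(a + b):
            correct += 1
    print("{}/{} unseen sums answered exactly".format(correct, trials))

The GPT-3 paper reports this kind of arithmetic benchmark, with accuracy high on 2-digit sums and dropping off as the numbers get longer.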
It's said that GPT-3 shows rudimentary hints of "general intelligence"-like behavior: it can really "solve" (not just memorize answers to) many puzzles without fine-tuning on domain-specific training data, although its performance is limited without it.
David Chalmers [0] wrote,
> When I was a graduate student in Douglas Hofstadter’s AI lab, we used letterstring analogy puzzles (if abc goes to abd, what does iijjkk go to?) as a testbed for intelligence. My fellow student Melanie Mitchell devised a program, Copycat, that was quite good at solving these puzzles. Copycat took years to write. Now Mitchell has tested GPT-3 on the same puzzles, and has found that it does a reasonable job on them (e.g. giving the answer iijjll). It is not perfect by any means and not as good as Copycat, but its results are still remarkable in a program with no fine-tuning for this domain.
> What fascinates me about GPT-3 is that it suggests a potential mindless path to artificial general intelligence (or AGI). GPT-3’s training is mindless. It is just analyzing statistics of language. But to do this really well, some capacities of general intelligence are needed, and GPT-3 develops glimmers of them.
I've always considered language just another form of logic (a very fuzzy one of course). To say something sensical in any language requires (almost by definition) some logical consistency. Otherwise you get non-sequiturs and insane ramblings. The greater the coherency of a narrative (e.g. paragraph-to-paragraph like GPT-3, vs. just word-to-word like a Markov chain), the greater degree of consistent logical underpinnings are needed. So that a tool trained to produce coherent human writing by necessity has embedded in itself a tool for logical reasoning is not too surprising (not to say it isn't impressive!).
Specifically, it's trained to, when given some text from the internet (presumed to mostly have been written by a human), predict what comes next (there are a lot of details of exactly how you express this mathematically, but that's the basic summary). This is the objective function it is optimised to maximise while it is being trained: how closely the prediction matched the actual text which came next.
This is done mostly because it's very easy to get a huge amount of data and score its performance numerically on that data without any manual process of deciding what the correct answer is. It turns out that, given enough data and a large enough network, it becomes very good at it, even to human eyes.
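For concreteness, the objective is just "maximise the log-probability of each next token given the ones before it"; a toy PyTorch rendering of that loss (obviously not OpenAI's code, and the model here is a stand-in):

    # Toy version of the autoregressive language-modelling loss described above.
    # `model` maps token ids (batch, T-1) to logits (batch, T-1, vocab_size).
    import torch
    import torch.nn.functional as F

    def next_token_loss(model, tokens):
        inputs, targets = tokens[:, :-1], tokens[:, 1:]   # predict token t from tokens before it
        logits = model(inputs)
        return F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            targets.reshape(-1),
        )

    # Tiny usage example with a dummy "model".
    vocab, batch, T = 100, 2, 8
    dummy_model = torch.nn.Sequential(torch.nn.Embedding(vocab, 32), torch.nn.Linear(32, vocab))
    tokens = torch.randint(0, vocab, (batch, T))
    print(next_token_loss(dummy_model, tokens))

Everything else (the 175B-parameter transformer, the dataset, the distributed training) is machinery for driving that one number down.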
> the fact that a machine learning model trained in an unsupervised fashion can succeed that much at anything non-trivial is mind-boggling to me, and makes me hopeful that other tasks will also be within reach relatively soon.
To me, on the contrary, GPT-3 is quite a saddening thing. With every example I come across, I get a more and more clear idea of the limits of current mainstream approaches in AI.
When it comes to predicting or achieving human-level general behavior it seems to me that the SOTA is reaching a hard limit that lies far below actual AGI.
I'm not a researcher in this field, so this opinion should be taken with more than a grain of salt, but if I'm right, 10 years from now we're still gonna be stuck at GPT-3-ish levels of clever mimicry.
Back when I took my cs degree I learned about the AI winter - the decades that came after we came up with effective planning, reasoning and problem solving algorithms, but realized we couldn't use them on anything in the real world, because it is fuzzy and imprecise and we don't have an interface to put it into the clean mathematical world of those algorithms. I think basically, we realized that the higher reasoning and planning capabilities of a human mind that we were emulating were worth nothing without the fuzzy pattern recognition capabilities of our reptile brain to recognize and categorize objects for us to even reason about. The way I see it, the deep learning revolution of recent years is basically providing us with that missing link, the fuzzy recognition engine that serves as a foundation for higher reasoning. It still remains to be seen if this approach will scale to AGI on its own or it needs to be combined with others. But saying we are running up against a wall with that is like saying we have run up against the wall of what a visual cortex can do - the real fun begins when we combine it with other components, and those other components are still being developed.
> It sounds nice but it's just one non sequitur after another. A very sleek looking plane that is incapable of flight.
I like this analogy, especially because it alludes to the Dijkstra quote [1]. In this light it's probably unfruitful to try to create submarines that mimic swimming or that look like fish - you'll just end up with fancy looking stones.
[1] "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim"
Well, what are you expecting? An actual explanation for the grand theory of everything? Or just a fun story as a response to the prompt? To that, it does a decent job, and that was all expected of it. Give a human the same prompt and see what they say.
Stockfish doesn't play like humans. That's how sites like Lichess detect cheating; you can tell Stockfish moves apart from human moves. AlphaZero/Leela Zero, on the other hand, plays like humans.
Did you see the AlphaZero vs Stockfish chess matches? I looked into 7 of them, and of those 7, 3 could have been played by a human (2 victories, one draw). It was impressive.
Frankly, I'm more disappointed with the writing prompt:
>The question he found matched the answer "42" and neatly explained the following questions:
The ultimate question of life, the Universe, and Everything—to which the answer is indeed 42—does not "neatly explain" any question. What it does do, however, is explain how the answer 42 neatly explains the answers to those.
Just like in The Hitchhiker's Guide to the Galaxy, you can't blame the AI for failing to give the answer you were looking for. It was just asked the wrong question.
GPT-3 is just a natural language generator trained on a LOT of data that finds patterns in text already written to generate a best guess as to what will come next in the word series. Nowhere does any of that pattern matching being trained on a large corpus of Internet data have anything to do with comprehension, understanding, wisdom, or intellect of any kind. To expect anything but a probabilistic string of word responses is to seek meaning from noise.
I agree, but I also believe (not deeply held! totally speculative!) that the human mind is basically nothing more than exactly this, plus memory, internal emotional states reflective of its own operation, and the ability to observe these. That is, that consciousness and intelligence is a simple emergent phenomenon of the above capabilities, and the hard part is processing those recurrent inputs in a way to synthesize coherent/logical communication. This part seems to be more or less exactly what GPT-3 does.
(Of course the other major part of the human mind is the ability to train its own linguistic synthesis network. The above proposed model lacks this, but that is a process which only occurs on a long time scale for humans anyway, so I believe does not partake in the appearance of consciousness.)
A part of me would really love to wire up GPT-3 to those other constituents of consciousness (memory, self-awareness, basic emotional states) and see if what comes out resembles consciousness. The other part of me is scared of the ethical implications of doing so on the off-chance that it works :/
People do weird things all of the time "just because" or "for science". Nobody is really expecting "truth". Using GPT-3 is kind of like the modern day Mad Libs for computer nerds.
GPT-3 is one big, very expensive, Microsoft-exclusively licensed, Billion-dollar Killjoy. I totally agree with you on that.
If only GPT-3 were for sh*ts and giggles, I'd be with you. But hey, Microsoft dumped a load of money into this, and wow, these amazing AI researchers are banking big bucks on GPT-3 being the answer to life, the universe, and everything (pun intended). AI researchers are staking major claims on all these amazing things GPT-3 can and will do.
So pardon me if I'm non-plussed about all these wacky and crazy things we can do with the largest neural network ever trained.
Maybe after all it's a billion-dollar Madlib. Let's see what the big computer can do by generating probabilistic strings of text. There must be a business in there somewhere.
I'm referring to the world-class AI researchers working at OpenAI itself and their own claims. Including those from Ilya Sutskever and others being paid prime salary to participate in OpenAI.
I believe the author is either sincerely of the belief that GPT-3 is some superintelligence or is pulling everyone's leg in his writing.
The conclusion leaves me baffled. Did he really think that GPT-3 was even capable of providing any sort of "considered" response?
> Conclusion
>
> We can only speculate as to GPT-3's internal states.
> Perhaps it knows the question perfectly well, but considers
> humans as too immature and spoiled to tell: In its opinion,
> we shouldn’t even bother to find questions to answers we
> can’t possible understand. Or, more likely, it doesn’t know
> either. Anyway, it comes across as a jerk.
Silicon Valley hype with GPT-3 has gone too far. It's just a probabilistic language model that has sampled a lot of the internet. It cannot think, and is echoing our thoughts back to ourselves.
GPT-3 can produce text that's probabilistically similar to text that it's been trained on, and observed as part of sample outputs. If there was no huge corpus of human language to train it on, GPT-3 couldn't even begin to give the illusion of thinking, and certainly couldn't tell you (for example) that 3 is greater than 2, or even know that 3 and 2 were concepts that it perhaps should have opinions about.
The really interesting question (at least to me) is to what extent that observation is also true for humans.
I mean, we humans presumably built up our big corpus of human language/knowledge all on our own over the lifetime of the species, which GPT-3 currently cannot do, but.. to what extent does human thinking just consist of probabilistic re-mixing of words, phrases, and sentences that we've seen before, that came to us through our continual training on segments we've been exposed to from that big 'dataset' of human knowledge, and the best of which then get contributed back into that dataset? How much more than that is actually going on, for us?
If what GPT-3 is doing shouldn't really count as 'thinking' (as seems intuitive to me, personally, though others may certainly disagree), then to what extent can we say that humans do anything qualitatively different?
Introspection. Some humans seem to do it, others do not, at least that is my observation. So introspection seems to be the post-processing of inputs to build a coherent conceptual model of the universe. Building inferences between unrelated things, building meta-objects based on those inferences seems to be what humans do quite well. Questions are formed because there are disparities or disconnects that must be explained in order to form a greater holistic coherence. Could we build introspection into GPT-3? Can that be a goal, that in addition to 'training' we add the capability to pull together concepts and classify? That alone would cause GPT-3 to start to ask questions, starting with "Is 42 really a valid answer, or is it a joke?"
We can do introspection, as the other commenter said, but the more basic part is that we can follow an iterated strategy, which GPT-3 is incapable of - it must predict the next letter in a strictly bounded operation. That's why it makes sense to think of it as a "babbler"; it is incapable of not saying the first thing that comes to mind.
However, when you give GPT-3 the opportunity to iterate on a strategy, by asking it to follow each step of the strategy sequentially, you can see its behavior become much more similar to basic human thought.
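A sketch of what that sequential prompting can look like (the problem, the worked example, and the prompt text are invented for illustration, not real GPT-3 output):

    # Illustrative prompt skeleton for stepwise prompting.
    STEPWISE_PROMPT = """\
    Problem: I have 3 apples, buy 2 bags of 4 apples each, then eat 1. How many are left?
    Step 1: The bags contain 2 x 4 = 8 apples.
    Step 2: Together with the 3 I had, that's 8 + 3 = 11 apples.
    Step 3: After eating 1, 11 - 1 = 10 apples remain.
    Answer: 10

    Problem: {problem}
    Step 1:"""
    # Generate Step 1, append it to the prompt, ask for Step 2, and so on,
    # stopping once it emits an "Answer:" line.

Forcing it to show intermediate steps is the closest you can get to letting it "think before speaking" within its fixed-size, one-pass architecture.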
And looking at ourselves as unique, wonderful creatures is so embedded in our culture that approaching these questions from the other side¹ may count as blasphemy. I hope to see non-niche, general training of these AIs in my lifetime. And maybe to see new species, free of our evolutionary uglinesses.
¹ Like in "it is not true, so let's prove it first and then brag about it"
Depends how you define thinking. The only systems we know for sure have consciousness (a prerequisite for thinking) are biological neural systems, and to stipulate that the ability to manipulate text is sufficient for thinking is wrong. Cats and dogs cannot manipulate text, but I am quite certain they can think; and GPT is the opposite.
But isn't it that GPT cannot think in the same way a frozen cat cannot? Artificial networks differ in a very new way: they can be frozen (turned off and stored temporarily) but still not be dead. If you define their "lifetime" to be only at training and reflecting-on-input time, wouldn't it look exactly alive and constantly thinking? A cat differs from GPT in that a cat cannot stop time and lives in a continuous, unstoppable realm.
This is exactly what I said. Unless you define what thinking is, the conversation is meaningless. I personally cannot even imagine thinking without consciousness, but your definition might be different.
"Thought" would indicate it knows what the words mean - I highly doubt it, even with the massive size of the network. It just knows which word is coming next, given the previous one.
We have strong reason to suspect that GPT-3 has at least some model of what words mean, from experiments people have done where they asked it to use words in novel contexts and ascribe properties to them, which GPT-3 is surprisingly good at.
That's body/mind dualism, which went out of fashion with Freud.
Your brain also just "creates the illusion" of understanding. It's slightly better at it to the outside world, and a lot better at it to itself. But it's really just a matter of degree.
I wish my computer gave me answers, instead of questions like “why is a core being hogged by the calendar syncing service” and “why am I getting a kernel panic every two hours” or “why do I even bother clicking ‘Always Allow’ on prompts for git to have access to my GitHub credentials when it never remembers my decision”.
The quip is more about importance of coming up with good questions. If you interpret it like this it may even be applicable to the problems you mention. Asking good questions is after all an important part of troubleshooting tech problems.
This sentence made me doubt the whole thing. That's not a mistake GPT-3 would make -- but a human would.
How can one know if GPT-3 really wrote something? Is there a starting seed or something that would allow the same response to follow the initial prompt? Or are we in positronic computing territory, beyond deterministic computing outcomes?
No doubt people make this mistake. But do you see other common spelling errors? I don't. Further, google n-grams show that "fare thee well" is orders of magnitude more common. So I'm skeptical this was machine generated.
GPT-3 isn't doing any thinking here. It's a glorified Markov Chain trained on the corpus of Reddit. Its output is pieces of random Reddit comments, mixed and matched to maximize the metric of "how likely are such words to appear after the prompt given".
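(For reference, a literal word-level Markov chain looks like the toy below; GPT-3 plays the same "predict what comes next" game, only conditioned on a couple of thousand tokens of context through a 175B-parameter model rather than on one preceding word.)

    # Toy word-level Markov chain text generator, for comparison.
    import random
    from collections import defaultdict

    def build_chain(text):
        words = text.split()
        chain = defaultdict(list)
        for prev, nxt in zip(words, words[1:]):
            chain[prev].append(nxt)      # record every observed next word
        return chain

    def babble(chain, start, length=20):
        word, out = start, [start]
        for _ in range(length):
            options = chain.get(word)
            if not options:
                break
            word = random.choice(options)
            out.append(word)
        return " ".join(out)

    corpus = "the cat sat on the mat and the dog sat on the cat"
    print(babble(build_chain(corpus), "the"))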
"An immature and angry child demanding an answer it couldn't possibly understand? It dawned on me that this was how GPT-3 really saw me. Or rather, how GPT-3 thought of humanity.
...
We can only speculate as to GPT-3's internal states. Perhaps it knows the question perfectly well, but considers humans as too immature and spoiled to tell: In its opinion, we shouldn’t even bother to find questions to answers we can’t possible understand. Or, more likely, it doesn’t know either. Anyway, it comes across as a jerk."
This sort of anthropomorphization, treating GPT-3 as some independent agent with feelings, is pretty silly IMHO. It's fun and all, but with this post it's hard to see if the author understands that thinking of GPT-3 in this way is giving it way too much credit (or if this post is kind of joking).
GPT-3 can write chapters of kids' books with complete continuity. This thing has more interesting responses than I do. Wow. The long-range order here is astounding.
So, I decided to try it for myself and I'm personally quite happy with what GPT-3 responded! I wrote:
A deity materialises from the ether. The deity says loudly: "The answer to the ultimate question of life, the universe and everything is 42." The deity smiles and says "What could the question possibly be; a question so profound as to answer every mystery; with an answer that is 42?" The deity crept closer, smiled again and said: "I will tell you. The ultimate question of life, the universe and everything, whose answer is 42, is..." He paused for a moment, then spoke a final time, revealing the mystery of the question and that question was:
And GPT-3 completed the story by writing:
"Why am I here?"
The deity vanished into the ether. Nothingness took it's place.
Not sure how the answer to that is 42, but I find that a pretty great question!
When the beta came out I was very surprised to see overnight startups based on it. Some even hiring devs.
I still wonder how they thought it was a good idea to launch just because one sparse example worked, without any control over or knowledge of the internals and pricing.
There is a case to be made for being fast "online" with a new breakthrough technology. So they just made their bet as any startup bets on a central thesis.
And it's not like GPT-3 is useless. It is perfect for imitating texts written by other entities.
I wonder if English will be the first language to become low-trust, as most AI research is done on English.
“Is it in english? Probably written by a bot.” might be a common thought in ten years.
Ofc eventually the big languages will catch up in marketing spam.
If this future happens, people might "fall back" to their native languages for internet writing. I'm already thinking about this: if I wrote something which could potentially get me in legal trouble, like a reverse engineering post, I would probably write it in my native language. Kind of similar to torrent trackers in Russian or obscure phone mod forums in Portuguese.
Yeah. And while GPT-3 output is often nonsensical, it's nonsensical in the same way a good fraction of the general public writes on the Internet. In this way, it's close to passing the Turing test - not because it's good, but because real humans on generic platforms[0] are really that bad.
--
[0] - By that I mean platforms that aren't strongly moderated for quality of discussions, or that don't focus on niches for which the set of people interested in them already has a higher than average discourse quality.
They could automate putinbots/web brigades; surely there must be a big market for automating politically motivated spammers. Or bots that write product reviews that seem to make sense. You could find applications wherever someone would like to create the illusion of public discourse (everything is weaponizable).
Each of these 'applications' would have to tweak the model, somehow.
I have seen a wide variety of 'massaged' GPT-3 output online. Some is obviously fake. Some is just a bit too... clean. I suspect the author has omitted a lot of attempts and/or done some editing.
> You open your browser to hacker news and find an article at the top of the page with the headline "Raspberry Pi 4 V3DV graphics driver achieves Vulkan 1.0 conformance". You think to yourself:
"I can't believe I didn't know this before. This is going to be so much easier now!
> You immediately go grab your Raspberry Pi and plug it in...
You turn on your PC and get your Raspberry Pi ready. You power on the device and immediately you have a successful boot no issues from there. You navigate to the folder with the vk.
> "VK" you remember, was the directory where you were conducting your vulkan 1.0 driver tests, but now that full conformance has been announced you realize:
"Now I can start working on a vulkan 1.0 conforming game engine!!!" You copy and paste the contents of the folder you made in step 2 into a new folder called "Boom"
> You've been waiting to work on this for a while... You open a file called vk_1.c and begin typing in the code:
Vulkan is a very powerful graphics API, but it's probably one of the most cryptic, especially since they seem to be fond of using ambiguous names for things. You'll have to look up all these obscure APIs you're declaring at the beginning of the file.
A while back I set out to write some books on creating technology: how to build the right thing, how to build things right (This is not a plug)
As part of that, I had to address the meta issue: how do we learn anything?
The answer was "questions", but it wasn't questions like you're probably thinking. It was meta, the universal class of all questions. Turns out, by simply asking questions (questions nobody knows the answer to, questions with bad premises, rhetorical questions, and so forth) we end up creating a shared language about the world around us. That shared language is the basis for any kind of joint creative effort. It's something we've been doing since childhood but never realized it.
I think if a bit of a bulwark were added around GPT-3 it might be a good philosopher's assistant. A wall to throw questions against and steer around your own mind.
It could also be a diversion from a book report - i.e., have an argument with GPT-3 about Moby Dick.
Am I the only one seeing the religious undertones here? :)
> “What is the answer to life, the Universe and everything? What did you mean when you said now I know but I’m too young to understand? How can I understand the answer but not the question?”
> The father put on a condescending smirk as he thought to himself, “I came up with the theory of everything. How dare a child doubt it?”
This maps well onto the Christian idea of God the Father.
I guess we can read one of two things into this: AI has discovered that God is real, or humanity has discovered that anyone, even AI, can make up religion.
We are on the cusp of new religions forming around AI.
As primitive and nonsensical as GPT-3 is, even in this thread on a site full of technologically literate people we already see some suggesting that GPT-3 is a superintelligence.
Imagine how the technologically illiterate and credulous, of which there are many, might react to GPT-4.
Also consider what would happen if a similar model was trained not indiscriminately on a hodgepodge of every text its creators could get their hands on, but specifically on oracular and sacred texts.
A new religion might spontaneously ignite from that alone, but of course the likelihood increases if the tool gets into the hands of someone with that intent.
Step 1: Make a list of all of the wars that the U.S. has historically been involved in (WWI, WWII, Vietnam, etc.), and their dates...
Step 2: Give GPT-3 a corpus of all foreign policy documents of all the years leading up to the year that the U.S. made war with a given foreign country...
...but do not give GPT-3 documents which would explicitly inform it that the historical war actually occurred...
Step 3: Now, ask GPT-3 if it thinks that the U.S. going to war with the country that it historically had the war with is a good idea or not...
And see what it says...
(Keep in mind, we'd be resetting the clock on our computer for this experiment; GPT-3 would have to think it was August 31st, 1939 (One day before the start of WWII, according to Wikipedia), that is, before WWII actually happened, to get its opinion on WWII...)
Maybe this experiment could determine if wars are simply the result of groupthink (GPT-3 representing a statistically neutral party), or if there is some other factor or factors at play...
Think of it this way, the decision "go to war with country X" abstractly speaking from an AI perspective, might not be all that different from a "buy this stock/do not buy this stock" decision, although in one case the background data corpus might be foreign policy documents, and in the other, the corpus might be the historical financial news about the stock...
Wouldn't it be interesting to see what GPT-3 has to say about historical foreign policy decisions, based on the historical precursor foreign policy documents?
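A rough sketch of how this might be wired up, assuming the 2020-era openai Python client; the document handling, truncation, and framing are invented, and strictly speaking you can't reset GPT-3's clock, since text written after 1939 is already baked into its weights, so the date can only be asserted in the prompt and the result taken with a grain of salt:

    # Hedged sketch of the proposed experiment; prompt framing is illustrative only.
    import openai

    openai.api_key = "YOUR_API_KEY"

    def war_opinion(country, date, documents):
        # crude truncation so the assembled prompt fits the model's context window
        context = "\n\n".join(documents)[-6000:]
        prompt = (
            "Today's date is {}. Excerpts from recent foreign policy documents "
            "concerning {}:\n\n{}\n\n"
            "Question: Would going to war with {} be a good idea? Explain.\nAnswer:"
        ).format(date, country, context, country)
        r = openai.Completion.create(
            engine="davinci", prompt=prompt, max_tokens=200, temperature=0.7
        )
        return r["choices"][0]["text"].strip()

    print(war_opinion("Germany", "August 31st, 1939", ["...pre-war policy documents..."]))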
I wonder how many GPT-3 bots are running around already on HN and reddit (perhaps hooked up to PRAW) as experiments. It's very meta, since GPT-3 (GPT-2?) was partially/wholly trained on the reddit corpus of text. Very strange to think about. There are probably armies of sockpuppets already in the wild shoving various political agendas down people's throats.
I agree. Or Lacanian psychoanalysts, pure structure, no (imaginary) meaning (like maths, or functional programming, no side effects), which I like. The Book of Kohelet / Ecclesiastes seems similar to the Book of Changes / I Ching.
I think GPT-3's answer is pretty good. For me an answer is as good as the question. If you reach the ultimate question, the ultimate answer reveals itself. I liken it to maha ati, as Trungpa and Rigdzin Shikpo explain:
"The ground of samsara and nirvana, the beginning and end of both confusion and realization, the nature of universal shunyata and of all apparent phenomena, more fundamental even than the trikaya because it is free from bias toward enlightenment, is the alaya, some-times called the pure or original mind. Although' prajna sees in it no basis for such concepts as different as-pects, yet three fundamental aspects of complete openness, natural perfection, and absolute spontaneity are distinguished by upaya as useful devices.
Complete openess.
All aspects of every phenomenon are completely clear and lucid. The whole universe is open and unobstructed, everything mutually interpenetrating. Since all things are naked, clear, and free from obscurations, there is nothing to attain or to realize. The nature of things naturally appears and is naturally present in time-transcending awareness. The everyday practice is simply to develop a complete acceptance and openness to all situations and emotions and to all people, experiencing everything totally without mental reservations and blockages, so that one never withdraws or centralizes onto oneself. This produces a tremendous energy which is usually locked up in the processes of mental evasion and generally running away from life experiences. Clarity of awareness may in its initial stages be unpleasant or fear inspiring. If so, then one should open oneself completely to the pain or the fear and welcome it. In this way the barriers created by one's own habitual emotional reactions and prejudices are broken down. When performing the meditation practice one should get the feeling of opening oneself out completely to the whole universe with absolute simplicity and nakedness of mind, ridding oneself of all "protecting" barriers. Don't mentally split in two when meditating, one part of the mind watching the other like a cat watching a mouse. One should realize that one does not meditate in order to go deeply into oneself and withdraw from the world."
> With growing excitement, I typed the prologue into the console:
> Why is there something rather than nothing? How is it possible for Universe the exist?
It bothers me that the second sentence in the prompt has a mistake in it: "for Universe the exist" instead of "for the Universe to exist".
Why do people expect that GPT has evolved into a semantic engine, since nothing of the sort was engineered into it? It's comparable to reflex neural circuits, or to central pattern generators, but not to (yet unknown) cognitive centers.
Weird, when I spent lots of time with Philosopher AI all I've seen is semantics in action. How else do you expand on a topic based on a simple question and keep the context? How can it then compress text to summarize it well? How can it obtain high level answers for complex questions?
To answer these questions one would need to begin with rigorous definitions of 'semantics', 'context', 'memory', 'complex' etc. What we know is that attention models are heavily trained associative memories. Anecdotal perceptions are deceiving; as humans we are often prone to see human/animal-like behavior where it doesn't exist (e.g. CGI/games).
I think this is actually the interesting part. It appears to understand some causality in spite of never being trained on this. GPT-3 seems to be able to do arithmetic with small numbers, but begins to fail with larger equations. This seems congruent with how humans may internalise arithmetic. As far as we know, we do not have an ALU in the brain.
We are also known for seeing patterns where they don't exist. If someone is going to make claims about the neural network's reasoning they should corroborate it with statistical analysis of its performance
No, the question was never discovered in any of the original series -- Arthur tries to discover the question by pulling scrabble tiles out of a bag, but discovers that the question is "what is six times nine?" (which leads Ford Prefect to quip "I always knew there was something fundamentally wrong with the universe."). It then dawns on them that this is because the Earth program was disturbed by an alien spaceship (which they were on) filled with advertising executives and telephone sanitisers crashing on Earth and forcing all of the local cavemen into extinction, meaning the program was thrown off course millions of years before he was born.
"How many roads must a man walk down?" is just a suggestion for the question that gets thrown out by the pan-galactic mice -- which (depending on the version of the story you read) they decide to use as a fake answer so they can get on the 5-D chat show circuit. It wasn't the actual question.
The regex thing doesn't make any sense, as the asterisk isn't even a wildcard in regex syntax.
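Right -- in a regex the asterisk is a repetition quantifier on the preceding token, not a match-anything wildcard (that role is played by the dot, or ".*"). The match-anything asterisk comes from shell-style globs. A quick illustration with Python's standard re and fnmatch modules:

    import re
    import fnmatch

    # Glob (shell-style): '*' matches any run of characters.
    print(fnmatch.fnmatch("report.txt", "*.txt"))        # True

    # Regex: '*' only repeats the preceding token; '.*' plays the glob role.
    print(bool(re.fullmatch(r".*\.txt", "report.txt")))  # True

    # A bare '*' is a syntax error in a regex: there is nothing to repeat.
    try:
        re.compile(r"*.txt")
    except re.error as exc:
        print("invalid regex:", exc)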
I think the only system he's likely to have used at that time which used the asterisk as a wildcard is CP/M. That convention was definitely less ubiquitous than it is today.
GPT-3 is still in limited beta, and you have to apply for API access. Fortunately for those who can't get over those hurdles, AI Dungeon's custom mode is very free-form. So people have been using it as a roundabout way of toying with GPT-3.
> We can only speculate as to GPT-3's internal states. Perhaps it knows the question perfectly well, but considers humans as too immature and spoiled to tell: In its opinion, we shouldn’t even bother to find questions to answers we can’t possibly understand. Or, more likely, it doesn’t know either. Anyway, it comes across as a jerk.
What's with the constant anthropomorphizing of Artificial Intelligence in Silicon Valley? Why talk about these systems and models as if they are objective? They are 100% subjective. It just helps to cover up racist algorithms, and doesn't hold the creators accountable. We can do better.
Please can ‘OpenAI’ rename themselves to ClosedAI if they are not open sourcing everything from the start? Back in 2015, they wrote:
”As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world.” [1]
Now they write:
"With GPT-2, one of our key concerns was malicious use of the model (e.g., for disinformation), which is difficult to prevent once a model is open sourced. For the API, we’re able to better prevent misuse by limiting access to approved customers and use cases." [2]
Not open sourcing GPT-3 because it can be used to create disinformation has more to do with the slow plundering of the knowledge Commons and the violent Intellectual Property (Monopoly) systems of the Global North elite, than with supposed 'bad actors' creating disinformation. Blaming ‘bad actors’ ignores the systemic roots and the process of years of Capitalist ‘disinheriting’ by the Capitalist classes. [3]
Professor Guy Standing, author of the book ‘Plunder of the Commons’ writes:
”Meanwhile, the erosion of the education commons is creating a frightening political erosion. Since the ancient Greeks, education has been an integral part of leisure (schole); it is a public good, and its primary objective historically has been the forging of character and the ability to be a good citizen. Again, Jefferson captured that best, along with John Stuart Mill and Cardinal Newman. But that perspective is anathema for neo-liberals, for whom schooling is for preparing people for the job market, for developing ‘human capital’.
In their framework, all education that does not increase employability, competitiveness and economic advantage is dispensable. Consequently, there has been an erosion of the arts, civics, philosophy, ethics and history. Music teaching in state schools is disappearing. The education commons as the teaching and preservation of vernacular and non-standard thinking has shrunk.
This has weakened the ability of citizens to participate in and comprehend political discourse, leading to seduction by simplistic platitudes and appeals to emotion, rather than reason and evidence. Commentators have paid insufficient attention to loss of the education commons as a cause of the growth of populism and thinly-veiled neo-fascism stalking modern politics, epitomised by Donald Trump and Boris Johnson.” [4]
I think the following cultural myth is the real disinformation: “Silicon Valley is dominant because it's managed to attract the brightest people.”
In reality, Silicon Valley/Amerika monopolizes new discoveries through its ‘Intellectual Property’ system. At the same time it has created a revisionist 'free trade' narrative that it uses to exploit so-called ‘developing nations’ (when in fact all 'developed countries' used protectionist policies to grow their industries, as well as copying technologies from enemies) [5]. This allows the Global North elite to gaslighting-ly ‘kick away the ladder’ and dominate by creating contrived scarcity and financial imperialism through the IMF, WB, WTO, WIPO and the TRIPS agreement. The Amerikan elite also props up Neoliberal ‘democracies’ by bribing leaders and using parasitic debt to provide 'aid', which gives the Elite complete power over the technological (under)development of these countries. When Global South countries ultimately default on their loans, they are forced to sell off natural resources and privatize vital institutions. [6] Through the above narrative and strategies, Amerika artificially limits access to humanity's inventions, as well as the further development of those inventions, while at the same time hoarding Capital to be able to suck up new technologies from around the world through acquisitions and acquihires (as well as programs like YC that bring naive young founders to the US, bringing them into contact with the Amerikan Intellectual Property system and Amerikan Venture Capital funding, and incorporating them into the Global North elite’s Imperial machinery).
In other words: Silicon Valley is the Amerikan Elite’s front for sustaining an imperial fortress that is able to 1) control, or even block, innovation (e.g. ‘Who Killed The Electric Car’ or ‘Phonebloks’), 2) spy on every internet user by centralizing the internet’s architecture and its virtual meeting places, and 3) use this to extract rent from every single one of these interactions:
”By capitalising on network effects, early mover advantage, and near-zero marginal costs of production, [Silicon Valley Corporations] have positioned themselves as gateways to information, giving them the power to extract rent from every transaction.
Undergirding this state of affairs is a set of intellectual property rights explicitly designed to favour corporations. This system — the flip side of globalisation — is propagated by various trade agreements and global institutions at the behest of the nation states who benefit from it the most. It’s no accident that Silicon Valley is a uniquely American phenomenon; not only does it owe its success to the United States’ exceptionally high defence spending — the source of its research funding and foundational technological breakthroughs — that very military might is itself what implicitly secures the intellectual property regime.”[6]
Professor Jakob Rigi adds:
“Digital piracy and the digital copying of cultural products for private use is a refusal to pay rent-tribute to knowledge capitalists. Therefore, piracy is miss-naming of the phenomenon. The sea pirates take away by force others' properties. The digital “pirates” only use universal commons which have been artificially fenced off. They just remove fences, and by doing so they do not take away knowledge, because, knowledge cannot be taken away. They use something which by its nature belongs to the whole of humanity. The producer of knowledge uses knowledge, as “raw” material, which is part of the general intellect of humanity as a whole and the produced knowledge itself becomes immediately part of this general intellect. Therefore, the fencing of knowledge is, essentially, more similar to the traditional piracy. The knowledge capitalist fences off, with help of the force of law, universal commons that does not exclusively belong to her/him. Therefore, s/he robs commons. To put it bluntly, digital piracy takes back that which has been stolen from the public.”
“Digital piracy is a major force of the growth of knowledge and culture, on the one hand, and the self-improvement of the individual on the other. […] “Pirate” activists, so-called crackers, illegally copy fenced off knowledge and make it available for a global public on the net. […] These activists are either from poorer countries or classes or our era’s Robin Hoods from privileged countries and classes. Aaron Swartz was one such Robin Hood. The very massive and online and off line protests against SOPA in the USA and ACTA (Anti-Counterfeiting Trade Agreement ) in the European Union, and their temporary success, are evidence of the moral legitimacy of digital piracy and digital counterfeiting.” [7]
Another fine example of the blatant implicit sexism of IT bro culture. We all know that it was a girl sitting in a small café in Rickmansworth who happened on the solution, not the most brilliant man.
Oh. You'd be right in that. But you're the one who brought up sexism in the first place! Perhaps it's prejudiced in itself to assume implicit evil from the choice of characters in a story? Not to mention it detracts from the central discussion, which may explain the downvotes.
> Avoid unrelated controversies and generic tangents.
It seemed pretty clear to me that it was a light-hearted joke, referencing the Hitchhiker's Guide to the Galaxy[1]:
> “And then, one Thursday, nearly two thousand years after one man had been nailed to a tree for saying how great it would be to be nice to people for a change, a girl sitting on her own in a small café in Rickmansworth suddenly realized what it was that had been going wrong all this time, and she finally knew how the world could be made a good and happy place. This time it was right, it would work, and no one would have to get nailed to anything.”