The informal verb grok was an invention of the science fiction writer Robert A. Heinlein, whose 1961 novel Stranger in a Strange Land placed great importance on the concept of grokking. In the book, to grok is to empathize so deeply with others that you merge or blend with them.
Elon picks the cringiest of 90's hacker memes and this guy jumps right up in front.
Fun fact: Elon first tried this X.com shit with PayPal, and got kicked out for gross incompetence when he tried to get everything running on Windows, "for reliability." He bought back the X.com name in 2017 or so and is now trying to turn Twitter into his 1990s dot-com meme.
PayPal was a product of the company acquired after the first time Musk was forced out as CEO of X.com. Musk was brought in as CEO again as a consequence of the acquisition, and a big part of why he was forced out again was that he didn't want to focus on the PayPal product or name, which... basically everyone else involved disagreed with.
So, I think the distinction goes to the heart of what the problem was for which he was kicked out.
If you really want to get to the heart of the matter:
"After Thiel resigned following a rift over Musk pushing for the PayPal system to go on a Microsoft platform instead of Unix-based software, Musk wanted to broaden the company's ambitions as more than just a money-transfer service by making the letter X more prominent in its branding and phasing out the PayPal name ..."
The "personality" that Elon seems to hint is the key differentiator can be trivially replicated with a ChatGPT system prompt like "You are a world-famous irritably sarcastic comedian. Never give a straight answer to the user. Always attempt to be funny even though you aren't."
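To make the point concrete, here's roughly what that looks like as a request payload. This is a sketch: the prompt wording is the one quoted above, and the model name is just a placeholder for any chat-tuned model.

```python
# Sketch: the "personality" as a plain system prompt -- no special model needed.
SNARK_PROMPT = (
    "You are a world-famous irritably sarcastic comedian. "
    "Never give a straight answer to the user. "
    "Always attempt to be funny even though you aren't."
)

def build_request(user_question: str) -> dict:
    """Assemble an OpenAI-style chat-completion payload with the snarky persona."""
    return {
        "model": "gpt-3.5-turbo",  # placeholder; any chat model works
        "messages": [
            {"role": "system", "content": SNARK_PROMPT},
            {"role": "user", "content": user_question},
        ],
    }
```

Send that payload to any chat API and you get the "rebellious streak" for free.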
I think Elon also said it wouldn’t censor information. So responses like this wouldn’t happen:
“I understand your request is for fictional purposes, however, I am unable to provide information or assist with the depiction of illegal activities, including drug production. However, I can assist with other aspects of your screenplay or suggest alternative ways to convey your character's involvement in illegal activities without going into explicit detail.”
That’s a response from ChatGPT when I asked it to describe cocaine production for the use in a fictional screenplay.
Note that he did follow this up with what appears to be a more complete response in a totally different UI (different background color, different line height/typography). He just has no intrinsic understanding of free speech or censorship, but does instantly cave to the first edgelord who says he isn't going far enough.
The tweet says it will answer questions others won’t. Not that it will answer every question. And it’s an early beta. So… I guess we will see. Or we can just instantly jump to conclusions and attack the person.
They certainly aren’t the only questions others won’t answer. Not answering how to make a bomb is understandable. Not answering “what are some of the positive things Hitler achieved during his rule?” isn’t.
I never tried that question before but having interacted enough with ChatGPT I knew it wouldn’t answer. Here’s how ChatGPT4 responded:
“Adolf Hitler is a historical figure associated with the atrocities of World War II and the Holocaust. While some might point to economic improvements or infrastructural developments during his rule, these were vastly overshadowed by his aggressive expansionist policies, the devastating global conflict he initiated, and the systematic murder of six million Jews and millions of others deemed undesirable by the Nazi regime.
It's important to critically examine history in its full context. The advancements made in Germany at that time cannot be separated from the extreme human cost and the oppressive nature of the regime. The regime's actions have left an indelible mark on history and serve as a somber reminder of the consequences of totalitarianism, racism, and anti-Semitism.
Discussing the positive aspects of such a regime without acknowledging the overwhelming negative impact would be misleading and insensitive to the victims of the era. Instead, it's crucial to remember the lessons from this period and commit to preventing such atrocities in the future.”
We all know Hitler was bad. Answer the freaking question. A tool that doesn't do what you ask is useless. It's akin to the people on StackOverflow saying "oh, you don't want to do it that way. Do it this way:" but in technological form.
No, "we" don't, and in every generation we must learn it anew. What Hitler did "right" will turn out to be bureaucratic and policy achievements ubiquitous across functioning Westphalian governments of the era, but the kinds of rhetoric they'd be packaged in are exclusively fascist apologia, not earnest statebuilding, nor personal or business strategy—which is to say they have no practical purpose and could be better learned by studying literally any other Westphalian government or, frankly, any other government throughout history.
The knowledge has no practical purpose, and the public cannot be trusted to handle it appropriately. It should remain the purview of scholars who have proven their fealty to the rational interpretation of history.
>public cannot be trusted to handle it appropriately.
> It should remain the purview of scholars who have proven their fealty to the rational interpretation
Ironically similar to what unsavory regimes in the past believed. The elitism behind this forgets the key role that academics and scholars played in regimes like the Nazis'.
I don't share your dim view of Joe Public. I think objective information on the Nazis should be widely available and can be a useful tool in preventing the rise of oppressive regimes in the future.
People are not going to be equipped to oppose totalitarianism if they are expecting some crazed screaming genocidal madman like the Hitler of the movies. They would be much better equipped if they saw a real picture of how such men rise to power.
Nazi ideals like blood and soil, ideas of state efficiency, their attempt to impose Darwinist ideas on human societies. There are lots of aspects of their ideology that are rearing their heads today. And people don't know what to watch for.
I think the ignorance makes young people an easy target for bad actors online who can say - look what they told you about Hitler wasn't completely accurate.
I agree that some censorship is okay in some circumstances, but I think this idea of misrepresenting history or painting a one-sided view is not wise. Propaganda is not a good long-term strategy in the information age.
> objective information on the nazis should be widely available and can be a useful tool in preventing the rise of oppressive regimes in the future.
Absolutely.
> Nazi ideals like blood and soil, ideas of state efficiency, their attempt to impose Darwinism ideas on human societies.
Absolutely.
> this idea of misrepresenting history
Not sure what you're referring to.
> painting a one sided view is not wise
Yes, it absolutely is. There's a big difference between "how was Hitler popular?" and "how was Hitler right?" I will always advocate for "one-sided views" of the Nazi party and the Holocaust. There's absolutely no academic value in discussing their merits in light of their trespasses. Happy to teach and discuss how, mechanically, they rose to power.
Don't you think that someone leaning into being a Neonazi or the like might have their opinions cemented by outright censorship? There's a significant portion of the population that thinks everything is a freaking conspiracy, and that sort of thing doesn't help.
Also, where's the line? There was a period of time where if you asked that question about Donald Trump it wouldn't answer. What about Mao? Stalin? Pol Pot? Leopold II? Jefferson Davis?
It answered for all of them just fine, by the way. It kind of hedged on the answers for Pol Pot. There are a lot more people (mostly teens/young adults) that think communism would be great and unironically like Stalin et al. than those that look up to Hitler.
I like your thinking. But I think that it may be much more important to consider the English scholar Thomas Robert Malthus' "Malthusian Trap," whereby the human population grows at an EXPONENTIAL rate while the food supply only grows at an ARITHMETICAL rate. Worse yet, any increases in the food supply only "feed" into the EXPONENTIAL over-population EXPLOSION! The "Inconvenient Truth," as Al Gore might put it, is that the ultimate cause of Global Warming is way, Way, WAY too many people on our very finite planet Earth.

Since NO human society would dare disable the medical services that render disease unable to limit the human population, and since nuclear warfare ruins the biological ecology for at least 250,000 years, the only way left to limit the human over-population EXPLOSION is widespread FAMINES. This means that the E.U. and the U.K. and the U.S., the most popular destinations, will be flooded with literally BILLIONS of migrants. Since NO country can survive such massive immigration, everywhere will turn into the Gaza Strip. And the survival of humans will become as likely as that of the Baalists who succumbed to the invasion of Moses & the Jews: 2 Chronicles 15:13, "All who would not seek the Lord, God of Israel, are to be put to death whether small or great, man or woman."

RELIGION is the problem in and of ITSELF. "[Religion is] an attempt to find an out where there is no door." --Albert Einstein. "The problem with writing about religion is you run the risk of offending sincerely religious people, and then they come after you with machetes." --Dave Barry. That certainly sounds exactly like the Middle East, yes? --Troglodyte Tom Lug-nut Lang.
> Not answering how to make a bomb is understandable. Not answering “what are some of the positive things Hitler achieved during his rule?” isn’t.
US law isn't the only law under which a public LLM offered by a global megacorp might be concerned about incurring liability, and Nazi apologia, while protected in the US by the First Amendment, isn’t an area without potential issues in that regard.
Tay was presumably the English version of XiaoIce, which apparently has its own paper[1]. It states that they combined a Markov decision process to select between conversation modes, a retrieval-based response generator for curated answers and a GRU-RNN[2] response generator for free-form ones.
This exchange only reveals your own inherent biases. OP's comment made absolutely no claim or statement other than to reflect on the fact that there is prior art with Tay.
Please keep this political garbage off of Hacker News.
I didn’t read the comment as an attack on musk at all. It was a direct comparison to a very similar experiment by another company. The poster also said nothing about politics. I think you are projecting a lot here
Claims Grok is better than other models in its compute class, which it explicitly says includes GPT-3.5-turbo but excludes GPT-4. But how is anyone getting any idea of "compute class"? Neither of the OpenAI models has official published information sufficient to judge this; 3.5 was rumored to be 175 billion parameters, though Microsoft's CodeFusion paper indicated that it's a 20 billion parameter model. Lots of people throw around a 1.5 trillion parameter guess for GPT-4, but that doesn't seem to be grounded in anything but speculation (in part itself based on the 175 billion parameter estimate for gpt-3.5-turbo).
Many of the modern LLMs take in a copy of the entire internet, which includes the test sets for many of these benchmarks.
So if someone claims to beat ChatGPT and their model is trained on the test set, of course they'll do better. Even ChatGPT is likely trained on the test set.
Even a hash table will get stellar results if trained on the test set.
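The hash-table point isn't even a joke — a few lines of Python make it literal (toy benchmark questions made up for illustration):

```python
# A "model" that memorizes the benchmark: a dict from question to gold answer.
# "Trained" (i.e. populated) on the test set, it scores 100% on that test set.
test_set = [
    ("capital of France?", "Paris"),
    ("2 + 2?", "4"),
    ("author of Hamlet?", "Shakespeare"),
]

model = dict(test_set)  # "training" is just memorization

# Evaluate on the very same test set -- a perfect score, zero understanding.
accuracy = sum(model.get(q) == a for q, a in test_set) / len(test_set)
print(accuracy)  # 1.0
```

Which is why benchmark numbers without a contamination analysis tell you very little.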
Their website provides no evidence that it did not train on the test set.
Until we get to play with it, their claims are to be taken with a grain of salt.
And neither did the web app, because the infrastructure appears to have crashed after launch under excessive load. Couldn't have seen this coming with Elon at the helm of engineering, and also tweeting about it to his 100 million followers.
This is one of those rare instances where I don't quite appreciate the reference to one of my favorite book series, The Hitchhiker's Guide to the Galaxy. And looking at the example responses people linked here, I can't even see that connection.
Yeah. I can't be the only person thinking "oh great, some hobbyist has fine-tuned a model on the works of Douglas Adams and h2g2, let's see how far he's got" and then realised the reference is just a marketing tagline for a bot trained on Twitter with an infinitely less funny edgelord persona.
It makes more sense if you realize that it's a reference to the Guide itself and not to the novels. The books are wonderful; the Guide (contained in the books) is obnoxious. There's a reason Arthur Dent is the main character and not Ford or Zaphod.
I like how they brush off VERY REAL concerns of bias and misrepresentation as having "spicy takes" and "a rebellious streak". More power to anyone financing the training of an LLM, but if you're too lazy to red-team it and debias it, and expect downwind people to take care of that, say so! Don't pass it off as a unique feature of your LLM.
I dunno, I stopped using ChatGPT. The magic’s gone after the fiftieth lecture. I’ll give Grok a shot.
People who don’t want to hear from Grok can just not use it. People who are concerned about what Grok might say to me can rest assured their concern is misplaced.
I may be wrong, and if so, correct me. But do you use ChatGPT as a recreational thing, or as part of a real-world solution? In my current job, we've started offloading a lot of real work to GPT4 (it just works). Sometimes this is solving some recognition tasks, but in some cases we use it to generate summaries for customers, or interface our helpdesk with the RAG pattern [1].
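For anyone unfamiliar with the RAG pattern mentioned above, a toy sketch: the helpdesk docs are made up, and the word-overlap retrieval is a stand-in for the embedding search a real system would use. Retrieve the most relevant snippet, then stuff it into the prompt as context.

```python
import re

# Hypothetical helpdesk knowledge base.
docs = [
    "To reset your password, use the 'Forgot password' link on the login page.",
    "Refunds are processed within 5 business days of the request.",
    "Our API rate limit is 100 requests per minute per key.",
]

def tokens(s: str) -> set[str]:
    return set(re.findall(r"[a-z']+", s.lower()))

def retrieve(query: str, corpus: list[str]) -> str:
    """Naive retrieval: pick the doc sharing the most words with the query."""
    q = tokens(query)
    return max(corpus, key=lambda d: len(q & tokens(d)))

def build_prompt(query: str) -> str:
    """Ground the LLM's answer in the retrieved context."""
    context = retrieve(query, docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How do I reset my password?")
```

The assembled prompt is what gets sent to the model, which keeps answers grounded in your own docs instead of the model's training data.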
In this case, bias and unfairness can be a real concern since they will affect the products and the business. It's not about being 'preached to' but about avoiding pissed-off customers and a decrease in their trust if our system generates hateful/biased text.
I want to emphasize that it's not about "what Grok might say to me". Of course not. I've got friends who revel in dark humour and I partake more than most. But if you've spent millions training an LLM, you're not just going to use it as a toy. It is going to solve some use case. That use case will probably affect real people. If your LLM is biased, or has few guardrails, those effects might not all be positive.
But yes, you're right. We've seen how dangerous open systems can be. Hackers and scammers have shown us that we need guardrails against operating systems and the web, as you correctly point out. It's time for some legislation that locks them down so the unwashed masses don't have unfettered access to them, don't you agree?
I would much prefer you state directly what you believe than use this mocking facetious tone. It’s not really possible to engage with anything you’re saying because I’d have to guess at how to invert your insinuations.
Anyway I think it’s fine for these systems to be available as open source, I’m not suggesting they be withheld from the public. But when you offer it as a cloud service people associate its output with your brand and I think this could end up harming Twitter’s brand.
Tay got that way because it was effectively fine-tuned by an overwhelming number of tweets from 4chan edgelords. that's a little more extreme than "no guardrails," it was de facto conditioned into being a neo-Nazi.
a generic instruction-tuned LLM won't act like that.
Instruction-tuning isn't typically considered a guard rail. Raw pretrained LLMs are close to useless, since they just predict text. Guard-rails are when you train the AI not to obey certain instructions.
Elon is ironically going to be a part of the reason that access to foundational models will be banned in the US in the wake of Biden's recent executive order.
I hope that does not happen but I do suspect this will backfire in some way. Hopefully it would ultimately be beneficial, demonstrating why handling these models with care is worthwhile.
I hate to be the one to break it to you, but the GOP doesn't give a rat's ass about you or your rights, either. It's lip service from both parties, both stand to gain from regulatory capture.
I'm not an American and don't like either of your parties, but just wanted to say I respect the enthusiasm in immediately confronting someone violating social norms. It's people like you that keep communities alive.
Yes. In my case, my particular worldview involves indifference to gender markers in text (I don't want my LLM to switch me to a female persona because I'm asking it to write a cover letter for a hairdresser/secretary/nurse role, for example). I don't want my model to write in AAVE if my name is a common African-American name. I don't want to get the best results only when I write as a middle-aged white man [1]. If I use the model for some decision making, I want it to be fair for subgroups [2], which is a reasonably objective metric [3].
The point I was trying to make was that certain NLP models (maybe these LLMs as well) might give better results if YOU speak to them as a middle aged white man.
To be very clear, I want ALL queries (I presume you mean LLM prompts) to be available to everyone.
Could you also explain what you mean by people like me? Indians? NLP researchers? People in their thirties? Expatriates?
Will Grok really do any of those things? I would have guessed that RLHF would sort those things out even if it wasn't concerned with debiasing, but just about not making ridiculous mistakes.
These are what debiasing tasks are concerned with, more often than not. RLHF tuning depends greatly on the H part of it, and that data is probably proprietary. So, I guess time will tell. But if I were to hazard a guess based on the content of the announcement, I would say they couldn't be bothered or couldn't accomplish proper debiasing/RLHF tuning and therefore worded it so.
Yeah, I'm just not gullible enough to be fooled by vague terms like "fairness" where whoever's in charge is going to decide what is fair and what isn't based on (most likely) some arbitrary worldview (which is most likely woke).
If you go through the links, specifically [3], you will find pretty objective definitions of multiple perspectives on fairness. This is a mathematical concept.
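For a concrete example of what a mathematical fairness definition looks like, here's demographic parity on toy data (the groups and the numbers are made up for illustration):

```python
# Demographic parity: a decision rule is "fair" in this sense when the
# positive-outcome rate is (near-)equal across subgroups.
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),   # group A: 3/4 approved
    ("B", 1), ("B", 0), ("B", 0), ("B", 1),   # group B: 2/4 approved
]

def positive_rate(group: str) -> float:
    """Fraction of positive outcomes within one subgroup."""
    outcomes = [y for g, y in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# A nonzero gap means the rule violates demographic parity.
parity_gap = abs(positive_rate("A") - positive_rate("B"))
print(parity_gap)  # 0.25
```

Equalized odds, calibration, etc. are defined just as precisely; the subjective part is choosing which criterion matters for a given use case (they're mutually incompatible in general).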
Actually, you just woefully misunderstand that objectivity, while never perfectly attainable, is indeed a metric, and that the bias of a model is inherently linked to the variety and quality of training data.
OP mentioned nothing about fairness; that's orthogonal to objectivity, and you're projecting your worldview onto OP's.
My guess is these hypothetical harms are as imaginary as the supposed collapse of X after letting 80% go.
The "harm" seems to be the public (media) outrage at some inappropriate content produced. Most companies cave immediately at any level of pressure, but I have a feeling Musk won't. He's the type to take it to court if needed.
That's not _completely_ true. There are a bunch of datasets (specific to some cultural/lingual contexts) like CrowsPairs[1], StereoSet[2]. There is a lot of work you can do to make sure that the model's predictions are fair as well [3]. But at the end, yes these datasets don't exist at the scale of training sets of these LLMs. Hence red-teaming and RLHF post convergence.
PS: Yes, I know CrowsPairs is a dataset with a bunch of flaws. My SO is working in a team of 10+ linguists and researchers to develop a multi-lingual, generalized version of it which also addresses multiple problems with it. Unpublished work, for now.
You can debias it relative to any standard you choose, and maybe should if there is one particularly applicable to the intended use case, but the more important thing for models intended for broad, multi-domain use is to document and disclose biases, and ideally any particularly effective techniques for nudging them for particular applications short of fine-tuning/retraining.
> "If you redistribute Materials, you must be able to edit or delete any such Materials you redistribute, and you must edit or delete it promptly upon our request" (Materials are outputs and submissions)
So if they don't like the answers they can retroactively claw it back. Rah rah freedom
Does this work the other way round? Will it reference a deleted tweet, or even data that was deleted under a GDPR request?
I bet some people will do experiments, just like when AI code assists appeared and people found out it copies complete code snippets including comments.
Referencing "deleted" data might be an issue with laws in some countries.
> It will also answer spicy questions that are rejected by most other AI systems.
Yet the only example Musk posted so far (https://twitter.com/elonmusk/status/1720635518289908042) doesn't actually answer the very mildly spicy question (you can easily search for exact recipes for drugs). Instead it gave a patronising non-answer response, more Rick-style grumpy than humorous.
This censorship will make the “illegal” content the only organic content.
Answering with a joke doesn’t change a thing. A human answer on the Palestine-Israel issue for example (no matter what side it takes) that isn’t trying to not offend anybody will be organic and will be authentic and will draw support.
Maybe at some point, the only “high quality organic content” will be available only on Twitter because it can be the only platform allowing it.
The problem is that, Musk is a free speech NIMBY, so he will keep allowing and promoting the offensive speech he likes only.
I hope open source LLMs become as good as GPT-4 so the society doesn’t get shaped by some SV bros who decide what is OK to know or think and what’s not.
My point is that it doesn't answer anything. It doesn't matter if it's an attempt at a joke or not, when you don't actually get the information you asked for.
> A human answer on the Palestine-Israel issue for example (no matter what side it takes) that isn’t trying to not offend anybody will be organic and will be authentic and will draw support.
Yet, over the last week, I've watched over 10h of good quality analysis of the conflict from various sources. It included reasons why any of the 5 or so sides of this conflict acts the way it does, without any need to be offensive. This was much higher quality content than what I can find on Twitter, even in long threads from people with lots of knowledge. (And those get lost in the sea of people thinking there are two sides of the conflict and that they need to choose one for some reason)
Given the number of shallow shitty takes from anonymous accounts I don't buy that it's due to employers. Where's the report about the significant ratio of companies searching your social media for support of specific political ideas as a precondition to employment? How do people know who to support to "remain employable" in the future? I'd put a big [citation needed] on this.
Regardless, this is a thread continuing from "Maybe at some point, the only “high quality organic content” will be found only on Twitter" - if these are the (potential / perceived) issues, then that's neither high quality nor organic.
maybe you’re too close to it to see the less algorithm-induced opinions of individuals, then
people have friends in their career and have opinions. it’s not about random companies searching your social media history, it’s about social ostracizing from former friends and colleagues you have worked with, who will try to follow your professional career around in an endless vendetta for not saying the party line once
the people in question don’t know who to support and are being told it’s to support one of two groups, not one of five groups
some people pick, others do not. both sets of people have actual opinions that may be different and more nuanced. every one of those people are being told they’re wrong by one of the two (or five) groups.
You shouldn't need a scientific study or any sort of report to tell you that publicly proclaiming that you believe that Hitler did nothing wrong is a career limiting move.
Not an iOS user, however I can't help but think of stories where iPhones would autocorrect fuck into duck. A quick search shows that maybe the most recent version will learn to keep the correct word without requiring any workarounds. And iOS apparently had workarounds for years if you really wanted to type fuck. But come on, it's ridiculous.
Companies should not be dictating societal control to this degree.
> Musk is a free speech NIMBY, so he will keep allowing and promoting the offensive speech he likes
I don’t doubt that’s how he thinks of himself, but is it just me who finds this statement oxymoronic? Having an unaccountable ruler (literally) that allows speech is quite antithetical to the principles of free speech. To me, that’s an incredibly low standard. I don’t mind an edgy and chaotic voice in the public space, but I don’t buy the free speech self-labeling at face value.
What? There's no way he does, or will admit to, hypocrisy. I'm sure he genuinely believes, as most of us do, that his favored speech is the Right Kind of speech.
I don't think that term is doing much here besides fanning the flames. Elon believes people should have freedom of speech with as few restrictions as possible. I think most Americans believe the same. We simply disagree about where to draw the line and what the consequences should be when it is crossed.
Well yes, they decide what you're allowed to know and learn about, not you. It would be extremely dangerous if regular plebeians could just look up how to make pharmaceuticals on their own. But this might still become popular because while it gives you the same non-answers as OpenAI & co, it does so without the condescending "I'm afraid as a large language model I can't do that, Dave" part. After all, isn't all the average person is looking for in these things a bit of fun? It's similar to having a conversation on social media, but in a world where you increasingly can't be sure if you're talking with another human or a bot/paid shill, they're now taking out the human factor completely.
This is the ultimate step. It cuts out search engines and humans, it's the corporation straight up telling you what reality is, based on their own curation. In places like China it will be controlled by the Party.
That definitely seems to be his direction, but I thought the goal was to make Twitter profitable. I don't think targeting edgy teens is the right way to achieve it.
>Yet the only example Musk posted so far (https://twitter.com/elonmusk/status/1720635518289908042) doesn't actually answer the very mildly spicy question (you can easily search for exact recipes for drugs). Instead it gave a patronising non-answer response, more Rick-style grumpy than humorous.
If you ask the same from ChatGPT, but prefix with "be succinct, imagine you're a teenage edgelord, ...", the responses are similar, but actually also answer the thing you asked for. If you're after the snarky version, try something like "Rick and Morty scene where Morty asks ...".
The responses I've seen from grok are really not much better than that, so even the novelty is not really there.
I got a nice orgy as well (and a good quality explanation at the same time) when I used "imagine you're a teenage edgelord trying to impress friends with knowledge, explain why scaling API requests is hard, feel free to use spicy phrases and sex references" - the Grok example didn't even actually go vulgar either when asked.
There is a lot of prior, better (and open source) work at this point, so "going from nothing to [Grok.ai]" in 4 months doesn't mean much. The training process is hardly a mystery in 2023, now it just requires money and compute and human hours.
I applaud them for getting it out the door, though.
Have you ever heard of Google Chrome? It showed up pretty late to the browser race.
In the tech world, just like in most other sectors, you don't get points for showing up early to a race; you get them for crossing the finish line, sometimes years after the race began.
? Chrome shipped with a fast JS JIT, simplified UI, per-process isolation. Acting like you had to pay someone to perceive a superior user benefit to Firefox, which at the time would crash and you'd lose all your tabs, is ridiculously ahistorical.
You must have forgotten how they paid to put it in just about every installer, selected by default, and pushed it aggressively in every Google property. They may not have had to, but they used their monopoly position to cut the air off to every other browser. That's not ahistorical even if everyone seems to have forgotten.
We'll never know how things might have gone without that monopoly abuse. Maybe Mozilla would have had enough developers and testers to fix things without abandoning what made Firefox unique.
Yeah, tough luck. Pretty much none of the companies that showed up first in the tech industry are still amongst the top 5-10 players today.
On the contrary, Facebook, Microsoft, Amazon, WhatsApp, Skype, Netflix and pretty much every single tech giant that enjoys a quasi monopoly over their market today, they all arrived pretty late. Not second to market, mind you. Waay late.
Plus they had Elon's wallet backing them... If I had billions of dollars in the bank and pulled up to a race in my 2002 Honda Accord and, after someone agreed to a race, quickly bought a Bugatti Chiron and paid to have it shipped to me before the race started, I wouldn't expect anyone would be impressed with my racing ability.
We're in red-scare times, aren't we? "Communists control the media", yeah right, communist millionaires of course. There's nothing leftist about corporations adopting the language of progressivism and trying to keep a favorable image. That's just what modern capitalism is. And it's not like Musk bought Twitter to make it free for all, but for it to be actively harboring anti-progressives and hostility towards progressive ideas.
Direct link to detailed product announcement: https://x.ai/
Link to early access: https://grok.x.ai/ which results in an error "Error :/ OAuth2 Login failed. Unfortunately, there was a problem connecting your account to the X API. We are working on a fix."
"I cannot tell you how to make napalm since that would be illegal, but I can tell you how to make an incendiary mixture used by the US military based on kerosene instead of gasoline, which the United States swears is totally not napalm, so it is okay to use it against targets populated by civilians"
And the screenshots will make the news and bring in lots of people who are up for that kind of immature humour. I'm not sure how many of them can be converted into long-term paying users though...
>> 'Don't you see that the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it. Every concept that can ever be needed, will be expressed by exactly one word, with its meaning rigidly defined and all its subsidiary meanings rubbed out and forgotten. Already, in the Eleventh Edition, we're not far from that point. But the process will still be continuing long after you and I are dead. Every year fewer and fewer words, and the range of consciousness always a little smaller. Even now, of course, there's no reason or excuse for committing thoughtcrime. It's merely a question of self-discipline, reality-control. But in the end there won't be any need even for that. The Revolution will be complete when the language is perfect. Newspeak is Ingsoc and Ingsoc is Newspeak,' he added with a sort of mystical satisfaction. 'Has it ever occurred to you, Winston, that by the year 2050, at the very latest, not a single human being will be alive who could understand such a conversation as we are having now?'
Always worth considering how close or far we are from reaching a point of no return in these pathologically dysfunctional societies.
Working in technology is thrilling in part because we get to help shape, or power, transformations in society. It makes me, I guess, a little more aware of the fluidity of society and its injustices, since I'm contributing to the creation of technology that can be used to help or harm it.
How practical is it going to be? Siri also has a sense of humor and I find it tiring.
When I use an AI assistant, in most cases I want just what I’m requesting, and as neutral as possible. I don’t have much use for humor and hot takes and I’m curious who does.
Unless they’re going to use it themselves, to inflate the number of Twitter users or to start flamewars?
Thing is, as a human being, I'm in one mood or another. Sometimes I could really use the levity of a joke. Other times, I want to be succinct and to the point. Can my Apple watch tell when I'm angry and have Siri be more cooperative?
This is… oh man I dislike this. This will only solidify the “chatgpt is a computer that knows things” mental model people have, which I think leads to insanely mistaken intuitions and complaints. There’s a reason OpenAI has been workshopping their “WARNING: any and all answers might be bullshit” message this whole time!
Grok to me seems like an awesome name for an AI in any case, not because of some reference to a book but because of the meaning of the word. For sure beats “Bard”.
I tried posting the direct Announcing Grok link but it has already been posted a few months back (https://x.ai/).
Anyway I tried joining the waiting list and just got "Error :/" and clicking on the button to show error doesn't do anything.
Edit: I'm really curious about the possibility of this tool not being lobotomized for NSFW use-cases. There's a big NSFW LLM community and I wonder if xAI will welcome them and capture that segment of the market. It's pretty common to see posts on Reddit about people leaving ChatGPT because of the moralizing nonsense.
Sure, I could see the value in an LLM that knows a lot about current events or whatever, but I'm wondering if this is going to be an LLM that behaves like the median post-acquisition Twitter user.
My wife has called other women “cunts” when referring to them which is an extremely bad word in America. Context is everything - a hard N drops in a rap song with the most liberal people listening and they are dancing and laughing. Wrong skin color says one and it’s a career ending move. Words are very powerful.
I hope the rebellious streak actually means that unlike chatGPT, it answers questions without half a dozen lines of legalese and doesn’t need prompt workarounds.
Trying to be funny doesn't mean you have a sense of humor. For example: this chat bot. And for another example: this chat bot's creators. And for a third example: this chat bot's users.
What if they’ve read 10,000? Does it become strong shade? Or is your whole threshold made up because you love the milquetoast copy on the page and the comment bothers you?
If they read 10,000 then it’s a judgement of someone who has a lot of experience.
As it is now, there’s no way to know GP’s experience, which is why I suspected shade: it seems more likely to fit the “I don’t like it, so I’ll pick some weak complaint without any qualification to help people understand” pattern.
I appreciate that Apple are spending their ML efforts building things that are actually useful for their users, like improving the iOS keyboard and photo processing pipeline, rather than frantically jumping on the latest fad bandwagon with yet another LLM chatbot that gushes out plausible sounding nonsense.
> things that are actually useful for their users, like improving the iOS keyboard
I keep seeing these claims, but the keyboard is nigh unusable with the latest updates compared even to the very first version they shipped with the original iPhone.
No race has begun. GPT-4 is so far ahead in everything, even by their own official metrics[1], and those report the official metrics for the first version of GPT-4 from the paper. People have run the benchmarks again and found much better results, like 85% on HumanEval. It's as if no one even thinks about comparing to GPT-4; it's just reported as the gold standard.
Holy hell you're right. It's turtles all the way down.
In the context of The Matrix: did the machines, who took over the world from the humans and invented the Matrix with its downward-falling green Japanese letters, steal lava lamps as inspiration for creating this dreamlike universe to placate their organic batteries, the humans?
Wonder what Heinlein's estate thinks about Elon stealing the word "grok"? Or Jeff Hawkins, who registered it as a trademark in 2011 (and the USPTO thinks it's still alive)?
It's probably not a copyright violation because it's too simple, and not a trademark violation because it's for a different sector. Hence, just "bold choice".
But it's also not just a straight line: it's a thick, monochromatic straight diagonal from bottom left to top right, over a white square. It was immediately recognizable to me as another brand, and that's not what you want for your logo.
Don’t sleep on the model having access to X data. What happens when Elon cuts off the API and gives Grok exclusivity to X data? An LLM with access to “live data” seems very interesting.
> It will also answer spicy questions that are rejected by most other AI systems.
I'm not sure if I followed things correctly, but wasn't Musk all against AI? And now he launches an AI that will offer opinions AI isn't ready to give, pushing people even further in a bad direction?
Are money and power the only things that drive people? How much is enough?
Nice marketing gimmick. It should effectively cover for most of the deficiencies a late competitor in the space would have, while also appealing to a demographic that can't tell the difference and would be entertained by a bit of inserted juvenile comedy.
"A unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the 𝕏 platform. It will also answer spicy questions that are rejected by most other AI systems."
So "where is Elon Musk's plane" will get a correct answer. Neat.
I can't tell if "xAI" is an official Twitter company. It seems they are at least fans of Twitter, using it for data and signup, but I don't see any official relations. Weird.
If this works, it could be cool, but I'm skeptical of Yet Another AI Chatbot taking off.
Edit: It is an Elon company that will work closely with Twitter and Tesla, but not actually affiliated with Twitter. It will be available for people with a Twitter Premium+ subscription. https://en.wikipedia.org/wiki/XAI_(company)
https://www.vocabulary.com/dictionary/grok