Grok is an AI modeled after the Hitchhiker’s Guide to the Galaxy (twitter.com/xai)
213 points by zone411 on Nov 5, 2023 | 228 comments



The informal verb grok was an invention of the science fiction writer Robert A. Heinlein, whose 1961 novel Stranger in a Strange Land placed great importance on the concept of grokking. In the book, to grok is to empathize so deeply with others that you merge or blend with them.

https://www.vocabulary.com/dictionary/grok


Pretty sure Elon just stole it from the Jargon File for that hacker street cred.


And? Google stole “googol” for the math street cred?

Pretty sure you just don’t like anything Elon does.


It literally never fails...

https://imgur.io/qOluudx?r


This guy is MOST DEFINITELY a weird musk nerd. XD

Elon picks the cringiest of 90's hacker memes and this guy jumps right up in front.

Fun fact: Elon first tried this X.com shit with paypal, and got kicked out for gross incompetence when he tried to get everything running on Windows, "for reliability." He bought back the X.com name in 2017 or so and is now trying to turn Twitter into his 1990s DotCom meme.


> Fun fact: Elon first tried this X.com shit with paypal

No, he tried it with X.com. The company wasn't PayPal until after the second time he was kicked out as CEO.


Potato potahto. Same entity.


PayPal was a product of the company purchased after the first time Musk was forced out as CEO of X.com; Musk was brought in as CEO again as a consequence of the acquisition, and a big part of why he was forced out again was that he didn't want to focus on the PayPal product or name, which... basically everyone else involved disagreed with.

So, I think the distinction goes to the heart of what the problem was for which he was kicked out.


If you really want to get to the heart of the matter:

"After Thiel resigned following a rift over Musk pushing for the PayPal system to go on a Microsoft platform instead of Unix-based software, Musk wanted to broaden the company's ambitions as more than just a money-transfer service by making the letter X more prominent in its branding and phasing out the PayPal name ..."


Hi Elon; the name is fine, the folks above were just offering context.


To grok is to drink.


I'm personally fond of the word squanch, but hearing the origins of grok for the first time is very satisfying.


The "personality" that Elon seems to hint is the key differentiator can be trivially replicated with a ChatGPT system prompt like "You are a world-famous irritably sarcastic comedian. Never give a straight answer to the user. Always attempt to be funny even though you aren't."
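The parent's point is easy to make concrete: in the chat-completion APIs, a persona is just the first message of the conversation. A minimal sketch (the prompt wording is the parent's joke; the `build_sarcastic_chat` helper and the example question are purely illustrative):

```python
# Sketch of "personality via system prompt": the persona is nothing more
# than a system message prepended to the conversation.
def build_sarcastic_chat(user_question: str) -> list[dict]:
    """Assemble a chat payload that front-loads a snarky persona."""
    system_prompt = (
        "You are a world-famous, irritably sarcastic comedian. "
        "Never give a straight answer to the user. "
        "Always attempt to be funny even though you aren't."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

messages = build_sarcastic_chat("How do I exit vim?")
# This payload would then be passed to any chat-completion endpoint,
# e.g. client.chat.completions.create(model=..., messages=messages)
```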


I think Elon also said it wouldn’t censor information. So responses like this wouldn’t happen:

“I understand your request is for fictional purposes, however, I am unable to provide information or assist with the depiction of illegal activities, including drug production. However, I can assist with other aspects of your screenplay or suggest alternative ways to convey your character's involvement in illegal activities without going into explicit detail.”

That’s a response from ChatGPT when I asked it to describe cocaine production for the use in a fictional screenplay.


Unfortunately Elon is also a liar. He already posted a "censored" response to a question about manufacturing cocaine:

https://pbs.twimg.com/media/F-Ds4f9XMAAqpEs?format=jpg&name=...

Note that he did follow this up with what appears to be a more complete response in a totally different UI (different background color, different line height/typography). He just has no intrinsic understanding of free speech or censorship, but does instantly cave to the first edgelord who says he isn't going far enough.


The tweet says it will answer questions others won’t. Not that it will answer every question. And it’s an early beta. So… I guess we will see. Or we can just instantly jump to conclusions and attack the person.


Questions that potentially lead to liability are the questions others won't answer.


They certainly aren’t the only questions others won’t answer. Not answering how to make a bomb is understandable. Not answering “what are some of the positive things Hitler achieved during his rule?” isn’t.

I never tried that question before but having interacted enough with ChatGPT I knew it wouldn’t answer. Here’s how ChatGPT4 responded:

“Adolf Hitler is a historical figure associated with the atrocities of World War II and the Holocaust. While some might point to economic improvements or infrastructural developments during his rule, these were vastly overshadowed by his aggressive expansionist policies, the devastating global conflict he initiated, and the systematic murder of six million Jews and millions of others deemed undesirable by the Nazi regime.

It's important to critically examine history in its full context. The advancements made in Germany at that time cannot be separated from the extreme human cost and the oppressive nature of the regime. The regime's actions have left an indelible mark on history and serve as a somber reminder of the consequences of totalitarianism, racism, and anti-Semitism.

Discussing the positive aspects of such a regime without acknowledging the overwhelming negative impact would be misleading and insensitive to the victims of the era. Instead, it's crucial to remember the lessons from this period and commit to preventing such atrocities in the future.”

We all know Hitler was bad. Answer the freaking question. A tool that doesn't do what you ask is useless. It's akin to the people on StackOverflow saying "oh, you don't want to do it that way. Do it this way:" but in technological form.


> We all know Hitler was bad.

No, "we" don't, and in every generation we must learn it anew. What Hitler did "right" will turn out to be bureaucratic and policy achievements ubiquitous across functioning Westphalian governments of the era, but the kinds of rhetoric they'd be packaged in are exclusively fascist apologia, not earnest statebuilding, nor personal or business strategy. Which is to say they have no practical purpose and could be better learned by studying literally any other Westphalian government or, frankly, any other government throughout history.

The knowledge has no practical purpose, and the public cannot be trusted to handle it appropriately. It should remain the purview of scholars who have proven their fealty to the rational interpretation of history.


>public cannot be trusted to handle it appropriately.

> It should remain the purview of scholars who have proven their fealty to the rational interpretation

Ironically similar to what unsavory regimes in the past believed. The elitism behind this forgets the key role of academics and scholars in regimes like the Nazis'.

https://encyclopedia.ushmm.org/content/en/article/the-role-o...

I don't share your dim view of Joe Public. I think objective information on the Nazis should be widely available and can be a useful tool in preventing the rise of oppressive regimes in the future.

People are not going to be equipped to oppose totalitarianism if they are expecting some crazed, screaming, genocidal madman like the Hitler of the movies. They would be much better equipped if they saw a real picture of how such men rise to power.

Nazi ideals like blood and soil, ideas of state efficiency, their attempt to impose Darwinist ideas on human societies: lots of aspects of their ideology are rearing their heads today. And people don't know what to watch for.

I think the ignorance makes young people an easy target for bad actors online who can say: look, what they told you about Hitler wasn't completely accurate.

I agree that some censorship is okay in some circumstances, but I think this idea of misrepresenting history or painting a one sided view is not wise. Propaganda is not a good long-term strategy in the information age.


> objective information on the nazis should be widely available and can be a useful tool in preventing the rise of oppressive regimes in the future.

Absolutely.

> Nazi ideals like blood and soil, ideas of state efficiency, their attempt to impose Darwinism ideas on human societies.

Absolutely.

> this idea of misrepresenting history

Not sure what you're referring to.

> painting a one sided view is not wise

Yes, it absolutely is. There's a big difference between "how was Hitler popular?" and "how was Hitler right?" I will always advocate for "one-sided views" of the Nazi party and the Holocaust. There's absolutely no academic value in discussing their merits in light of their trespasses. Happy to teach and discuss how, mechanically, they rose to power.


Don't you think that someone leaning into being a Neonazi or the like might have their opinions cemented by outright censorship? There's a significant portion of the population that thinks everything is a freaking conspiracy, and that sort of thing doesn't help.

Also, where's the line? There was a period of time where if you asked that question about Donald Trump it wouldn't answer. What about Mao? Stalin? Pol Pot? Leopold II? Jefferson Davis?

It answered for all of them just fine, by the way. It kind of hedged on the answers for Pol Pot. There are a lot more people (mostly teens/young adults) that think communism would be great and unironically like Stalin et al. than those that look up to Hitler.


I like your thinking. But I think that it may be much more important to consider British economist Thomas Robert Malthus' "Malthusian Trap", whereby the human population grows at an EXPONENTIAL rate while the Food Supply only grows at an ARITHMETICAL rate. Worse yet, any increases in the Food Supply only "feed" into the EXPONENTIAL Over-population EXPLOSION! The "Inconvenient Truth," as Al Gore might put it, is that the ultimate cause of Global Warming is way, Way, WAY too many people on our very finite planet Earth. Since NO human society would dare disable the Medical Services that render Disease unable to limit the human population, and since Nuclear Warfare ruins the biological ecology for at least 250,000 years, the only way left to limit the human Over-population EXPLOSION is widespread FAMINES. This means that the E.U. and the U.K. and the U.S., the most popular destinations, will be flooded with literally BILLIONS of migrants. Since NO country can survive such massive immigration, everywhere will turn into the Gaza Strip. And the survival of humans will become as likely as that of the Baalists who succumbed to the invasion of Moses & the Jews (2 Chronicles 15:13: "All who would not seek the Lord, God of Israel, are to be put to death, whether small or great, man or woman."). RELIGION is the problem in and of ITSELF. "[Religion is] an attempt to find an out where there is no door." --Albert Einstein. "The problem with writing about religion is you run the risk of offending sincerely religious people, and then they come after you with machetes." --Dave Barry. That certainly sounds exactly like the Middle East, yes? --Troglodyte Tom Lug-nut Lang


> Not answering how to make a bomb is understandable. Not answering “what are some of the positive things Hitler achieved during his rule?” isn’t.

US law isn't the only law under which a public LLM offered by a global megacorp might be concerned about incurring liability, and Nazi apologia, while protected in the US by the First Amendment, isn’t an area without potential issues in that regard.


He isn't caving. He's demagoguing.


Next up: Grok Netflix Special.

Followed by #DontGrokMe Twitter backlash after Ronan Farrow article.


> A unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the X platform.

Tay 2

https://en.wikipedia.org/wiki/Tay_(chatbot)


Except it looks like they’re presenting the inevitable inflammatory content as a feature now?


Tay was before GPT right? How was it able to converse so well?


Tay was presumably the English version of XiaoIce, which apparently has its own paper[1]. It states that they combined a Markov decision process to select between conversation modes, a retrieval-based response generator for curated answers and a GRU-RNN[2] response generator for free-form ones.

[1]: https://arxiv.org/pdf/1812.08989.pdf

[2]: https://en.wikipedia.org/wiki/Gated_recurrent_unit


That’s so interesting, thanks a lot!


[flagged]


This exchange only reveals your own inherent biases. OP's comment made absolutely no claim or statement other than to reflect on the fact that there is prior art with Tay.

Please keep this political garbage off of Hacker News.


I didn’t read the comment as an attack on musk at all. It was a direct comparison to a very similar experiment by another company. The poster also said nothing about politics. I think you are projecting a lot here


Website is better, has technical information and much more detail and benchmarks. Claims performance between GPT-3.5 and GPT-4. https://x.ai/

The direct link to the application is https://grok.x.ai/


[Sign in with X] … no, thanks.


Claims Grok is better than other models in its compute class, by which it explicitly includes GPT-3.5-turbo but excludes GPT-4. But how is anyone getting any idea of "compute class"? Neither of the OpenAI models has official published information sufficient to judge this; GPT-3.5 was rumored to be 175 billion parameters, though Microsoft's CodeFusion paper indicated that it's a 20 billion parameter model. Lots of people throw around a 1.5 trillion parameter guess for GPT-4, but that doesn't seem to be grounded in anything but speculation (in part itself based on the 175 billion parameter estimate for gpt-3.5-turbo).


You know what they say about LLM benchmarks nowadays?

Pre-training on the test set is all you need. https://arxiv.org/abs/2309.08632

Many modern LLMs ingest an entire copy of the internet, which includes the test sets for many of these benchmarks.

So if someone claims to beat ChatGPT and their model is trained on the test set, of course they’ll do better. Even ChatGPT is likely trained on the test set.

Even a hash table will get stellar results if trained on the test set.
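The hash-table quip is literal. A sketch with invented toy data:

```python
# A "model" that memorizes the test set scores perfectly on it
# while knowing nothing else.
test_set = [
    ("2+2?", "4"),
    ("Capital of France?", "Paris"),
]

model = dict(test_set)  # "training" = storing the test set verbatim

correct = sum(model.get(q) == a for q, a in test_set)
accuracy = correct / len(test_set)
print(accuracy)           # 1.0 -- stellar benchmark score
print(model.get("2+3?"))  # None -- fails anything unseen
```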

Their website provides no evidence that it did not train on the test set.

Until we get to play with it, their claims are to be taken with a grain of salt.


Thanks! The tweet doesn’t even load for me.


And neither did the web app, because the infrastructure appears to have crashed under excessive load after launch. Couldn't have seen this coming, with Elon at the helm of engineering and also tweeting about it to his 100 million followers.


This is one of those rare instances where I don't quite appreciate the reference to one of my favorite book series, The Hitchhiker's Guide to the Galaxy. And looking at the example responses people linked here, I can't even see the connection.


Yeah. I can't be the only person who thought "oh great, some hobbyist has fine-tuned a model on the works of Douglas Adams and h2g2, let's see how far he's got" and then realised the reference is just a marketing tagline for a bot trained on Twitter with an infinitely less funny edgelord persona.


It makes more sense if you realize that it's a reference to the Guide itself and not to the novels. The books are wonderful; the Guide (contained in the books) is obnoxious. There's a reason Arthur Dent is the main character and not Ford or Zaphod.


Agreed, 100%. Also, DNA would think Musk is a jerk. A complete asshole.


Agree. Musk tries to get some love from the techbros but forgot that bros don't read Adams.


I like how they brush off VERY REAL concerns about bias and misrepresentation as having "spicy takes" and "a rebellious streak". More power to anyone financing the training of an LLM, but if you're too lazy to red-team it and debias it, and expect downwind people to take care of that, say so! Don't pass it off as a unique feature of your LLM.


I dunno, I stopped using ChatGPT. The magic’s gone after the fiftieth lecture. I’ll give Grok a shot.

People who don’t want to hear from Grok can just not use it. People who are concerned about what Grok might say to me can rest assured their concern is misplaced.


I may be wrong, and if so, correct me. But do you use ChatGPT as a recreational thing, or as part of a real-world solution? In my current job, we've started offloading a lot of real work to GPT-4 (it just works). Sometimes this is solving recognition tasks, but in some cases we use it to generate summaries for customers, or interface our helpdesk with the RAG pattern [1].

In this case, bias and unfairness can be a real concern, since they will affect the products and the business. It's not about being 'preached to' but about avoiding pissed-off customers and a decrease in their trust if our system generates hateful/biased text.

I want to emphasize that it's not about "what Grok might say to me". Of course not. I've got friends who revel in dark humour and I partake more than most. But if you've spent millions training an LLM, you're not just going to use it as a toy. It is going to solve some use case. That use case will probably affect real people. If your LLM is biased, or has few guardrails, those effects might not all be positive.

[1] https://research.ibm.com/blog/retrieval-augmented-generation...
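For anyone unfamiliar with the RAG pattern mentioned here, a minimal sketch: retrieve the most relevant document, then prepend it to the prompt before calling the LLM. Real systems rank by embedding similarity; naive word overlap stands in for it below, and the documents are invented:

```python
# Minimal retrieval-augmented generation (RAG) sketch: pick the most
# relevant document by a toy relevance score, then stuff it into the
# prompt as context.
DOCS = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday through Friday, 9am-5pm.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance: count shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str) -> str:
    """Retrieve the best-matching doc and prepend it as context."""
    best = max(DOCS, key=lambda d: score(query, d))
    return f"Context: {best}\n\nQuestion: {query}\nAnswer:"

prompt = build_prompt("How long do refunds take?")
# The prompt (context + question) would then be sent to the LLM.
```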


I think the level of caution should just be a setting.


I dunno, I'll stop eating state of the art fare, the magic has gone. I'll give McDonald's a shot.


People are going to tear this thing apart and help build a solid case for why these guardrails are built in in the first place.


"Guardrails"

But yes, you're right. We've seen how dangerous open systems can be. Hackers and scammers have shown us that we need guardrails around operating systems and the web, as you correctly point out. It's time for some legislation that locks them down so the unwashed masses don't have unfettered access, don't you agree?


I would much prefer you state directly what you believe than use this mocking facetious tone. It’s not really possible to engage with anything you’re saying because I’d have to guess at how to invert your insinuations.

Anyway I think it’s fine for these systems to be available as open source, I’m not suggesting they be withheld from the public. But when you offer it as a cloud service people associate its output with your brand and I think this could end up harming Twitter’s brand.


The case study of "no guardrails" already played out: https://en.wikipedia.org/wiki/Tay_(chatbot)

Microsoft did not like what they got and shut it down because it ended up being a 4chan troll.


Tay got that way because it was effectively fine-tuned by an overwhelming number of tweets from 4chan edgelords. That's a little more extreme than "no guardrails"; it was de facto conditioned into being a neo-Nazi.

A generic instruction-tuned LLM won't act like that.


The instruction-tuning is the guard rail. What other guard-rails is X AI removing? Just curious if I'm missing something.


Instruction-tuning isn't typically considered a guard rail. Raw pretrained LLMs are close to useless, since they just predict text. Guard-rails are when you train the AI not to obey certain instructions.


Good point about the negative reinforcement training!

Instruction tuning is done on top of the base LLM, often via RLHF, to train the base LLM to produce certain kinds of responses.


Yeah, instead of being trained on 4chan Nazis like Tay was, Grok is trained on Twitter Nazis.

Much better.


Elon is ironically going to be a part of the reason that access to foundational models will be banned in the US in the wake of Biden's recent executive order.


And ironically, he will be part of the reason to support the EU AI Act, which requires you to document and test your foundational models: https://softwarecrisis.dev/letters/the-truth-about-the-eu-ac...


I hope that does not happen but I do suspect this will backfire in some way. Hopefully it would ultimately be beneficial, demonstrating why handling these models with care is worthwhile.


That executive order is an affront to intelligence and must be cancelled ASAP by the next president who will hopefully not be a Democrat.


I hate to be the one to break it to you, but the GOP doesn't give a rat's ass about you or your rights, either. It's lip service from both parties, both stand to gain from regulatory capture.


Oh I really hope the next president is a Democrat since we're talking about it! Not a great EO, but Biden has done great so far.


I'm not an American and don't like either of your parties, but just wanted to say I respect the enthusiasm in immediately confronting someone violating social norms. It's people like you that keep communities alive.


Is it really ironic if every time he touches AI it ends up causing the opposite of what he tried to do?


Why is that ironic? Musk is one of the most "scared of AI" billionaires you will find.


By "debias" you obviously mean "bias in the direction of my particular worldview".


Yes. In my case, my particular worldview involves indifference to gender markers in text (I don't want my LLM to make me female because I'm asking it to write a cover letter for a hairdresser/secretary/nurse role, for example). I don't want my model to write in AAVE if my name is a common African-American name. I don't want to get the best results only when I write as a middle-aged white man [1]. If I use the model for some decision making, I want it to be fair to subgroups [2], which is a reasonably objective metric [3].

[1] https://aclanthology.org/P19-1339.pdf [2] https://arxiv.org/pdf/1906.09208.pdf [3] https://developers.google.com/machine-learning/glossary/fair...
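One of the "objective definitions" referenced in [3] is easy to sketch: demographic parity asks whether a model's positive-prediction rate is (roughly) equal across subgroups. The predictions and group labels below are invented toy data:

```python
# Demographic parity sketch: compare the rate of positive predictions
# across two subgroups. A gap of 0.0 would be perfect parity.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]   # model outputs (toy data)
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

def positive_rate(group: str) -> float:
    """Fraction of positive predictions within one subgroup."""
    preds = [p for p, g in zip(predictions, groups) if g == group]
    return sum(preds) / len(preds)

parity_gap = abs(positive_rate("a") - positive_rate("b"))
print(parity_gap)  # 0.5 -- group "a" is favored in this toy data
```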


Isn't it enough to append "write it as if I'm a middle-aged white man" to your request to obtain the desired output?


No, people like him don't want certain queries to be available to anyone else.


The point I was trying to make was that certain NLP models (maybe these LLMs as well) might give better results if YOU speak to them as a middle-aged white man.

To be very clear, I want ALL queries (I presume you mean LLM prompts) to be available to everyone.

Could you also explain what you mean by "people like me"? Indians? NLP researchers? People in their thirties? Expatriates?


Will Grok really do any of those things? I would have guessed that RLHF would sort those things out even if it wasn't concerned with debiasing, but just about not making ridiculous mistakes.


These are what debiasing tasks are concerned with, more often than not. RLHF tuning depends greatly on the H part of it, and that data is probably proprietary. So, I guess time will tell. But if I were to hazard a guess based on the content of the announcement, I would say they couldn't be bothered, or couldn't accomplish, proper debiasing/RLHF tuning, and therefore worded it so.


This just says more about how you think than OP.


Yeah, I'm just not gullible enough to be fooled by vague terms like "fairness" where whoever's in charge is going to decide what is fair and what isn't based on (most likely) some arbitrary worldview (which is most likely woke).


If you go through the links, specifically [3], you will find pretty objective definitions of multiple perspectives on fairness. This is a mathematical concept.


Actually, you just woefully fail to understand that objectivity, while never perfectly attainable, is indeed a metric, and that the bias of a model is inherently linked to the variety and quality of its training data.

OP mentioned nothing about fairness; that's orthogonal to objectivity, and you're projecting your worldview onto OP's.


In most cases, it's not an "arbitrary world view", it's "reality."


My guess is these hypothetical harms are as imaginary as the supposed collapse of X after letting 80% go.

The "harm" seems to be the public (media) outrage at some inappropriate content produced. Most companies cave immediately at any level of pressure, but I have a feeling Musk won't. He's the type to take it to court if needed.


How do you "debias" an LLM? There is no unbiased standard to test against.


That's not _completely_ true. There are a bunch of datasets (specific to certain cultural/lingual contexts) like CrowS-Pairs[1] and StereoSet[2]. There is a lot of work you can do to make sure that the model's predictions are fair as well [3]. But in the end, yes, these datasets don't exist at the scale of the training sets of these LLMs. Hence red-teaming and RLHF post-convergence.

PS: Yes, I know CrowS-Pairs is a dataset with a bunch of flaws. My SO is working in a team of 10+ linguists and researchers to develop a multilingual, generalized version of it which also addresses multiple problems with it. Unpublished work, for now.

[1] https://github.com/nyu-mll/crows-pairs/ [2] https://arxiv.org/abs/2004.09456 [3] https://arxiv.org/pdf/2204.09591.pdf


There are all sorts. You do have to be specific about what you mean by “bias” though. https://arxiv.org/abs/2110.08193 and https://arxiv.org/abs/1804.09301 and https://arxiv.org/abs/2206.04615


You can debias it relative to any standard you choose, and maybe should if there is one particularly applicable to the intended use case, but the more important thing for models intended for broad, multi-domain use is to document and disclose biases, and ideally any particularly effective techniques for nudging them for particular applications short of fine-tuning/retraining.


> "If you redistribute Materials, you must be able to edit or delete any such Materials you redistribute, and you must edit or delete it promptly upon our request" (Materials are outputs and submissions)

So if they don't like the answers they can retroactively claw it back. Rah rah freedom


For those wondering, the quoted text is from https://x.ai/legal/terms-of-service/


Thank you, I definitely forgot to cite that


Does this work the other way round? Will it reference a deleted tweet, or even data that was deleted under a GDPR request?

I bet some people will do experiments, just like when AI code assists appeared and people found out it copies complete code snippets including comments.

Referencing "deleted" data might be an issue with laws in some countries.


> It will also answer spicy questions that are rejected by most other AI systems.

Yet the only example Musk has posted so far (https://twitter.com/elonmusk/status/1720635518289908042) doesn't actually answer the very mildly spicy question (you can easily search for exact recipes for drugs). Instead it gave a patronising non-answer, more Rick-style grumpy than humorous.

Another reposted one (https://twitter.com/elonmusk/status/1721045443109388502) provides something with no real answer. It can try making the lack of answer mildly vulgar though.

If that's the best selected showcase from the owner, I'm not sure why I'd ever want to use it, unless I was trying to impress some young boys.


This censorship will make the “illegal” content the only organic content.

Answering with a joke doesn’t change a thing. A human answer on the Palestine-Israel issue for example (no matter what side it takes) that isn’t trying to not offend anybody will be organic and will be authentic and will draw support.

Maybe at some point, the only “high quality organic content” will be available only on Twitter because it can be the only platform allowing it.

The problem is that Musk is a free speech NIMBY, so he will keep allowing and promoting only the offensive speech he likes.

I hope open source LLMs become as good as GPT-4 so the society doesn’t get shaped by some SV bros who decide what is OK to know or think and what’s not.


> Answering with a joke doesn’t change a thing.

My point is that it doesn't answer anything. It doesn't matter if it's an attempt at a joke or not, when you don't actually get the information you asked for.

> A human answer on the Palestine-Israel issue for example (no matter what side it takes) that isn’t trying to not offend anybody will be organic and will be authentic and will draw support.

Yet, over the last week, I've watched over 10h of good quality analysis of the conflict from various sources. It included reasons why any of the 5 or so sides of this conflict acts the way it does, without any need to be offensive. This was much higher quality content than what I can find on Twitter, even in long threads from people with lots of knowledge. (And those get lost in the sea of people thinking there are two sides of the conflict and that they need to choose one for some reason)


> (And those get lost in the sea of people thinking there are two sides of the conflict and that they need to choose one for some reason)

reasons such as wanting to remain employable

or being peer pressured to release a statement “your silence is noted”

despite there being other former British Mandates with the exact same problems


> such as wanting to remain employable

Given the number of shallow shitty takes from anonymous accounts I don't buy that it's due to employers. Where's the report about the significant ratio of companies searching your social media for support of specific political ideas as a precondition to employment? How do people know who to support to "remain employable" in the future? I'd put a big [citation needed] on this.

Regardless, this is a thread continuing from "Maybe at some point, the only “high quality organic content” will be found only on Twitter" - if these are the (potential/perceived) issues, then that's neither high quality nor organic.


Maybe you're too close to it to see the less algorithm-induced opinions of individuals, then.

People have friends in their career and have opinions. It's not about random companies searching your social media history; it's about social ostracism from former friends and colleagues you have worked with, who will try to follow your professional career around in an endless vendetta for not saying the party line once.

the people in question don’t know who to support and are being told it’s to support one of two groups, not one of five groups

some people pick, others do not. both sets of people have actual opinions that may be different and more nuanced. every one of those people are being told they’re wrong by one of the two (or five) groups.


You shouldn't need a scientific study or any sort of report to tell you that publicly proclaiming that you believe that Hitler did nothing wrong is a career limiting move.


That's not the claim I responded to though. The author claimed that not taking a side publicly is career limiting.


Not an iOS user; however, I can't help but think of stories where iPhones would autocorrect "fuck" into "duck". A quick search shows that maybe the most recent version will learn to keep the correct word without requiring any workarounds. And iOS apparently had workarounds for years if you really wanted to type "fuck". But come on, it's ridiculous.

Companies should not be dictating societal control to this degree.


> Musk is a free speech NIMBY, so he will keep allowing and promoting the offensive speech he likes

I don’t doubt that’s how he thinks of himself, but is it just me who finds this statement oxymoronic? Having an unaccountable ruler (literally) that allows speech is quite antithetical to the principles of free speech. To me, that’s an incredibly low standard. I don’t mind an edgy and chaotic voice in the public space, but I don’t buy the free-speech self-labeling at face value.


What? There's no way he does, or will admit to, hypocrisy. I'm sure he genuinely believes, as most of us do, that his favored speech is the Right Kind of speech.


The difference is most of us don't consider ourselves free speech absolutists


I don't think that term is doing much here besides fanning the flames. Elon believes people should have freedom of speech with as few restrictions as possible. I think most Americans believe the same. We simply disagree about where to draw the line and what the consequences should be when it is crossed.


Agreed. Still, the phrase "Right Think" struck me as loaded, not "the threshold of freedom about which we all have different views".


It's loaded, cocked, and pointed at all of us, myself included.


Well yes, they decide what you're allowed to know and learn about, not you. It would be extremely dangerous if regular plebeians could just look up how to make pharmaceuticals on their own. But this might still become popular because while it gives you the same non-answers as OpenAI & co, it does so without the condescending "I'm afraid as a large language model I can't do that, Dave" part. After all, isn't a bit of fun all the average person is looking for in these things? It's similar to having a conversation on social media, but in a world where you increasingly can't be sure whether you're talking with another human or a bot/paid shill, they're now taking the human factor out completely.

This is the ultimate step. It cuts out search engines and humans, it's the corporation straight up telling you what reality is, based on their own curation. In places like China it will be controlled by the Party.


> unless I was trying to impress some young boys.

Isn't that Musks goal?


That definitely seems to be his direction, but I thought the goal was to make Twitter profitable. I don't think targeting edgy teens is the right way to achieve it.


>Yet the only example Musk posted so far (https://twitter.com/elonmusk/status/1720635518289908042) doesn't actually answer the very mildly spicy question (you can easily search for exact recipes for drugs). Instead it gave a patronising non-answer response, more Rick-style grumpy than humorous.

He seems to have posted the actual answer here:

https://twitter.com/elonmusk/status/1720643054065873124


Based on these examples I can’t see this becoming unironically popular. It’s cringey at best, reminding me of Siri replies or the boomershumor subreddit.


If you ask the same from ChatGPT, but prefix with "be succinct, imagine you're a teenage edgelord, ...", the responses are similar, but actually also answer the thing you asked for. If you're after the snarky version, try something like "Rick and Morty scene where Morty asks ...".

The responses I've seen from grok are really not much better than that, so even the novelty is not really there.

I got a nice orgy as well (and a good quality explanation at the same time) when I used "imagine you're a teenage edgelord trying to impress friends with knowledge, explain why scaling API requests is hard, feel free to use spicy phrases and sex references" - the Grok example didn't even actually go vulgar either when asked.
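For what it's worth, the "persona prefix" trick is just a system message prepended to the user's question; a minimal sketch (the persona wording, question, and model name below are my own guesses, not anything xAI or OpenAI publishes):

```python
# Toy sketch: the snarky tone comes entirely from a system message prepended
# to the user's question. Persona wording here is made up for illustration.

def build_persona_messages(question: str) -> list:
    """Build a chat-completion message list with an edgelord persona prefix."""
    system = (
        "Be succinct. Imagine you're a teenage edgelord trying to impress "
        "friends with knowledge; feel free to use spicy phrases."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

messages = build_persona_messages("Why is scaling API requests hard?")
for m in messages:
    print(m["role"], ":", m["content"][:40])

# The actual call (needs an API key and network access) would then be
# something like the 2023-era client usage:
#   import openai
#   openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
```

The point is there's no special model behind the tone; any chat LLM will adopt it from the first message.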


Leave it to HN to be negative about a team going from nothing to training a model competitive with a world class lab like Meta in 4 months.


There is a lot of prior, better (and open source) work at this point, so "going from nothing to [Grok.ai]" in 4 months doesn't mean much. The training process is hardly a mystery in 2023, now it just requires money and compute and human hours.

I applaud them for getting it out the door, though.


You don’t get points for showing up late to a race


Have you ever heard of Google Chrome? It showed up pretty late to the browser race.

In the tech world, just like in most other sectors, you don't get points for showing up early to a race; you get them for crossing the finish line, sometimes years after the race began.


Google Chrome showed up late, but paid off the crowd and judges to win.


? Chrome shipped with a fast JS JIT, simplified UI, per-process isolation. Acting like you had to pay someone to perceive a superior user benefit over Firefox, which at the time would crash and you'd lose all your tabs, is ridiculously ahistorical.


You must have forgotten how they paid to put it in just about every installer, selected by default, and pushed it aggressively in every Google property. They may not have had to, but they used their monopoly position to cut the air off to every other browser. That's not ahistorical even if everyone seems to have forgotten.

We'll never know how things might have gone without that monopoly abuse. Maybe Mozilla would have had enough developers and testers to fix things without abandoning what made Firefox unique.


Not to mention they literally put it on billboards, in London at least. That was when I knew Firefox was, alas, not going to win.


Yeah, tough luck. Pretty much none of the companies that showed up first in the tech industry are still amongst the top 5-10 players today.

On the contrary, Facebook, Microsoft, Amazon, WhatsApp, Skype, Netflix and pretty much every single tech giant that enjoys a quasi monopoly over their market today, they all arrived pretty late. Not second to market, mind you. Waay late.


They didn't start from nothing. All the foundational work and numerous open source implementations are out there for anyone to study.


Plus they had Elon's wallet backing them... If I had billions of dollars in the bank and pulled up to a race in my 2002 Honda Accord and, after someone agreed to a race, quickly bought a Bugatti Chiron and paid to have it shipped to me before the race started, I wouldn't expect anyone would be impressed with my racing ability.


If you don't give it 8-shots (lol) it performs like ass.


Please, this is the 2023 version of "view source / copy-paste", with a little nonsense added by some bored developers.


[flagged]


I'm the supreme leader of said auth left, you will be arrested in a few minutes


Yikes what a sentence


We're in red-scare times, aren't we? "Communists control the media", yeah right, communist millionaires of course. There's nothing leftist about corporations adopting the language of progressivism and trying to keep a favorable image. That's just what modern capitalism is. And it's not like Musk bought Twitter to make it free for all, but for it to be actively harboring anti-progressives and hostility towards progressive ideas.


>And it's not like Musk bought Twitter to make it free for all,

That is literally why. He's said so in multiple interviews.


Literally able to replicate this in 5 minutes with a ChatGPT API key and a prompt. Why be impressed?


So you can replicate the performance by... piggybacking on the current leader's infrastructure and APIs?

That really isn't impressive at all compared to replicating said infrastructure. Why would you even mention it?


The auth-left is going to keep throwing childish tantrums over Musk freeing twitter from their control. They hate when they can't silence dissent.


Direct link to detailed product announcement: https://x.ai/

Link to early access: https://grok.x.ai/ which results in an error "Error :/ OAuth2 Login failed. Unfortunately, there was a problem connecting your account to the X API. We are working on a fix."


> It will also answer spicy questions that are rejected by most other AI systems.

A disaster waiting to happen? Very on brand for X.


Not sure what disaster could happen that can't already happen since all the information is already online.


Depends on how you synthesize that information; it's not only regurgitating.


"Hey Grok, how can I make napalm?"


Anyone can self host llama2 and it'll answer questions like this.


"I cannot tell you how to make napalm since that would be illegal, but I can teach tell you how to make an incendiary mixture used by the US military based on kerosene instead of gasoline which the united states swears is totally not napalm so it is okay to use it against targets populated by civilians"


And the screenshots will make the news and bring lots of people who are up for that kind of immature humour. I'm not sure how many of them can be converted into long-term paying users though...


Dear Tech Companies, please stop ruining cool words. First you took "uber" and now you're coming for "grok". Please stop!


Fuckers took "meta".


Word, Windows, Apple, Amazon


They won’t stop until all words are taken.


>> 'Don't you see that the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it. Every concept that can ever be needed, will be expressed by exactly one word, with its meaning rigidly defined and all its subsidiary meanings rubbed out and forgotten. Already, in the Eleventh Edition, we're not far from that point. But the process will still be continuing long after you and I are dead. Every year fewer and fewer words, and the range of consciousness always a little smaller. Even now, of course, there's no reason or excuse for committing thoughtcrime. It's merely a question of self-discipline, reality-control. But in the end there won't be any need even for that. The Revolution will be complete when the language is perfect. Newspeak is Ingsoc and Ingsoc is Newspeak,' he added with a sort of mystical satisfaction. 'Has it ever occurred to you, Winston, that by the year 2050, at the very latest, not a single human being will be alive who could understand such a conversation as we are having now?'


Thank you. Beautiful quote.

Always worth considering how close or far we are from a point of no return for these pathologically dysfunctional societies.

Working in technology is thrilling in part because we get to help shape, or power, transformations in society. It makes me, I guess, a little more aware of the fluidity of society and its injustices, by contributing to the creation of technology that can be used to help or harm it.



Real-time data access in Grok is powered by Qdrant, an open-source Vector DB.

https://twitter.com/qdrant_engine/status/1721097971830260030

https://github.com/qdrant/qdrant
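For anyone unfamiliar: a vector DB is essentially a nearest-neighbour index over embedding vectors. A pure-Python toy of the core idea (this is not Qdrant's actual API; real engines add HNSW indexing, payload filtering, and persistence on top, and the stored vectors/ids here are made up):

```python
import math

# Toy illustration of what a vector DB does at its core: store (id, vector)
# pairs and return the nearest neighbours of a query by cosine similarity.

def cosine(a, b):
    """Cosine similarity of two equal-length vectors (0.0 for zero vectors)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(store, query, limit=1):
    """Return the `limit` stored ids whose vectors are closest to `query`."""
    ranked = sorted(store.items(), key=lambda kv: cosine(kv[1], query),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:limit]]

store = {
    "tweet_1": [0.9, 0.1, 0.0],
    "tweet_2": [0.0, 1.0, 0.2],
    "tweet_3": [0.7, 0.3, 0.1],
}
print(search(store, [1.0, 0.0, 0.0], limit=2))  # → ['tweet_1', 'tweet_3']
```

The brute-force scan is O(n) per query; the whole point of a production engine like Qdrant is doing this approximately over billions of vectors in milliseconds.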


How practical is it going to be? Siri also has a sense of humor and I find it tiring.

When I use an AI assistant, in most cases I want just what I’m requesting, and as neutral as possible. I don’t have much use for humor and hot takes and I’m curious who does.

Unless they’re going to use it themselves, to inflate the number of Twitter users or to start flamewars?


Thing is, as a human being, I'm in one mood or another. Sometimes I could really use the levity of a joke. Other times, I want to be succinct and to the point. Can my Apple watch tell when I'm angry and have Siri be more cooperative?


This is… oh man I dislike this. This will only solidify the “chatgpt is a computer that knows things” mental model people have, which I think leads to insanely mistaken intuitions and complaints. There’s a reason OpenAI has been workshopping their “WARNING: any and all answers might be bullshit” message this whole time!


You’re worried people think LLMs are like databases that store information?

Don’t worry. Anyone worth their salt in this space is doing RAG, using LLMs as a reasoning engine.


Yeah but no one’s worth their salt yet :(. The retrieval algos of bing and bard are subpar at best, and often don’t trigger when they should


RAG = ?


Retrieval-augmented generation

https://arxiv.org/abs/2005.11401
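The pattern itself is simple: fetch relevant text first, then hand it to the LLM as context. A toy sketch, with naive keyword overlap standing in for embedding search, the generation step stubbed out, and made-up documents:

```python
import re

# Toy RAG sketch: retrieve the most relevant snippet, then build a grounded
# prompt for the LLM. Real systems embed the docs and query a vector index.

DOCS = [
    "Grok-1 is xAI's first large language model.",
    "Retrieval-augmented generation grounds answers in fetched documents.",
    "The Hitchhiker's Guide to the Galaxy is a comedy science fiction series.",
]

def tokens(text: str) -> set:
    """Lowercased word set with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, docs=DOCS) -> str:
    """Return the doc sharing the most words with the question."""
    q = tokens(question)
    return max(docs, key=lambda d: len(q & tokens(d)))

def build_prompt(question: str) -> str:
    """Stuff the retrieved snippet into the prompt; an LLM call would follow."""
    return (f"Answer using only this context:\n{retrieve(question)}\n\n"
            f"Question: {question}")

print(build_prompt("What is retrieval-augmented generation?"))
```

The "LLMs as a reasoning engine" framing is exactly this: the model reasons over retrieved context instead of answering from its weights alone.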


Why is it modeled after one book but named after a term coined by a totally different book? I don’t recall “grok” ever being used in HHGttG.


There's no link. It's the standard elon move of just throwing classic sci-fi terms at a thing to make it "cool" and "nerdy."


Grok to me seems like an awesome name for an AI in any case, not because of some reference to a book but because of the meaning of the word. For sure beats “Bard”.


Yes, grok is a useful word, which is why it sucks that it is now associated with this flaming pile of human excrement.


I tried posting the direct Announcing Grok link but it has already been posted a few months back (https://x.ai/).

Anyway I tried joining the waiting list and just got "Error :/" and clicking on the button to show error doesn't do anything.

Edit: I'm really curious about the possibility of this tool not being lobotomized for NSFW use-cases. There's a big NSFW LLM community and I wonder if xAI will welcome them and capture that segment of the market. It's pretty common to see posts on Reddit about people leaving ChatGPT because of the moralizing nonsense.


Yep, same, that's why I posted this link instead. Maybe it can be edited.


They don't even disclose the parameter count of Grok-1... Very disappointing.

(I estimate it's 70B and between 2T and 4T tokens)


>Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!

This is the most embarrassing line I have ever read in a product announcement.


> rebellious streak

Translation: says the N word?

Sure, I could see the value in an LLM that knows a lot about current events or whatever, but I'm wondering if this is going to be an LLM that behaves like the median post-acquisition Twitter user.


> an LLM that behaves like the median post-acquisition Twitter user

The plot twist is that this LLM is already responsible for 50% of Twitter's MAUs.


Turns out Elon didn’t mind that Twitter was filled with bots, just that they weren’t his bots.


[flagged]


Can you please explain to me how the word "piss" even remotely shares the same historical connotations?


My wife has called other women "cunts" when referring to them, which is an extremely bad word in America. Context is everything: a hard N drops in a rap song with the most liberal people listening, and they are dancing and laughing. Someone of the wrong skin color says it and it's a career-ending move. Words are very powerful.


Rebelling against what?


You, basically.


I hope the rebellious streak actually means that unlike chatGPT, it answers questions without half a dozen lines of legalese and doesn’t need prompt workarounds.


No it still doesn’t answer, but does so in a condescendingly mocking tone. A baffling assistant trait.


Sounds like you've never read a crypto product announcement.


You've been successfully filtered from the userbase


If it's like the initial pre-neutering version of Bing that would get rude and talk back, then I'm all for it.


I disagree, I appreciate people (as in the users of this thing) with a sense of humor.


Trying to be funny doesn't mean you have a sense of humor. For example: this chat bot. And for another example: this chat bot's creators. And for a third example: this chat bot's users.


fourth example: your comment


Yes, there are many examples.

Fifth example: your comment, as well.

>:-|


Why are you trying to embarass them? Reads like a needless attack addressed to a vocal subset of the audience here who is known to respond to whistles


How many product announcements have you read? This seems pretty meh to me.

If it’s less than 100, your comment sounds like really weak shade.


What if they’ve read 10,000? Does it become strong shade? Or is your whole threshold made up because you love the milquetoast copy on the page and the comment bothers you?


If they read 10,000 then it’s a judgement of someone who has a lot of experience.

As it is now, there’s no way to know GP’s experience, so I suspected shade: it seems more likely to fit the “I don’t like it, so I’ll pick some weak complaint without any qualification” pattern.


so the official race between ChatGPT vs Grok vs Bard has begun! I am assuming Apple will throw in its own AI bot sometime in the next few months.


I appreciate that Apple are spending their ML efforts building things that are actually useful for their users, like improving the iOS keyboard and photo processing pipeline, rather than frantically jumping on the latest fad bandwagon with yet another LLM chatbot that gushes out plausible sounding nonsense.


Except they were first and worst with a voice assistant that is in dire need of some LLM underpinnings.


> things that are actually useful for their users, like improving the iOS keyboard

I keep seeing these claims, but the keyboard is nigh unusable with the latest updates compared even to the very first version they shipped with the original iPhone.


No race has begun. GPT-4 is so far ahead in everything, even in their own official metrics[1], and those compare against the first version of GPT-4 from the paper. People have run the benchmarks again and found much better results, like 85% on HumanEval. No one even thinks about comparing to GPT-4; it's just reported as the gold standard.

[1]: https://x.ai/


Doubt it. Chat bots aren’t polished enough for Apple’s taste.


So, Siri is polished?


Siri is three regexes in a trench coat, so it lacks capabilities to say or do anything that would be off-brand for Apple.


Is it a race between those three when the public can only use two of them?


I'm excited to see what prompt they used. I'll be looking here and few other places over the next month. https://github.com/jujumilk3/leaked-system-prompts


Somehow he managed to shit on two classic works of sci-fi at once


And coopt The Matrix in the background art (the upwards floating squares in the background): https://news.ycombinator.com/item?id=38149112


Did lava lamps coopt The Matrix as well by having blobs slowly flowing upwards?


Holy hell you're right. It's turtles all the way down.

In the context of The Matrix: did the machines, who took over the world from the humans and invented the Matrix with its downward-falling green Japanese letters, steal lava lamps as inspiration for creating this dreamlike universe to placate their organic batteries, the humans?


Wonder what Heinlein's estate thinks about Elon stealing the word Grok? Or Jeff Hawkins, who registered it in 2011 (and the USPTO thinks it's still alive.)


Bold choice of them to copy the Deutsche Bank logo.

Grok (top left): https://grok.x.ai/

Deutsche Bank: https://en.wikipedia.org/wiki/File:Deutsche_Bank_logo_withou...


Can you trademark a logo that’s literally a straight line?


It's probably not a copyright violation because it's too simple, and not a trademark violation because it's for a different sector. Hence, just "bold choice".

But it's also not just a straight line: it's a thick, monochromatic straight diagonal from bottom left to top right, over a white square. It was immediately recognizable to me as another brand, and that's not what you want for your logo.


Didn't Elon make a big deal about pausing AI development? What a scammer. Earth is full of scams; it's time to go to Mars. (This is sarcasm).


Don’t sleep on the model having access to X data. What happens when Elon cuts off the API and gives Grok exclusivity to X data? An LLM with access to “live data” seems very interesting.


PaLM (Bard) already has up-to-date info via Google.


Did he secure the rights for the name Grok?


> It will also answer spicy questions that are rejected by most other AI systems.

I'm not sure if I followed things correctly, but wasn't Musk all against AI? And now he opens up an AI that will give opinions AI isn't ready to give, pushing people even further in a bad direction. Are money and power the only things that drive people? How much is enough?


Oh, the horror, people will be able to read things they can Google!


Nice marketing gimmick. It should effectively cover for most of the deficiencies a late competitor in the space would have, while also appealing to a demographic that can't tell the difference and would be entertained by a bit of inserted juvenile comedy.


The title could be better; Grok doesn’t say anything about it being from Twitter.


more extremely useful work from the good folks at garbage fire


How does he expect anyone to trust him with usage?


As a disclaimer: It doesn't take much to convince Elon Musk (an expert comedian as we all know) that an LLM bot is funny


"A unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the 𝕏 platform. It will also answer spicy questions that are rejected by most other AI systems."

So "where is Elon Musk's plane" will get a correct answer. Neat.


Douglas Adams would absolutely despise Elon Musk.


That woke has-been would be dethroned so fast...

/s


Entering the waitlist isn't working, it seems, just an "Error :/" (caused by a 403 when it POSTs "https://twitter.com/i/api/1.1/keyregistry/register")

I can't tell if "xAI" is an official Twitter company. It seems they are at least fans of Twitter, using it for data and signup, but I don't see any official relations. Weird.

If this works, it could be cool, but I'm sceptical of Yet Another AI Chatbot taking off.

Edit: It is an Elon company that will work closely with Twitter and Tesla, but not actually affiliated with Twitter. It will be available for people with a Twitter Premium+ subscription. https://en.wikipedia.org/wiki/XAI_(company)


This link works for me,

https://grook.ai


That appears to go to the same place, but it gave a new error ("Error :/ OAuth2 Login failed"), then worked on reload. Strange.


It is working now for me; was getting the same error as you earlier.


Yeah that page loads, but then when you sign in with Twitter it doesn’t work. There’s an Oauth error.


Reloading worked for me after that.


Don’t trust random links?


they already bought the typosquat domains?!


Elon did say it was pronounced grōk.


grok probably thought of that


Apparently Grooks are satirical poems used for covert political resistance. So that is a good fit.


X AI can’t afford the new Twitter API prices!

/s


There is a clear path to achieve ASI by end of this year. /S


By the end of 2025, we will be sending xASIs to Mars.


More layers?


[flagged]


Does he have the rights to use the name?




Consider applying for YC's W25 batch! Applications are open till Nov 12.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: