



“Microsoft Chat 365”

Although it would be beautiful if they named it Clippy and finally made Clippy into the all-powerful AGI it was destined to be.


> Although it would be beautiful if they named it Clippy and finally made Clippy into the all-powerful AGI it was destined to be.

Finally the paperclip maximizer


Clippy is the ultimate brand name for an AI assistant


It is too bad MS doesn’t have the rights to any beloved AI characters.


That's fine; making the "core" of an AI assistant that character rights can be layered onto is a bigger business than owning the characters themselves.

Why acquire rights to thousands of fan-favourite characters when you can build the bot underneath and let the media houses that own them negotiate licenses to skin and personalise it?

Same as GPS voices I guess.


I can't tell if they've ruined the Cortana name by using it for the quarter-baked voice assistant in Windows, or if it's so bad that nobody even realizes they've used the name yet.

I've had Cortana shut off for so long it took me a minute to remember they've used the name already.


Google really should have thought of the potential uses of a media empire years ago.


I guess they have YouTube, but it doesn’t really generate characters that are tied to their brand.

Maybe they can come up with a personification for the YouTube algorithm. Except he seems like a bit of a bad influence.


Assuming this is a joke about Cortana.


They already have a name: Copilot. They made that pretty clear by mentioning it 15 times per minute at last week's Ignite conference :)


That name is stupid and won’t stick around. Knowing Microsoft, my bet is that it will get replaced with a quirky-sounding but non-threatening familiar name like “Dave” or something.


Yeah maybe Clippy :)


Can we please, at least in this forum, stop calling something that is not even close to AGI "AGI"? It's just dumb at this point. We are LIGHT-YEARS away from AGI; even calling an LLM "AI" only makes sense for a lay audience. For developers and anyone in the know, LLMs are machine learning.


I’m talking about the ultimate end product that Microsoft and OpenAI want to create.

So I mean proper AGI.

Naming the product Clippy now is perfectly fine while it’s just an LLM, and it will be even more excellent over the years when it eventually achieves AGI-ness.

Can we please, at least in this forum, stop misinterpreting things in a limited way to make pedantic points about how LLMs aren’t AGI (which I assume 98% of people here know)? So I think it’s funny you assume I think ChatGPT is an AGI.


I think that the dispute is about whether or not AGI is possible (at least within the next several decades). One camp seems to be operating with the assumption that not only is it possible, but it's imminent. The other camp is saying that they've seen little reason to think that it is.

(I'm in the latter camp).


I certainly think it’s possible but have no idea how close. Maybe it’s 50 years, maybe it’s next year.

Either way, I think GGP’s comment was not applicable to my comment as written, and certainly not to my intent.


I am with you. I am VERY excited about LLMs, but I don't see a path from an LLM to AGI. It's like 50 years ago when we thought computers themselves brought us one step away from AI.


It's entirely possible for Microsoft and OpenAI to have an unattainable goal in AGI. A computer that knows everything that has ever happened and can deduce much of what will come in the future is still likely going to be a machine, a very accurate one. It won't be able to imagine a future that it can't predict as a possible natural or man-made progression along a chain of consequences stemming from the present or past.


Is there a known path from an LLM to AGI? I have not seen or read anything that suggests LLMs bring us any closer to AGI.


We are incredibly far away from AGI, and we'll only get there with wetware.

LLMs and GenAI are clever parlor tricks compared to the science necessary for AGI to actually arrive.


What makes you so confident that your own mind isn't a "clever parlor trick"?

Considering how it required no scientific understanding at all, just random chance, a very simple selection mechanism and enough iterations (I'm talking about evolution)?


My layperson impression is that biological brains do online retraining in real time, which is not done with the current crop of models. Given that even this much required months of GPU time, I'm not optimistic we'll match the functionality (let alone the end result) anytime soon.
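A minimal sketch of that difference, assuming PyTorch and a toy stand-in model (nothing here reflects any real deployment): frozen inference never touches the weights, while an online learner nudges them on every example it sees.

    # Toy contrast between frozen inference and online retraining.
    # Assumes PyTorch; the linear model is a stand-in, not a real LLM.
    import torch
    import torch.nn as nn

    model = nn.Linear(8, 2)                  # pretend this was pretrained
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Frozen inference: how deployed models work today. No weight updates.
    with torch.no_grad():
        _ = model(torch.randn(1, 8))

    # Online retraining: every new observation immediately updates the
    # weights, loosely analogous to a brain adapting in real time.
    for _ in range(5):                       # stream of incoming examples
        x, y = torch.randn(1, 8), torch.randint(0, 2, (1,))
        loss = loss_fn(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()                           # weights drift with each example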


I'm actually playing with this idea: I've created a model from scratch and have it running occasionally on my Discord. https://ftp.bytebreeze.dev is where I throw up models and code. I'll be releasing more soon.


Trillions of random chances over the course of billions of years.


Why do you think we'll only get there with wetware? I guess you're in the "consciousness is uniquely biological" camp?

It's my belief that we're not special; us humans are just meat bags, our brains just perform incredibly complex functions with incredibly complex behaviours and parameters.

Of course we can replicate what our brains do in silicon (or whatever we've moved to at the time). Humans aren't special, there's no magic human juice in our brains, just a currently inconceivable blob of prewired evolved logic and a blank (some might say plastic) space to be filled with stuff we learn from our environs.


And how do you know LLMs are not "close" to AGI (close meaning, say, a decade of development that builds on the success of LLMs)?


Because LLMs just mimic human communication based on massive amounts of human-generated data and have 0 actual intelligence at all.

It could be a first step, sure, but we need many, many more breakthroughs to actually get to AGI.
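For what it's worth, here is a minimal sketch of that "mimicking" at its crudest: a toy bigram model in plain Python. Real LLMs are transformers over subword tokens, but the predict-the-next-token-from-human-text framing is the same.

    # Toy bigram "language model": predicts the next word purely from counts
    # of what humans wrote. No world model, just statistics of the corpus.
    from collections import Counter, defaultdict
    import random

    corpus = "the cat sat on the mat the cat ate the fish".split()

    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def next_word(word):
        counts = following[word]
        if not counts:                 # dead end: word never seen mid-corpus
            return random.choice(corpus)
        # Sample in proportion to how often humans used each continuation.
        return random.choices(list(counts), weights=counts.values())[0]

    word = "the"
    out = [word]
    for _ in range(5):
        word = next_word(word)
        out.append(word)
    print(" ".join(out))               # e.g. "the cat sat on the mat"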


Mimicking human communication may or may not be relevant to AGI, depending on how it's cashed out. Why think LLMs haven't captured a significant portion of how humans think and speak, i.e. the computational structure of thought, and thus represent a significant step towards AGI?


As you illustrate, too many naysayers think that AGI must replicate "human thought". People, even those here, seem to treat AGI as synonymous with human intelligence, but that type of thinking is flawed. AGI will not think like a human whatsoever. It must simply be indistinguishable from the capabilities of a human across almost all domains where a human is dominant. We may be close, or we may be far away. We simply do not know. If an LLM, regardless of the mechanism of action or how 'stupid' it may be, were able to accomplish all of the requirements of an AGI, then it is an AGI. Simple as that.

I imagine that when we actually reach AGI, people will start saying, "Yes, but it is not real AGI because..." This should be a measure of capabilities, not process. But if expectations of its capabilities are clear, then we will get there eventually -- if we allow it to happen and do not continue moving the goalposts.


There is room for intelligence in all three places: wherever the original data came from, the training on it, and the inference on it. So just claiming the third step doesn't have any isn't good enough.

Especially since you have to explain how "just mimicking" works so well.


One might argue that humans do a similar thing. And that the structure that allows the LLM to realistically "mimic" human communication is its intelligence.


Q: Is this a valid argument? "The structure that allows the LLM to realistically 'mimic' human communication is its intelligence." https://g.co/bard/share/a8c674cfa5f4 :

> [...]

> Premise 1: LLMs can realistically "mimic" human communication.

> Premise 2: LLMs are trained on massive amounts of text data.

> Conclusion: The structure that allows LLMs to realistically "mimic" human communication is its intelligence.

"If P then Q" is the Material conditional: https://en.wikipedia.org/wiki/Material_conditional

Does it do logical reasoning or inference before presenting text to the user?

That's a lot of waste heat.

(Edit) With next-word prediction, the prediction just is the output:

"LLMs cannot find reasoning errors, but can correct them" https://news.ycombinator.com/item?id=38353285

"Misalignment and Deception by an autonomous stock trading LLM agent" https://news.ycombinator.com/item?id=38353880#38354486


Or maybe the intelligence is in language and cannot be dissociated from it.


Yep, the lay audience conceives of AGI as being a handyman robot with a plumber's crack, or maybe an agent that can get your health insurance to stop improperly denying claims. How about an automated snow blower? Perhaps an intelligent wheelchair with robot arms that can help grandma in the shower? A drone army that can reshingle my roof?

Indeed, normal people are quite wise and understand that a chat bot is just an augmentation agent--some sort of primordial cell structure that is but one piece of the puzzle.


I’m pretty sure Clippy is AGI. Always has been.



Gatekeeping science. You must feel very smart.


Lmao, why are so many people mad that the word AGI is being tossed around when talking about AI?

As I've mentioned in other comments, it's like yelling at someone for bringing up fusion when talking about nuclear power.

Of course it's not possible yet, but talking & thinking about it is how we make it possible. Things don't just create themselves (well, maybe once we _do_ have AGI-level AI, he he, that'll be a fun apocalypse).


> They could make ChatGPT++

Yes, though the end result would probably be more like IE: barely good enough, forcefully pushed into everything and everywhere, and squashing better competitors like IE squashed Netscape.

When OpenAI went in with MSFT, it was as if they had ignored 40 years of history of what MSFT does to smaller technology partners. What happened to OpenAI pretty much fits the pattern of a smaller company that developed great tech and was raided by MSFT for it (the specific actions of specific persons aren't really important; the main factor is MSFT's gravitational force, like a black hole's, and it was just a matter of time before its destructive power manifested itself, as in this case where the tidal forces simply tore OpenAI apart).


ChatGPT#


Hopefully ChatGPT will make it easier to search/differentiate between ChatGPT, ChatGPT++, and ChatGPT# than Google does.


dotGPT


Visual ChatGPT#.net


Dot Neural Net


WSG, Windows Subsystem for GPT


ClippyAI


Also Managed ChatGPT, ChatGPT/CLR.


ChatGPT Series 4


ClipGPT


ChatGPT NT



