That's fine; building the "core" of an AI assistant that character rights can be layered onto is bigger business than owning the characters themselves.
Why acquire rights to thousands of favourite characters when you can build the bot underneath and let the media houses that own them negotiate licenses to skin and personalise it?
I can't tell if they've ruined the Cortana name by using it for the quarter-baked voice assistant in Windows, or if that assistant is so bad that nobody even realizes they've used the name yet.
I've had Cortana shut off for so long it took me a minute to remember they've used the name already.
That name is stupid and won’t stick around. Knowing Microsoft, my bet is that it will get replaced with a quirky-sounding but non-threatening, familiar name like “Dave” or something.
At least in this forum, can we please stop calling something that is not even close to AGI "AGI"? It's just dumb at this point. We are LIGHT-YEARS away from AGI; even calling an LLM "AI" only makes sense for a lay audience. Among developers and anyone in the know, LLMs are called machine learning.
I’m talking about the ultimate end product that Microsoft and OpenAI want to create.
So I mean proper AGI.
Naming the product Clippy now is perfectly fine while it’s just an LLM, and it will be even more excellent over the years when it eventually achieves AGI-ness.
At least in this forum, can we please stop misinterpreting things in a limited way to make pedantic points about how LLMs aren’t AGI (which I assume 98% of people here already know)? So I find it funny you assume I think ChatGPT is an AGI.
I think the dispute is about whether or not AGI is possible (at least within the next several decades). One camp seems to be operating on the assumption that it is not only possible but imminent. The other camp is saying that they've seen little reason to think that it is.
I am with you. I am VERY excited about LLMs, but I don't see a path from an LLM to AGI. It's like 50 years ago, when we thought computers themselves brought us one step away from AI.
It's entirely possible for Microsoft and OpenAI to have an unattainable goal in AGI. A computer that knows everything that has ever happened, and can deduce much of what will come in the future, is still likely going to be a machine, a very accurate one. It won't be able to imagine a future that it can't predict as a possible natural or man-made progression along a chain of consequences stemming from the present or past.
What makes you so confident that your own mind isn't a "clever parlor trick"?
Considering how it required no scientific understanding at all, just random chance, a very simple selection mechanism and enough iterations (I'm talking about evolution)?
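To make "random chance + a simple selection mechanism + enough iterations" concrete, here's a toy sketch of that loop (purely illustrative; the bitstring target, mutation rate, and population size are arbitrary choices, not a model of actual evolution):

```python
import random

TARGET = [1] * 32  # an arbitrary "fitness peak" for illustration

def fitness(genome):
    # The very simple selection criterion: count matching bits.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    # Random chance: each bit may flip independently.
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(100)]
for generation in range(1000):  # enough iterations
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 32:
        break
    # Keep the fittest half, refill with mutated copies of survivors.
    survivors = population[:50]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(50)]

print(generation, fitness(population[0]))
```

No scientific understanding anywhere in that loop, and it still climbs to the peak.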
My layperson impression is that biological brains do online retraining in real time, which the current crop of models does not do. Given that even this much required months of GPU time, I'm not optimistic we'll match the functionality (let alone the end result) anytime soon.
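For the curious, here's a rough sketch of the distinction using a toy linear model (illustrative only; the learning rate and the "environment" weights are made up). A deployed LLM answers with frozen weights, whereas an online learner would fold every new observation straight back into them:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)  # weights after "pretraining"

def predict(x):
    # Inference with today's deployed models: weights stay frozen.
    return w @ x

def online_step(x, y, lr=0.01):
    # Online retraining: each observation immediately updates the
    # weights (a single SGD step on squared error, for illustration).
    global w
    w -= lr * (predict(x) - y) * x

true_w = np.array([1.0, -2.0, 0.5])  # the "environment" being learned
for _ in range(1000):
    x = rng.normal(size=3)
    online_step(x, x @ true_w)

print(w)  # drifts toward true_w as experience accumulates
```

Doing anything like that second loop on a model with hundreds of billions of weights, continuously and in real time, is the part we're nowhere near.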
I'm actually playing with this idea: I've created a model from scratch and have it running occasionally on my Discord. https://ftp.bytebreeze.dev is where I throw up models and code. I'll be releasing more soon.
Why do you think we'll only get there with wetware? I guess you're in the "consciousness is uniquely biological" camp?
It's my belief that we're not special; us humans are just meat bags, our brains just perform incredibly complex functions with incredibly complex behaviours and parameters.
Of course we can replicate what our brains do in silicon (or whatever we've moved to at the time). Humans aren't special, there's no magic human juice in our brains, just a currently inconceivable blob of prewired evolved logic and a blank (some might say plastic) space to be filled with stuff we learn from our environs.
Mimicking human communication may or may not be relevant to AGI, depending on how it's cashed out. Why think LLMs haven't captured a significant portion of how humans think and speak, i.e. the computational structure of thought, and thus represent a significant step towards AGI?
As you illustrate, too many naysayers think that AGI must replicate "human thought". People, even those here, seem to treat AGI as synonymous with human intelligence, but that type of thinking is flawed. AGI will not think like a human whatsoever. It must simply be indistinguishable from the capabilities of a human across almost all domains where a human is dominant. We may be close, or we may be far away; we simply do not know. If an LLM, regardless of its mechanism of action or how 'stupid' it may be, were able to accomplish all of the requirements of an AGI, then it would be an AGI. Simple as that.
I imagine that when we actually reach AGI, people will start saying, "Yes, but it is not real AGI because..." It should be a measure of capabilities, not process. If expectations of its capabilities are clear, then we will get there eventually, provided we allow it to happen and do not keep moving the goalposts.
There is room for intelligence in all three places: wherever the original data came from, the training on it, and the inference on it. So just claiming the third step doesn't have any isn't good enough.
Especially since you have to explain how "just mimicking" works so well.
One might argue that humans do a similar thing. And that the structure that allows the LLM to realistically "mimic" human communication is its intelligence.
Q: Is this a valid argument? "The structure that allows the LLM to realistically 'mimic' human communication is its intelligence."
https://g.co/bard/share/a8c674cfa5f4 :
> [...]
> Premise 1: LLMs can realistically "mimic" human communication.
> Premise 2: LLMs are trained on massive amounts of text data.
> Conclusion: The structure that allows LLMs to realistically "mimic" human communication is its intelligence.
Yep, the lay audience conceives of AGI as being a handyman robot with a plumber's crack, or maybe an agent that can get your health insurance to stop improperly denying claims. How about an automated snow blower? Perhaps an intelligent wheelchair with robot arms that can help grandma in the shower? A drone army that can reshingle my roof?
Indeed, normal people are quite wise and understand that a chat bot is just an augmentation agent--some sort of primordial cell structure that is but one piece of the puzzle.
Lmao, why are so many people mad that the word AGI is being tossed around when talking about AI?
As I've mentioned in other comments, it's like yelling at someone for bringing up fusion when talking about nuclear power.
Of course it's not possible yet, but talking & thinking about it is how we make it possible? Things don't just create themselves (well maybe once we _do_ have AGI level AI he he, that'll be a fun apocalypse).
Yes, though the end result would probably be more like IE: barely good enough, forcefully pushed into everything and everywhere, and squashing better competitors like IE squashed Netscape.
When OpenAI went in with MSFT, it was as if they had ignored the 40 years of history of what MSFT does to smaller technology partners. What happened to OpenAI pretty much fits that pattern: a smaller company develops great tech and gets raided by MSFT for it. The specific actions of specific persons aren't really important; the main factor is MSFT's black-hole-like gravitational force, and it was just a matter of time before its destructive power manifested itself, as in this case, where the tidal forces simply tore OpenAI apart.
https://en.wikipedia.org/wiki/Visual_J%2B%2B