Simple hypothesis: roughly the top 5 percent of US wealth now belongs to the 50 richest Americans. Even if you ignore corruption, lobbying, and any ill intent, you can conclude that these 50 individuals have better ways of getting returns on their money than the rest of the population. Even if their return advantage is only 5 percentage points, there is a high probability that within the next 50 years they will own 30-50% of all wealth. I strongly believe AI will accelerate that further.
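A rough compounding sketch of what I mean (all numbers here are assumptions for illustration, not data):

```python
# Illustrative only: how a small, persistent return advantage compounds into a large share of total wealth.
top_share = 0.05      # assumed share of US wealth held by the top 50 today
top_return = 0.10     # hypothetical annual return for the top 50
rest_return = 0.05    # hypothetical annual return for everyone else (a 5-point delta)

top, rest = top_share, 1 - top_share
for _ in range(50):
    top *= 1 + top_return
    rest *= 1 + rest_return

print(f"Top-50 share after 50 years: {top / (top + rest):.0%}")  # ~35% under these assumptions
```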
But all your numbers (except maybe the top-5% one) are completely made up. Strong beliefs don't prevent one from being completely wrong. Neville Chamberlain had a strong belief that he had ensured peace; Einstein had a strong belief that quantum theory's "spooky action at a distance" was incorrect. Both were wrong. Fifty years is a long time, and anything could happen. The last fifty years saw the fall of Communism, the EU, China going from an impoverished countryside to a superpower, video phones in our pockets, social media upending communication and mental health, renewable energy displacing coal, Trump, etc.
Ideally these watermarks are much more subtle. For example, they embed the watermark in the statistical distribution of word sequences, which makes it difficult to remove or even identify.
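One published approach along these lines is the "green list" scheme from the watermarking literature: generation is nudged toward a pseudo-random half of the vocabulary seeded by the previous token, and a detector simply measures how often the text lands in that half. A toy sketch of the detection side (real implementations operate on model logits; every name below is my own):

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Pseudo-randomly pick a 'green' subset of the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = list(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens drawn from the green set: ~0.5 for ordinary text, noticeably higher if watermarked."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev, vocab))
    return hits / max(1, len(tokens) - 1)

vocab = "the cat sat on a mat and dog ran".split()
sample = "the cat sat on the mat".split()
print(f"green-token rate: {green_fraction(sample, vocab):.2f}")
```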
This is mostly a consequence of how current LLMs learn: largely by memorizing.
The worst thing about AI via LLMs is rote learning, where the same model is used for both reasoning and memorizing.
Hopefully in the future it will be more like a small core of language understanding and reasoning that can look up references (through search) and use RL to get better at a topic through trial and error.
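A toy sketch of that split, where the corpus, the lookup, and the "core" are all hypothetical stand-ins rather than any real system:

```python
# Toy illustration: facts live in an external store and are retrieved at answer time,
# so the "core" only has to reason over the retrieved text instead of recalling it from weights.
CORPUS = {
    "speed of light": "Light travels at about 299,792 km per second in a vacuum.",
    "boiling point of water": "Water boils at 100 °C at sea-level pressure.",
}

def search(query: str) -> str:
    """Naive keyword overlap standing in for a real search/retrieval step."""
    words = set(query.lower().split())
    best = max(CORPUS, key=lambda key: len(words & set(key.split())))
    return CORPUS[best]

def reasoning_core(question: str, context: str) -> str:
    """Stand-in for a small reasoning model that works only from the supplied context."""
    return f"Using the reference '{context}', answer: {question}"

print(reasoning_core("how fast does light travel", search("how fast does light travel")))
```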
I personally didn't realize how fast other models would catch up to OpenAI.
There is a whole set of models now (and some, like Meta, are purposely trying to undermine OpenAI's competitive advantage via open source models), and they are relatively interchangeable with nearly no lock-in.
OpenAI's main advantage is being first to market and having the strongest model (GPT 4), and maybe they can continue to run ahead faster than everyone else, but pure technical leadership is hard to maintain, especially with so many competitors entering.
Their main advantage for now is their super clean API. Open source alternatives are already on par with GPT-3.5 and 4 in capability; they just don't come in as good a package, but that could change rather quickly too.
> Open source alternatives are already on par with GPT-3.5 and 4 in capability
I'm not sure if this is true. With GPT-4, I can successfully ask questions in Japanese and receive responses in (mostly natural) Japanese. I have also found GPT-4 capable of understanding the semantics of prompts with Japanese and English phrases interleaved.
Out of curiosity, I tried doing the same with local models like Mistral 7B, and I could never get the model to emit anything other than English. Maybe it's a difference in training data, but even then, GPT-4 allegedly has a relatively small amount of training data for non-European languages.
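For reference, roughly the kind of thing I was running (a minimal sketch assuming the Hugging Face transformers API and the mistralai/Mistral-7B-Instruct-v0.2 checkpoint, not my exact setup):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed checkpoint; other local instruct models work similarly
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Ask the model, in Japanese, to answer in Japanese ("Please explain what machine learning is, in Japanese.")
messages = [{"role": "user", "content": "機械学習とは何か、日本語で説明してください。"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```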
Is that true? I was running Llamas on my laptop a few days ago, and it was giving measurably worse results than ChatGPT. I think it was the uncensored 13B model, but if you've got something that's on par with ChatGPT that I can run on my own hardware, I'm pretty interested.
13B models probably can't be directly compared with GPT-4, which may be 1T+ parameters, or a 5-way MoE of 200B each, or something like that. So you likely can't run a model competitive with ChatGPT locally in the near term.
For now. As others have said, there is no technological “moat” in this business that could prevent others from catching up.
Perhaps the best path for OpenAI is to become THE established AI services company. AWS is still the leader in the cloud computing space, and only has Azure competing, despite the fact that other big companies are also technologically capable of building similar products.
> AWS is still the leader in the cloud computing space, and only has Azure competing, despite the fact that other big companies are also technologically capable of building similar products.
What happened to GCP? I personally switched away because of bad experiences... but is that happening at scale as well? I barely see it mentioned these days.
I'm pretty bearish on GPT-5 being better than 4. With how neutered 4 has gotten over time, I'd be surprised if GPT-5 can perform better while keeping all the same restrictions GPT-4 has. GPT-4 is less and less willing to actually accomplish a task for you; increasingly it just tells you how you could do it yourself. It looks more and more like a Markov chain every day.
Sure, but it is somewhat disheartening to see GPT 4 still being the king by a clear margin after a full year, especially since it's been nerfed continuously for speed and cost effectiveness.
Is GPT-4 as good for non-English use? It's not clear to me that it would be particularly important or advantageous, but does Mistral being based in Europe and polyglot-first make it interesting vs. GPT-4 in some dimension?
I guess it might depend on language, but as a Spanish speaker who sometimes uses LLMs in Spanish, I'd say the gap between GPT-4 and most of the competition (Mistral included) is actually larger in Spanish than in English.
It's the best multilingual model out there and it's not even close.
In terms of open models especially, Mistral's are the most multilingual, but outside a few handpicked languages the level of proficiency is just too poor for any real usage.
In my experience it’s not such a simple question. If you want to be able to speak in nuanced non-English and have it pick up on the intricacies, or have it respond in rich correct non-English, then it’s not the best model (Cohere recently released an aya model that I would recommend checking out if this is your use case).
If you want to be able to give basic commands and have the model reason about the logic behind your commands, GPT-4 is still the best, even in minority languages.
I don't disagree with you, but an open source model fine-tuned for your use case and augmented with your own data is probably going to be way better for many companies' use cases than GPT-4 is.
I'd say you should compare the models for your use case. Which is better depends on how much you're willing to pay, what kinds of problems you need help with, speed, and ease of use.
This is where the other big tech giants need to move. MSFT provides nothing extra that Google/Amazon/Meta cannot match. Make it multi-platform and make it more open source.
Sure, they can, but that would go against all the safety and alignment values they are pushing. They'd lose billions in current and potential investment and spend their lives in lawsuits. Also, the government may not like giving away cutting-edge tech to China.
I'm quite surprised that this point is not being made more. It's not like MSFT is the only shop that OpenAI can turn to, and you could argue that what will now happen is a full-scale lobbying war waged by OpenAI, backed by others who don't want MSFT to win (Goog? Musk?). It could be that OpenAI's principled stand will "win" regulation and MSFT will be in a very poor position.