I think that it’s the opposite. This algorithm requires many examples of text on the specific topic, probably more than most humans would require.
> While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples [0]
I don’t know what constitutes an example in this case, but let’s assume it means one blog article. I don’t know many humans who read thousands or tens of thousands of blog articles on a specific topic. And if I did, I’d expect that human to write a much more interesting article.
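For what it’s worth, in most fine-tuning setups an "example" is a single input/output (prompt/completion) pair rather than a whole article, so one blog post might only yield a handful of examples. A rough sketch of what such a dataset could look like - the field names and file name here are just illustrative, not taken from the paper [0]:

    # Illustrative sketch only: a fine-tuning "example" as one prompt/completion pair.
    # Field names and file name are assumptions, not from the GPT-3 paper.
    import json

    examples = [
        {"prompt": "Write a blog intro about productivity:",
         "completion": "Most productivity advice fails because..."},
        {"prompt": "Write a blog intro about remote work:",
         "completion": "Remote work changed how teams communicate..."},
        # ...a task-specific fine-tuning set would contain thousands of these
    ]

    # One JSON object per line is a common on-disk format for such datasets.
    with open("finetune_dataset.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")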
To me, this and other similar generated texts from OpenAI feel bland / generic.
Take a listen to the generated music from OpenAI - https://openai.com/blog/jukebox/. It’s pretty bad, but in a weird way. It’s technically correct - in key, on beat, etc. Some of the music it generates is even technically difficult, but it sounds so painfully generic.
> All the impressive achievements of deep learning amount to just curve fitting
Judea Pearl [1]
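Taken literally, "curve fitting" just means choosing parameters to minimize error on observed data. A toy least-squares example with numpy, purely to illustrate the phrase rather than how OpenAI’s models actually work:

    # Toy curve fitting: least-squares fit of a cubic polynomial to noisy samples.
    # Only an illustration of Pearl's phrase, not of how GPT-3 is trained.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 200)
    y = np.sin(3 * x) + 0.1 * rng.standard_normal(x.shape)

    coeffs = np.polyfit(x, y, deg=3)   # pick parameters that minimize squared error
    y_hat = np.polyval(coeffs, x)      # "predictions" of the fitted curve

    print("mean squared error:", np.mean((y - y_hat) ** 2))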
Given one blog article in a foreign language, would a human be able to write coherent future articles?
With no teacher or context whatsoever, how many articles would one have to read before they could write something that would 'fool' a native speaker? 1,000? 100,000?
I have no idea how to measure the quantity or quality of the contextual and sensory data we are constantly processing just by existing in the real world. However, it is vital to solving these tasks in a human way - yet it is a dataset that no machine has access to.
I would argue that comparing 'like for like' disregards the rich data we swim in as humans, making it an unfair comparison.
This comment was written by a human :)
[0] https://arxiv.org/abs/2005.14165
[1] https://www.quantamagazine.org/to-build-truly-intelligent-ma...