
> (including Google itself!)

Bet Google won’t make that mistake again, i.e. it won’t publish as much and will be much more careful about what it publishes, lest it hand a competitor a useful tool and get nothing in return when the competitor (in this case very ironically named) goes fully commercial and closed-sources everything it can.

Open collaboration in AI, at least when it comes to corporations, might have come to an end.



The scale of the damage OpenAI has done to the trust ecosystem, by soliciting not just the work but also massive fundraising and then privatizing the profits, is almost unprecedented, and likely permanent.


> trust ecosystem

What trust ecosystem are you talking about? It was a lack of foresight by Google on their own discovery of transformers, and the work would probably have been gathering dust or killed off in the time it would have taken them to reach GPT-2 levels of progress.


The trust ecosystem of the AI community: nearly every breakthrough was published publicly until OpenAI decided to take advantage of that.

Besides that, this comment contained a ton of statements about what “would” have happened had Google not published. Interesting, but a worthless way to defend OpenAI’s actions.


Seems to me that it's very hard to have a moat in LLMs without proprietary code, given that most of their training data is freely available. This is very different from the 2010s era of AI, when models were trained on large amounts of proprietary data that was specific to a given service and could not be replicated.

It's a lot easier to pat yourself on the back for releasing a paper about your techniques when your competitors can't replicate your service with it. I think that as generative AI models move past the hype phase into the competitive phase, they will be keeping a lot of innovation proprietary for at least a few years to maintain an edge over their competitors.

Let's just hope they don't move to patenting everything.


The point of OpenAI was that no single company would have a moat around LLMs or foundation models in general. It was set up as a non-profit with this goal in mind and got money for it.

Whether Google pats itself on the back for releasing the paper no one could replicate is not important, because open research had never been that company's goal. What happened is that a for-profit company released a paper, which handed a huge advantage to a company whose mission was to ensure that no one gains a huge advantage in the field. OpenAI was then converted to for-profit and established an exclusive relationship with Microsoft.

Google fucked up and missed the train, but they can catch up. It will be much harder for smaller companies if, as a result of this, the AI research departments at FB, Google, etc. lock down their papers and tools for internal use only.


I suspect this is similar to Xerox with the graphical user interface


Sun Tzu.

When you are weak, pretend to be strong.

When you are strong, pretend to be weak.


My thought exactly. I assume this is to reduce the competition.


Doesn't Google have any patents on the transformer architecture? I assume large enterprises tend to patent everything that comes out of their research.


https://patents.google.com/patent/US10452978B2/en

GPT models are based on the transformer, but their architecture is different from what's patented.

Not a lawyer, but can you really patent a particular network architecture? Theoretically someone could invent a new activation function that just happens to make the same architecture perform a lot better on some tasks; could a patent really cover that?
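To make the distinction concrete: the mechanism shared by the patented encoder-decoder Transformer and GPT's decoder-only variant is scaled dot-product attention. Below is a minimal illustrative sketch (not from the patent; names, shapes, and the NumPy implementation are my own); the `causal` flag marks the key decoder-only difference, where each token may only attend to itself and earlier tokens.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, causal=False):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (n_q, n_k) similarity scores
    if causal:
        # Decoder-only models (e.g. GPT) mask out future positions so
        # token i attends only to tokens 0..i.
        mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
        scores = np.where(mask, -np.inf, scores)
    # Row-wise softmax (numerically stabilized)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Tiny demo: 4 tokens with 8-dimensional embeddings, self-attention
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x, causal=True)
print(out.shape)  # (4, 8)
```

With the causal mask, the first token's output equals its own value vector, since it can attend to nothing else; the encoder side of the patented design omits that mask entirely.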



