This seems obvious but the reason has nothing to do with GPT-3. It’s just a bad idea to build a business around someone else’s API, especially when there’s no real competition to that API (for now) and it’s nontrivial to roll your own replacement if you had to.
Someone else mentioned the App Store. But that platform requires diverse businesses to participate for it to be valuable to Apple’s customers. And you can still diversify across other platforms and app stores. And despite that there are still lots of examples of businesses getting shut out for one, sometimes arbitrary and opaque, reason or another. Or Apple simply likes the product and clones it (f.lux). We also have Twitter’s fickle relationship with developers. And various Google APIs that have been shut down (Reader, etc.), priced up (Maps) or modified their TOS to curtail use cases businesses had built on because it impinged on Google’s own interests. This list is nowhere near exhaustive.
What if OpenAI’s business model changes (again) and your business is no longer important to theirs? If it ever was in the first place...
Is this even mentioned in the article? I was reading and reading and thinking "this is all moot, you can't do anything with GPT-3 because you can't run it yourself". Seeing how much money this level of AI is worth, I think OpenAI's incentives are skewed at best at this point.
Author here, I discuss aspects of this in the "Economies of Scale" section - if you're existentially dependent on a supplier, that supplier has leverage to squeeze you hard.
The counterforce will be the development of similar algos by AWS, Google, and others, plus OpenAI's own balancing of volume vs. price. OpenAI has a monopoly now, but it won't in a year or two; I'm sure they're planning for this.
I suspect that 6 months from now the situation will be different, and technology will have moved on. The potential for these very large models is just too tempting, and I can see a lot of different organisations piling on the bandwagon.
> This seems obvious but the reason has nothing to do with GPT-3. It’s just a bad idea to build a business around someone else’s API
The entire argument in the article is the “it’s bad to build a business around someone else’s API” argument, just directed at a particular surge of hype around the GPT-3 API, and taking (perhaps only for the sake of argument) the basic premises of the hype about the utility of the API at face value, before factoring in business concerns.
The author's main points are all centered around the assumption that GPT-3 related technologies are ready to create market-disrupting products, and that the key business challenges remaining are mostly around figuring out how to build a defensible moat.
However, that assumption is dead wrong.
We don't know how well GPT-3 "works out of the box" (hence OpenAI's API release), and it's stupendously premature to assume that it is ready for products.
We don't have a good understanding of what inputs the model handles well vs. doesn't (a key pre-requisite for building a safe product), and it has been shown to regurgitate authoritative but untrue statements.
[EDIT]: by construction, it also cannot store information provided by the user for later use, nor can it "look up" facts.
I agree with the high level statement ("starting a business around gpt-3 is a bad idea") but almost none of the generic business advice, because it's founded on an assumption of technical capability that simply isn't there yet. The article talks about GPT-3 in an abstract "super powerful AI" sense, which leads me to suspect that the author doesn't really understand the technical limitations of GPT-3 beyond the demos that have been shown.
Author here. I agree it's early and we'll have to wait until real products hit the ground to see the full extent of its capabilities and limitations.
I was a skeptic of the performance too ("surely all these demos are cherrypicked") but having played with the beta for a dozen hours and gotten better at prompts, the performance is real, and it's good enough to build the mentioned products around that people are willing to pay money for.
Will these be good, airtight products at the same level as human-designed performance? No. But we're not talking about "self-driving car needs six 9's of reliability to meet regulations or face a PR nightmare" performance.
We're talking about "is this AI therapy bot fun to chat to and better than paying $200 per hour for a therapist?" performance.
We're talking about "is it cheaper to pay a writer $200 per news article or $0.05 for a good enough article?"
Fair enough, thanks for your reply and clarification. I think "PR nightmare" scenario might be more likely than you suspect.
1. What if an AI therapy bot tells a depressed person to kill themselves? How do you get a language model to obey confidentiality rules? How do you prevent it from memorizing and regurgitating someone's mental health conversation to another patient?
2. I think the implications of replacing journalists with far cheaper automated systems (with substantially less fact-checking capability) have not been well thought out, and I worry that some VC bro is going to rush a product to market before policy makers / stakeholders have thought carefully about whether this is something that we want.
It's telling that GPT-3's best writing successes have been of the "philosophical musing" variety, not of writing accurate articles. I'm not sure whether that says more about AI or Philosophy.
I agree, any morally conscious founder should build in extra safeguards to stifle the bad edge cases and launch only after pretty thorough testing. Even an immoral founder who wants to avoid bad PR would do so.
Ideally all creators would be as thoughtful and careful as you. But we all know that 1) plenty of builders will build and launch regardless of how ready the app is, and 2) users will happily use whatever's engaging, convenient, and low-cost, while ignoring problems with privacy, security, and whether the product is a net negative for a % of users (see TikTok, Twitter, Whisper).
If the technology is here, then the products will exist, and regulation isn't going to come in time to stop it.
I feel that overestimation of fundamental capability is actually the cause of many morality/safety issues, or at least a degradation of user experience (think automated voice menus).
The big difference here is that TikTok, Twitter, Whisper actually have working technology, in spite of ethical concerns. What I am saying is that the people who want to use GPT-3 for business use case X, Y, and Z probably have not thought deeply enough about the limitations/implications of large language model methodology on specific nuances of X, Y, Z tasks.
Have you considered the fact that GPT-3 can't actually look up any information? Consider the implications of that before suggesting that GPT-3 could be used for therapy.
While GPT-3 cannot lookup information, a service using GPT-3 could. For instance one could include the past dialog/facts cleverly presented in the context window.
How well this would work in practice is up for debate.
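For instance, a minimal sketch of that approach, assuming a rolling dialog history and a rough characters-per-token heuristic (both are assumptions for illustration, not OpenAI specifics):

```python
MAX_TOKENS = 2048     # GPT-3's context window (prompt and completion share it)
CHARS_PER_TOKEN = 4   # crude budgeting heuristic, not an exact BPE count

def build_prompt(facts, history, user_message, reserve_for_reply=256):
    """Assemble a prompt from stored facts plus as much recent dialog
    as fits, preferring the newest turns."""
    budget = (MAX_TOKENS - reserve_for_reply) * CHARS_PER_TOKEN
    header = "Known facts about the user:\n" + "\n".join(f"- {f}" for f in facts)
    tail = f"User: {user_message}\nAssistant:"
    used = len(header) + len(tail) + 2
    kept = []
    for turn in reversed(history):          # walk the history newest-first
        if used + len(turn) + 1 > budget:
            break
        used += len(turn) + 1
        kept.append(turn)
    return "\n".join([header, *reversed(kept), tail])
```

The service, not the model, is what remembers: facts and dialog live in the service's own storage and get replayed into every prompt, which is why older turns silently fall off once the window fills.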
> it's stupendously premature to assume that it is ready for products
Have you followed product development in recent decades? The iPhone wasn't ready for prime time either, but people still bought it and were very happy.
A lot of products nowadays are kind of MVPs or betas, and people are willing to accept their flaws and rough edges just to be early adopters of technologies that remind them of beloved sci-fi movies.
> by construction, it also cannot store information provided by the user for later use, nor can it "look up" facts
Other tools can do that. You can connect them just like you do with micro services or modular systems - what is the issue about this?
The statement "starting a business ... is a bad idea" hasn't been proven, and no examples are offered to strengthen the position. Until someone provides them, it's just speculation.
> You can connect them just like you do with micro services or modular systems - what is the issue about this?
Clearly I'm missing something here. Can you walk me through how exactly a micro service would enable GPT-3 to look up facts, and incorporate that knowledge into a conversation? What would the microservice API look like? How are the outputs consumed?
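One plausible shape, purely as a sketch: a separate retrieval service fetches relevant facts, and the caller splices them into the prompt before the model ever sees the question. Both `search_facts` and `complete` here are hypothetical stand-ins, not real APIs:

```python
def search_facts(query, knowledge_base):
    """Toy retrieval service: return stored sentences that share words
    with the query. A real service would use a proper search index."""
    terms = set(query.lower().split())
    return [s for s in knowledge_base if terms & set(s.lower().split())]

def answer_with_lookup(question, knowledge_base, complete):
    """Fetch facts first, then hand question plus facts to the language
    model as a single prompt. `complete` stands in for the GPT-3 call."""
    facts = search_facts(question, knowledge_base)
    prompt = ("Answer using only these facts:\n"
              + "\n".join(f"- {f}" for f in facts)
              + f"\nQ: {question}\nA:")
    return complete(prompt)
```

The model still isn't "looking anything up"; the service does the lookup and the model only ever conditions on whatever text was stuffed into its window, so the outputs are consumed as plain completions like any other.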
- If your whole app is a wrapper around the GPT-3 API then yes, you don't have much of a moat. But GPT-3 isn't some kind of perfect oracle, so you have lots of engineering challenges in the not-GPT-3 parts of your system where you can build your moat.
- GPT-4 does not render your current efforts moot. It takes time to build business/technical expertise in a domain. That's like saying it was pointless to build a web company in the 90s because web technologies/computing power was increasing so fast. You work with the technology you have, and then when it's time you upgrade.
- I disagree with the idea that you'll have hundreds of competitors. We're just starting to figure out what we can do with this technology. The space is wide open for someone to figure out a novel application.
AI dungeon was mentioned in another comment and serves as a good counter example. Where are all the AI dungeon competitors? How easy would it be to build a competing product to AI dungeon if you wanted to? And AI dungeon started with GPT-2, but incorporated GPT-3 into their product when it came out.
To summarize, starting a business around GPT-3 is a great idea. Stay realistic, be cognizant of GPT-3's limitations, but do realize that there is a genuine opportunity here.
There may be a lot of room for first-mover type projects. There probably isn't room in the market for a lot of AI Dungeons, and in that case, having the first well-known GPT-for-X could be very valuable.
Having said that, I am also in the nervous-of-building-a-business-around-someone-elses-api camp. It's just that sometimes things don't work out in the straightforward way described - the fact that anyone could make a Million Dollar Homepage or a Facebook clone (or my personal bugbear, eBay) very cheaply doesn't necessarily cause much competition for the first one that got popular.
AI Dungeon is very cool, but it's not clear that it has had enough success to inspire any competitors, so I don't think it's a very good counter example.
Agree - if it becomes known AI Dungeon 3 makes $500,000 in profit a year, expect plenty of competitors to pop up and conduct a pricing war down to the bottom.
I think businesses that benefit from large amounts of auto-generated content (namely video games) will also benefit from GPT-3. Some of what breaks the immersion in an open world game like GTA is repeated conversations among NPCs, so automated scripts and improvements to text-to-speech synthesis would massively improve that experience.
I could also see fiction writers using such tools to get around writer's block - feed previous works or a novel-in-progress as training data and let the AI generate the next paragraph.
Additionally, many forms of media and entertainment tend to be one-off affairs (essentially rapidly scaled up / quickly spun down businesses that benefit from quality off-the-shelf tooling).
I like the idea of using generated dialog for NPCs in those kinds of circumstances where dynamic dialog makes sense.
As an author I can assure you that writers block is a different animal. Being blocked on fiction happens because you don't know what comes next, and no amount of generative help will clarify a story.
Novels aren't written one paragraph at a time, moving inexorably forward. Generating another sentence to get over the hump would only push a lost novice even farther into the wilderness.
I do believe you could replace 95% of the comedy accounts on Twitter though.
> I do believe you could replace 95% of the comedy accounts on Twitter though.
I have yet to see GPT-3 write something that's funny because it's clever and not because it's absurd or ridiculous. But maybe that's 95% of Twitter's comedy (I don't read Twitter).
I don't really understand your point; you don't have to use it to generate the next sentence or paragraph of your story. You can use it directly to generate ideas for what comes next by asking it to complete a summarization of the plot.
I understand the thought, and if you are unfamiliar with the process of writing a novel it makes some intuitive sense.
If that method were effective, we would not have had to wait for a machine learning algo to make use of it. There has rarely been a time in our literate world when encyclopedic catalogues of plots and plot devices haven't been available. If one could throw a dart at Polti's 36 dramatic situations to understand their story better, or to write their way out of a jam, then I'd be more inclined to believe that you could use GPT-3 to muddle your way through a draft. This is not the case, however.
// Edit, addendum:
If I had GPT-3 generate a synopsis for me, based on a corpus of my work (let's say) I would have before me a framework that loosely adhered to my conventions and internal logic, however it would still be deep in that uncanny valley as any longer story from GPT-3 ends up being. The bulk of the work would be in reconceptualizing the generated synopsis into something that contained real, cohesive themes and character development. The project itself would likely be as much work as writing from scratch, but would also be an art project of sorts.
Novels are far more complex than most people assume. If you compare to movies or television, you have to take direction, cinematography, and production into account, rather than just the screenplay.
Maybe that sort of thing can be delegated to a GPT like algorithm, maybe GPT-4 will obsolete the novelist and the auteur, but I kinda doubt it.
If that's the case, there's an obvious moat (perhaps not an incredibly deep one) in being better at prompt engineering than your competitors, dedicating R&D effort to discovering new prompt engineering tricks/principles, etc.
I could see this as being kind of like an advanced form of SEO.
I don't think it's going to be about a single prompt; reverse engineering multiple prompts interacting with themselves is hard. There's a lot of cool things to be done with:
(a) creating a pipeline of prompts that combine outputs of previous prompts into new prompts in a predefined manner
and (b) designing prompts to generate other prompts
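A minimal sketch of (a): each stage is a prompt template whose `{x}` slot is filled with the previous stage's completion. `complete` stands in for the language-model API call and is an assumption here:

```python
def run_pipeline(stages, seed, complete):
    """Feed `seed` through a chain of prompt templates, passing each
    stage's completion into the next template's {x} slot."""
    text = seed
    for template in stages:
        text = complete(template.format(x=text))
    return text
```

Idea (b) fits the same shape: one of the stages can itself be a template that asks the model to write the prompt used by the next stage, which is part of why reverse engineering a multi-prompt product from the outside is hard.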
With the right type of online learning and possibly some of the weights frozen, GPT-3 could gain an unlimited memory instead of the fixed 2048 token memory.
True, but unless there is a clear leader in your market, lots of good-enough products built on GPT-3 will appear, and to compete with them you will need a product that's at least 5-10x better; 2x won't suffice. So it will probably come down to who has the bigger marketing budget.
Author here. I think about it this way - some tools are more like programming languages, and some are like can openers. Programming languages have a high skill ceiling and a big variation in performance - a great programmer is 10x more effective than a mediocre programmer.
In contrast, can openers have a low skill ceiling and low variation in performance.
iPhone apps I'd argue were more like a programming language. The App Store was more of a platform everyone had to be on to play, rather than the core technology that drove every app's value prop. The best app developers still had a big advantage over the median app developer - their products were more performant, easier to use. e.g. Candy Crush was 5x more fun and addictive than the next match-3 game.
The question I ask is, can a great GPT-3 developer have a 10x advantage over the median GPT-3 developer? Or does everyone's performance taper off quickly because of how powerful it already is, and how little any user can tweak under the hood?
--
I agree it's still early and there's room for exceptions:
-there will be brand new markets with applications that no one has thought of yet. My applications listed here are 'obvious' in the same way that 1995 observers of the Internet had 'obvious' ideas - namely, wrong and not imaginative enough.
-even if you have to compete with incumbents, there will still be big winners. Even among all the meal kit companies, there was still one big winner - Hello Fresh. I just think it's going to be hyper-competitive, and founders will find it's less about technology and product (what they set out to focus on) and more about marketing/distribution.
It will never be a key differentiator for your company, since someone else can add it very easily to their product. There is also no network effect moat (i.e. Facebook), or data collection moat (i.e. Google).
Sure, you can build a little bit of UI around it, and maybe you have some prompts which you've refined, but that's about the only advantage you'll have over anyone else.
At the end of the day, I suspect OpenAI will be the only big winner in the GPT-3 space.
GPT-3 is just the enabling technology, what you do with the tool is where a company differentiates itself.
The key argument the article and your comment are making is that GPT-3 is the moat. There’s so much more to a company than just a piece of tech implemented within it.
Network effect and data collection are growth loops that exist separately from the technology and can be effectively layered into a company regardless of the underlying tech.
The clear winner is still OpenAI. And who's to say established companies can't do the exact same thing with the OpenAI GPT-3 API?
I can see Google doing this and they also have many options here to remain competitive. They too can access this API, Google DeepMind can create another AI as a Service service similar to OpenAI or directly clone your idea as a feature in another product.
OpenAI has now become the AWS of AI services. I won't be surprised to see DeepMind thinking of doing the same thing.
The examples I have seen tease but are not fully clear on how far GPT-3 can go on tasks that are not in principle text generation tasks.
For example I’ve seen the translation to HTML code demo. Of course, LSTMs already generate quasi compilable code. But the promise seems far better here. Countless “AI” tasks can be conceptualized as entering prompts and receiving code — playing chess, finding logical implications (maybe from tabular data like in Formal Concept Analysis), detecting outliers in columnar/matrix data. How much does GPT do? How much turf beyond chatbots and automatic journalism does it cover?
> How much does GPT do? How much turf beyond chatbots and automatic journalism does it cover?
The real answer to this is that we don't really know just yet. People are still finding ways to represent problems as text completions and feeding them to GPT-3 and seeing what comes out. However there is a hard limit for GPT-3 specifically, and that's its context window. IIRC it can only be prompted with & generate 2048 "BPEs" in total (smaller than a word, but bigger than one character). So in your prompt you could give it a handful of tables, some with outliers, some without, and some metrics after each table concerning outliers. Then the last part of your prompt is a table you'd like the metrics for and let GPT-3 fill it in. Does this work? The answer is a strong maybe, lol. But you're so limited in space that for some use-cases it's more likely you'd need to wait for later iterations of this approach that raise or remove the length limitation.
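As a sketch of that few-shot setup, with the caveat that the 4-characters-per-BPE figure is a rough rule of thumb rather than a real tokenizer count:

```python
CONTEXT_BPES = 2048   # GPT-3's total window for prompt + completion
CHARS_PER_BPE = 4     # crude estimate; real BPE counts vary by text

def build_fewshot_prompt(examples, query_table):
    """examples: list of (table_text, metrics_text) pairs shown to the
    model; the final table's metrics line is left for it to complete."""
    parts = [f"{table}\nOutlier metrics: {metrics}"
             for table, metrics in examples]
    parts.append(f"{query_table}\nOutlier metrics:")
    prompt = "\n\n".join(parts)
    est_bpes = len(prompt) // CHARS_PER_BPE
    if est_bpes >= CONTEXT_BPES:
        raise ValueError(f"~{est_bpes} BPEs won't fit; drop some examples")
    return prompt
```

The budget check is the whole point: every worked example you add to teach the task eats space you can't spend on the actual query, which is why the window limit bites so quickly for tabular data.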
Agreed - companies can compete by adding on services and focus on verticals. It might be a different type of company than the founder originally envisioned, though.
This article assumes the cost of training GPT-3 will remain high. I think it will depend on how quickly we can make these models more efficient vs how much bigger we should make them to keep improving the quality of output. At some point, the quality will plateau, and the engineering improvements should catch up. For example, the 2014 VGG model has 160M parameters, and achieves the same Imagenet accuracy as a 16M parameter 2020 model (or maybe even 1.6M parameter one, I haven't checked the state of the art in efficiency). The algorithmic improvements will be combined with hardware improvements. Once the cost of training falls below some threshold, people will apply this technology to various domains, and entirely new applications will appear, creating a lot of room for new startups.
I think that GPT-3 and similar "AI" technologies will help companies provide user-specific customization that previously would have required uneconomical software development. It's giving us the ability to put another layer of polish on existing products.
Personally, I find predictability to be the most important attribute of good software. I want to know that if I do an action, I can expect a specific, repeatable result. I find that any time AI technologies are involved, predictability decreases (for example any voice controlled system, or Google's increasingly fuzzy search suggestions). Therefore I don't understand how AI can add "polish" to a product. New features? Sure. But polish?
Agreed that AI will probably be frustrating if it's used in a way that makes "queries" unpredictable.
On the other hand, suppose we use it to make stable changes to a personal version of a product. I'd like to ask the "AI" to write the SQL query that answers a question within the context of the product, and save the query once we get it right. Now I have customized my product with a new query without hiring a developer or learning SQL myself. And I can reuse this saved query for predictable results. The story probably gets more interesting with more components: reports, screens, storage, etc.
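A sketch of that generate-once, review, then reuse loop. The candidate SQL would come from a model call in practice; everything here is a hypothetical shape, not a real product API:

```python
import sqlite3

class SavedQueries:
    """Candidate queries (e.g. model-generated) are tried once so the
    user can eyeball the result; only confirmed SQL text is saved, and
    reruns of saved queries are fully deterministic."""

    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.saved = {}

    def try_query(self, sql):
        """Run a candidate query so its output can be reviewed."""
        return self.conn.execute(sql).fetchall()

    def save(self, name, sql):
        """Store the exact SQL text once the user confirms it."""
        self.saved[name] = sql

    def run_saved(self, name):
        """Replay a confirmed query; no model involved anymore."""
        return self.conn.execute(self.saved[name]).fetchall()
```

The unpredictability is confined to the one-time authoring step; day-to-day use only ever touches the frozen SQL, which is what makes the result predictable in the sense the parent comment wants.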
How do you know the SQL query is right without knowing SQL? Unless you have all possible inputs and outputs (and at that point there are definitely more reliable bits of tech) you won't be able to tell.
One thing that I think GPT-3 does show is that NLP technology has now reached a level of sophistication where it becomes possible to use it in a wide range of new and different applications.
Exactly. The real innovation here is NLP, not GPT. It will take time for the technology to become democratized, but it no doubt will. GPT will become one expensive API compared to the dozens of smaller, cheaper, and more focused APIs.
"GPT-3 looks more like a sustaining innovation than a disruptive innovation"
Definitely see GPT-3 more as a utility to augment existing functions within a product than as something that stands on its own. One immediate use case I was thinking about was a way to auto-generate decent meta descriptions for different pages on a site
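That use case could be sketched like this, with `complete` standing in for the GPT-3 call; the 155-character cap is a common SEO rule of thumb, not anything the API requires:

```python
def meta_description(page_title, page_text, complete, max_len=155):
    """Ask the model for a one-sentence description of a page, then
    trim it to a typical search-snippet length."""
    prompt = (f"Page title: {page_title}\n"
              f"Page content: {page_text[:1500]}\n"   # keep prompt small
              "One-sentence meta description:")
    desc = complete(prompt).strip()
    if len(desc) <= max_len:
        return desc
    return desc[:max_len - 1].rstrip() + "…"
```

This is exactly the "utility" framing: the model output is post-processed and slotted into an existing pipeline rather than being the product itself.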
Perhaps the big winners from GPT-3 will be patent trolls? Easier than ever to mock an idea up and use the implementation to do a land grab in the form of a patent. [Not saying I advocate this strategy, am just wondering if the low barrier to entry will enable it]
The article isn't wrong, but it also depends on how much (time, money) you invest in your business.
For example, creating AI Dungeon (first with GPT-2, now with 3) will probably be profitable for its founder, one way or another. So that likely was effort well spent.
I agree, AI Dungeon is a fantastic idea and right now has first mover advantage. Once it's clear it's a viable business, AI Dungeon will spawn a lot of competitors. Outside of GPT-3, their proprietary tech doesn't seem like anything fancy. From here, there are a few main outcomes:
-AI Dungeon has first mover advantage in branding and reputation, and new customers stay loyal to it. Its competitors end up seizing just a small % of market share in the niche automated AI text game market.
-five viable competitors to AI Dungeon pop up. They all compete with each other with very similar products, and it becomes a pricing war, sapping earnings from everyone.
-API pricing will kill the economics of AI Dungeon and similar projects. This is what happened to Geoguessr and related games, which relied on Google Maps API for years until the pricing became prohibitive.
I can see one avenue in which GPT-3 might be quite disruptive, and that is in reinventing our software development processes. For example, if it can be purposed into something that reliably converts source code between programming languages, then there is no longer any moat in library code and bindings beyond the prompt development; PL development will accordingly accelerate. And the same goes for tasks like developing tests, static checks, optimizations, user interfaces, import/export and so forth.
In this scenario, software itself commoditizes to a greater degree. Perhaps not to the point where the AI is a Star Trek computer, but transitionally towards that. And that means that software businesses increasingly become commission shops that pump out AI-built programs on demand, while software services built on platform lock-in get threatened by cheap data export and format conversion tools.
That's beside the point of the article. Whether or not GPT-3 can produce working software at a useful scale (beyond a small amount of markup) doesn't have anything to do with whether one can build a successful business wrapping OpenAI's API.
That's like saying building a video startup is a bad idea because everybody else, including MS, Google, FB, etc., already has the same tech. You just need a polished product that's appealing.
Interesting analogy, though not perfect. The key question is to what % of the user experience the core technology provides.
For software, the programming language plays a small %, from the user's perspective - it's what is done with it that matters. Thus, "building a Python startup is a bad idea because everyone else has Python" doesn't make sense.
For online mattress companies, the mattress is really 95% of the experience (with minor points for delivery and customer support). Thus, "building an online mattress company is a bad idea because everyone sells the same mattresses and no one has a product edge" does make sense.
Video startups are more like programming languages, IMO. The key of the user experience is less the video technology and more what videos are actually accessible. Here, the network effects of user-generated content (Youtube, Tiktok) or proprietary videos (Netflix) are the real secret sauce.
For GPT-3 startups, the question is whether GPT-3 forms the vast majority of the value or just a small % of it. The lower the % it takes up in your product, the more likely you can build a competitive advantage in technology.
It is all pattern matching and no conceptual understanding. Therefore, not very useful for business other than to maybe trick people into thinking it might understand something.
"A team of more than 30 OpenAI researchers have released a paper about GPT-3, a language model capable of achieving state-of-the-art results on a set of benchmark and unique natural language processing tasks that range from language translation to generating news articles to answering SAT questions. GPT-3 has a whopping 175 billion parameters. By comparison, the largest version of GPT-2 was 1.5 billion parameters, and the largest Transformer-based language model in the world — introduced by Microsoft earlier this month — is 17 billion parameters. [...]"
It's not a bad idea; the timing just isn't right. I see GPT as being ~13-14 years ahead of its time. I'm pretty certain that within our lifetime we will see an explosion of such tools. Today's CSS and browser engines are significantly less intelligent than their future counterparts will be, and GPT-3 might just evolve into something substantial, hence the buzz.
I certainly would not want to bet on it, but adding the ability to perform computation and access storage could take 10 years.
Right now, GPT-3 "sees" 2048 tokens, does a forward pass, and outputs the next word. We would like it to be able to say "hold on while I think for a while," but how to do that with current gradient-based deep learning methods is unclear.