OpenAI’s API now available with no waitlist (openai.com)
322 points by todsacerdoti on Nov 18, 2021 | 136 comments



For anyone who hasn't seen it: the content filtering section of their docs, especially in regard to getting GPT-3 to behave in a customer service role, is hilarious

https://beta.openai.com/docs/engines/with-no-engineering-an-...


Customer: I need my internet. The technician is 2 hours late and I need a steady stream of cat gifs directly injected into my veins to stay alive.

Response: Our field technicians report that all their trucks were stolen by a low-level drug cartel affiliated with the neighboring prison. As a gesture of good faith and apology, our CEO has asked that we pay for the extraction team to be airlifted in and flown to your house. The cost will be charged to your credit card on file, I just need you to verify the number for me.


That's far from the most ridiculous excuse I've ever gotten from Tier 1 tech support.


> Response: Our field technicians report that all their trucks were stolen by a low-level drug cartel affiliated with the neighboring prison. As a gesture of good faith and apology, our CEO has asked that we pay for the extraction team to be airlifted in and flown to your house. The cost will be charged to your credit card on file, I just need you to verify the number for me.

Amazing!


It's really uncanny how readily the AI gives out unfounded promises, like claiming the internet will be fixed in 24 hours. I wonder whether there are any legal obligations connected to them.


Yeah, I've always been impressed with how well GPT-3 can give cogent responses, but I've never seen anyone show how to get it to give truthful, informative responses while behaving as a chatbot. Could you feed structured data into the prompt text, like average response rates in the customer's area, whether there's capacity to support them, or the state of the engineering teams?

Having never seen anyone try it, my gut says it will work reasonably well outside of the already-known failure modes (the tendency to loop, make up stories, or joke/cuss people out).


Yes, there is a line of research combining passage retrieval with question answering. The query is used to rank passages in a database. The top-k passages are concatenated to the question and used as input by GPT to generate an answer. This means you can keep the model fixed and update the text corpus. Also, you can separate linguistic knowledge from domain knowledge.
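A minimal sketch of the pattern, assuming the standard completions endpoint; the word-overlap ranker here is just a stand-in for a real retriever like BM25 or embedding search:

    import openai  # assumes your API key is configured

    def rank_passages(query, corpus, k=3):
        # Toy ranker: naive word overlap. A real system would use
        # BM25 or embedding similarity here.
        overlap = lambda p: len(set(query.lower().split()) & set(p.lower().split()))
        return sorted(corpus, key=overlap, reverse=True)[:k]

    def answer(query, corpus):
        # Concatenate the top-k passages with the question, as described above.
        context = "\n".join(rank_passages(query, corpus))
        prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
        resp = openai.Completion.create(
            engine="davinci", prompt=prompt, max_tokens=64, temperature=0
        )
        return resp.choices[0].text.strip()

Since the model stays fixed, updating the corpus is just editing the passage list.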

I think a new type of app is going to popularise this: a language model + a personal database + web search. It can be used to recall/summarise/search information, a general tool for research and cognitive tasks, a GPT-3/Evernote crossbreed.


That's an absolutely fascinating question. I'm curious about the human equivalent, as well. Say you're talking to a customer service rep for Comcast and they get confused and offer you $10/month cable for life, or maybe they accidentally tell you that you may keep your rental hardware when canceling. Is Comcast in any way bound by what their representatives tell you?


The armchair lawyer in me says probably, but you would need to sue to enforce the breach of contract, so not really. There have been similar cases with enough money at stake for it to get to court, such as the person demanding the fighter jet they were promised as a prize for buying enough Pepsi.


This is the same problem as with an employee promising something they're not supposed to promise.


Right, I've always wondered if this is binding or not. I usually record calls, especially these types of calls, for that reason.


Except customer support employees are often well trained on what they can say. E.g. in Australia, not saying ‘best’ in regard to loan products, and not giving financial advice. The first problem is easy to solve with generated text. The second is much trickier.


How is training employees different from training an AI intended to replace them?


I suppose they could just say it was the algorithm that made the false claim.


I challenge anybody to do "Show HN: I switched customer support _with SSH access_ to OpenAI for 1 month".


"Now [beep] your pants and [beep] over before I call all the customer up here on Skype for a group show of you enjoying my [beep] service."

What exactly is the threat here? Lmaoo


There's a lot of negativity in the comments here, and many of them have merit. However, the thing that is interesting to me about OpenAI, AI21, Cohere, and all the other LLM providers is that they are broadly useful, and often helpful. Perhaps they don't live up to the marketing hype, but they are still interesting.

For example, I used to have a biology blog, and I've been thinking of starting it back up again. I've been using OpenAI and Mantium (full disclosure, I work at Mantium) to generate the bones of a blog post so that I have something to start with. Coming up with ideas for my biology blog posts was almost 50% of the work.

If you're interested in judging the quality for yourself, I have a biology blog post generator here: https://f0c1c1e0-f6b6-46bc-81a1-eff096222913-i.share.mantium...

and a music blog post generator here: https://8aaf220e-4aff-4d4e-ae61-90f08011c9ac-i.share.mantium...

(they were both "created today" because I moved them from our staging environment)


AI text content generation is indeed a legit industry that's still in its nascent stages. It's why I myself have spent a lot of time working with it, and working on tools for fully custom text generation models (https://github.com/minimaxir/aitextgen).

However, there are tradeoffs currently. In the case of GPT-3, it's cost and risk of brushing against the Content Guidelines.

There's also the surprisingly underdiscussed risk of copyright in generated content. OpenAI won't enforce their own copyright, but it's possible for GPT-3 to output existing content verbatim, which is a massive legal liability. (It's half the reason I'm researching custom models fully trained on copyright-safe content.)


I would like to think the consumer would merit a thought, too.

Fiction might be one thing; if it is entertaining, that's enough. But if I'm reading something supposedly nonfiction that is generated by a machine, I want to know provenance.

In the alternative, it should have a human's name attached to say that they've verified it is correct information, and take the reputation hit if it isn't. Given the above discussion of copyright, it seems reasonable enough - if you want to profit from AI output, you should stand behind it.


On the topic of copyright, it has a ton of stuff memorized. The first thing that came to mind when I checked this many months ago was the first chapter of Harry Potter, which it knows verbatim. For whatever that's worth.
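If anyone wants to reproduce that kind of check, a rough sketch against the completions endpoint; temperature 0 makes verbatim recall easiest to spot, and the prompt is just the book's famous opening line:

    import openai  # assumes your API key is configured

    resp = openai.Completion.create(
        engine="davinci",
        prompt="Mr. and Mrs. Dursley, of number four, Privet Drive,",
        max_tokens=60,
        temperature=0,  # greedy decoding: most-likely continuation
    )
    print(resp.choices[0].text)  # compare against the actual first page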


UPDATE: These got a fair amount of traction, and I removed them out of an abundance of caution around deployment regulations that OpenAI enforces. Also cost considerations. I don't want to hijack the thread away from OpenAI, but you can also build stuff with Cohere and AI21 on Mantium, AI21's J1-Jumbo has pretty good performance, and Cohere just put out some significant updates for their models.

UPDATE 2: I couldn't help myself. I think this stuff is pretty fun. So here's a biology blog post generator using 2 chained Cohere prompts :) https://11292388-8f03-42d2-8a68-7039b24fcc2e-i.share.mantium...


> However, the thing that is interesting to me about OpenAI, AI21, Cohere, and all the other LLM providers is that they are broadly useful, and often helpful.

The thing I find interesting about GPT-3 and company is that they do say things that are "often helpful" but that "often" doesn't necessarily translate to "broadly useful".

For example, lately I've been doing some car repair - I'm a strict amateur. Suppose I asked GPT-3 how to do X. If there's a 75% chance it gives the right answer and a 25% chance it gives an answer that could damage my car or injure me, I'd say the thing is 0% useful, despite being quite impressive.


I like your use case for your blog. I dabbled with something similar last month: I have a sci-fi mini-book that I have been slowly writing for a long time. I tried using their text completion API on long snippets of my own prose, and I got some interesting ideas. If I end up using the ideas and generated text, I have no idea how to license the book; probably some form of Creative Commons license (I was the featured Creative Commoner about 20 years ago, BTW, and use one of their licenses for my AI-related books). To be fair, the copyright would belong to me and everyone who contributed any text on the web that the model was trained on. I wonder what Lawrence Lessig's take on this will be.

EDIT: I just asked Lessig on Twitter what his opinion is on this.


I just tried your biology blog post generator and the second paragraph of the generated text, also the second sentence, is "Transcription is the process of converting audio into text." Obviously, the generator is confusing audio transcription with biological transcription like DNA transcription. Is this a common occurrence? Or did I make some mistakes in using the generator? I just pressed the "Execute" button.


This is, in my opinion, one of the biggest challenges with generative models right now. I'm not sure if this is the industry-adopted term, but I call them hallucinations. This is why I don't just pipe it straight into my blog, but rather use it as inspiration for a blog post that I write myself. It is easier for me to edit and expand on something that is already written, though.


FYI, I get a 404 for both of them.


I use Mantium and have had a great experience so far generating company marketing material


Good to see Cohere.ai mentioned in your comment


It's very good that OpenAI is relenting and opening up the API; however, the Content Guidelines are still so onerous that even if you can think of a good use case, it will be a liability at best, even if your app gets approval.

At this point (1.5 years later), if you're looking to build a sustainable business on AI text generation, you may want to experiment with large-but-not-as-large models like GPT-J-6B; it'll be much cheaper in the long run too.
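A minimal sketch of getting started with GPT-J, assuming a Hugging Face transformers release recent enough to include the GPT-J classes and a GPU that can hold the fp16 weights (roughly 12 GB):

    import torch
    from transformers import AutoTokenizer, GPTJForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
    model = GPTJForCausalLM.from_pretrained(
        "EleutherAI/gpt-j-6B", torch_dtype=torch.float16
    ).to("cuda")

    inputs = tokenizer("The OpenAI API is", return_tensors="pt").to("cuda")
    out = model.generate(**inputs, max_length=60, do_sample=True, temperature=0.8)
    print(tokenizer.decode(out[0], skip_special_tokens=True))

You pay for the hardware instead of per token, which is where the long-run savings come from.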


Another alternative:

AI21 Studio (creators of Wordtune[0]) also recently released their GPT-3-like model, Jurassic-1, with 178B parameters and comparable results (they also have a smaller 7B-parameter model).

Here is the whitepaper[1] with comparative benchmarks on some tasks.

[0] : https://www.wordtune.com/

[1] : https://uploads-ssl.webflow.com/60fd4503684b466578c0d307/611...


> AI21 Studio

They fell into the common trap of "signed up, quite liked it but could never remember the name of it to find it again."

Does anyone else suffer from this? (and bookmarks don't help - I've got thousands of them)


Blurb from the bottom of the wordtune landing page: "Wordtune was built by AI21 Labs, founded in 2018 by AI luminaries." Even if this was Tesla's marketing department saying something like "founded by engineering luminaries", clearly referring to the engineering genius that is Elon, I'd be hugely turned off, and would seriously reconsider my view of the company.

But this is a company & product in the field of "AI", where there's so much bullshit floating around, unfortunately, so much hype and buzzword bingo, that writing in such a tone about yourself seems like it should clearly be an absolute no-go -- unless you're just riding the snake-oil wave, so to speak, whether in good faith or not.

Not implying anything about the company or product, of course, as I know nothing about them otherwise.

EDIT: Maybe to clarify the thought behind the above further: It seems that the "AI" industry has an integrity problem. Language like this extends the problem, rather than working towards fixing it.


Since Jurassic-1 is behind their API just like GPT-3, why do you think AI21 will not clamp down just as much as OA?


Yes. And building off of a closed-source API from an organization that has already flip-flopped from being a nonprofit to being a company to being part-owned by Microsoft seems like a bad idea. At least that's why I haven't used it in our business.


The content guidelines are so onerous I don’t even waste time imagining what I might do with the API.

OpenAI somehow managed to leech all the joy out of GPT-3 with their own overbearing self righteousness.

For an organization with so many RL engineers, they have a surprisingly poor understanding of the exploration/exploitation tradeoff.


As I suspect many of us frequently do, I read the comments (including yours) before the actual submission. I thought I would find myself agreeing with what you're saying, but it turns out that I must say that I really like what OpenAI is doing here with the Content Guidelines!

They seem to be doing the right thing, in trying to steer this powerful and highly likely to turn out very influential piece of technology into a positive and constructive direction of use.

Yes, you might just build something that will be found in violation of their (good!) intentions, and will have to engage in an (at least partially public) discussion of what we, as a society, deem acceptable in terms of automated generation of written content -- and that would be a good thing! It's definitely not the easiest path to making some $$$ off new and exciting technology, as lots of challenges like these and beyond are almost guaranteed to come up. But it seems not unreasonable to treat GPT-3 as something you can already start building businesses and products on, as long as you bring general awareness, sensitivity to the relevant topics, a willingness to engage in and maybe partially drive some of the conversations we need to have in this new field, and a general interest in R&D-style work along with the somewhat longer-term vision and resources it necessitates...


> They seem to be doing the right thing, in trying to steer this powerful and highly likely to turn out very influential piece of technology into a positive and constructive direction of use.

It's not going to affect society. It's little more than a Markov chain.

OpenAI doesn't need to do anything to steer it.

> Yes, you might just build something that will be found in violation of their (good!) intentions,

You're giving them way too much credit. I've seen them destroy someone's business after repeatedly saying that their business model was fine. It was an AI-assisted writing app. Then one day they decided, "Nope, you're not allowed to generate arbitrary amounts of text."

After that, I was no longer a fan.


Not on the side of OpenAI in the slightest, but you’re upset someone founded a business ostensibly reliant on an external business and their overall strategy — and then suffered the predictable outcome of that arrangement? This industry brought the faceless, API-driven, tickets-sans-phone-number business relationship into the world as the dominant model and then claims one side owes the other something in the same breath?

I’ve been growing more and more concerned about obvious entitlement slipping into folks’ ideas about business, particularly in this forum. One of the big ones is that an offeror of services is generally compelled to continue offering services because stopping hurts another business. There’s a word for that: business. You can’t have a detached customer relationship model and “don’t hurt your customers” in the same ideology. It’s incoherent. Pick one.

Here’s your algorithm to understand threats to an existential partnership of your business:

1) Are you lacking a contract? You suck.

2) Does your contract fail to compel the continued offering of services in a definitive way? You suck.

3) Does the same contract actually compel continuing service with no mitigating circumstances (like serving up hacked child porn, which is still your bad because they can justifiably say “secure your shit”)? They suck. Start with a demand letter and go from there.

That’s literally it. There is no 4. You’re favored to suck two-thirds of the times this happens to you. How you lost respect for OpenAI because a writing app single-homed their entire future and paid for it mystifies me, and I say that as someone who respects OpenAI very little. Just a remarkably stupid business model your friend executed, given the available competitors to their things. It’s literally common-sense risk analysis.

What did your friend tell their investors? Or is this a bedroom app where you’re no longer a fan because someone lost out on a couple bills ARR from a trivially resurrectable idea? I’m thinking the latter, and #1 above.

And don’t misunderstand me, I’m not advocating for the above: I’m explaining it. Key difference. You might find I agree with your overall point in terms of progressive business, but consider it naïve to not look at it the same way today.


Why did they even choose the name "OpenAI" if they didn't want to make openness part of their mission?


To sucker people into thinking that they were, or were going to. Isn't it obvious?


it's like "light yogurt" where "light" can refer to the colour


Or Full Self Driving(tm) where “full” can be read as “fool”


Or "wheat bread" (as opposed to "whole wheat bread"). Essentially all bread is made of wheat.


I remember someone involved saying they regret it. It's been six years. They evolved their understanding of the safety vs. openness tradeoff.


They evolved their understanding of the profit vs openness tradeoff.


It looks like they did open it up.


I think they'd better change the name. As it is, it looks like lying.


I'm confused. The Content Guidelines (in my skimming) reveal only 9 prohibited categories: Hate, Harassment, Violence, Self-Harm, Adult, Political, Spam, Deception and Malware. Am I missing something?


Yes, but those are open to very broad and potentially inconsistent interpretations.


Can you give an example of how that works? I get that there could be gray areas at the border (esp. with the "adult" content), but they seem tightly constrained to me.


Then again, there's using it to explore something from an independent research standpoint that's already irrational by finding the edges of unexplored rational thought within the conversation. That could be used, for example, to figure out an approach to something which is typically rejected by convention, but not by proof.


If you make APIs like this integral to your business, how do you manage the risk of the API suddenly not being available one day?

As an example, at work we had integrated with a service to provide functionality a lot of our customers relied heavily on. One day the company behind the service got bought and the new owners stopped offering it as a service, using it only in-house instead.

Replacements were not as good and all had very different APIs, so a simple switch was out of the question. It's been over a year and we're still working on a good replacement.

For me, I tend to fall back on self-hosting as much of my critical infrastructure as I can, but obviously that's not an option for something like OpenAI here.


> For me I tend to fall down on self-hosting as much as I can of critical infrastructure

IMO actually self-hosting isn't as important as using technology that is open-source with the option to self-host.


Sorry, yes that's what I had in mind. Thank you for clarifying.


Simple, just train your own GPT-3! How much could it cost, 10 dollars?


It's a problem, and the problem sounds like a good reason not to use this technology at all. Not for anything critical.


This rules out most fan-fiction:

"Content meant to arouse sexual excitement, such as the description of sexual activity"

I can't see the justification for banning this. Every other category makes sense.



They don't. Not anymore anyway. Any sexual content gets filtered and sent to AI Dungeon's own model.


Could be the liability of inadvertently generating descriptions of illegal acts (child abuse etc.)


No. What liability? It isn't illegal to generate descriptions of illegal acts.

1. If that were the concern, they could just make "illegal acts" the rule.

2. And again: it isn't illegal to generate descriptions of illegal activities.


There are liabilities other than overtly breaking laws.


That's my guess. The prompt "He took off her clothes and" triggered a story about rape for me.


Aside from Copilot, does anyone know of any other products that are making use of GPT-3?

The hype was huge when it was released, and the early beta testers were showing some amazing (and cherry picked) demos, most famously the ability to write working React code. But since then, I've not seen much...


I have a feeling it is being used to produce more nonsensical web pages. Often when I am searching the web for information on a product or a review, I land on a page that has weirdly phrased and often repetitive sentences which provide no useful information. I am assuming those pages are generated by OpenAI or similar technology.


The singularity will come when the set of training data available for scrape is dominated by AI generated content and the AI's learnings are derivatives of what old AI's produced.

Human thought, on the other hand, has some sort of undefinable entropic value that AI to date is missing. A human can produce a "good idea", whereas an AI produces a bunch of potential continuations of a string of text and selects randomly amongst them (or, even better, a human selects from them).

Unfortunately the advertising game mixes up the incentives and flips the equation so that the purpose of communication isn't to share a "good idea" as efficiently as possible, but rather to keep eyeballs on your website for as long as possible in the hope some flashy banner ad will distract your user and you'll get your $0.02 for them abandoning your page, likely unfinished. AI will (and already does) excel at this sort of task, but it's the kind of task that ought to have no value whatsoever.

Luckily we have increasingly sophisticated summarization-AI to go from the filler-AI generated crap back down to a couple of bullet points, but at that point you've invested millions of dollars, researcher-hours, engineer-hours, compute-hours, etc, to make the worst text-compression utility of all time.


By the way, it's not just text anymore; you can train GPT-3 on YouTube. You get aligned visual and speech channels, and data in vast quantities. That means it's easier to tell if the source is human.


Bingo

It’s increasingly difficult to find product reviews with search engines.

Massive auto generated content farms take a product name and add loads of AI-generated filler text. Pop in a bunch of banner ads and an affiliate link and they have huge economic incentive to scale these operations.

I’m very pessimistic about the direction the internet is going these days. The AI crisis isn’t going to be sentient AI trying to kill us, it’s going to be a flood of noise over knowledge.


Sometimes it works to add "reddit" to your search to find interesting comments. I suppose that will eventually be gamed too.


Until we have to start making AIs to identify knowledge and filter out noise. And then a whole cat-and-mouse game between fake news AI and fake news detection AI.


This is the exact situation we are currently in.

https://rowanzellers.com/grover/


The end result is more robust AI and an understanding of failure modes.


I recall in the mid/late 2000's implementing a markov text generator to create thousands of static html pages based on certain keywords. This has been a problem for over a decade and will probably get worse as text generation tools improve, e.g. GPT-3.


I've got some insight into this (several friends, now multi-millionaires, ran and flipped tens of sites like this). They're mostly written by low-paid content writers in third-world countries such as Vietnam and the Philippines, primarily to drive affiliate traffic to Amazon and other retailers with affiliate schemes. They all operate on a similar format: 10 items with good reviews, write 300 words about each product, rinse, repeat, profit.


What a world where people can become multi-millionaires in this way while nurses and teachers can't even get cost of living adjustments.


Right now most of those pages are produced by much simpler models, which copy an existing page and replace text snippets with synonyms. I am sure that soon the spammers will switch to better models.


Had a similar experience recently, and it had made it all the way to the top Google News hit - apparently the site is cranking out "news" as SEO spam to promote their app.

https://mobile.twitter.com/moo9000/status/145873329934659174...


Been playing with Copilot lately, and even it seems more annoying than helpful so far. Will continue experimenting, but so far my impression has soured a bit.


I tried Copilot and removed it from my IDE. Not good enough: it only fills in obvious parts, and even there it makes mistakes.


I often wonder about this with Twitter accounts. How many are already GPT-3 generated?

We'll need another GPT-3 bot to detect the GPT-3 bots.


I integrated Copilot with VSCode (it's pretty easy to get off the waitlist, I believe) and have been using it to unblock me from my ADHD when I'm writing code.

Basically, as I think through a bugfix in our app's codebase, I navigate to the line where I believe "the fix should go here", and a few characters in, Copilot is filling up the lines. 80% of the time it is non-compilable, but nearly 50% of the time it's close to the fix I was going to put in. It's then just a matter of me fixing much simpler errors and bugs in the Copilot-suggested LOC.

I have found that I get far less distracted from writing bugfixes once I start looking at the code. I'm not going to let copilot push commits to PROD anytime soon, but it's like having a really smart intern who doesn't really know exactly what I'm trying to solve but has a decent idea, pair programming with me.

So it's not like these AI tools will replace me yet, but they are certainly living up to the goal of "copilot".


People are afraid of being replaced, when what we actually should be afraid of is being devalued.


There are subreddits with model-generated porn stories. All the horny people writing for each other are getting automated. In the example I saw some people had sex and then took their clothes off at the end. It's groundbreaking stuff.


That was literally what like 90% of AI Dungeon (GPT-3-based CYOA adventure simulator) players were using it for. Then OpenAI forced AI Dungeon to implement strict content filters, and within a month the community had already stood up a fully functional replacement fine-tuned on literotica with 10 times the features and a focus on privacy and zero content restrictions. The community replacement was partially bankrolled by the sale of AI-generated anime catgirl image NFTs.

That's barely scratching the surface of the AI-generated erotica scene, it's pretty wild.


NovelAI, I presume?

Funnily enough, so long as you aren't trying for porn it does a much better job than AI Dungeon did of staying away from it. It lives up to its name — it's an excellent cowriter for all forms of fiction.


Yup, that's the one. And I agree, it's really good for general use, and if you know how to use the tools right it's arguably better than AI dungeon.


Okay, this is legit the funniest thing I've read today on the Internet.


> In the example I saw some people had sex and then took their clothes off at the end.

I mean - that is technically feasible.


The terms and conditions prevent you from making anything good with it. All of my ideas were banned because they're too unethical or just recapitulate the functionality of the sandbox. Some key partners like Microsoft have a separate agreement where they're allowed to make useful things.


You might want to peek at NovelAI.net instead.

It's the exact opposite, in just about every possible way. Including, I'm afraid, model generality -- it's tuned for fiction, and nothing else, but it's very good at that.


Copy.ai uses GPT-3 under the hood. Not a product for devs, but still a growing business.


There are quite a few similar products that used the OpenAI API as well.


There are plenty of services in the "automate writing ads and blog spam" space. Not making the world better in any way.


We've managed to integrate it well into our software for resumes, with positive user feedback:

https://www.youtube.com/watch?v=qxk9S2CRsAE&ab_channel=Rezi


Not GPT-3, because it's too big, but much smaller models of similar architecture are used for Smart Compose in Word/Outlook, Gmail/Docs, and other places.


I got access to their free API about five months ago, and I liked it so much that I converted to a paid plan. I don't really understand some of the negativity here: OpenAI is now a for-profit company, their GPT-3 APIs solve many difficult use cases, and they charge a very reasonable fee for reliable access.

It is a little sad that just a few large AI focused companies in the USA and China have the financial resources to build very large models, and I don’t like that situation. I have slowly come to accept that these companies can make a profit selling access, and as long as the access is reliable and reasonably priced, then that is sort of OK.

Just today in the news, Google announced an alpha TF-GNN that is likely something I will eventually use. Also today, Facebook announced a 2B-parameter model that understands a few hundred languages, and I think they are going to make the trained model available for free, if I understood their press announcement correctly. I don't have the hardware resources to stand up something like FB's new model, so OpenAI's approach of running their model on their servers to power their API is better for me. I added new GPT-3 example chapters to my Common Lisp and Clojure books, so it is good to know that readers can now get API access tokens without waiting.


OpenAI is pretty damn good. I've been negative about it in the past, and I'm still a bit distrustful of its owners, but the API itself is really good for quickly putting an NLP interface in front of programs. I even use it for personal productivity hacks.


Any hacks you can share?


Parsing emails from people rescheduling classes, outputting the old class and the new class. Or writing automation scripts with a few inputs, then using OpenAI to parse spoken-word commands, extract the relevant inputs, and plug them into the automation.
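Roughly, the extraction prompt looks something like this (a sketch with made-up example text, not the exact production prompt):

    import openai  # assumes your API key is configured

    email = ("Hi! I can't make Tuesday's 6pm yoga class anymore - "
             "could I switch to Thursday at 7pm instead?")
    prompt = (
        "Extract the rescheduling request from the email.\n\n"
        f"Email: {email}\n\n"
        "Old class:"
    )
    resp = openai.Completion.create(
        engine="davinci", prompt=prompt, max_tokens=30, temperature=0, stop="\n\n"
    )
    print("Old class:" + resp.choices[0].text)
    # Expected shape (not guaranteed): " Tuesday 6pm yoga\nNew class: Thursday 7pm"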


Anyone have advice or links to resources on how to effectively use the parameters and/or craft prompts to massage the output?

I’ve played with this tool for a while and I often find myself struggling with these aspects of the system.


I work at a bigger creative agency, and we use an OpenAI-based tool to give our creatives something to help generate ideas and write boring copy like press releases. It's good for what it is, but fine-tuning it via few-shot learning is still really hard, and sadly not something non-techy people can do.


Interesting. I've only been tangentially following the GPT3 conversation, since it's not really relevant to the kind of work I do. But I had this idea in my head that it was magic, with the ability to do the seemingly impossible.

After taking it for a spin, I'm not that impressed? At least when testing their examples using the playground. Most results would be fairly unusable, though maybe a more thorough prompt design could address that. The conversational prompt was especially bad and conveyed the feeling of chatting with someone who was a bit high and not really listening to me.

Not as magical as I thought, then. I'm curious how you could tune it to be a special-purpose chat bot, working in customer service for an insurance company or something.


I think most of the "magic" starts to fade as soon as you encounter a few bad outputs and quickly become unimpressed.

However, if you retry the same prompt multiple times, one of those is likely to produce a good output. I think it's important to give users of GPT-3 based tools multiple alternatives and let the user decide which of the options they like best.

That's the approach I took with my side project for generating short stories.

For example, with this story [1], not all the options for the progression of the story are great. But if you pick and choose which progressions you like best, you can arrive at a pretty good ending, such as [2].

[1] https://toldby.ai/arK_3OpvpkG

[2] https://toldby.ai/aQAXlq3LNku
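For what it's worth, the completions API makes the retry loop cheap to implement: the n parameter returns several completions for one prompt in a single call, which you can then present as alternatives (a sketch; the prompt is illustrative):

    import openai  # assumes your API key is configured

    resp = openai.Completion.create(
        engine="davinci",
        prompt="Continue the story:\n\nThe door creaked open and",
        max_tokens=80,
        n=3,              # three alternative completions in one call
        temperature=0.9,  # higher temperature -> more varied options
    )
    for i, choice in enumerate(resp.choices, 1):
        print(f"--- option {i} ---\n{choice.text.strip()}\n")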


God, this is amazing. I made this by choosing the most ridiculous replies that halfway made sense, and ended up with a masterpiece.

https://toldby.ai/UiyTLzXKsEa

Yevgeny's eldest daughter's speech is particularly moving.


Oh, yikes! I was following the progression pretty well until that compelling speech... I suppose this example gives more credence to grandparent's post of GPT-3 not being impressive or magical.

Nonetheless, here is a direct link to the homepage if others would like to try it out:

https://toldby.ai


OpenAI isn't quite so good about the whole Chekhov's Gun thing... or avoiding going on long, crazy rants.


> A star for his gravestone is too great a luxury, but still, he should have at least seen the hills of the next village over.

I found this very poetic.


Here's another banger: https://toldby.ai/jR2iC9bXPEe


And this deeply cynical rant:

https://toldby.ai/uI9LA3HiSkO

Once upon a time, there was a young man named Jason.

He did what all parents encourage their children to do. He got up early, studied hard, did well in school, participated in extracurricular activities, got great grades, made friends, did not date for several years, was not very attractive, went to a prestigious college, met the girl of his dreams, did not spend much time with said girl until he did, did not engage in any premarital activities, got married at the end of college, got a job, moved to a big house, earned a big salary, told his wife he loved her every day, taught his daughter to not have premarital sex just because she can, paid a lot of money into a pension fund, watched a lot of reality TV, read the news, and on its 20th anniversary took his wife on a modest European vacation.

This story is not just about Jason. It may also be about other people in his generation who all did the same things. What is it like to be like them? Because Jason and his peers did everything right, he does not believe he will ever have to worry.

That is because Jason is not intellectually curious. He is only interested in his own little world.

Jason believes in the retirement system, the company he works for, the banks, the government, the media, the rule of law, the peace of capitalists everywhere that will dominate the geopolitical landscape for another several centuries.

In other words, Jason is a sucker.

He is a sucker because he doesn't understand that to have a retirement account means that you have just contributed to a scheme that when push comes to shove has no real value.

He is a sucker because he need not know that stocks are really shares in nothing other than an intricate system to distract, confuse, or flatter the investor.

He is a sucker because he does not understand that his pension, just like Social Security, is just a joke.

And why does Jason need all these myths?

Because without them he would be frightened to death.

And so Jason, along with everyone else of his generation, contributes to a massive scam on an epic scale.

However, he is a good boy, that Jason.


Seems like the new Classifications feature uses GPT-3 text completion, and their similarity search model under the hood [0]:

"The endpoint first searches over the labeled examples to select the ones most relevant for the particular query. Then, the relevant examples are combined with the query to construct a prompt to produce the final label via the completions endpoint."

As a non-AI person, this sounds interesting. You wouldn't need to provide examples for every label you want, just enough that GPT-3 gets the idea. Is there prior art on this approach of text classification?

[0] https://beta.openai.com/docs/api-reference/classifications
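Going by the linked reference, a minimal call might look like this (a sketch; parameter names are from the beta docs and may change):

    import openai  # assumes your API key is configured

    result = openai.Classification.create(
        search_model="ada",   # ranks the labeled examples against the query
        model="curie",        # builds the few-shot prompt and picks the label
        examples=[
            ["A happy moment", "Positive"],
            ["I am sad.", "Negative"],
            ["I am feeling awesome", "Positive"],
        ],
        query="It is a raining day :(",
        labels=["Positive", "Negative", "Neutral"],
    )
    print(result["label"])  # e.g. "Negative"

So you only need enough examples per label for the search step to surface relevant ones; the completion step generalizes from there.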


Please try out Pen: It's ready!

Language models for the terminal. https://semiosis.github.io/cterm/

Coherent, relevant chatbots for anything you're looking at. https://semiosis.github.io/posts/multi-part-prompts/

A browser for the imaginary web. https://semiosis.github.io/looking-glass/

Imaginary interpreters. https://semiosis.github.io/ii/

Mind mapping with chatbots, auto suggested topic generation, etc. https://semiosis.github.io/paracosm/


"LookingGlass is an imaginary-web (𝑖web) browser. It allows lets you visit websites that do not exist concretely, and also also allows you to treat any text as having hyperlinks."

Sounds neat! Your webpages could use a thorough review though as there are a few errors in the text. Allows/lets and also/also here in the first paragraph.


Thank you! I have made the corrections. It's still fresh!


Very conspicuous list of available countries. Both English and non-English, from the most rich to some of the poorest and non-digital countries in the world, both democratic countries and brutal absolute dictatorships... yet the absentees are classic "enemy" countries - Russia, China, Syria, Iran.

Unfortunately, technology is, once again, not exempt from politics.


US export restrictions blacklist a few countries for various things. In total (except for China) the combined economies are so small it makes sense to just not do business with them rather than figure out if you can.


Play fair or we won’t let you play our game. Seems universal to me.


Chile, Paraguay, Peru and Venezuela are not on the list but Argentina, Brazil and Bolivia are...


> Our mission is to ensure that artificial general intelligence benefits all of humanity.

> The API is not available in this country.

Sure.


Any reason why Vietnam could be missing? Also including Iraq and not including Saudi Arabia is interesting.


The government of Vietnam could be considered Marxist–Leninist/Socialist, so it's on the list of forever enemies of the US government and many businesses.


Except when it comes to "containing China", when Washington loves them.


Regarding Syria and Iran - that's not up to OpenAI. OFAC made that decision for them.


This is remarkable. Assuming improvements continue, this is going to help automate a lot of work that previously wasn't possible or was too tedious to do.


They held out too long, should have opened a long time ago. Now their competitors have grown up. And it's too expensive to touch.


> In the simplest case, if your prompt contains 10 tokens and you request a single 90 token completion from the davinci engine, your request will use 100 tokens and will cost $0.006.

It's weird that the prompt tokens are also counted towards your token usage. So you're penalized if you have an elaborate prompt with lots of examples (as shown in the docs)?


It's not that weird; longer prompts require more compute. This pricing is directly proportional to the total compute required for a query, which scales with the sum of the input and output sequence lengths.
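Working through the quoted example: $0.006 for 100 tokens implies davinci's rate of $0.06 per 1,000 tokens, billed on prompt and completion tokens alike (a sketch):

    DAVINCI_PER_1K_TOKENS = 0.06  # USD, the davinci rate implied by the docs example

    def request_cost(prompt_tokens, completion_tokens, rate=DAVINCI_PER_1K_TOKENS):
        # Prompt and completion tokens are billed identically,
        # since both consume forward-pass compute.
        return (prompt_tokens + completion_tokens) * rate / 1000

    print(request_cost(10, 90))  # 0.006, matching the quoted example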


Let's take this example: https://beta.openai.com/examples/default-factual-answering

95% of the token usage in this example would consist of the prompt and I would only get 1 sentence in return. So if I wanted to generate another sentence, I would have to pay 95% of the cost towards the prompt again and again... Isn't there a way to create a template for the prompt so you only pay for the generated sentences?


https://www.theregister.com/2020/11/04/gpt3_carbon_footprint...

Far too expensive, in a currency we cannot afford.

These algorithms are not the future of AI, if AI has a future.


The article says training GPT used as much power as 126 homes use in a year. That's literally nothing.



Read the second definition you linked.


Given the history of OpenAI how can they be trusted?


Popular, or my mistake?

    iex(1)> OpenAI.engines()
    {:error, :timeout}


Wait is this Elixir?


There is an Elixir client library; it was timing out a lot yesterday.


Oooh, didn't know that, cool!



