OpenAI Swings the Doors Wide Open on ChatGPT

Cheap pricing has already led to a slew of AI-enabled offerings from Instacart, Shopify, and Snap


OpenAI announced the release of the ChatGPT API, which provides access to GPT 3.5 Turbo, on 1 March 2023.

Giulio Benzi/Alamy

On 1 March 2023, OpenAI made an announcement that developers were eagerly anticipating: The company launched the ChatGPT API, giving third-party developers access to the AI model that powers ChatGPT and Microsoft’s Bing Chat.

Access alone is enticing, but OpenAI had an ace up its sleeve—the price. Access to the application programming interface costs just US $0.002 per 1,000 tokens (roughly equal to 750 words in English). At that rate, one dollar buys enough capacity to handle 375,000 words of English text.
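For a sense of what that rate means in practice, here is a quick back-of-the-envelope estimate (the 750-words-per-1,000-tokens ratio is only a rough rule of thumb for English text):

    # Rough cost estimate at the announced gpt-3.5-turbo price.
    PRICE_PER_1K_TOKENS = 0.002   # U.S. dollars
    WORDS_PER_TOKEN = 0.75        # approximate ratio for English text

    def cost_in_dollars(word_count: int) -> float:
        tokens = word_count / WORDS_PER_TOKEN
        return (tokens / 1000) * PRICE_PER_1K_TOKENS

    print(cost_in_dollars(375_000))  # roughly 1.0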

“GPT 3.5 Turbo is a huge improvement over the existing GPT-3. It’s extremely rare for a company to release a new version of its API that’s both 10x cheaper and 5x faster,” says Hassan El Mghari, a senior developer advocate at Vercel. “That’s a 50x improvement, unheard of.”


The ChatGPT API is incredibly cheap

The ChatGPT API doesn’t provide access to ChatGPT itself but instead to the model it uses: GPT 3.5 Turbo. While the exact differences between GPT 3.5 and GPT 3.5 Turbo are unclear (OpenAI, contrary to its name, doesn’t open-source its models), its use in ChatGPT suggests the model is much more efficient than those previously available.

This efficiency makes it possible for OpenAI to charge less for access. Improved affordability is always a win for developers, of course, but the scale of GPT 3.5 Turbo’s price cut relative to its predecessor is more than a nice discount. It opens opportunities to bring AI features to apps that previously couldn’t even begin to justify the cost.
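In practice, access means a single call to OpenAI’s chat completions endpoint. Here is a minimal sketch using the openai Python package as it worked at launch (the client library has since been revised, so treat this as illustrative rather than current):

    import os
    import openai

    # The launch-era Python client reads the API key set here.
    openai.api_key = os.environ["OPENAI_API_KEY"]

    # "gpt-3.5-turbo" is the model the ChatGPT API exposes.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Explain tokens in one sentence."}],
    )

    print(response["choices"][0]["message"]["content"])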

“Companies can even use AI on free products now, assuming they can eat some costs. Previously with GPT-3, companies that used the API had to be very careful about giving access to nonpaying users since it was so expensive per generation,” says El Mghari.

A screenshot of a user asking Bing Chat how much a panda weighs. The ChatGPT API gives third-party developers access to AI models similar to those used by Microsoft’s Bing Chat. Matthew S. Smith/Microsoft

GPT 3.5 Turbo’s reach extends beyond developers who want to add an AI chatbot to their app or service. OpenAI’s blog posts claim that GPT 3.5 Turbo’s low cost and improved performance make it a match for a wide variety of uses, including many previously enabled by GPT 3.5.

“Due to ChatGPT’s rise in popularity because of its chat format, people tend to have a preconception that ChatGPT API can only be used in this casual format,” says Chanyeol Choi, the CEO and cofounder of Publishd. “OpenAI now wants its customers to know that ChatGPT API (gpt-3.5-turbo) can be used in a less casual, non-chat format.”

This connects with two other announcements made alongside the release of the ChatGPT API—longer context limits and the ability to pin the model snapshot.

Longer context limits allow developers to process more tokens, which, in practice, translates to more text. Kyle Shannon, the CEO and founder of Storyvine, says OpenAI’s best dedicated server plans can handle up to 32,000 tokens, which helps developers process much larger chunks of text. “We’ll go from ‘you can perform miracles on some documents’ to ‘perform miracles on any data in any configuration’ within three years,” says Shannon.

The model snapshot, meanwhile, lets developers lock down a version of the model to improve consistency.
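Pinning a snapshot is, in code, a one-line choice: the developer requests a dated model name instead of the floating gpt-3.5-turbo alias, so later model updates don’t silently change an app’s behavior. A minimal sketch, again using the launch-era Python client and gpt-3.5-turbo-0301, the dated snapshot OpenAI listed at launch:

    import openai

    # Requesting the dated snapshot rather than the alias "gpt-3.5-turbo"
    # keeps outputs consistent until the developer chooses to upgrade.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0301",
        messages=[{"role": "user", "content": "Summarize this in one sentence: "
                   "The ChatGPT API launched on 1 March 2023."}],
    )
    print(response["choices"][0]["message"]["content"])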

Controversy hasn’t stopped developer enthusiasm

OpenAI’s announcement was soured by a seemingly unrelated story: The challenge to Section 230 under argument before the Supreme Court of the United States. Justice Neil Gorsuch briefly mused on whether AI-generated content could be included in Section 230 protections.

“Artificial intelligence generates poetry,” said Gorsuch. “It generates polemics today that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected. Let’s assume that’s right.”

Gorsuch’s argument was hypothetical but seems likely to be tested in the courts. It’s currently unclear whether developers who build apps that use generative AI, or the companies building the models developers use (such as OpenAI), can be held liable for what an AI creates.

A demonstration of Instacart’s new ChatGPT-powered search feature, available in the Instacart app. Instacart will use the ChatGPT API to help users decide what they could have for lunch. Instacart

“The issue of liability is a very important one which must be carefully thought through, and solutions will come about over time from developers,” says Choi. He believes companies operating in legal, financial, and medical fields are better served by Retrieval-Augmented Language Models (RALMs), which condition a model on a grounding corpus. This improves accuracy and helps ensure that important details, such as academic citations, are correct. Choi’s company uses this method for Publishd, an AI writing assistant designed for use by academics and researchers. Publishd is currently in closed beta.
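The retrieval-augmented pattern Choi describes can be sketched in a few lines: retrieve relevant passages from a trusted corpus, then hand them to the model along with the question and an instruction to cite them. The code below is not Publishd’s implementation; the keyword matcher is a deliberately crude stand-in for a real search index, but it shows the general shape of the technique:

    import openai

    def answer_with_citations(question: str, corpus: list[str]) -> str:
        # Toy retriever: keep up to three passages that share words with the question.
        keywords = set(question.lower().split())
        retrieved = [p for p in corpus if keywords & set(p.lower().split())][:3]

        sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(retrieved))
        prompt = (
            "Answer the question using only the numbered sources below, "
            "and cite them by number.\n\n"
            f"{sources}\n\nQuestion: {question}"
        )
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return response["choices"][0]["message"]["content"]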

This approach may not apply to every field, however—and, given the pace of ChatGPT’s rise, many developers are moving ahead at breakneck speed. This includes Instacart, Shopify, and Snap, all of which recently announced features powered by the ChatGPT API. Developers are keenly aware of the legal challenges that AI could face, but sitting idle is viewed as the greater threat.

“Every industry is going to have to deal with this reality way sooner than I think anyone realizes, so I think there is possible liability and risk from a legal and compliance perspective, but I think the larger risk—by far—is the risk of not jumping into the deep end sooner than you think is reasonable,” says Shannon.
