
OpenAI launches an API for ChatGPT, and dedicated capacity for enterprises

To call ChatGPT, the free text-generating AI developed by San Francisco-based startup OpenAI, a success is an understatement.

In December, ChatGPT had an estimated more than 100 million monthly active users. It garnered mainstream media attention and spawned countless memes on social media. It has been used to write hundreds of e-books in Amazon's Kindle store. And it is credited with co-authoring at least one scientific article.

But OpenAI, being a business, albeit a capped-profit one, had to monetize ChatGPT somehow, lest investors get nervous. It took a step in that direction with the launch of a premium service, ChatGPT Plus, in February. And it made a much bigger move today by introducing an API that will allow any business to build ChatGPT technology into its apps, websites, products and services.

An API was always the plan. That's according to Greg Brockman, Chairman and President of OpenAI (and also one of the co-founders).

“It takes us a while to get these APIs up to a certain level of quality,” Brockman said. "I think it's kind of like just being able to meet demand and scale."

Brockman says the ChatGPT API is powered by the same AI model behind ChatGPT itself, called "gpt-3.5-turbo." GPT-3.5 is the most powerful text-generating model OpenAI offers today through its API suite; the "turbo" moniker refers to a more optimized, more responsive version of GPT-3.5 that OpenAI has been quietly testing for ChatGPT.

Priced at $0.002 per 1,000 tokens, or about 750 words, Brockman says the API can power a range of experiences, including "non-chat" apps. Snap, Quizlet, Instacart and Shopify are among the early adopters.
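As a rough sketch of what that pricing means in practice, here is the arithmetic using only the two figures quoted above (the $0.002-per-1,000-token rate and the approximate 750-words-per-1,000-tokens ratio; real token counts vary by text):

```python
# Back-of-the-envelope ChatGPT API cost math, using the figures quoted above.

PRICE_PER_1K_TOKENS = 0.002  # USD per 1,000 tokens (gpt-3.5-turbo at launch)
WORDS_PER_1K_TOKENS = 750    # rough rule of thumb; actual tokenization varies


def cost_for_tokens(tokens: int) -> float:
    """Dollar cost for a given number of tokens (prompt plus completion)."""
    return tokens / 1000 * PRICE_PER_1K_TOKENS


def tokens_for_words(words: int) -> int:
    """Approximate token count for a word count, using the rough ratio."""
    return round(words * 1000 / WORDS_PER_1K_TOKENS)


# About 750 words costs roughly a fifth of a cent:
print(cost_for_tokens(tokens_for_words(750)))  # 0.002

# A million chats of ~500 tokens each would run about $1,000:
print(cost_for_tokens(500) * 1_000_000)  # 1000.0
```

At these rates, even heavy per-user usage stays in the fractions-of-a-cent range per exchange, which is what makes embedding the model in consumer apps plausible.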

The initial motivation behind the development of gpt-3.5-turbo may have been to reduce ChatGPT's gargantuan computing costs. OpenAI CEO Sam Altman once called ChatGPT's expenses "eye-watering," estimating them at a few cents per chat in compute costs. (With well over a million users, that presumably adds up quickly.)

But Brockman says that gpt-3.5-turbo is improved in other ways.

"If you're building an AI-powered tutor, you never want the tutor to just give the student an answer. You want it to always explain the answer and help them learn. That's an example of the kind of system you should be able to build [with the API]," Brockman said. "We think this is going to be something that just makes the API a lot more usable and accessible."

The ChatGPT API is the foundation for My AI, Snap's recently announced chatbot for Snapchat+ subscribers, and Quizlet's new Q-Chat virtual tutor feature. Shopify used the ChatGPT API to build a personalized assistant for shopping recommendations, while Instacart leveraged it to create Ask Instacart, an upcoming tool that will let Instacart customers ask about groceries and get "shoppable" answers informed by product data from the company's retail partners.

"Grocery shopping can be mentally demanding, with many factors at play, such as budget, health and nutrition, personal tastes, seasonality, cooking skills, preparation time and recipe inspiration," said Instacart chief architect JJ Zhuang. "What if AI could take on that mental load, and we could help the heads of households who are commonly responsible for grocery shopping, meal planning and putting food on the table, and actually make grocery shopping truly fun? Instacart's AI system, when integrated with OpenAI's ChatGPT, will allow us to do exactly that, and we're excited to start experimenting with what's possible in the Instacart app."

[Image: Ask Instacart, powered by the ChatGPT API. Image credits: Instacart]

However, those who have been following the ChatGPT saga closely might be wondering if it is ready for release, and with good reason.

At launch, users could prompt ChatGPT to answer questions in racist and sexist ways, a reflection of the data ChatGPT was initially trained on. (ChatGPT's training data includes a broad swath of internet content, namely e-books, Reddit posts and Wikipedia articles.) ChatGPT also makes up facts without disclosing that it's doing so, a phenomenon in AI known as hallucination.

ChatGPT and similar systems are also susceptible to prompt-based attacks: malicious adversarial prompts that coerce them into performing tasks that weren't part of their original goals. Entire communities on Reddit have formed around finding ways to "jailbreak" ChatGPT and bypass whatever protections OpenAI has put in place. In one of the less offensive examples, a staffer at startup Scale AI was able to get ChatGPT to divulge information about its inner technical workings.

Brands would certainly not want to be caught in the crosshairs. Brockman is adamant they won't be. Why? One reason, he says, is continued improvements on the back end, in some cases at the expense of Kenyan contract workers. But Brockman emphasized a new (and decidedly less controversial) approach that OpenAI calls Chat Markup Language, or ChatML. ChatML feeds text to the ChatGPT API as a sequence of messages together with metadata. That's in contrast to standard ChatGPT, which consumes raw text represented as a series of tokens. (The word "fantastic" would be split into the tokens "fan," "tas" and "tic," for example.)

For example, given the prompt "What are some interesting ideas for a party for my 30th birthday?" a developer can choose to prepend an additional message like "You are a fun conversational chatbot designed to help users with the questions they ask. You must answer honestly and in a fun way!" or "You are a bot" before the ChatGPT API processes it. These instructions help to better tailor and filter the ChatGPT model's responses, according to Brockman.
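A minimal sketch of what such a request might look like, based on the role-tagged message format Brockman describes. The system text and birthday prompt are taken from the example above; the helper function is illustrative, and a real application would send this payload through OpenAI's client library:

```python
# Sketch of the ChatML-style message stream the ChatGPT API consumes:
# each message carries a role ("system", "user", "assistant") plus content,
# instead of one undifferentiated blob of raw text.


def build_request(user_prompt: str, system_instruction: str) -> dict:
    """Assemble a ChatGPT API request body with role-tagged messages."""
    return {
        "model": "gpt-3.5-turbo",
        "messages": [
            # Developer-supplied framing, kept separate from user input.
            {"role": "system", "content": system_instruction},
            # The end user's actual question.
            {"role": "user", "content": user_prompt},
        ],
    }


request = build_request(
    "What are some interesting ideas for a party for my 30th birthday?",
    "You are a fun conversational chatbot designed to help users with "
    "the questions they ask. You must answer honestly and in a fun way!",
)
```

Because the developer's instruction and the user's text arrive as separate, role-tagged messages rather than concatenated raw tokens, the service can weight them differently, which is the robustness against prompt attacks Brockman is pointing at.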

"We're moving to a higher-level API. If you have a more structured way of representing input, where you say 'this is from the developer' or 'this is from the user,' I should expect that, as a developer, you can actually be more robust [using ChatML] against these kinds of prompt attacks," Brockman said.

Another change that will (hopefully) prevent unintended ChatGPT behavior is more frequent model updates. With the release of gpt-3.5-turbo, developers will by default be automatically upgraded to OpenAI's latest stable model, Brockman says, starting with gpt-3.5-turbo-0301 (released today). Developers will have the option to stick with an older model if they choose, though, which might negate the benefit somewhat.
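In practice, the choice between floating and pinned behavior comes down to which model string a request names. A sketch of that trade-off (the snapshot name is the one released today; the helper is illustrative, not official client code):

```python
# Floating alias: automatically tracks OpenAI's latest stable ChatGPT model,
# so developers pick up updates (and behavior changes) without code changes.
FLOATING_MODEL = "gpt-3.5-turbo"

# Pinned snapshot: behavior frozen at the March 1, 2023 release, for
# developers who would rather not absorb silent model updates.
PINNED_MODEL = "gpt-3.5-turbo-0301"


def choose_model(pin_for_reproducibility: bool) -> str:
    """Pick a pinned snapshot for stable behavior, or the floating alias."""
    return PINNED_MODEL if pin_for_reproducibility else FLOATING_MODEL
```

Pinning trades away automatic safety and quality improvements for reproducible outputs, which is exactly the tension the article notes.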

Whether they choose to upgrade to the newer model or not, Brockman notes that some customers, mainly large enterprises with correspondingly large budgets, will get deeper control over system performance with the introduction of dedicated capacity plans. First detailed in documentation leaked earlier this month, OpenAI's dedicated capacity plans, released today, let customers pay for an allocation of compute infrastructure to run an OpenAI model, e.g. gpt-3.5-turbo. (It's Azure on the back end, by the way.)

In addition to "full control" over the instance's load (normally, calls to the OpenAI API happen on shared compute resources), dedicated capacity gives customers the ability to enable features such as longer context limits. Context limits refer to the text the model considers before generating additional text; longer context limits allow the model to "remember" more text, essentially. While higher context limits may not solve all of the bias and toxicity issues, they could lead models like gpt-3.5-turbo to hallucinate less.

Brockman says dedicated capacity customers can expect gpt-3.5-turbo models with a context window of up to 16k, meaning they can accept four times as many tokens as the standard ChatGPT model. That could let someone paste in pages and pages of the tax code and get reasonable answers from the model, say, a feat not possible today.
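To put those numbers in perspective, a quick sketch of the arithmetic. It assumes the standard window is 4,096 tokens (implied by the "four times as many" figure) and reuses the rough 750-words-per-1,000-tokens approximation from the pricing discussion; real tokenizers vary by text:

```python
# Standard ChatGPT context vs. the 16k window for dedicated-capacity plans.
STANDARD_CONTEXT_TOKENS = 4_096    # assumed standard gpt-3.5-turbo window
DEDICATED_CONTEXT_TOKENS = 16_384  # "up to 16k": four times as many tokens

WORDS_PER_1K_TOKENS = 750  # rough approximation used earlier in the piece


def approx_words(tokens: int) -> int:
    """Convert a token budget into an approximate word count."""
    return round(tokens * WORDS_PER_1K_TOKENS / 1000)


print(DEDICATED_CONTEXT_TOKENS // STANDARD_CONTEXT_TOKENS)  # 4
print(approx_words(DEDICATED_CONTEXT_TOKENS))  # 12288
```

Roughly 12,000 words of context is what makes "pages and pages of the tax code" plausible as a single prompt.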

Brockman alluded to a general release of the longer context windows in the future, but not anytime soon.

"The context windows are starting to get bigger, and part of the reason [this is] only [for] dedicated capacity customers right now is because there are a lot of performance trade-offs on our side," Brockman said. "Eventually we might be able to offer an on-demand version of the same thing."

Given the mounting pressure on OpenAI to turn a profit after a multi-billion dollar investment from Microsoft, that wouldn't be terribly surprising.
