All-new GPT-4 Turbo, GPT-4 Vision, and DALL·E 3 APIs

All-new GPT-4 Turbo

OpenAI announced several new models and developer products at its DevDay, along with dozens of additions, improvements, and reduced pricing across many parts of the platform:

1) A new GPT-4 Turbo model that is more capable, cheaper, and supports a 128K context window.
2) New multimodal capabilities in the platform, including vision, image creation (DALL·E 3), and text-to-speech (TTS); a minimal vision call is sketched below.
3) A new Assistants API that makes it easier for developers to build their own assistive AI apps.
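The vision capability is exposed through the same Chat Completions endpoint developers already use. As a minimal sketch, assuming the v1 openai Python SDK, an OPENAI_API_KEY environment variable, and a placeholder image URL:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the vision-capable preview model to describe an image by URL.
response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # vision preview model announced at DevDay
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    # hypothetical placeholder URL; substitute your own image
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
    max_tokens=300,  # set an explicit cap; vision responses can run long
)
print(response.choices[0].message.content)
```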

GPT-4 Turbo with 128K context

At its inaugural developer conference on Monday, the company revealed that GPT-4 Turbo, presently accessible through an API preview, has been trained on data going back to April 2023. The GPT-4 version first released in March could only analyse data up to September 2021. According to The Verge, OpenAI intends to deliver a production-ready Turbo model in the coming weeks, although it did not provide a specific date.

GPT-4 Turbo will also be able to ‘see’ more data at once: it has a 128K context window, which OpenAI says is “equivalent to more than 300 pages of text in a single prompt.” Large language models such as GPT generally comprehend more of a query and produce more thoughtful responses when given bigger context windows.

Furthermore, according to OpenAI, GPT-4 Turbo will be cheaper for developers to use than GPT-4: input tokens cost three times less at $0.01 per 1,000 tokens, and output tokens cost two times less at $0.03 per 1,000 tokens.
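Trying the preview looks the same as any other chat completion, just with the new model name. A minimal sketch, assuming the v1 openai Python SDK and the gpt-4-1106-preview model identifier announced at DevDay:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# A standard chat completion against the GPT-4 Turbo preview.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview with the 128K context window
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a 128K context window allows."},
    ],
)
print(response.choices[0].message.content)
```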

What are Custom GPTs?

OpenAI is now rolling out a new product called “GPTs”, which it describes as “custom versions of ChatGPT that you can create for a specific purpose”. OpenAI envisages people building them for tasks at home and in the workplace, and then sharing these creations with others.

DALL·E 3

OpenAI announced the third version of its generative AI visual art platform, DALL·E 3, which now lets users create prompts with ChatGPT and includes more safety options.

DALL·E converts text prompts to images. But even DALL·E 2 got things wrong, often ignoring specific wording. The latest version, OpenAI researchers said, understands context much better.

A new feature of DALL·E 3 is integration with ChatGPT. Someone doesn’t have to come up with their own detailed prompt to guide DALL·E 3; they can simply ask ChatGPT to write one, and the chatbot will produce a paragraph (DALL·E works better with longer, more descriptive prompts) for DALL·E 3 to follow. Users can still supply their own prompts if they have specific ideas.

Developers can integrate DALL·E 3, which OpenAI recently launched to ChatGPT Plus and Enterprise users, directly into their apps and products through the Images API by specifying dall-e-3 as the model. Companies like Snap, Coca-Cola, and Shutterstock have used DALL·E 3 to programmatically generate images and designs for their customers and campaigns. As with the previous version of DALL·E, the API incorporates built-in moderation to help developers protect their applications against misuse.
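In practice this is a single Images API request. A minimal sketch, assuming the v1 openai Python SDK; the prompt, size, and quality values here are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Generate one image with DALL·E 3 via the Images API.
result = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a lighthouse at dawn",  # illustrative prompt
    size="1024x1024",
    quality="standard",  # "hd" is also available at a higher price
    n=1,
)
print(result.data[0].url)  # URL of the generated image
```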

OpenAI, possibly to avoid lawsuits, will also allow artists to opt their art out of future versions of text-to-image AI models. Creators can submit an image that they own the rights to and request its removal in a form on its website. A future version of DALL·E can then block results that look similar to the artist’s image and style. Artists sued DALL·E competitors Stability AI and Midjourney, along with art website DeviantArt, for allegedly using their copyrighted work to train their text-to-image models.

Text-to-speech (TTS)

Developers can now generate human-quality speech from text via the text-to-speech API. The new TTS model offers six preset voices to choose from and two model variants, tts-1 and tts-1-hd: tts-1 is optimized for real-time use cases, while tts-1-hd is optimized for quality.
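As a minimal sketch of the new endpoint, assuming the v1 openai Python SDK; the voice, input text, and output filename are illustrative:

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Synthesize speech with the real-time-optimized tts-1 model.
response = client.audio.speech.create(
    model="tts-1",  # use "tts-1-hd" for higher-quality output
    voice="alloy",  # one of the six preset voices
    input="Hello from the new text-to-speech API!",
)
response.stream_to_file(Path("speech.mp3"))  # write the MP3 to disk
```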

For code and queries, visit the OpenAI docs or contact us.
