News

OpenAI's New GPT-4o Immediately Available in Azure Playground

OpenAI's new GPT-4o, the company's latest and most capable large language model (LLM), is immediately available to Azure developers, Microsoft announced today.

OpenAI, which started the generative AI craze, today announced the new model (with "o" standing for omni), describing it as "a step towards much more natural human-computer interaction -- it accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs."

Rollout of the new tool started immediately, with OpenAI saying, "capabilities will be rolled out iteratively (with extended red team access starting today)."

Also starting today is a preview of the new tech on the Azure cloud, which Microsoft wasted no time in announcing, perhaps getting a head start on other cloud giants because of its $13 billion investment in OpenAI.

"Azure OpenAI Service customers can explore GPT-4o's extensive capabilities through a preview playground in Azure OpenAI Studio starting today in two regions in the US. This initial release focuses on text and vision inputs to provide a glimpse into the model's potential, paving the way for further capabilities like audio and video."

[Figure: Early Access Playground with GPT-4o (source: Microsoft).]
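For developers who want to go beyond the point-and-click playground, the same preview model can be reached through the Azure OpenAI Service API. The following minimal sketch uses the openai Python SDK's AzureOpenAI client to send a text-plus-image chat completion of the kind the preview supports. Note that the environment variable names, the deployment name ("gpt-4o"), the API version string, and the image URL are illustrative assumptions, not values from Microsoft's announcement; substitute your own Azure resource details.

```python
import os
from openai import AzureOpenAI

# Client for an Azure OpenAI resource. The endpoint, key, and API
# version below are placeholders; use your own resource's values.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-05-01-preview",  # assumed; any GPT-4o-capable version
)

# "gpt-4o" is assumed to be the name you gave your model deployment
# in Azure OpenAI Studio; replace it with your deployment name.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            # A mixed text + image request, matching the preview's
            # focus on text and vision inputs.
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample-chart.png"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```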

Possibilities opened up by the new tech, as listed by Microsoft, include:

  • Enhanced customer service: By integrating diverse data inputs, GPT-4o enables more dynamic and comprehensive customer support interactions.
  • Advanced analytics: Leverage GPT-4o's capability to process and analyze different types of data to enhance decision-making and uncover deeper insights.
  • Content innovation: Use GPT-4o's generative capabilities to create engaging and diverse content formats, catering to a broad range of consumer preferences.

Microsoft has published directions on how to use the early access playground.

You can read all about the new GPT-4o model at sister publication PureAI in John K. Waters' article "OpenAI Releases New Iteration of GPT-4: 'GPT 4o'."

About the Author

David Ramel is an editor and writer at Converge 360.
