News

Microsoft's 3 AI Dev Approaches: Code First, No Code and Drag-and-Drop

As a prelude to the big Build developer conference next week, Microsoft has announced a host of new development features, many focusing on the Azure cloud and, in particular, artificial intelligence development with machine learning.

The news includes a new Azure Cognitive Services category called Decision, AI-assisted Azure search, new MLOps (DevOps for machine learning) capabilities, hardware-accelerated ML models and much more.

"We're delivering key new innovations in Azure Machine Learning that simplify the process of building, training and deployment of machine learning models at scale," exec Scott Guthrie announced in a blog post today (May 2). "These include new automated machine learning advancements and an intuitive UI that make developing high-quality models easier, a new visual machine learning interface that provides a zero-code model creation and deployment experience using drag-and-drop capabilities and new machine learning notebooks for a rich, code-first development experience."

Microsoft's Chris Stetkiewicz expounded on the new ML advancements as they relate to development approaches, outlining a three-pronged approach espoused by Bharat Sandhu, director of artificial intelligence at Microsoft, to fit different classifications of developers, or "AI authoring models":

  • Code first: use any tools
  • No code: use automated machine learning
  • Drag and drop: make models visually

The Stetkiewicz post explained each:

"First, we have developers and data scientists who like to write code," the post said. "They want to build machine learning models using tools and processes they already know. For them, Azure Machine Learning offers a 'code first model,' where they can use the development tools they like.

"A second group, including business domain experts, may know a lot about data, but they don't know much about machine learning or code. For those customers, Azure Machine Learning's automated machine learning experience is a 'no code' option, accessible without having to write any code.

"A third category of people, who are learning machine learning concepts, they want to make their own models, but they are not coders. This could be IT professionals, or folks with background in statistics or mathematics," Sandhu said. "For those customers, we're offering a drag-and-drop experience to make models visually."
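The "no code" automated machine learning experience described above can be sketched conceptually in plain Python. This is a toy illustration of the core idea (try candidate models, score each on held-out data, keep the best), not Azure's actual implementation; all function names here are hypothetical.

```python
# Toy sketch of automated model selection: fit several candidate models,
# score each on a holdout set, and return the best performer.
# Illustrative only -- not the Azure Machine Learning internals.

def mean_model(xs, ys):
    # Baseline: always predict the training mean.
    m = sum(ys) / len(ys)
    return lambda x: m

def linear_model(xs, ys):
    # Simple least-squares line fit.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx if sxx else 0.0
    intercept = my - slope * mx
    return lambda x: slope * x + intercept

def mse(model, xs, ys):
    # Mean squared error on a dataset.
    return sum((model(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def auto_select(train, holdout, candidates):
    # "Automated ML" loop: fit every candidate, score on holdout, keep best.
    fitted = {name: fit(*train) for name, fit in candidates.items()}
    scores = {name: mse(m, *holdout) for name, m in fitted.items()}
    best = min(scores, key=scores.get)
    return best, fitted[best], scores

train = ([1, 2, 3, 4], [2.1, 4.0, 6.2, 7.9])   # roughly y = 2x
holdout = ([5, 6], [10.1, 12.0])
best, model, scores = auto_select(
    train, holdout, {"mean": mean_model, "linear": linear_model})
```

On this near-linear data, the selection loop picks the linear candidate, which is exactly the kind of search a no-code user delegates to the service.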

The company also announced MLOps capabilities with Azure DevOps integration, with the goal of providing developers with reproducibility, auditability and automation of the end-to-end machine learning lifecycle.
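The reproducibility and auditability goals behind MLOps can be illustrated with a minimal sketch: fingerprint the data and configuration that produced a model, so any training run can be traced and verified later. The helper name below is hypothetical and this is not the Azure DevOps API, just the underlying idea.

```python
import hashlib
import json

def run_fingerprint(data_rows, config):
    # Hash the training data and configuration together so the exact
    # inputs behind a model version can be audited and reproduced.
    h = hashlib.sha256()
    h.update(json.dumps(config, sort_keys=True).encode())
    for row in data_rows:
        h.update(repr(row).encode())
    return h.hexdigest()[:12]

config = {"algorithm": "linear", "learning_rate": 0.01}
fp = run_fingerprint([(1, 2.0), (2, 4.1)], config)
```

Identical data and configuration always yield the same fingerprint; any change produces a different one, which is the basic mechanism that makes a training pipeline auditable end to end.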

To enable extremely low latency and cost-effective inferencing, Microsoft also announced the general availability of hardware-accelerated ML models that run on FPGAs. Also on tap is ONNX Runtime support for NVIDIA TensorRT and Intel nGraph, facilitating high-speed inferencing on NVIDIA and Intel chipsets.

In the Azure Cognitive Services space -- which provides functionality for human-brain-like capabilities such as seeing, hearing, responding, translating, reasoning and more -- the new Decision category delivers specific recommendations to support informed and efficient decision-making.

"It's an incredible time to be a developer," said Guthrie, executive vice president, Microsoft Cloud and AI Group, in a news release. "From building AI and mixed reality into apps to leveraging blockchain for solving commercial business problems, developers' skillsets and impact are growing rapidly. Today we're delivering innovative Azure services for developers to build the next generation of apps. With 95 percent of Fortune 500 customers running on Azure, these innovations can have far-reaching impact."

For more on Microsoft's announcements, including information on Internet of Things (IoT), edge, mixed reality, blockchain and other development news, see this article on our sister site, ADTmag.

Microsoft also provides more details on its news and blog sites, with even more coming at next week's Build conference, starting May 6.

About the Author

David Ramel is an editor and writer at Converge 360.
