
Microsoft Boosts AI-Driven Vision, Search Services in Azure Cloud

Microsoft announced several enhancements and new offerings for its Cognitive Services cloud APIs that help developers leverage artificial intelligence (AI) capabilities in mobile apps.

Microsoft Cognitive Services form the foundation of the company's AI platform, letting developers incorporate AI functionality for vision, speech, language, knowledge and search into iOS and Android apps and edge devices.

Here's a look at what Microsoft AI executive Joseph Sirosh announced on March 1.

  • A public preview of Custom Vision services, available via the Azure Portal. The service lets developers train their own image classifiers, supplying and tagging example images to create machine learning models for use in apps.

    "We built Custom Vision with state-of-the-art machine learning that offers developers the ability to train their own classifier to recognize what matters in their scenarios," Sirosh said.

    Sirosh listed several concrete, real-world scenarios for the technology (a minimal API sketch follows the list):

    • A retailer can easily create models to auto-classify catalog images of different kinds of clothing, such as dresses, shoes and so on.
    • A social media site can more effectively filter and classify images of specific products.
    • A national park can detect whether photos captured by automatic cameras include wild animals.
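
    To illustrate, here's a minimal Python sketch of querying a trained Custom Vision model through its REST prediction endpoint. The region, project ID and key are placeholders, and the response field names follow the preview-era v1.0 API, so adjust them to your own project:

    ```python
    # Minimal sketch: classify an image with a trained Custom Vision model
    # via its REST prediction endpoint (preview-era v1.0). The region,
    # project ID and key are placeholders for your own project's values.
    import requests

    PREDICTION_URL = (
        "https://southcentralus.api.cognitive.microsoft.com/"
        "customvision/v1.0/Prediction/<project-id>/image"
    )
    HEADERS = {
        "Prediction-Key": "<your-prediction-key>",
        "Content-Type": "application/octet-stream",
    }

    # Send the raw image bytes; the service returns tagged predictions.
    with open("catalog-item.jpg", "rb") as f:
        response = requests.post(PREDICTION_URL, headers=HEADERS, data=f.read())
    response.raise_for_status()

    # Each prediction pairs a tag name with a probability score.
    for prediction in response.json()["Predictions"]:
        print(f'{prediction["Tag"]}: {prediction["Probability"]:.2%}')
    ```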

  • The already-available Face API -- which can identify faces and attributes such as emotion, eyeglasses and gender -- has been improved, most notably with million-scale recognition: groups of identifiable people or faces can now scale up to 1 million. A minimal detection sketch follows.
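
    Here's a rough Python sketch of the Face API's v1.0 REST detect call requesting some of those attributes; the region and key are placeholders for your own resource:

    ```python
    # Minimal sketch: detect faces and selected attributes with the
    # Face API v1.0 REST endpoint. Region and key are placeholders.
    import requests

    DETECT_URL = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"
    HEADERS = {
        "Ocp-Apim-Subscription-Key": "<your-face-api-key>",
        "Content-Type": "application/octet-stream",
    }
    PARAMS = {"returnFaceAttributes": "emotion,glasses,gender"}

    # Post the raw image bytes; the service returns one entry per face.
    with open("photo.jpg", "rb") as f:
        response = requests.post(DETECT_URL, headers=HEADERS,
                                 params=PARAMS, data=f.read())
    response.raise_for_status()

    # Each detected face carries an ID plus the requested attributes.
    for face in response.json():
        attrs = face["faceAttributes"]
        print(face["faceId"], attrs["gender"], attrs["glasses"])
    ```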

  • Bing Entity Search is now generally available in the Azure Portal. This API lets developers infuse knowledge search into existing content to help identify entities associated with a search term. Such entities can include: famous people; locations; various types of media such as movies, TV shows, video games, books and so on; nearby local businesses; and more.

    Practical examples enabled by this technology include the following (a minimal query sketch appears after the list):

    • A messaging app could provide an entity snapshot of a restaurant, making it easier for a group to plan an evening.
    • A social media app could augment users' photos with information about the location of each photo.
    • A news app could provide entity snapshots for entities in an article.
    • A music app could augment content with snapshots of artists and songs.
    • A camera app could use the Computer Vision API to detect entities in an image and then use the Entity Search API to provide more context about those entities inline, and so on.
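
    As an illustration, here's a minimal Python sketch of an entity lookup against the v7.0 REST endpoint; the subscription key is a placeholder for your own resource, and the sample query term is arbitrary:

    ```python
    # Minimal sketch: look up entities for a search term with the
    # Bing Entity Search v7.0 REST API. The key is a placeholder.
    import requests

    SEARCH_URL = "https://api.cognitive.microsoft.com/bing/v7.0/entities"
    HEADERS = {"Ocp-Apim-Subscription-Key": "<your-entity-search-key>"}
    PARAMS = {"q": "Space Needle", "mkt": "en-US"}

    response = requests.get(SEARCH_URL, headers=HEADERS, params=PARAMS)
    response.raise_for_status()

    # Entity results pair a name with a short description snippet.
    for entity in response.json().get("entities", {}).get("value", []):
        print(entity["name"], "-", entity.get("description", "")[:80])
    ```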

"Today's milestones illustrate our commitment to make our AI Platform suitable for every business scenario, with enterprise-grade tools to make application development easier and respecting customers' data," Sirosh said.

About the Author

David Ramel is an editor and writer for Converge360.
