News

ML.NET Now Works on ARM Devices and Blazor WebAssembly

ML.NET now works on ARM64 and Apple M1 devices, and on Blazor WebAssembly, with some limitations for each.

Microsoft regularly updates ML.NET, an open source, cross-platform machine learning (ML) framework for .NET devs, along with the integral Model Builder component that features a simple visual interface for building, training and deploying custom ML models without requiring deep expertise.

In the June 2021 update, the new ARM64 and Blazor WebAssembly functionality was unveiled.

"You can now perform training and inferencing with ML.NET on ARM64 and Apple M1 (in addition to Linux and macOS) devices which enables platform support for mobile and embedded devices as well as ARM-based servers," Microsoft announced.

Training and Inferencing on a Pinebook Pro Laptop Running Manjaro ARM Linux Distribution (source: Microsoft).
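To make the ARM64 story concrete, the code involved is ordinary ML.NET training and inference, roughly along the lines below. This is a minimal sketch, not code from the announcement: the HouseData and PricePrediction types, column names and the SDCA trainer choice are all illustrative (SDCA is simply a trainer that is not on the unsupported list below).

    using Microsoft.ML;
    using Microsoft.ML.Data;

    // Illustrative input and output types -- not taken from the announcement.
    public class HouseData
    {
        public float Size { get; set; }
        public float Price { get; set; }
    }

    public class PricePrediction
    {
        [ColumnName("Score")]
        public float Price { get; set; }
    }

    public static class Program
    {
        public static void Main()
        {
            var mlContext = new MLContext();

            // In-memory training data; any ML.NET data source works the same way.
            var trainingData = mlContext.Data.LoadFromEnumerable(new[]
            {
                new HouseData { Size = 1.1f, Price = 1.2f },
                new HouseData { Size = 1.9f, Price = 2.3f },
                new HouseData { Size = 2.8f, Price = 3.0f },
                new HouseData { Size = 3.4f, Price = 3.7f }
            });

            // A simple regression pipeline; SDCA is not among the components listed as unsupported below.
            var pipeline = mlContext.Transforms
                .Concatenate("Features", nameof(HouseData.Size))
                .Append(mlContext.Regression.Trainers.Sdca(
                    labelColumnName: nameof(HouseData.Price)));

            // Training on the device...
            var model = pipeline.Fit(trainingData);

            // ...and inferencing on the same device.
            var engine = mlContext.Model
                .CreatePredictionEngine<HouseData, PricePrediction>(model);
            var prediction = engine.Predict(new HouseData { Size = 2.5f });
            System.Console.WriteLine($"Predicted price: {prediction.Price:0.##}");
        }
    }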

Limitations that can cause a DllNotFoundException when training or inferencing on ARM include:

  • Symbolic SGD, TensorFlow, OLS, TimeSeries SSA, TimeSeries SrCNN and ONNX are not currently supported for training or inferencing.
  • LightGBM is currently supported for inferencing, but not training.
  • You can add LightGBM and ONNX support by compiling those libraries for ARM yourself, but neither project provides pre-compiled ARM/ARM64 binaries.

For Blazor WebAssembly -- the client-side component of Blazor, which allows for C#-based web development instead of JavaScript -- some training and inferencing is possible on .NET 6, currently in preview. It has the same limitations as those listed for ARM above, plus two more:

  • You must currently set the EnableMLUnsupportedPlatformTargetCheck flag to false to install ML.NET in a Blazor project (see the project-file sketch after this list).
  • LDA and Matrix Factorization are not supported.
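As a sketch of what that first requirement looks like in practice, EnableMLUnsupportedPlatformTargetCheck appears to be a project-file (MSBuild) property, so a Blazor WebAssembly project would presumably carry something like the following. Only the property name comes from the announcement; the SDK, target framework and package version shown are assumptions.

    <!-- Hypothetical Blazor WebAssembly .csproj sketch; only the
         EnableMLUnsupportedPlatformTargetCheck property name is from the announcement. -->
    <Project Sdk="Microsoft.NET.Sdk.BlazorWebAssembly">
      <PropertyGroup>
        <TargetFramework>net6.0</TargetFramework>
        <!-- Opt out of the platform-support check so the ML.NET packages install. -->
        <EnableMLUnsupportedPlatformTargetCheck>false</EnableMLUnsupportedPlatformTargetCheck>
      </PropertyGroup>
      <ItemGroup>
        <PackageReference Include="Microsoft.ML" Version="1.5.5" />
      </ItemGroup>
    </Project>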

According to the release notes for the new update (ML.NET v1.5.5), other new features include:

  • New API allowing the confidence parameter to be a double (#5623). The confidence level can now be passed as a double, which helps when you need more precision than an int allows.
  • Support for exporting the ValueMapping estimator to ONNX was added (#5577); see the sketch after this list.
  • New API to treat TensorFlow output as batched or not batched (#5634), letting you specify whether the output from a TensorFlow model should be treated as batched.
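As an illustration of the ValueMapping item above, the following sketch fits a value-mapping transform and writes it out in ONNX format. The category names, numeric codes and file path are made up, and the ConvertToOnnx call assumes the Microsoft.ML.OnnxConverter package is referenced.

    using System.Collections.Generic;
    using System.IO;
    using Microsoft.ML;

    public class CategoryInput
    {
        public string Category { get; set; }
    }

    public static class ExportValueMappingToOnnx
    {
        public static void Run()
        {
            var mlContext = new MLContext();

            var data = mlContext.Data.LoadFromEnumerable(new[]
            {
                new CategoryInput { Category = "Low" },
                new CategoryInput { Category = "Medium" },
                new CategoryInput { Category = "High" }
            });

            // ValueMapping: map each category string to a numeric code.
            var mapping = new Dictionary<string, float>
            {
                ["Low"] = 0f,
                ["Medium"] = 1f,
                ["High"] = 2f
            };
            var pipeline = mlContext.Transforms.Conversion.MapValue(
                "CategoryCode", mapping, nameof(CategoryInput.Category));

            var model = pipeline.Fit(data);

            // Export the fitted transform to ONNX (requires Microsoft.ML.OnnxConverter).
            using (var stream = File.Create("value-mapping.onnx"))
            {
                mlContext.Model.ConvertToOnnx(model, data, stream);
            }
        }
    }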

The release notes also detail many bug fixes and documentation updates.

For the Model Builder component specifically, features previously available only in preview are now publicly available, including:

  • Config-based training with generated code-behind files
  • Restructured Advanced Data Options
  • Redesigned Consume step

AutoML, used to automate the time-consuming, iterative tasks of machine learning model development, has also been improved in cooperation with Microsoft Research. Generally, that partnership is expected to provide these benefits:

  • Enabling AutoML support for all ML.NET scenarios
  • Allowing more precise control over the hyperparameter search space
  • Enabling more training environments, including local, Azure, and on-prem distributed training
  • Opening up future collaborations on advanced ML tech, like Neural Architecture Search (NAS)
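For a sense of what AutoML in ML.NET does today, the current API runs a timed sweep of trainers and hyperparameters and hands back the best model, roughly as sketched below. The HousingRow type, file names, label column and 60-second budget are illustrative, and the calls assume the Microsoft.ML.AutoML package is referenced.

    using Microsoft.ML;
    using Microsoft.ML.AutoML;
    using Microsoft.ML.Data;

    // Illustrative schema -- not from the announcement.
    public class HousingRow
    {
        [LoadColumn(0)] public float Size { get; set; }
        [LoadColumn(1)] public float Price { get; set; }
    }

    public static class AutoMLSketch
    {
        public static void Run()
        {
            var mlContext = new MLContext();
            var data = mlContext.Data.LoadFromTextFile<HousingRow>(
                "houses.csv", hasHeader: true, separatorChar: ',');

            // AutoML tries multiple trainers and hyperparameter settings
            // within the given time budget and tracks the best run.
            var experiment = mlContext.Auto()
                .CreateRegressionExperiment(maxExperimentTimeInSeconds: 60);
            var result = experiment.Execute(
                data, labelColumnName: nameof(HousingRow.Price));

            System.Console.WriteLine(
                $"Best trainer: {result.BestRun.TrainerName}, " +
                $"R^2: {result.BestRun.ValidationMetrics.RSquared:0.###}");

            // The best model is an ordinary ML.NET model and can be saved as usual.
            ITransformer bestModel = result.BestRun.Model;
            mlContext.Model.Save(bestModel, data.Schema, "automl-model.zip");
        }
    }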

Among the first concrete improvements are better benchmark results, which show an increased number of models explored for different training times, along with gains in other metrics.

The announcement post also includes results of a survey conducted by the dev team, revealing -- among many other things -- that the biggest blockers, pain points and challenges respondents reported for using ML.NET include:

  • Small ML.NET community
  • Docs and samples (quantity, quality, real world)
  • Insufficient deep learning support
  • Specific ML scenario or algorithm not supported by ML.NET
  • Afraid Microsoft will abandon it

The post details how Microsoft is planning to address those pain points.

About the Author

David Ramel is an editor and writer at Converge 360.

