New Open Source ONNX Runtime Web Does Machine Learning Modeling in Browser

Microsoft has introduced a new feature for the open source ONNX Runtime machine learning model accelerator that runs JavaScript-based ML models in browsers.

The new ONNX Runtime Web (ORT Web) was introduced this month as a new feature for the cross-platform ONNX Runtime used to optimize and accelerate ML inferencing and training. It's all part of the ONNX (Open Neural Network Exchange) ecosystem that serves as an open standard for ML interoperability.

Microsoft says ONNX Runtime inference can enable faster customer experiences and lower costs, as it supports models from deep learning frameworks such as PyTorch and TensorFlow/Keras along with classical ML libraries like scikit-learn, LightGBM, XGBoost and more. It's compatible with different hardware, drivers and OSes, providing optimal performance by leveraging hardware accelerators where applicable, alongside graph optimizations and transforms.

The JavaScript-based ORT Web isn't a brand-new product, however; it replaces Microsoft's ONNX.js JavaScript library for running ONNX models in browsers and on Node.js, promising an enhanced user experience and improved performance.

Specifically, the new tool is said to provide a more consistent developer experience between packages for server-side and client-side inferencing, along with improved inference performance and model coverage. Key to all that is a two-pronged back-end approach that accelerates model inference in the browser on both the CPU and the GPU. On the CPU side, a WebAssembly back end lets the runtime's native C++ code execute in the browser rather than JavaScript; on the GPU side, the runtime leverages WebGL, a widely supported standard for accessing GPU capabilities from the browser.
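The two back ends surface in the library as selectable execution providers. The following is a minimal sketch using the `onnxruntime-web` npm package; the model path, the input name `input`, and the tensor shape are placeholders to be replaced with whatever your ONNX model actually declares.

```javascript
// Minimal sketch of in-browser inference with ORT Web (npm: onnxruntime-web).
// NOTE: './model.onnx', the input name 'input', and the shape [1, 3, 224, 224]
// are illustrative placeholders, not values from a specific model.
import * as ort from 'onnxruntime-web';

async function run() {
  // Choose a back end: 'webgl' targets the GPU via WebGL,
  // 'wasm' targets the CPU via WebAssembly.
  const session = await ort.InferenceSession.create('./model.onnx', {
    executionProviders: ['webgl', 'wasm'], // try GPU first, fall back to CPU
  });

  // Feed a dummy float32 tensor; a real app would fill this from
  // an image, audio buffer, or other user data.
  const data = new Float32Array(1 * 3 * 224 * 224);
  const input = new ort.Tensor('float32', data, [1, 3, 224, 224]);

  // run() resolves to a map from output names to ort.Tensor objects.
  const results = await session.run({ input });
  console.log(results);
}

run();
```

Listing both providers in order gives a graceful fallback: if WebGL isn't available in the user's browser, the session is created with the WebAssembly back end instead.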

[Figure: ORT Web Overview (source: Microsoft).]

"Running machine-learning-powered web applications in browsers has drawn a lot of attention from the AI community," Microsoft said in an announcement early this month. "It is challenging to make native AI applications portable to multiple platforms given the variations in programming languages and deployment environments. Web applications can easily enable cross-platform portability with the same implementation through the browser. Additionally, running machine learning models in browsers can accelerate performance by reducing server-client communications and simplify the distribution experience without needing any additional libraries and driver installations."

Microsoft is soliciting developer feedback on the project, which can be provided on the ONNX Runtime GitHub repo. As the company continues to improve the tool's performance and model coverage and to introduce new features, one possible enhancement under consideration is on-device model training.

About the Author

David Ramel is an editor and writer for Converge360.


