BEGIN ARTICLE PREVIEW:
PyTorch has recently released four new prototype features. The first three enable mobile machine-learning developers to execute models on the full set of hardware (HW) engines that make up a system-on-chip (SoC). This lets developers tune model execution for the performance, power, and system-level concurrency trade-offs of each device.
The new features enable on-device execution on the following HW engines:

- DSPs and NPUs via the Android Neural Networks API (NNAPI), developed in collaboration with the Google Android team
- GPU execution on Android via Vulkan
- GPU execution on iOS via Metal
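Whichever backend is targeted, a model is first exported through PyTorch Mobile. The sketch below shows a minimal export flow; the model and file names are illustrative, and the `backend` argument mentioned in the comment is an assumption based on later releases of `optimize_for_mobile`, not something this article guarantees:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# A tiny stand-in model; real deployments would use a trained network.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.ReLU(),
).eval()

# Compile to TorchScript, then apply mobile-specific optimizations.
scripted = torch.jit.script(model)
# Newer releases also accept backend="vulkan" or backend="metal" here
# to prepare GPU-specific artifacts; the default targets CPU.
optimized = optimize_for_mobile(scripted)

# Save in the lite-interpreter format consumed by PyTorch Mobile runtimes.
optimized._save_for_lite_interpreter("model_mobile.ptl")
print("exported model_mobile.ptl")
```

The saved `.ptl` file is then bundled into the Android or iOS app and loaded by the mobile runtime.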
ARM usage is growing in the PyTorch community, on platforms ranging from the Raspberry Pi to AWS Graviton(2). Hence, the new release also improves developer efficiency with recently launched support for ARM64 builds on Linux.
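A quick way to check whether a host is covered by the new ARM64 Linux builds is to inspect the platform at runtime; machines reporting `aarch64` (e.g. Graviton2 instances or 64-bit Raspberry Pi OS) are the intended targets:

```python
import platform

# The ARM64 Linux wheels target hosts reporting "aarch64" here.
arch = platform.machine()
system = platform.system()
print(f"{system}/{arch}")
if system == "Linux" and arch == "aarch64":
    print("eligible for the ARM64 Linux builds")
```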
NNAPI Support with Google Android
PyTorch’s collaboration with the Google Android team enables Android’s Neural Networks API (NNAPI) via PyTorch Mobile. On-device machine learning runs ML models locally on the device without transmitting data to a server, offering lower latency, improved privacy, and resilience to poor connectivity. The Android Neural Networks API (NNAPI) is designed for running computationally intensive machine-learning workloads on Android devices. Thus, machine learning models can now …
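As a rough sketch of the NNAPI path described above: PyTorch ships a prototype converter that takes a traced model and produces an NNAPI-ready artifact. The model, file name, and the exact module path of the converter are assumptions here (it is a prototype API whose location has shifted across releases), so the conversion is guarded and skipped if unavailable:

```python
import torch

class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))

model = TinyNet().eval()
# NNAPI paths generally expect channels-last (NHWC) input tensors.
example = torch.randn(1, 3, 224, 224).contiguous(
    memory_format=torch.channels_last
)
traced = torch.jit.trace(model, example)

try:
    # Prototype API: module path and signature may differ by release.
    from torch.backends._nnapi.prepare import convert_model_to_nnapi
    nnapi_model = convert_model_to_nnapi(traced, example)
    nnapi_model._save_for_lite_interpreter("tinynet_nnapi.ptl")
    status = "converted"
except Exception as exc:  # converter absent, or an op is unsupported
    status = f"skipped: {exc}"
print(status)
```

The resulting file is loaded by the PyTorch Mobile runtime on Android, which dispatches supported operators to NNAPI-backed DSPs and NPUs.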
END ARTICLE PREVIEW