SenseTime and Qualcomm to develop onboard AI and ML for devices

Image credit: chombosan / Shutterstock.com

Chinese artificial intelligence firm SenseTime Group and Qualcomm Technologies announced plans to collaborate on developing onboard AI and machine learning (ML) for future mobile and IoT products.

The collaboration pairs SenseTime’s ML models and algorithms with Qualcomm’s premium- and high-tier Snapdragon platforms, which Qualcomm says offer advanced heterogeneous computing capabilities for client-based AI.

“To develop an AI ecosystem, it takes efforts from players in multiple industries,” said Dr Li Xu, co-founder and chief executive officer of SenseTime.

Currently, Qualcomm Technologies is focused on optimizing the Snapdragon mobile platform to accelerate a wide range of AI use cases in computer vision and natural language processing – for smartphones, IoT and automotive – and is researching broader applications in wireless connectivity, power management, and photography.

SenseTime has built a proprietary deep learning platform called Parrots that makes it possible to innovate and develop a variety of algorithms at low cost and with quick turnaround.

The companies expect to drive the popularity and development of on-device AI in areas such as vision and camera-based image processing.

Devices such as smartphones and connected cameras are becoming more intelligent as AI proliferates on the device itself. On-device AI offers a number of advantages over cloud-only implementations, chief among them reliable execution with or without a network connection; additional benefits include real-time performance and privacy protection.
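To illustrate the on-device-first pattern described above, here is a minimal Python sketch. It assumes nothing about SenseTime’s or Qualcomm’s actual SDKs; every function in it is a hypothetical placeholder used only to show why local inference keeps working when the cloud is unreachable.

```python
# Minimal, self-contained sketch of the on-device-first pattern described above.
# Every function here is a hypothetical placeholder, not a SenseTime or Qualcomm API.
from typing import Callable, Optional


def load_local_model() -> Optional[Callable[[bytes], str]]:
    """Stand-in for a quantized model bundled with the device."""
    return lambda data: "cat" if len(data) % 2 == 0 else "dog"


def has_network() -> bool:
    """Stand-in connectivity check; an edge device cannot assume this is True."""
    return False


def cloud_classify(data: bytes) -> str:
    """Stand-in for a cloud endpoint: higher latency, and the data leaves the device."""
    return "cat"


def classify_image(data: bytes) -> str:
    """Prefer on-device inference; fall back to the cloud only when it is reachable."""
    model = load_local_model()
    if model is not None:
        # Runs with or without a connection, and the image never leaves the device.
        return model(data)
    if has_network():
        return cloud_classify(data)
    raise RuntimeError("no local model and no network connection available")


print(classify_image(b"\x00" * 8))  # on-device path, even though has_network() is False
```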

“Many devices shipping today using our Snapdragon mobile platforms already utilize on-device AI,” said Keith Kressin, senior vice president of product management for Qualcomm Technologies.

The Huawei P10 smartphone, Apple iPhone 8/8+, and upcoming Apple iPhone X are examples of consumer products that perform on-device machine learning.

According to ABI Research, on-device ML – also known as edge processing or edge learning – is the fastest-growing segment of AI. In fact, announcements and early AI/ML-enabled device shipments this year have sent chipset companies scrambling to develop their own solutions and invest heavily in AI technologies now, for fear of being left behind as AI moves beyond news headlines to practical application and market interest.

While many of the announcements will not be seen in commercially available products for one to three years, “The momentum in the AI vendor hardware ecosystem and venture capital investments demonstrates how AI technologies are revolutionizing engagements between humans and machine systems at work and home,” says Jeff Orr, research director at ABI Research.

ABI Research forecasts that 34 million smartphones will utilize on-device learning in 2017. Other products with on-device machine learning this year include Bragi wireless earbuds and possibly the recently announced Google Clips smart home camera.

ABI also says that only about 3% of AI processing on active devices in 2017 will be done on-device, with the rest handled in the cloud. By 2022, nearly 49% of active AI-enabled devices (2.7 billion units) will perform data learning on-device, the largest share of AI implementations.
