On-Device AI - What and Why? (introductory video)



Artificial Intelligence, Internet of Things, and Edge Computing, all happy together

- Small IoT devices, wearables, and smartphones not only sense their environment but also analyze the data and make decisions on their own by performing AI computations.

Less dependence on the cloud, more intimate interaction with users

- User data is consumed locally, which addresses both privacy and network bandwidth issues.

- A user receives a response to her actions in real time from a nearby edge device, instead of from a distant cloud.

Enablers

- Low-power AI accelerators, lightweight software platforms, model compression techniques, adaptive and federated learning all enable the magic to happen on the edge.

Adaptive, Compressed, Unsupervised and Federated ML

Training an ML model without sharing raw data or labels. After training, the model should be compressed and adapted to the target (test) domain.

The number of IoT sensors around us is growing rapidly, and diverse types of data are now born in edge devices. The data held by a single small edge device, however, is not large enough to train an ML model. Simply sending all raw data from the edge to the cloud could produce big data for training, but this raises privacy issues. We investigate how to build an efficient and secure distributed learning framework over multiple constrained data sources.
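The idea above can be sketched with federated averaging (FedAvg): each device trains locally on its own data and shares only model weights, which a server aggregates. This is a minimal illustrative sketch with a toy linear model and made-up data; it is not the group's actual framework.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient steps on a linear model.
    The raw data (X, y) never leaves the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """Server aggregates only model weights, weighted by each
    client's data size -- no raw data or labels are collected."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, float))

# Toy setup: 4 edge devices, each with a small private dataset.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):          # communication rounds
    w = federated_average(w, clients)
print(np.round(w, 2))        # approaches true_w
```

After a few communication rounds the global model converges close to the true weights, even though the server never sees any client's raw samples.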

On-device AI Systems and Applications

An ML model is a cool "module," but we want "holistic" application systems.

Recent technology developments have brought big data, the cloud, edge AI chips, IoT sensors, and new learning techniques. Given these powerful hammers, we investigate how to orchestrate them to enable interesting and useful applications on the edge. From this perspective, it is important to deeply analyze the characteristics of applications, sensors, and systems, and to decide when and where to execute AI algorithms for efficient operation without sacrificing user experience.
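The when/where decision can be illustrated with a simple latency model: run inference on the device if the estimated on-device compute time beats the cost of uploading the input and waiting for the cloud. The function name and all profile numbers below are hypothetical assumptions for illustration only.

```python
def choose_placement(model_flops, device_flops_per_s,
                     uplink_bytes, bandwidth_bytes_per_s,
                     cloud_latency_s):
    """Pick 'edge' or 'cloud' by comparing estimated latencies.
    A real system would also weigh energy, privacy, and accuracy."""
    on_device_s = model_flops / device_flops_per_s
    offload_s = uplink_bytes / bandwidth_bytes_per_s + cloud_latency_s
    return "edge" if on_device_s <= offload_s else "cloud"

# Small model, slow network: keep it on the device.
print(choose_placement(1e9, 1e9, 1e6, 1e6, 0.05))    # edge

# Large model, fast network: offload to the cloud.
print(choose_placement(1e10, 1e9, 1e5, 1e7, 0.05))   # cloud
```

In practice the decision would be profiled per application and sensor, which is exactly the kind of characterization the paragraph above calls for.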

Funded Projects

Ongoing

Completed