Tomorrow's intelligence must live where the data is born - on tiny cameras, wearables, drones, and factory sensors - yet still behave as reliably as a cloud model. That dual mandate demands three things at once: compression (shrinking models without sacrificing accuracy), robustness (adapting on the fly and flagging out-of-distribution inputs), and federation (learning across many edge devices by sharing experience rather than raw data). Our research sits at the intersection of these three requirements.
HypeMeFed: Federated learning with resource heterogeneity
MEBQAT: Meta-learning for adaptive quantization
UpCycling, ConcreTizer: 3D object detection for autonomous vehicles
FedSIM: Semi-supervised, meta federated learning
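To make the federation requirement above concrete - devices share model updates, never raw data - the sketch below runs plain federated averaging over a toy linear model in Python. The function names, model, and device data are illustrative assumptions only and are not taken from HypeMeFed, FedSIM, or any other project listed here.

# Minimal federated-averaging sketch: each device trains locally on its
# private data, and only the resulting weights are sent to the server.
import numpy as np

def local_update(weights, local_data, lr=0.1, epochs=1):
    # Toy linear model trained with squared-error gradient steps.
    w = weights.copy()
    for _ in range(epochs):
        for x, y in local_data:
            pred = w @ x
            w -= lr * (pred - y) * x
    return w  # only weights leave the device, never the (x, y) pairs

def federated_average(global_weights, device_datasets):
    # Server aggregates per-device weights by simple averaging.
    updates = [local_update(global_weights, data) for data in device_datasets]
    return np.mean(updates, axis=0)

# Illustrative setup: three devices, each holding eight private samples.
rng = np.random.default_rng(0)
devices = [[(rng.normal(size=4), rng.normal()) for _ in range(8)] for _ in range(3)]
w = np.zeros(4)
for _ in range(5):  # five communication rounds
    w = federated_average(w, devices)

Real systems replace the toy model with deep networks, weight the average by each device's data size, and layer compression and robustness mechanisms on top, but the data-locality principle is the same.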
Real-world autonomy begins where pixels meet physics. Our Embodied AI research focuses on making machine perception and decision-making robust to the messiness of the dynamic physical world - changing viewpoints, lighting, sensor noise, and never-before-seen objects. Robotic manipulators, AR devices, and mobile robots must perceive, reason, and act on-device and in real time; building systems that continuously sense, adapt, and act under such variability is the central challenge of embodied intelligence.
ImageNet-ES/-Diverse: Large-scale dataset capturing real camera shifts
Lens: Test-time sensor adaptation
MIRROR: On-device generative AI for fashion
PointSplit: On-device 3D object detection
MARVEL / SnapLink: Cloud-edge joint design for Mixed Reality
Sleep and respiratory disorders quietly undermine cognition, productivity, and cardiovascular health in nearly one billion people worldwide, yet gold-standard diagnostics like overnight polysomnography remain expensive, capacity-limited, and intrusive. Our vision is to democratize high-fidelity sleep and breathing analytics by moving clinically credible AI from the hospital lab to the patient's bedside - and eventually to contact-free, everyday environments.