Tomorrow's intelligence must live where the data is born, on tiny cameras, wearables, drones, and factory sensors, yet still behave as reliably as a cloud model. That dual mandate demands three things at once: Compression (shrinking models without losing accuracy), Robustness (adapting on the fly and flagging out-of-distribution inputs), and Federation (learning across many edge devices by sharing experience rather than raw data). Our research sits at the intersection of these three requirements; a minimal federated-averaging sketch follows the project list below.
HypeMeFed: Federated learning with Resource Heterogeneity
MEBQAT: Meta-learning for adaptive quantization
UpCycling, ConcreTizer: 3D object detection for autonomous vehicles
FedSIM: Semi-supervised, meta federated learning
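To make the Federation pillar concrete, the sketch below illustrates plain federated averaging: each client refines the shared model on its own data and returns only weights, which the server combines weighted by local dataset size. It is an illustrative toy (NumPy, a linear model, hypothetical helper names such as local_update and fed_avg), not the HypeMeFed or FedSIM algorithm.

    # Toy federated-averaging round: clients share weights, never raw data.
    import numpy as np

    def local_update(weights, client_data, lr=0.01, epochs=1):
        """Client-side step: refine the global weights on local data only."""
        w = weights.copy()
        for _ in range(epochs):
            for x, y in client_data:
                grad = (w @ x - y) * x   # gradient of a toy squared loss
                w -= lr * grad
        return w

    def fed_avg(client_weights, client_sizes):
        """Server-side step: average client weights, weighted by dataset size."""
        total = sum(client_sizes)
        return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

    # One round over three simulated clients.
    rng = np.random.default_rng(0)
    global_w = np.zeros(4)
    clients = [[(rng.normal(size=4), rng.normal()) for _ in range(20)] for _ in range(3)]
    updates = [local_update(global_w, data) for data in clients]
    global_w = fed_avg(updates, [len(d) for d in clients])

In practice only the local update and the aggregation rule change across variants such as resource-heterogeneous or semi-supervised federated learning; the pattern of exchanging model updates instead of raw data stays the same.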
Real-world autonomy begins where pixels meet physics. Our Embodied AI research focuses on making machine perception and decision-making robust to the messiness of the dynamic physical world: changing viewpoints, lighting, sensor noise, and never-before-seen objects. Robotic manipulators, AR devices, and mobile robots must perceive, reason, and act on-device and in real time. Building systems that can continuously sense, adapt, and act under such variability is the central challenge of embodied intelligence; a minimal test-time adaptation sketch follows the project list below.
ImageNet-ES-Luminous/-Natural: Large-scale dataset capturing real camera shifts
Lens: Test-time sensor adaptation
MIRROR: On-device Generative AI for Fashion
PointSplit: On-device 3D object detection
MARVEL / SnapLink: Cloud-edge joint design for Mixed Reality
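As one concrete instance of on-device robustness, the sketch below shows a lightweight form of test-time adaptation: re-estimating feature normalization statistics from the incoming test batch so that features stay calibrated under camera or sensor shift. It is a minimal NumPy illustration under simple assumptions (a toy feature batch, hypothetical function names), not the Lens method itself.

    # Toy test-time adaptation: refresh normalization statistics on the fly.
    import numpy as np

    def adapt_norm_stats(features, running_mean, running_var, momentum=0.1):
        """Blend stored training statistics with the current test batch's
        statistics -- a lightweight response to sensor or illumination shift."""
        batch_mean = features.mean(axis=0)
        batch_var = features.var(axis=0)
        new_mean = (1 - momentum) * running_mean + momentum * batch_mean
        new_var = (1 - momentum) * running_var + momentum * batch_var
        return new_mean, new_var

    def normalize(features, mean, var, eps=1e-5):
        return (features - mean) / np.sqrt(var + eps)

    # Simulated shift: test-time features are brighter and noisier than training.
    rng = np.random.default_rng(0)
    train_mean, train_var = np.zeros(8), np.ones(8)
    test_batch = rng.normal(loc=0.8, scale=1.5, size=(32, 8))
    mean, var = adapt_norm_stats(test_batch, train_mean, train_var)
    calibrated = normalize(test_batch, mean, var)

A large gap between the stored statistics and those of the incoming batch can also serve as a cheap out-of-distribution signal before the model commits to a prediction.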
Sleep and respiratory disorders quietly undermine cognition, productivity, and cardiovascular health in nearly one billion people worldwide, yet gold-standard diagnostics like overnight polysomnography remain expensive, capacity-limited, and intrusive. Our vision is to democratize high-fidelity sleep and breathing analytics by moving clinically credible AI from the hospital lab to the patient’s bedside—and eventually to contact-free, everyday environments.
Ongoing
Center for Optimizing Hyperscale AI Models and Platforms, NRF Engineering Research Center (ERC), 2023.06 - 2030.02.
Joint Design of Application, Deep Learning and Systems for On-device Deep Video Understanding, PI, NRF Young Researcher (selected as Innovative Research Lab), 2023.03 - 2028.02.
Development of Beyond X-verse Core Technology for Hyper-realistic Interactions by Synchronizing the Real World and Virtual Space, IITP, 2023.01 - 2027.12.
Development of the Artificial Intelligence Technology to Enhance Individual Soldier Surveillance Capabilities, IITP, 2023.04 - 2026.12.
Interpretation of Sleep BioSignals based on Artificial Intelligence, SNU AI-Bio Convergence Research (ABC), 2023.03 - 2026.02.
Performance Optimization of Federated Learning via Efficient Client Selection in Open-Set Environments, PI, Samsung Electronics, 2024.09 - 2025.08.
Ambient Healthcare: IoT-based Personalized Edge AI System for Remote Patient Monitoring, PI, SNU Creative-Pioneering Researcher, 2022.08 - 2025.06.
Completed
Quantization-Aware Domain Adaptation for Neural Networks, PI, Google, 2023.10 - 2024.09.
Research on Federated Learning with Unclassifiable Data, PI, Samsung Electronics, 2023.09 - 2024.08.
Research on Distributed Learning and Extended-vision based 3D Object Detection Model for Autonomous Driving in 5G Networks, NRF Mid-career Researcher, 2020.09 - 2023.02.
Stabilization of Intelligent Marine Transportation System, MOF, 2022.08 - 2023.02.
Self-collected Sleep Data Construction for Monitoring Sleep and Pulmonary Disorders based on Artificial Intelligence, NIA Dataset Construction, 2022.05 - 2022.12.
Research on Floating Population Estimation using IoT Sensors on Electric Scooters, PI, MaaS Asia, 2021.07 - 2022.06.
Development of Structured/Unstructured Data Analysis Model for Advanced DA/BI Services, PI, Hyperlounge, 2021.10 - 2022.02.
Research on IoT-based Ambient Artificial Intelligence, PI, SNU New Faculty Startup, 2020.03 - 2021.12.
Development of Edge AI Model for SK Magic IoT Devices, SK Networks, 2020.11 - 2021.10.
Development of Real-time Detection Algorithm based on Time-series Data, PI, SK Hynix, 2020.12 - 2021.09.