Machine Intelligence and Information Learning Laboratory


In the Machine Intelligence and Information Learning laboratory (MIIL lab), we focus on developing novel machine learning algorithms that overcome the challenges facing future intelligent systems, going beyond current deep-learning techniques. Current deep learning algorithms are (i) dependent on massive amounts of training data, (ii) hard to generalize to unseen tasks, and (iii) unable to easily learn new concepts on top of previously learned knowledge. To tackle these problems, members of our lab are interested in the following topics: few-shot learning, meta-learning, lifelong learning (continual and incremental learning), meta-reinforcement learning, and causal learning.
Our research interests also include intelligent information and communication systems such as sixth-generation (6G) wireless communications. In beyond-5G/6G communication systems, resources such as storage, communication, and computing capabilities should be deployed on highly distributed and connected devices or servers to support intelligent services (autonomous driving, image recognition, natural language processing, etc.). We study the theoretical foundations of distributed learning systems, considering the trade-offs among communication, storage, and computing capabilities. We are also interested in developing advanced federated learning algorithms suitable for future communication systems.
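To make the federated learning setting concrete, the sketch below implements standard federated averaging (FedAvg-style training, not one of our lab's published algorithms) on a toy linear-regression problem: each client runs local gradient steps on its private data, and a server averages the resulting weights, weighted by client dataset size. All function names and the toy data are illustrative assumptions.

```python
import numpy as np

def local_update(weights, data, lr=0.1, epochs=1):
    """One client's local gradient descent on a toy linear model (MSE loss)."""
    w = weights.copy()
    X, y = data
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def fed_avg(global_w, client_data, rounds=10):
    """Federated averaging: clients train locally on private data;
    the server averages weights, weighting clients by sample count."""
    for _ in range(rounds):
        local_ws, sizes = [], []
        for data in client_data:
            local_ws.append(local_update(global_w, data))
            sizes.append(len(data[1]))
        global_w = np.average(local_ws, axis=0, weights=np.array(sizes, float))
    return global_w

# Toy setup: two clients whose data share the true model w* = [2, -1]
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for n in (40, 60):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true))

w = fed_avg(np.zeros(2), clients, rounds=50)
print(np.round(w, 2))  # converges toward [ 2. -1.]
```

In a real deployment, the averaging step is where the communication/computation trade-off appears: each round costs one model upload per client, which motivates the communication-efficient variants studied in our lab.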

Major research field

Meta-learning, few-shot learning, lifelong learning, distributed learning, federated learning, 6G communications

Desired field of research

Meta-learning, few-shot learning, lifelong learning, distributed learning, federated learning, 6G communications

Research Keywords and Topics

In a broad sense, my research direction is to develop novel learning algorithms that overcome the remaining challenges of deep-learning techniques. Current deep learning algorithms are (i) dependent on massive amounts of training data, (ii) hard to generalize to unseen tasks, and (iii) unable to easily learn new concepts on top of previously learned knowledge. To tackle these problems, I'm interested in the following topics: few-shot learning, meta-learning, lifelong learning (continual and incremental learning), meta-reinforcement learning, and causal learning.
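As a minimal illustration of the few-shot setting, the sketch below performs metric-based classification in the style of prototypical networks: each class is represented by the mean ("prototype") of its few labeled support examples, and queries are assigned to the nearest prototype. This is a generic textbook-style sketch with assumed fixed embeddings, not TapNet or XtarNet, which additionally learn task-adaptive projections of the embedding space.

```python
import numpy as np

def prototypes(support_x, support_y, n_classes):
    """Class prototype = mean embedding of that class's support examples."""
    return np.stack([support_x[support_y == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_x, protos):
    """Assign each query to the class with the nearest prototype
    under squared Euclidean distance."""
    d = ((query_x[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Toy 2-way 5-shot episode with two well-separated clusters
rng = np.random.default_rng(0)
c0 = rng.normal(loc=0.0, scale=0.3, size=(5, 2))  # class 0 support set
c1 = rng.normal(loc=3.0, scale=0.3, size=(5, 2))  # class 1 support set
support_x = np.vstack([c0, c1])
support_y = np.array([0] * 5 + [1] * 5)

protos = prototypes(support_x, support_y, n_classes=2)
queries = np.array([[0.1, -0.2], [2.9, 3.1]])
print(classify(queries, protos))  # → [0 1]
```

The point of the few-shot regime is that only five labeled examples per class are available, so the classifier must be built from simple statistics of the support set rather than trained from scratch.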

Research Publications

Sung Whan Yoon*, Do-Yeon Kim*, Jun Seo and Jaekyun Moon, "XtarNet: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning," Proceedings of the 37th International Conference on Machine Learning (ICML), Vienna, Austria, PMLR 119, 2020. *equal contribution.
Sung Whan Yoon, Jun Seo and Jaekyun Moon, "TapNet: Neural Network Augmented with Task-Adaptive Projection for Few-Shot Learning," Proceedings of the 36th International Conference on Machine Learning (ICML), Long Beach, CA, USA, PMLR 97:7115-7123, 2019.
Jy-yong Sohn, Beongjun Choi, Sung Whan Yoon and Jaekyun Moon, "Capacity of Clustered Distributed Storage," IEEE Transactions on Information Theory, vol. 65, no. 1, pp. 81-107, Jan. 2019.

Patents

[US2] Jaekyun Moon, Soonyoung Kang and Sung Whan Yoon, "Controller and operating method thereof," Notice of Allowance (NOA) Sep. 9, 2019, Application Number: US15601039.

[US1] Jaekyun Moon, Beongjun Choi and Sung Whan Yoon, "Controller, semiconductor memory system and operating method thereof," Registration Number: US 10,439,647, Oct. 8, 2019.