“统计大讲堂” (Statistics Lecture Hall) Talk No. 159 Preview: A Gradient-Matching Approach to Dataset Condensation
2021-06-10
Time: 16:00–17:00, June 14, 2021
Venue: Tencent Meeting (Meeting ID: 875 747 618)
Speaker: Bo Zhao
Title: Dataset Condensation with Gradient Matching
Abstract
In this talk, I will present our recent work on dataset condensation for data-efficient learning. As state-of-the-art machine learning methods in many fields rely on ever-larger datasets, storing those datasets and training models on them has become significantly more expensive. We propose a training-set synthesis technique for data-efficient learning, called Dataset Condensation, which learns to condense a large dataset into a small set of informative synthetic samples for training deep neural networks from scratch. We formulate this goal as a gradient matching problem between the gradients of deep neural network weights trained on the original data and on our synthetic data. We rigorously evaluate its performance on several computer vision benchmarks and demonstrate that it significantly outperforms state-of-the-art methods. Finally, we explore the use of our method in continual learning and neural architecture search, and report promising gains when memory and computation are limited.
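To make the gradient-matching idea concrete, here is a minimal NumPy sketch. It is not the talk's actual algorithm (which matches layer-wise gradients of deep networks trained on images); instead it uses a toy linear model with an MSE loss, where the gradients of the matching objective can be derived analytically. All sizes, names, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" training set: a large linear-regression dataset, standing in
# for the big dataset one would condense. Sizes are illustrative.
n, dim, m = 500, 5, 10               # n real points, m synthetic points
w_star = rng.normal(size=dim)
X = rng.normal(size=(n, dim))
y = X @ w_star + 0.1 * rng.normal(size=n)

# Learnable synthetic set (random initialization).
Xs = rng.normal(size=(m, dim))
ys = rng.normal(size=m)

def grad_w(Xd, yd, w):
    """Gradient of the mean-squared-error loss wrt model weights w."""
    return 2.0 / len(yd) * Xd.T @ (Xd @ w - yd)

def mismatch(w):
    """Squared norm of the real-vs-synthetic gradient difference at w."""
    return float(np.sum((grad_w(Xs, ys, w) - grad_w(X, y, w)) ** 2))

w_probe = np.full(dim, 0.5)          # fixed probe point for monitoring
before = mismatch(w_probe)

# Outer loop: sample a random weight vector, then update (Xs, ys) by
# gradient descent on the squared gradient mismatch at that point.
# The gradients below are derived analytically for this linear/MSE case.
lr = 0.02
for _ in range(500):
    w = rng.normal(size=dim)         # random model initialization
    g_real = grad_w(X, y, w)
    r = Xs @ w - ys                  # synthetic residuals
    d = 2.0 / m * Xs.T @ r - g_real  # gradient mismatch at w
    gXs = 4.0 / m * (np.outer(r, d) + np.outer(Xs @ d, w))
    gys = -4.0 / m * (Xs @ d)
    Xs -= lr * gXs
    ys -= lr * gys

after = mismatch(w_probe)
```

After training, the ten synthetic points induce nearly the same weight gradients as the five hundred real ones, so a model trained from scratch on the synthetic set follows a similar optimization trajectory. The actual method plays this game with deep networks, sampling initializations and matching gradients layer by layer.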
Speaker Bio
Bo Zhao is a PhD student supervised by Dr. Hakan Bilen at the School of Informatics, The University of Edinburgh. His research interests include machine learning and computer vision, in particular efficient deep learning, meta-learning, and continual learning. He has published papers at ICLR ’21 (Oral), ICML ’21 and ’18, ACM TOG ’18, SIGGRAPH Asia ’16, and WACV ’21 and ’19, among others. He has served as a reviewer for NeurIPS ’21 and ’20, ICLR ’21, ICML ’21, CVPR ’21 and ’20, ICCV ’21 and ’19, ECCV ’20, AAAI ’20, IEEE TNNLS, and IEEE TMC, among others.