News & Announcements


Preview of Talk 261 in the "Statistics Lecture Hall" Series: Mini-batch Gradient Descent with Buffer

2024-10-14

Time: Friday, October 18, 2024, 10:00–11:00 a.m.

Venue: Room 1016, Mingde Main Building, Renmin University of China

Speaker: Haobo Qi (亓颢博)

Title: Mini-batch Gradient Descent with Buffer

Abstract

Mini-batch Gradient Descent with Buffer

In this paper, we study a buffered mini-batch gradient descent (BMGD) algorithm for training complex models on massive datasets. The algorithm is designed for fast training on a GPU-CPU system and consists of two steps: a buffering step and a computation step. In the buffering step, a large batch of data (i.e., a buffer) is loaded from the hard drive into the GPU's memory. In the computation step, a standard mini-batch gradient descent (MGD) algorithm is applied to the buffered data. Compared to the traditional MGD algorithm, the proposed BMGD algorithm can be more efficient for two reasons. First, BMGD uses the buffered data for multiple rounds of gradient updates, which reduces the expensive communication cost from the hard drive to GPU memory. Second, the buffering step can be executed in parallel, so the GPU does not sit idle while new data are loaded. We first investigate the theoretical properties of the BMGD algorithm in a linear regression setting. The analysis is then extended to the Polyak-Lojasiewicz loss function class. The theoretical claims about the BMGD algorithm are verified numerically by simulation studies, and the practical usefulness of the proposed method is demonstrated by three image-related real-data analyses.
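The two-step structure described in the abstract can be illustrated with a minimal sketch in the linear regression setting the talk's theory covers. This is not the authors' implementation; the function name, hyperparameters, and the use of NumPy arrays (standing in for hard-drive-to-GPU transfers) are illustrative assumptions. The outer loop loads one large buffer at a time; the inner loops run several passes of ordinary mini-batch gradient descent over that buffer before the next buffer is loaded.

```python
import numpy as np

def bmgd_linear_regression(X, y, buffer_size=512, batch_size=32,
                           passes_per_buffer=4, lr=0.01, epochs=5, seed=0):
    """Illustrative buffered mini-batch gradient descent (BMGD) sketch
    for least-squares linear regression (hypothetical implementation)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, buffer_size):
            # Buffering step: one bulk load of a large batch (the "buffer");
            # in the real system this is the hard-drive-to-GPU transfer.
            idx = order[start:start + buffer_size]
            Xb, yb = X[idx], y[idx]
            # Computation step: several MGD passes reuse the buffered data,
            # amortizing the cost of the bulk load over many updates.
            for _ in range(passes_per_buffer):
                for b in range(0, len(idx), batch_size):
                    Xm, ym = Xb[b:b + batch_size], yb[b:b + batch_size]
                    grad = 2.0 * Xm.T @ (Xm @ beta - ym) / len(ym)
                    beta -= lr * grad
    return beta
```

For example, on simulated data `y = X @ beta_true + noise` the sketch recovers `beta_true` to within the noise level; setting `passes_per_buffer=1` reduces it to plain MGD with chunked loading.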

About the Speaker


Haobo Qi (亓颢博) received his Ph.D. in Economics from the Guanghua School of Management, Peking University, in 2023, and is currently a postdoctoral fellow in the Department of Mathematical Statistics, School of Statistics, Beijing Normal University. His research interests include statistical optimization, statistical analysis of large-scale data, network data analysis, and statistical theory for deep learning. He has published papers in journals including Journal of Computational and Graphical Statistics, Neurocomputing, and Computational Statistics & Data Analysis.