Can Decentralized Algorithms Outperform Centralized Algorithms?
Ji Liu (刘霁)
Ji Liu is currently an assistant professor in Computer Science, Electrical and Computer Engineering, and the Goergen Institute for Data Science at the University of Rochester (UR). He received his Ph.D., master's, and B.S. degrees from the University of Wisconsin-Madison, Arizona State University, and the University of Science and Technology of China, respectively. His research interests cover a broad scope of machine learning, optimization, and their applications in areas such as healthcare, bioinformatics, computer vision, and many other data-analysis-intensive fields. His recent research focuses on asynchronous parallel optimization, sparse learning theory and algorithms, reinforcement learning, structural model estimation, online learning, abnormal event detection, and feature/pattern extraction. He founded the machine learning and optimization group at UR. He received a Best Paper honorable mention at SIGKDD 2010 and the Facebook Best Student Paper Award at UAI 2015.
Most distributed machine learning systems today, including TensorFlow and CNTK, are built in a centralized fashion. One bottleneck of centralized algorithms lies in the high communication cost at the central node. Motivated by this, we ask: can decentralized algorithms be faster than their centralized counterparts? Although decentralized parallel stochastic gradient descent (D-PSGD) algorithms have been studied by the control community, existing analysis and theory do not show any advantage over centralized PSGD (C-PSGD) algorithms; they simply assume a scenario in which only the decentralized network is available. In this paper, we study a D-PSGD algorithm and provide the first theoretical analysis that indicates a regime in which decentralized algorithms might outperform centralized algorithms for distributed stochastic gradient descent. The reason is that D-PSGD has a total computational complexity comparable to C-PSGD but requires far less communication on the busiest node. We further conduct an empirical study to validate the theoretical analysis across multiple frameworks (CNTK and Torch), different network configurations, and computation platforms with up to 112 GPUs. On network configurations with low bandwidth or high latency, D-PSGD can be up to one order of magnitude faster than well-optimized centralized counterparts.
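To make the decentralized update concrete, below is a minimal single-machine NumPy simulation of the D-PSGD step the abstract describes: each worker gossip-averages its local model with its neighbors under a mixing matrix W, then takes a local stochastic gradient step, with no central parameter server. The ring topology, the toy quadratic objective, and all names (n_workers, stochastic_grad, etc.) are illustrative assumptions for this sketch, not the talk's implementation.

```python
import numpy as np

n_workers, dim, lr = 5, 10, 0.1
rng = np.random.default_rng(0)

# Symmetric, doubly stochastic mixing matrix for a ring topology:
# each worker averages with itself and its two neighbors (weight 1/3 each).
W = np.zeros((n_workers, n_workers))
for i in range(n_workers):
    W[i, i] = W[i, (i - 1) % n_workers] = W[i, (i + 1) % n_workers] = 1 / 3

x = rng.standard_normal((n_workers, dim))   # one local model per worker
target = rng.standard_normal(dim)           # toy objective: ||x_i - target||^2

def stochastic_grad(xi):
    # Gradient of the toy quadratic loss, plus noise standing in for
    # the randomness of minibatch sampling.
    return 2 * (xi - target) + 0.01 * rng.standard_normal(dim)

for step in range(200):
    grads = np.stack([stochastic_grad(x[i]) for i in range(n_workers)])
    # Core D-PSGD update: gossip-average with neighbors via W,
    # then apply a local gradient step.
    x = W @ x - lr * grads

print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))
print("distance to optimum:", np.linalg.norm(x.mean(axis=0) - target))
```

In this scheme each node only exchanges models with its graph neighbors per iteration, which is why the per-node (and in particular busiest-node) communication cost stays low compared with a central node that must talk to every worker.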
Time: 10:00 AM, June 25, 2017    Venue: Conference Room 208, Electrical Engineering Building 2 (电二楼), West Campus
Registration deadline: June 25, 2017    Capacity: 50