Book
Proceedings of Machine Learning and Systems 2 (MLSys 2020)
Edited by:
I. Dhillon, D. Papailiopoulos, and V. Sze
- Resource Elasticity in Distributed Deep Learning Andrew Or, Haoyu Zhang, Michael Freedman
- MLPerf Training Benchmark Peter Mattson, Christine Cheng, Gregory Diamos, Cody Coleman, Paulius Micikevicius, David Patterson, Hanlin Tang, Gu-Yeon Wei, Peter Bailis, Victor Bittorf, David Brooks, Dehao Chen, Debo Dutta, Udit Gupta, Kim Hazelwood, Andy Hock, Xinyuan Huang, Daniel Kang, David Kanter, Naveen Kumar, Jeffery Liao, Deepak Narayanan, Tayo Oguntebi, Gennady Pekhimenko, Lillian Pentecost, Vijay Janapa Reddi, Taylor Robie, Tom St John, Carole-Jean Wu, Lingjie Xu, Cliff Young, Matei Zaharia
- Checkmate: Breaking the Memory Wall with Optimal Tensor Rematerialization Paras Jain, Ajay Jain, Aniruddha Nrusimha, Amir Gholami, Pieter Abbeel, Joseph Gonzalez, Kurt Keutzer, Ion Stoica
- Automatically batching control-intensive programs for modern accelerators Alexey Radul, Brian Patton, Dougal Maclaurin, Matthew Hoffman, Rif A. Saurous
- PLink: Discovering and Exploiting Locality for Accelerated Distributed Training on the Public Cloud Liang Luo, Peter West, Jacob Nelson, Arvind Krishnamurthy, Luis Ceze
- Attention-based Learning for Missing Data Imputation in HoloClean Richard Wu, Aoqian Zhang, Ihab Ilyas, Theodoros Rekatsinas
- Riptide: Fast End-to-End Binarized Neural Networks Joshua Fromm, Meghan Cowan, Matthai Philipose, Luis Ceze, Shwetak Patel
- PoET-BiN: Power Efficient Tiny Binary Neurons Sivakumar Chidambaram, Pierre Langlois, Jean-Pierre David
- Federated Optimization in Heterogeneous Networks Tian Li, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, Virginia Smith
- MotherNets: Rapid Deep Ensemble Learning Abdul Wasay, Brian Hentschel, Yuze Liao, Sanyuan Chen, Stratos Idreos
- Privacy-Preserving Bandits Mohammad Malekzadeh, Dimitrios Athanasakis, Hamed Haddadi, Ben Livshits
- Blink: Fast and Generic Collectives for Distributed ML Guanhua Wang, Shivaram Venkataraman, Amar Phanishayee, Nikhil Devanur, Jorgen Thelin, Ion Stoica
- Searching for Winograd-aware Quantized Networks Javier Fernandez-Marques, Paul Whatmough, Andrew Mundy, Matthew Mattina
- AutoPhase: Juggling HLS Phase Orderings in Random Forests with Deep Reinforcement Learning Ameer Haj-Ali, Qijing (Jenny) Huang, John Xiang, William Moses, Krste Asanovic, John Wawrzynek, Ion Stoica
- SLIDE: In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale Deep Learning Systems Beidi Chen, Tharun Medini, James Farwell, Sameh Gobriel, Charlie Tai, Anshumali Shrivastava
- MNN: A Universal and Efficient Inference Engine Xiaotang Jiang, Huan Wang, Yiliu Chen, Ziqi Wu, Lichuan Wang, Bin Zou, Yafeng Yang, Zongyang Cui, Yu Cai, Tianhang Yu, Chengfei Lyu, Zhihua Wu
- Predictive Precompute with Recurrent Neural Networks Hanson Wang, Zehui Wang, Yuanyuan Ma
- OPTIMUS: OPTImized matrix MUltiplication Structure for Transformer neural network accelerator Junki Park, Hyunsung Yoon, Daehyun Ahn, Jungwook Choi, Jae-Joon Kim
- SkyNet: a Hardware-Efficient Method for Object Detection and Tracking on Embedded Systems Xiaofan Zhang, Haoming Lu, Cong Hao, Jiachen Li, Bowen Cheng, Yuhong Li, Kyle Rupnow, Jinjun Xiong, Thomas Huang, Honghui Shi, Wen-Mei Hwu, Deming Chen
- BPPSA: Scaling Back-propagation by Parallel Scan Algorithm Shang Wang, Yifan Bai, Gennady Pekhimenko
- Memory-Driven Mixed Low Precision Quantization for Enabling Deep Network Inference on Microcontrollers Manuele Rusci, Alessandro Capotondi, Luca Benini
- Ordering Chaos: Memory-Aware Scheduling of Irregularly Wired Neural Networks for Edge Devices Byung Hoon Ahn, Jinwon Lee, Jamie Menjay Lin, Hsin-Pai Cheng, Jilei Hou, Hadi Esmaeilzadeh
- Model Assertions for Monitoring and Improving ML Models Daniel Kang, Deepti Raghavan, Peter Bailis, Matei Zaharia
- A Systematic Methodology for Analysis of Deep Learning Hardware and Software Platforms Yu Wang, Gu-Yeon Wei, David Brooks
- Sense & Sensitivities: The Path to General-Purpose Algorithmic Differentiation Mike Innes
- Understanding the Downstream Instability of Word Embeddings Megan Leszczynski, Avner May, Jian Zhang, Sen Wu, Christopher Aberger, Christopher Re
- What is the State of Neural Network Pruning? Davis Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
- Trained Quantization Thresholds for Accurate and Efficient Fixed-Point Inference of Deep Neural Networks Sambhav Jain, Albert Gural, Michael Wu, Chris Dick
- FLEET: Flexible Efficient Ensemble Training for Heterogeneous Deep Neural Networks Hui Guan, Laxmikant Kishor Mokadam, Xipeng Shen, Seung-Hwan Lim, Robert Patton
- A System for Massively Parallel Hyperparameter Tuning Liam Li, Kevin Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Jonathan Ben-tzur, Moritz Hardt, Benjamin Recht, Ameet Talwalkar
- Fine-Grained GPU Sharing Primitives for Deep Learning Applications Peifeng Yu, Mosharaf Chowdhury
- Distributed Hierarchical GPU Parameter Server for Massive Scale Deep Learning Ads Systems Weijie Zhao, Deping Xie, Ronglai Jia, Yulei Qian, Ruiquan Ding, Mingming Sun, Ping Li
- Willump: A Statistically-Aware End-to-end Optimizer for Machine Learning Inference Peter Kraft, Daniel Kang, Deepak Narayanan, Shoumik Palkar, Peter Bailis, Matei Zaharia
- Improving the Accuracy, Scalability, and Performance of Graph Neural Networks with Roc Zhihao Jia, Sina Lin, Mingyu Gao, Matei Zaharia, Alex Aiken