NIPS 2017 presentations from the Optimization session
The Marginal Value of Adaptive Gradient Methods in Machine Learning
Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization
Stochastic Optimization with Variance Reduction for Infinite Datasets with Finite Sum Structure
Process-constrained batch Bayesian optimization
Safe Adaptive Importance Sampling
Beyond Worst-case: A Probabilistic Analysis of Affine Policies in Dynamic Optimization
Straggler Mitigation in Distributed Optimization Through Data Encoding