NIPS 2017 presentations from the Optimization session
On the Optimization Landscape of Tensor Decompositions
Robust Optimization for Non-Convex Objectives
Bayesian Optimization with Gradients
Gradient Descent Can Take Exponential Time to Escape Saddle Points
Near-linear time approximation algorithms for optimal transport via Sinkhorn iteration
Limitations on Variance-Reduction and Acceleration Schemes for Finite Sums Optimization
Implicit Regularization in Matrix Factorization
Linear Convergence of a Frank-Wolfe Type Algorithm over Trace-Norm Balls
Acceleration and Averaging in Stochastic Descent Dynamics
When Cyclic Coordinate Descent Beats Randomized Coordinate Descent