Research


Working Papers

On the Infimal Sub-differential Size of the Primal-Dual Hybrid Gradient Method, with Jinwen Yang. [arXiv]

On the Sparsity of Optimal Linear Decision Rules in Robust Inventory Management, with Bradley Sturt. [arXiv]

From Online Optimization to PID Controllers: Mirror Descent with Momentum, with Santiago Balseiro, Vahab Mirrokni and Balasubramanian Sivan. [arXiv]

A J-Symmetric Quasi-Newton Method for Minimax Problems, with Azam Asl. [arXiv]

On the Linear Convergence of Extra-Gradient Methods for Nonconvex-Nonconcave Minimax Problems, Saeed Hajizadeh, Haihao Lu and Benjamin Grimmer. [arXiv]

Nearly Optimal Linear Convergence of Stochastic Primal-Dual Methods for Linear Programming, with Jinwen Yang. [arXiv]

Regularized Online Allocation Problems: Fairness and Beyond, with Santiago Balseiro and Vahab Mirrokni. (A preliminary version appeared in ICML 2021.) [arXiv]

Infeasibility Detection with Primal-Dual Hybrid Gradient for Large-Scale Linear Programming, with David Applegate, Mateo Diaz and Miles Lubin. [arXiv]

Approximate Leave-One-Out for High-Dimensional Non-Differentiable Learning Problems, Shuaiwen Wang, Wenda Zhou, Arian Maleki, Haihao Lu and Vahab Mirrokni. (A preliminary version appeared in ICML 2018.) [arXiv]

Journal Publications (reverse chronological order)

The Landscape of Nonconvex-Nonconcave Minimax Optimization, Benjamin Grimmer, Haihao Lu, Pratik Worah and Vahab Mirrokni, to appear in Mathematical Programming. [arXiv]

Faster First-Order Primal-Dual Methods for Linear Programming using Restarts and Sharpness, with David Applegate, Oliver Hinder and Miles Lubin, to appear in Mathematical Programming. [arXiv]

Frank-Wolfe Methods with an Unbounded Feasible Region and Applications to Structured Learning, Haoyue Wang, Haihao Lu and Rahul Mazumder, to appear in SIAM Journal on Optimization. [arXiv]

The Best of Many Worlds: Dual Mirror Descent for Online Allocation Problems, with Santiago Balseiro and Vahab Mirrokni, to appear in Operations Research. [link]

An O(s^r)-Resolution ODE Framework for Discrete-Time Optimization Algorithms and Applications to the Linear Convergence of Minimax Problems, Haihao Lu, to appear in Mathematical Programming. [arXiv] [slides]

  • Winner of 2021 INFORMS Optimization Society Young Researcher Prize.

Randomized Gradient Boosting Machines, Haihao Lu and Rahul Mazumder, SIAM Journal on Optimization 30(4), 2780-2808, 2020. [link]

Generalized Stochastic Frank-Wolfe Algorithm with Stochastic 'Substitute' Gradient for Structured Convex Optimization, Haihao Lu and Robert M. Freund, Mathematical Programming 187(1), 317-349, 2021. [link] [slides]

“Relative-Continuity” for Non-Lipschitz Non-Smooth Convex Optimization using Stochastic (or Deterministic) Mirror Descent, Haihao Lu, INFORMS Journal on Optimization 1(4), 288-303, 2019. [link]

Relatively-Smooth Convex Optimization by First-Order Methods, and Applications, Haihao Lu, Robert M. Freund and Yurii Nesterov, SIAM Journal on Optimization 28(1), 333-354, 2018. [link]

New Computational Guarantees for Solving Convex Optimization Problems with First Order Methods, via a Function Growth Condition Measure, Robert M. Freund and Haihao Lu, Mathematical Programming 170(2), 445-477, 2018. [link] [slides]

Stochastic Linearization of Turbulent Dynamics of Dispersive Waves in Equilibrium and Non-equilibrium State, Shixiao W Jiang, Haihao Lu, Douglas Zhou and David Cai, New Journal of Physics 18(8), 083028, 2016. [pdf] [link]

Renormalized Dispersion Relations of β-Fermi-Pasta-Ulam Chains in Equilibrium and Nonequilibrium States, Shixiao W Jiang, Haihao Lu, Douglas Zhou and David Cai, Physical Review E 90(3), 032925, 2014. [pdf] [link]

Conference Publications (reverse chronological order)

Limiting Behaviors of Nonconvex-Nonconcave Minimax Optimization via Continuous-Time Systems, Benjamin Grimmer, Haihao Lu, Pratik Worah and Vahab Mirrokni, ALT 2022. [arXiv]

Practical Large-Scale Linear Programming using Primal-Dual Hybrid Gradient, with David Applegate, Mateo Díaz, Oliver Hinder, Miles Lubin, Brendan O'Donoghue and Warren Schudy, NeurIPS 2021. [arXiv]

Regularized Online Allocation Problems: Fairness and Beyond, with Santiago Balseiro and Vahab Mirrokni, ICML 2021. [arXiv]

Contextual Reserve Price Optimization in Auctions, Joey Huchette, Haihao Lu, Hossein Esfandiari and Vahab Mirrokni, NeurIPS 2020. [arXiv]

Dual Mirror Descent for Online Allocation Problems, with Santiago Balseiro and Vahab Mirrokni, ICML 2020. [arXiv] [link]

A Stochastic First-Order Method for Ordered Empirical Risk Minimization, with Kenji Kawaguchi, AISTATS 2020. [arXiv] [link]

Accelerating Gradient Boosting Machines, Haihao Lu, Sai Praneeth Karimireddy, Natalia Ponomareva and Vahab Mirrokni, AISTATS 2020. [arXiv] [link]

Accelerating Greedy Coordinate Descent Methods, Haihao Lu, Robert M. Freund and Vahab Mirrokni, ICML 2018. [arXiv] [link] [slides]

Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions, Shuaiwen Wang, Wenda Zhou, Haihao Lu, Arian Maleki and Vahab Mirrokni, ICML 2018. [arXiv] [link]

Technical Reports

Depth Creates No Bad Local Minima, Haihao Lu and Kenji Kawaguchi, Technical Report. [pdf] [arXiv]