Research

Journal Publications

Infeasibility Detection with Primal-Dual Hybrid Gradient for Large-Scale Linear Programming, with David Applegate, Mateo Díaz and Miles Lubin, to appear in SIAM Journal on Optimization. [arXiv]

A J-Symmetric Quasi-Newton Method for Minimax Problems, with Azam Asl and Jinwen Yang, to appear in Mathematical Programming. [arXiv]

The Landscape of the Proximal Point Method for Nonconvex-Nonconcave Minimax Optimization, Benjamin Grimmer, Haihao Lu, Pratik Worah and Vahab Mirrokni, Mathematical Programming 201.1-2 (2023): 373-407. [link]

Faster First-Order Primal-Dual Methods for Linear Programming using Restarts and Sharpness, with David Applegate, Oliver Hinder and Miles Lubin, Mathematical Programming 201.1-2 (2023): 133-184. [link]

On the Linear Convergence of Extra-Gradient Methods for Nonconvex-Nonconcave Minimax Problems, Saeed Hajizadeh, Haihao Lu and Benjamin Grimmer, to appear in INFORMS Journal on Optimization. [arXiv]

Frank-Wolfe Methods with an Unbounded Feasible Region and Applications to Structured Learning, Haoyue Wang, Haihao Lu and Rahul Mazumder, SIAM Journal on Optimization 32.4 (2022): 2938-2968. [link]

The Best of Many Worlds: Dual Mirror Descent for Online Allocation Problems, with Santiago Balseiro and Vahab Mirrokni, Operations Research 71.1 (2023): 101-119. [link]

  • Winner of the 2022 INFORMS Michael H. Rothkopf Junior Researcher Paper Prize.
  • Winner of the 2023 INFORMS Revenue Management and Pricing Section Prize.

An O(s^r)-Resolution ODE Framework for Discrete-Time Optimization Algorithms and Applications to the Linear Convergence of Minimax Problems, Haihao Lu, Mathematical Programming 194 (2022): 1061-1112. [link] [slides]

  • Winner of the 2021 INFORMS Optimization Society Young Researchers Prize.

Randomized Gradient Boosting Machines, Haihao Lu and Rahul Mazumder, SIAM Journal on Optimization 30.4 (2020): 2780-2808. [link]

Generalized Stochastic Frank-Wolfe Algorithm with Stochastic 'Substitute' Gradient for Structured Convex Optimization, Haihao Lu and Robert M. Freund, Mathematical Programming 187.1 (2021): 317-349. [link] [slides]

“Relative-Continuity” for Non-Lipschitz Non-Smooth Convex Optimization using Stochastic (or Deterministic) Mirror Descent, Haihao Lu, INFORMS Journal on Optimization 1.4 (2019): 288-303. [link]

Relatively Smooth Convex Optimization by First-Order Methods, and Applications, Haihao Lu, Robert M. Freund and Yurii Nesterov, SIAM Journal on Optimization 28.1 (2018): 333-354. [link]

New Computational Guarantees for Solving Convex Optimization Problems with First Order Methods, via a Function Growth Condition Measure, Robert M. Freund and Haihao Lu, Mathematical Programming 170.2 (2018): 445-477. [link] [slides]

Stochastic Linearization of Turbulent Dynamics of Dispersive Waves in Equilibrium and Non-equilibrium State, Shixiao W. Jiang, Haihao Lu, Douglas Zhou and David Cai, New Journal of Physics 18.8 (2016): 083028. [pdf] [link]

Renormalized Dispersion Relations of β-Fermi-Pasta-Ulam Chains in Equilibrium and Nonequilibrium States, Shixiao W. Jiang, Haihao Lu, Douglas Zhou and David Cai, Physical Review E 90.3 (2014): 032925. [pdf] [link]

Conference Publications

Online Ad Procurement in Non-stationary Autobidding Worlds, with Jason Liang and Baoyu Zhou, NeurIPS 2023. [arXiv] [link]

Limiting Behaviors of Nonconvex-Nonconcave Minimax Optimization via Continuous-Time Systems, Benjamin Grimmer, Haihao Lu, Pratik Worah and Vahab Mirrokni, ALT 2022. [arXiv]

Practical Large-Scale Linear Programming using Primal-Dual Hybrid Gradient, with David Applegate, Mateo Díaz, Oliver Hinder, Miles Lubin, Brendan O'Donoghue and Warren Schudy, NeurIPS 2021. [arXiv]

Regularized Online Allocation Problems: Fairness and Beyond, with Santiago Balseiro and Vahab Mirrokni, ICML 2021. [arXiv]

Contextual Reserve Price Optimization in Auctions, Joey Huchette, Haihao Lu, Hossein Esfandiari and Vahab Mirrokni, NeurIPS 2020. [arXiv]

Dual Mirror Descent for Online Allocation Problems, with Santiago Balseiro and Vahab Mirrokni, ICML 2020. [arXiv] [link]

A Stochastic First-Order Method for Ordered Empirical Risk Minimization, with Kenji Kawaguchi, AISTATS 2020. [arXiv] [link]

Accelerating Gradient Boosting Machines, Haihao Lu, Sai Praneeth Karimireddy, Natalia Ponomareva and Vahab Mirrokni, AISTATS 2020. [arXiv] [link]

Accelerating Greedy Coordinate Descent Methods, Haihao Lu, Robert M. Freund and Vahab Mirrokni, ICML 2018. [arXiv] [link] [slides]

Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions, Shuaiwen Wang, Wenda Zhou, Haihao Lu, Arian Maleki and Vahab Mirrokni, ICML 2018. [arXiv] [link]

Working Papers

Optimizing Scalable Targeted Marketing Policies with Constraints, with Duncan Simester and Yuting Zhu. [arXiv]

cuPDLP.jl: A GPU Implementation of Restarted Primal-Dual Hybrid Gradient for Linear Programming in Julia, with Jinwen Yang. [arXiv]

A Practical and Optimal First-Order Method for Large-Scale Convex Quadratic Programming, with Jinwen Yang. [arXiv]

On the Convergence of L-shaped Algorithms for Two-Stage Stochastic Programming, with John Birge and Baoyu Zhou. [arXiv]

On the Geometry and Refined Rate of Primal-Dual Hybrid Gradient for Linear Programming, with Jinwen Yang. [arXiv]

A Field Guide for Pacing Budget and ROS Constraints, with Santiago Balseiro, Kshipra Bhawalkar, Zhe Feng, Vahab Mirrokni, Balasubramanian Sivan and Di Wang. [arXiv]

Analysis of Dual-Based PID Controllers through Convolutional Mirror Descent, Santiago Balseiro, Haihao Lu, Vahab Mirrokni and Balasubramanian Sivan. [arXiv]

On a Unified and Simplified Proof for the Ergodic Convergence Rates of PPM, PDHG and ADMM, with Jinwen Yang. [arXiv]

On the Sparsity of Optimal Linear Decision Rules in Robust Inventory Management, with Bradley Sturt. [arXiv]

On the Infimal Sub-differential Size of Primal-Dual Hybrid Gradient Method, with Jinwen Yang. [arXiv]

Nearly Optimal Linear Convergence of Stochastic Primal-Dual Methods for Linear Programming, with Jinwen Yang. [arXiv]

Regularized Online Allocation Problems: Fairness and Beyond, Santiago Balseiro, Haihao Lu and Vahab Mirrokni. (A preliminary version appeared at ICML 2021.) [arXiv]

Approximate Leave-One-Out for High-Dimensional Non-Differentiable Learning Problems, Shuaiwen Wang, Wenda Zhou, Arian Maleki, Haihao Lu and Vahab Mirrokni. [arXiv]

Technical Reports

Depth Creates No Bad Local Minima, Haihao Lu and Kenji Kawaguchi, Technical Report. [pdf] [arXiv]