I am an Applied Scientist at Amazon Search (formerly known as A9), where I apply machine learning methods to diverse user-centric problems in search ranking.

I received my Ph.D. in Statistics from UC Davis, advised by Prof. Krishnakumar Balasubramanian. Prior to that, I received my B.S. degree in Statistics from Zhejiang University. My research interests lie in the mathematics of data science, with a focus on the interface of optimization, statistics, and machine learning.

My current research at Amazon focuses on deep learning for ranking, multi-objective optimization, uncertainty quantification, and the application of large language models (LLMs) in information retrieval.

Beyond my primary work, I have also worked on stochastic optimization, robustness of deep learning models, neural architecture search, and reinforcement learning.

Recent News

  • Our paper on multi-objective DPO has been accepted at the NeurIPS 2024 Workshop on Fine-Tuning in Modern Machine Learning: Principles and Scalability.
  • Our paper on stochastic bilevel optimization under relaxed smoothness conditions has been accepted by JMLR.
  • One paper has been accepted at ICLR 2024 and one at AISTATS 2024.
  • I started my journey as an Applied Scientist at Amazon on July 17, 2023.
  • One paper has been accepted at UAI 2023.
  • I successfully passed my Ph.D. dissertation defense on May 2nd, 2023.

Experience

  • Applied Scientist at Amazon, Search Science and AI.
    July 2023 - Present, Palo Alto, CA

  • Applied Scientist Intern at Amazon, Search Science and AI.
    June 2022 - September 2022, Palo Alto, CA

  • Research Scientist Intern at ByteDance, Applied Machine Learning.
    June 2021 - November 2021, Mountain View, CA

Preprints and Publications

(“*” indicates equal contribution)

  • Orbit: A Framework for Designing and Evaluating Multi-objective Rankers. Chenyang Yang*, Tesi Xiao*, Michael Shavlovsky*, Christian Kästner, Tongshuang Wu. International Conference on Intelligent User Interfaces (IUI), March 2025 (to appear). [pdf]

  • HyperDPO: Conditioned One-Shot Multi-Objective Fine-Tuning Framework. Yinuo Ren, Tesi Xiao, Michael Shavlovsky, Lexing Ying, Holakou Rahmanian. NeurIPS Workshop on Fine-Tuning in Modern Machine Learning: Principles and Scalability, 2024. [pdf]

  • Optimal Algorithms for Stochastic Bilevel Optimization under Relaxed Smoothness Conditions. Xuxing Chen*, Tesi Xiao*, Krishnakumar Balasubramanian. Journal of Machine Learning Research, 2024. [pdf]

  • A Sinkhorn-type Algorithm for Constrained Optimal Transport. Xun Tang, Holakou Rahmanian, Michael Shavlovsky, Kiran Koshy Thekumparampil, Tesi Xiao, Lexing Ying. arXiv preprint, 2024. [pdf]

  • Multi-Objective Optimization via Wasserstein-Fisher-Rao Gradient Flow. Yinuo Ren, Tesi Xiao, Tanmay Gangwani, Anshuka Rangi, Holakou Rahmanian, Lexing Ying, Subhajit Sanyal. AISTATS, 2024. [pdf] [code] [poster]

  • Accelerating Sinkhorn Algorithm with Sparse Newton Iterations. Xun Tang, Michael Shavlovsky, Holakou Rahmanian, Elisa Tardini, Kiran Koshy Thekumparampil, Tesi Xiao, Lexing Ying. ICLR, 2024. [pdf]

  • Towards Sequential Counterfactual Learning to Rank. Tesi Xiao, Branislav Kveton, Sumeet Katariya, Tanmay Gangwani, Anshuka Rangi. SIGIR-AP, 2023. [pdf]

  • A One-Sample Decentralized Proximal Algorithm for Non-Convex Stochastic Composite Optimization. Tesi Xiao*, Xuxing Chen*, Krishnakumar Balasubramanian, Saeed Ghadimi. UAI, 2023. [pdf] [code] [poster]

  • A Projection-free Algorithm for Constrained Stochastic Multi-level Composition Optimization. Tesi Xiao, Krishnakumar Balasubramanian, Saeed Ghadimi. NeurIPS, 2022. [pdf] [poster]

  • Field-wise Embedding Size Search via Structural Hard Auxiliary Mask Pruning for Click-Through Rate Prediction. Tesi Xiao, Xia Xiao, Ming Chen, Youlong Cheng. DL4SR (Deep Learning for Search and Recommendation) Workshop at CIKM, 2022. [pdf]

  • Improved Complexities for Stochastic Conditional Gradient Methods under Interpolation-like Conditions. Tesi Xiao, Krishnakumar Balasubramanian, Saeed Ghadimi. Operations Research Letters, 2022. [pdf]

  • Statistical Inference for Polyak-Ruppert Averaged Stochastic Zeroth-order Gradient Algorithm. Yanhao Jin*, Tesi Xiao*, Krishnakumar Balasubramanian. arXiv preprint, 2021. [pdf]

  • How Does Noise Help Robustness? Explanation and Exploration Under the Continuous Limit. Xuanqing Liu, Tesi Xiao, Si Si, Qin Cao, Sanjiv Kumar, Cho-Jui Hsieh. CVPR, 2020 (Oral Presentation, 5.7% of 5,865 submissions). [pdf]

  • Neural SDE: Stabilizing Neural ODE Networks with Stochastic Noise. Xuanqing Liu, Tesi Xiao, Si Si, Qin Cao, Sanjiv Kumar, Cho-Jui Hsieh. arXiv preprint, 2019. [pdf]

Professional Activities and Services

  • Paper Reviewer: NeurIPS (2020, 2022, 2023), ICML (2021, 2022, 2023, 2025), ICLR (2024, 2025), UAI (2023), AISTATS (2021, 2024), COLT (2020, cohort), SIAM Journal on Optimization (SIOPT)

  • Top Reviewer: NeurIPS 2023, UAI 2023

  • Invited and Contributed Talks
    • Amazon Shopping Science Summit, Nov 2024
    • 1st International ACM SIGIR Conference on Information Retrieval in the Asia Pacific, Nov 2023
    • Amazon Machine Learning Conference, Seattle, Oct 2023
    • International Conference on Stochastic Programming, Davis, July 2023
    • INFORMS Annual Meeting, Indianapolis, Oct 2022
    • CIKM Workshop on Deep Learning for Search and Recommendation, Atlanta, Oct 2022
    • CeDAR Annual Research Symposium at UC Davis, Mar 2022
  • Attended Workshops
    • Multi-Agent Reinforcement Learning and Bandit Learning at the Simons Institute for the Theory of Computing, May 2022
    • Advances in Stein’s Method and its Applications in Statistical Learning and Optimization at the Banff International Research Station, April 2022