Tianyu Wang
I'm an assistant professor at the Shanghai Center for Mathematical Sciences (SCMS), Fudan University. Before joining Fudan, I obtained my doctoral degree from Duke University and my bachelor's degree from the Hong Kong University of Science and Technology (HKUST).
Email: wangtianyu AT fudan DOT edu DOT cn
Research summary:
What I have worked on (and will continue to work on):
- Learning problems over non-Euclidean spaces.
- Classical optimization problems (e.g., stochastic optimization).
What I am learning about and starting to work on:
I am also interested in problems at the intersection of combinatorics and probability theory, such as (statistical) properties of random tensors and spin glass models.
Students: I am fortunate to work with Yasong Feng, Yu Liu, Yunlu Shu, and Zicheng Wang.
My publication list is here.
Some Recent Papers/Preprints
- Zeroth-order Low-rank Hessian Estimation via Matrix Recovery, Preprint
Tianyu Wang, Zicheng Wang, Jiajia Yu
- The Anytime Convergence of Stochastic Gradient Descent with Momentum: From a Continuous-Time Perspective, Preprint
Yasong Feng, Yifang Jiang, Tianyu Wang, Zhiliang Ying
- A Lipschitz Bandits Approach for Continuous Hyperparameter Optimization, Preprint
Yasong Feng, Weijian Luo, Yimin Huang, Tianyu Wang
- Convergence Rates of Zeroth-order Gradient Descent for Łojasiewicz Functions, Accepted to INFORMS Journal on Computing
Tianyu Wang and Yasong Feng
- Stochastic Zeroth-order Gradient and Hessian Estimators: Variance Reduction and Refined Bias Bounds, Information and Inference: A Journal of the IMA, vol. 12, no. 3, 2023
Yasong Feng and Tianyu Wang
- On Sharp Stochastic Zeroth-order Hessian Estimators over Riemannian Manifolds, Information and Inference: A Journal of the IMA, vol. 12, no. 2, pp. 787–813, 2023
Tianyu Wang
- Lipschitz Bandits with Batched Feedback, Advances in Neural Information Processing Systems, 2022, pp. 19836–19848
Yasong Feng, Zengfeng Huang, Tianyu Wang
-- Spotlight Presentation; long version accepted to IEEE Transactions on Information Theory