Fengxiang He

Lecturer
University of Edinburgh

Google Scholar | LinkedIn | University Email | Gmail

Bio

I am a Lecturer at the Artificial Intelligence and its Applications Institute, School of Informatics, University of Edinburgh, and an Affiliate of Edinburgh's Institute for Adaptive and Neural Computation, the Edinburgh Futures Institute, and the Edinburgh Centre for Financial Innovations. I received my BSc in statistics from the University of Science and Technology of China, and my MPhil and PhD in computer science from the University of Sydney. My research interests are in trustworthy AI, particularly deep learning theory and explainability, the theory of decentralised learning, privacy in machine learning, symmetry in machine learning, learning theory in game-theoretical problems, and their applications in economics. I am a member of MLCommons and of IEEE's Global Initiative on XR Ethics, AI/ML Terminology and Data Formats Working Group, Decentralized Metaverse Initiative, and Ethical Assurance of Data-Driven Technologies for Mental Healthcare. I serve as an Area Chair of ICML, NeurIPS, UAI, AISTATS, and ACML.

Featured Publications

Comments are welcome. Please see the full list on my Google Scholar profile.

  1. Fengxiang He, Lihao Nan, and Tongtian Zhu, Imagining a Democratic, Affordable Future of Foundation Models: A Decentralised Avenue. In Cathy Yi-Hsuan Chen and Wolfgang Karl Härdle (editors), Handbook of Blockchain Analytics, Springer, in press. NEW

  2. Shi Fu, Fengxiang He et al. Convergence of Bayesian Bilevel Optimization. International Conference on Learning Representations (ICLR), 2024. [paper] [bib] NEW

  3. Zhihao Hu, Yiran Xu, Mengnan Du, Jindong Gu, Xinmei Tian, Fengxiang He. Boosting Fair Classifier Generalization through Adaptive Priority Reweighing. [paper] [code] [bib]

  4. Tongtian Zhu, Fengxiang He✉ et al. Decentralized SGD and Average-direction SAM are Asymptotically Equivalent. International Conference on Machine Learning (ICML), 2023. [paper] [code] [bib]

  5. Mengnan Du, Fengxiang He et al. Shortcut Learning of Large Language Models in Natural Language Understanding. Communications of the ACM, 2023. [paper] [bib]

  6. Tian Qin*, Fengxiang He* et al. Benefits of Permutation-Equivariance in Auction Mechanisms. Advances in Neural Information Processing Systems (NeurIPS), 2022. [paper] [bib]

  7. Tongtian Zhu, Fengxiang He✉ et al. Topology-aware Generalization of Decentralized SGD. International Conference on Machine Learning (ICML), 2022. [paper] [code] [bib]

  8. Shaopeng Fu*, Fengxiang He* et al. Knowledge Removal in Sampling-based Bayesian Inference. International Conference on Learning Representations (ICLR), 2022. (Part of Shaopeng Fu*, Fengxiang He* et al. Bayesian Inference Forgetting.) [paper] [code] [bib]

  9. Shaopeng Fu, Fengxiang He et al. Robust Unlearnable Examples: Protecting Data Privacy Against Adversarial Learning. International Conference on Learning Representations (ICLR), 2022. [paper] [code] [bib]

  10. Fengxiang He*, Bohan Wang* et al. Tighter generalization bounds for iterative differentially private learning algorithms. Conference on Uncertainty in Artificial Intelligence (UAI), 2021. [paper] [bib]

  11. Zeke Xie, Fengxiang He et al. Artificial neural variability for deep learning: On overfitting, noise memorization, and catastrophic forgetting. Neural Computation, 2021. [paper] [bib]

  12. Fengxiang He et al. Recent advances in deep learning theory. 2020. [paper] [bib]

  13. Zhuozhuo Tu, Fengxiang He et al. Understanding Generalization in Recurrent Neural Networks. International Conference on Learning Representations (ICLR), 2020. [paper] [bib]

  14. Fengxiang He*, Bohan Wang* et al. Piecewise linear activations substantially shape the loss surfaces of neural networks. International Conference on Learning Representations (ICLR), 2020. [paper] [website] [poster] [bib]

  15. Fengxiang He et al. Why ResNet works? Residuals generalize. IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2020. [paper] [bib]

  16. Fengxiang He et al. Control batch size and learning rate to generalize well: Theoretical and empirical evidence. Advances in Neural Information Processing Systems (NeurIPS), 2019. [paper] [poster] [bib]

* Co-first authors. ✉ Corresponding author.

Last update: Wed 17 Apr 2024