Lecturer
University of Edinburgh
Google Scholar | Email | CSRankings
I am a Lecturer at the Artificial Intelligence and its Applications Institute, School of Informatics, University of Edinburgh, a Fellow of Edinburgh's Generative AI Laboratory, and an Affiliate of the Institute for Adaptive and Neural Computation, the Edinburgh Futures Institute, and the Edinburgh Centre for Financial Innovations. I received my BSc in statistics from the University of Science and Technology of China, and my MPhil and PhD in computer science from the University of Sydney. My research interests are in trustworthy AI, particularly deep learning theory and explainability, the theory of decentralised learning, privacy in machine learning, symmetry in machine learning, learning theory in game-theoretical problems, and their applications in economics. I am an Area Chair of ICML, NeurIPS, UAI, AISTATS, ECAI, and ACML, and an Associate Editor of IEEE Transactions on Technology and Society.
Fully funded PhD studentships are available through the Centres for Doctoral Training in Machine Learning Systems, Designing Responsible NLP, and AI for Biomedical Innovation. Visitors are welcome. Interested candidates, please email me with your CV, transcripts, papers, or anything else you are proud of.
A fully funded PhD project is available on Developing LLM Agents for Resilient, Efficient, and Ethical Capacity Modelling in Health Care Provision.
How to pronounce my name :)
Comments are welcome. Please see the full list of publications on my Google Scholar.
Guanpu Chen, Gehui Xu, Fengxiang He✉, Yiguang Hong, Leszek Rutkowski, and Dacheng Tao, Global Nash Equilibrium in Non-convex Multi-player Game: Theory and Algorithms.
IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE TPAMI), 2024. [paper] [code]
Fengxiang He, Lihao Nan, and Tongtian Zhu, Imagining a Democratic, Affordable Future of Foundation Models: A Decentralised Avenue.
In Cathy Yi-Hsuan Chen and Wolfgang Karl Härdle (editors), Handbook of Blockchain Analytics, Springer, in press. [paper]
Shi Fu, Fengxiang He et al. Convergence of Bayesian Bilevel Optimization.
International Conference on Learning Representations (ICLR), 2024. [paper] [bib]
Tongtian Zhu, Fengxiang He✉ et al. Decentralized SGD and Average-direction SAM are Asymptotically Equivalent.
International Conference on Machine Learning (ICML), 2023. [paper] [code] [bib]
Mengnan Du, Fengxiang He et al. Shortcut Learning of Large Language Models in Natural Language Understanding.
Communications of the ACM, 2023. [paper] [bib]
Tian Qin*, Fengxiang He* et al. Benefits of Permutation-Equivariance in Auction Mechanisms.
Advances in Neural Information Processing Systems (NeurIPS), 2022. [paper] [bib]
Tongtian Zhu, Fengxiang He✉ et al. Topology-aware Generalization of Decentralized SGD.
International Conference on Machine Learning (ICML), 2022. [paper] [code] [bib]
Shaopeng Fu*, Fengxiang He* et al. Knowledge Removal in Sampling-based Bayesian Inference.
International Conference on Learning Representations (ICLR), 2022. (Part of Shaopeng Fu*, Fengxiang He* et al., Bayesian Inference Forgetting.) [paper] [code] [bib]
Shaopeng Fu, Fengxiang He et al. Robust Unlearnable Examples: Protecting Data Privacy Against Adversarial Learning.
International Conference on Learning Representations (ICLR), 2022. [paper] [code] [bib]
Fengxiang He*, Bohan Wang* et al. Tighter generalization bounds for iterative differentially private learning algorithms.
Conference on Uncertainty in Artificial Intelligence (UAI), 2021. [paper] [bib]
Zeke Xie, Fengxiang He et al. Artificial neural variability for deep learning: On overfitting, noise memorization, and catastrophic forgetting.
Neural Computation, 2021. [paper] [bib]
Fengxiang He et al. Recent advances in deep learning theory.
arXiv preprint, 2020. [paper] [bib]
Zhuozhuo Tu, Fengxiang He et al. Understanding Generalization in Recurrent Neural Networks.
International Conference on Learning Representations (ICLR), 2020. [paper] [bib]
Fengxiang He*, Bohan Wang* et al. Piecewise linear activations substantially shape the loss surfaces of neural networks.
International Conference on Learning Representations (ICLR), 2020. [paper] [website] [poster] [bib]
Fengxiang He et al. Why ResNet works? Residuals generalize.
IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2020. [paper] [bib]
Fengxiang He et al. Control batch size and learning rate to generalize well: Theoretical and empirical evidence.
Advances in Neural Information Processing Systems (NeurIPS), 2019. [paper] [poster] [bib]
* Co-first authors. ✉ Corresponding author.
PhD Students
MScR Students
Interns
Last update: Mon 25 Nov 2024