Contact
UVA Rice Hall, 503
Charlottesville, VA 22904
Personal webpage · Google Scholar

About

Dr. Hadi Daneshmand works on trustworthy AI, focusing on the reliability and explainability of AI models. His research lies at the intersection of theoretical and applied machine learning. He leverages advanced tools from mathematical programming, probability theory, and mathematical physics to characterize the strengths and vulnerabilities of AI methods, develop reliable AI algorithms, and explain the underlying mechanisms of influential AI models.

He joined the University of Virginia in December 2024 as an Assistant Professor of Computer Science. Before his appointment, he was a Postdoctoral Researcher at the Foundations of Data Science Institute (FODSI), jointly hosted by MIT and Boston University, and at INRIA Paris, where he worked under the mentorship of Professor Francis Bach. He earned his Ph.D. in Computer Science from the Machine Learning Institute at ETH Zurich.


Education

Ph.D. in Computer Science, ETH Zurich, 2020

Research Interests

Artificial Intelligence: Trustworthy AI, Generative AI, Mechanistic Analysis
Machine Learning: Theoretical Foundations, Reliability and Explainability, Learning Dynamics
Mathematical Foundations of AI: Optimization, Probability, and Statistical Learning Theory

Selected Publications

Linear Transformers Implicitly Discover Unified Numerical Algorithms
Patrick Lutz, Aditya Gangrade, Hadi Daneshmand, and Venkatesh Saligrama
NeurIPS 2025

Transformers Learn to Implement Preconditioned Gradient Descent for In-context Learning
Kwangjun Ahn, Xiang Cheng, Hadi Daneshmand, and Suvrit Sra
NeurIPS 2023

Batch Normalization Orthogonalizes Representations in Deep Random Networks
Hadi Daneshmand, Amir Joudaki, and Francis Bach
NeurIPS 2021

Local Saddle Point Optimization: A Curvature Exploitation Approach
Leonard Adolphs, Hadi Daneshmand, Aurelien Lucchi, and Thomas Hofmann
AISTATS 2019

Courses Taught

Machine Learning, Fall 2025
Neural Networks, Spring 2025

Awards

Rising Stars Award, Conference on Parsimony and Learning (CPAL), Stanford University, 2025
Spotlight Award, ICML Workshop on In-context Learning, 2024
Postdoctoral Fellowship of FODSI, NSF
Early Postdoc.Mobility Grant, Swiss National Science Foundation (SNSF)