United States Air Force Academy

Dr. Daniel Holman

Research Scientist

Department of Behavioral Sciences & Leadership

Contact Information

(719) 333-0332


Bio

Daniel Holman is a researcher specializing in human-robot interaction and cognitive systems. His work explores how people perceive, trust, and sometimes overtrust artificial intelligence, particularly in high-stakes and emergency scenarios. Drawing on psychology, robotics, and virtual reality, his recent projects involve modeling behavior in robot-guided evacuations and investigating the impact of AI recommendations on moral decision-making. He also has experience in cognitive modeling and maintains an active interest in how emerging technologies interface with human judgment and ethics. Daniel earned his Ph.D. from the University of California, Merced, and currently works in the Warfighter Effectiveness Research Center, developing experiments that combine VR, robotics, and AI.

Education

Ph.D. Cognitive Science, University of California, Merced (2012-2018)

M.A. Interdisciplinary Studies/Cognitive Science, DePaul University (2007-2010)

B.A. Liberal Studies, Shimer College (2004-2007)

Professional Experience

Associate Specialist, University of California, Merced (2019-2024)

Adjunct Lecturer, California State University, Stanislaus (2019-2022)

Research and Scholarly Interests

Human-robot interaction

Trust and overtrust in autonomous systems

AI ethics

Mixed-reality simulation

Emergency evacuation modeling

Cognitive modeling

Explainable AI

Publications

Holbrook, C., Holman, D., Clingo, J., & Wagner, A. R. (2024). Overtrust in AI recommendations about whether or not to kill: Evidence from two human-robot interaction studies. Scientific Reports, 14(1), Article 19751.

Yin, Y., Nayyar, M., Holman, D., Lucas, G., Holbrook, C., & Wagner, A. (2024). Validation and evacuee modeling of virtual robot-guided emergency evacuation experiments. [Conference paper].

Holbrook, C., Holman, D., Clingo, J., & Wagner, A. R. (2023). Overtrust in AI recommendations to kill. OSF Preprints. https://osf.io

Holbrook, C., Holman, D., Wagner, A. R., Marghetis, T., Lucas, G., Sheeran, B., et al. (2023). Investigating human-robot overtrust during crises. In Proceedings of the Workshops at the Second International Conference on

Sheeran, B., Wagner, A. R., Holbrook, C., & Holman, D. (2023). Robot guided emergency evacuation from a simulated space station. AIAA SciTech 2023 Forum, 0156.

Wagner, A. R., Holbrook, C., Holman, D., Sheeran, B., Surendran, V., & Armagost, J. (2022). Using virtual reality to simulate human-robot emergency evacuation scenarios. arXiv preprint arXiv:2210.08414.

Krishnamurthy, U., Holbrook, C., Maglio, P. P., Holman, D., Wagner, A., & Clingo, J. (2021). The impact of anthropomorphism on trust in robotic systems. OSF Preprints.

Holbrook, C., Holman, D., Clingo, J., Lobato, E. J. C., Wagner, A., & Graser, R. (2021). HRI, anthropomorphism and trust under uncertainty. OSF Preprints.

Holman, D. M. (2018). What am I supposed to do? Problem finding and its impact on problem solving. [Doctoral dissertation, University of California, Merced].

Holman, D., & Spivey, M. J. (2016). Connectionist models of bilingual word reading. In Methods in Bilingual Reading Comprehension Research (pp. 213–229).

Rigoli, L. M., Holman, D., & Spivey, M. J. (2014). Spectral convergence in tapping and physiological fluctuations: Coupling and independence of 1/f noise in the central and autonomic nervous systems. Frontiers in Human Neuroscience, 8, 713.