Work Experience
Learning Solutions Manager
June 2024 - Present
Instructional Designer
January 2023 - June 2024
PhD Candidate / Lecturer
August 2017 - December 2022
Student Success Coordinator
August 2014 - June 2017
Classroom Teacher
August 2010 - June 2014
Education
PhD, Philosophy
University of Kentucky, 2022
MA, Philosophy
University of Kentucky, 2019
MA, Teaching
Frostburg State University, 2010
BS, Sociology
Frostburg State University, 2007
Research
Doctoral Dissertation
Contextualizing Artificial Intelligence: The History, Values, and Epistemology of Technology in the Philosophy of Science
Artificial intelligence (AI) and other advanced technologies pose new questions for philosophers of science regarding epistemology, science and values, and the history of science. I address these issues across three essays in this dissertation. The first essay concerns epistemic problems that emerge when existing accounts of scientific explanation are applied to deep neural networks (DNNs). Causal explanations in particular, which at first appear well suited to the task of explaining DNNs, fail to provide any such explanation. The second essay explores bias in systems of automated decision-making and the role of various conceptions of objectivity in either reinforcing or mitigating that bias, focusing on conceptions of objectivity common in social epistemology and the feminist philosophy of science. The third essay probes the history of the development of 20th-century telecommunications technology and the relationship between formal and informal systems of scientific knowledge production. Inquiring into the role that early phone and computer hackers played in the scientific development of those technologies, I untangle the messy web of relationships between the various groups that had a lasting impact on this history while engaging in a conceptual analysis of “hacking” and “hackers.”
Publications
Grimsley, Christopher, Elijah Mayfield, and Julia R.S. Bursten. “Why Attention is Not Explanation: Surgical Intervention and Causal Reasoning about Neural Models.” In Proceedings of the 12th Language Resources and Evaluation Conference, 1780-1790. Marseille, France: European Language Resources Association, 2020.
Abstract: As the demand for explainable deep learning grows in the evaluation of language technologies, the value of a principled grounding for those explanations grows as well. Here we study the state-of-the-art in explanation for neural models for NLP tasks from the viewpoint of philosophy of science. We focus on recent evaluation work that finds brittleness in explanations obtained through attention mechanisms. We harness philosophical accounts of explanation to suggest broader conclusions from these studies. From this analysis, we assert the impossibility of causal explanations from attention layers over text data. We then introduce NLP researchers to contemporary philosophy of science theories that allow robust yet non-causal reasoning in explanation, giving computer scientists a vocabulary for future research.
Conference Papers
“Causal and Non-Causal Explanations of Artificial Intelligence.” Presented at the Philosophy of Science Association (PSA), Baltimore, MD, 11 November 2021.
Abstract: Deep neural networks (DNNs), a particularly effective type of artificial intelligence, currently lack a scientific explanation. The philosophy of science is uniquely equipped to handle this problem. Computer science has attempted, unsuccessfully, to explain DNNs. I review these contributions, then identify shortcomings in their approaches. The complexity of DNNs prohibits the articulation of relevant causal relationships between their parts, and as a result causal explanations fail. I show that many non-causal accounts, though more promising, also fail to explain AI. This highlights a problem with existing accounts of scientific explanation rather than with AI or DNNs.
https://psa2020.philsci.org/program-schedule/sponsor-lounge/program/105/explainable-ai