Abstract
This paper articulates a theoretical approach to the question of which aspects of higher education should incorporate AI large language models (LLMs) and which should not, using ideas from recent work in the epistemology of understanding. I exploit an extended analogy between walking and driving, using it to reject two extreme positions: the technophobic position (walking is always better and one should never drive; LLMs have no place in higher ed) and the technophilic position (driving is always better and we no longer need to practice walking; we should completely reorient higher ed by incorporating AI as much as possible). I also use the walking and driving analogy to caution that changes to our epistemic practices in light of AI must take account of their embeddedness in broader educational infrastructures, especially limitations imposed by administrators. While LLMs may have changed our epistemic practices aimed at knowledge, and thus even changed what we take knowledge to be, they have not and cannot effect a parallel change in understanding. Focusing on understanding rather than knowledge can help us avoid the rush to problematic, short-term solutions, and instead find a thoughtful middle ground between technophobia and technophilia. This will involve a partial reorientation away from the focus on content and the mastery of factual information, and toward a focus on skills of understanding. I give an account of understanding as a grasping of nonpropositional structure, and show how this is of special relevance for the situational, contextual, and analogical thinking higher education ought to promote. Finally, I home in on one particular skill of understanding: questioning.