Essay 2 | Intuition in the Age of AI
Ellison Carter Essay Series: “The Next IT: Intuition and Taste in the Age of AI”
Only a few years ago, most of us had never interacted with an AI system directly. Then, almost overnight, we began carrying access to reasoning engines in our pockets. Between late 2022 and now, large language and multimodal models from OpenAI, Anthropic, Google, Meta, and Mistral have evolved from experimental chatbots into broadly capable partners that we can work with to draft policy memos, interpret images, write and troubleshoot code, design molecules, and reason across domains. Every week brings new demonstrations that stretch what we think intelligence is and what we believe these systems can actually do. The shock isn’t just how powerful these systems have become; it’s how quickly they’ve redrawn the boundary between human and machine thought.
As compute and artificial intelligence capabilities grow exponentially, so does the terrain of what can be represented, modeled, or optimized. This expansion makes another kind of intelligence newly visible: intuition, a mode of knowing that synthesizes pattern, context, and feel before it can be spelled out in code or words. It is not knowledge of facts so much as sensitivity to structure, the way recognition precedes explanation. Philosopher-scientist Michael Polanyi called this tacit knowledge, the background sense through which deliberate thought takes shape (The Tacit Dimension, 1966), and Daniel Kahneman later described it as System 1 thinking—fast, associative, and experience-based (Thinking, Fast and Slow, 2011). Cognitive scientists Hugo Mercier and Dan Sperber argue in The Enigma of Reason (2017) that reasoning evolved to make intuition shareable, allowing communities to test, refine, and exchange those inarticulate hunches. Intuition and reasoning develop in tandem, each extending the reach of the other.
The Interface Between Human and Machine Thought
When large models complete a sentence or design a new compound, they perform pattern recognition so dense it can feel intuitive. What humans experience as embodied understanding is represented in machines through vast networks of learned associations and layers of attention that capture relationships across text, image, and code. As Melanie Mitchell notes, these systems capture correlations at immense scale without yet developing the flexible abstraction that gives human reasoning its context (Artificial Intelligence: A Guide for Thinking Humans, 2020)—an assessment that, at least for now, remains accurate. Murray Shanahan, a cognitive scientist at DeepMind, describes large language models as “compressed mirrors of human discourse,” reflecting patterns of communication without the embodied grounding that gives human thought its texture (DeepMind Research Blog, 2023). The rise of such models invites reflection on how intuition works and why it seems newly indispensable in a world that can now externalize so much reasoning.
The ability to externalize reasoning at this scale changes how we work with thought itself. What once stayed private and tacit is now visible, editable, and scalable. As models expose their inner architecture through open weights and interpretability tools, we’re beginning to see a kind of map of reasoning’s infrastructure and, with it, a new form of collaboration between intuition and computation.
Computer scientists describe computational thinking as the discipline of expressing problems and solutions in ways a computer can act on. The term gained prominence through Jeannette Wing’s 2006 paper in Communications of the ACM, but the concept traces back to Seymour Papert’s 1980 book Mindstorms, which framed computation as a medium for creative learning. Computational thinking teaches decomposition, abstraction, and algorithmic design, and these skills turn intuition into structured logic. It is not about coding but about formalizing insight. Physicist Stephen Wolfram takes the idea further, describing computation not as a tool but as a property of nature itself: a universal substrate where simple rules, when iterated, produce complex behavior. If that’s true, intuition may be our embodied way of recognizing those patterns before we can formalize them. In a computational universe, intuition functions at the interface between human experience and computational structure, like an API, sensing an underlying order before we can prove it exists. Intuition is not mysticism, but a provisional form of knowing that motivates deeper inquiry.
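Wolfram’s claim that simple rules, iterated, produce complex behavior can be made concrete with a small sketch. The elementary cellular automaton known as Rule 30, one of Wolfram’s own canonical examples, updates a row of binary cells with a single local rule, yet a lone live cell unfolds into a famously irregular triangle. The code below is an illustrative toy, not drawn from this essay’s sources; all names in it are invented for the example.

```python
def rule30_step(row):
    """Apply Rule 30 to one row of cells (a list of 0s and 1s, wrapping at the edges)."""
    n = len(row)
    new = []
    for i in range(n):
        left, center, right = row[i - 1], row[i], row[(i + 1) % n]
        # Rule 30 in closed form: new cell = left XOR (center OR right)
        new.append(left ^ (center | right))
    return new

def run(width=31, steps=15):
    """Evolve a single live cell for `steps` generations; return printable rows."""
    row = [0] * width
    row[width // 2] = 1  # one live cell in the middle
    lines = []
    for _ in range(steps):
        lines.append("".join("#" if c else "." for c in row))
        row = rule30_step(row)
    return lines

if __name__ == "__main__":
    print("\n".join(run()))
```

Running it prints a growing triangle whose interior never settles into a simple repeating pattern, despite the rule fitting in a single line of code.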
The arrival of open-weight models makes this relationship tangible. In machine-learning systems, a model’s weights are the billions of numerical parameters that store what the network has learned, tiny adjustments that capture patterns across language, images, and data. In an open-weight model, those parameters are released for others to study, adapt, or build on, even if the underlying code or training data remain closed. This openness marks a quiet but significant shift. For the first time, people outside major labs can engage directly with the internal machinery of reasoning at scale. Working with these systems demands more than technical skill; it calls for awareness of how we ourselves reason, where our intuitions guide us, where they mislead us, and how they might interact with computation. In that sense, open-weight models make the practice of intuition visible, shared, and, increasingly, participatory. With visibility comes influence: the intuitions we externalize can shape others’ reasoning, making discernment itself a collective responsibility. The effects of these shifts will unfold unevenly, with some impacts already visible today, others just beginning to surface, and many that will define how we work and reason over the coming decade.
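What “weights” mean can be illustrated with a deliberately minimal sketch: a single numerical parameter tuned by feedback until it captures a pattern in data. Real models do this with billions of parameters; everything in this toy, including the function name, is hypothetical and for illustration only.

```python
def train_weight(data, lr=0.01, epochs=200):
    """Fit y = w * x by gradient descent on squared error.

    The returned w is, in miniature, what a model's "weights" are:
    a learned number that encodes the pattern in the data.
    """
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # derivative of (w*x - y)**2 with respect to w
            w -= lr * grad
    return w

if __name__ == "__main__":
    # Data generated by the hidden rule y = 3x; training recovers w near 3.0
    data = [(1, 3), (2, 6), (3, 9)]
    print(round(train_weight(data), 3))
```

Publishing an open-weight model amounts to releasing learned numbers like this `w`, at billion-parameter scale, so that others can inspect, adapt, or build on what was learned.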
Practicing Intuition in a Computational World
The practical consequences of this convergence are already appearing. Anyone who has spent time working with these systems knows the feeling: the result is close—astonishingly close—but not quite right. We adjust, rewrite, refine, sensing where meaning slips or tone drifts. While micro-adjustments like these are easy to overlook, they represent the front line of cognitive adaptation. That sense of almost is intuition at work. It is not a formal check but a felt calibration. In this sense, intuition acts as a quality-control mechanism for meaning. Users are developing what might be called prompt proprioception: an internal sense of balance and feedback in dialogue with machine reasoning. It’s messy, iterative, and deeply human. Reasoning is a whole-body experience; our eyes, hands, and nervous systems participate in the loop. As AI tools enter writing, design, research, and education, intuitive discernment of what feels right, what fits, and what aligns may confer competitive advantage when raw computational power no longer does.
We can already see this sensibility forming in creative, technical, and scientific work alike. Engineers describe sensing when a model’s output is off before the metrics say so. Writers and educators use phrasing and tone as diagnostic tools. Designers describe “feeling the edge” of what a model understands. These are early signs of a new kind of collaboration that depends less on commanding machines than on listening to them, perceiving how they interpret, represent, and sometimes distort our intent. Here, intuition is not a relic of pre-digital thought, but a human counterpart to computation. As computational capability becomes abundant, perceptual fluency emerges as a defining skill.
Intuition and the Acceleration of Discovery
The next horizon is already forming inside research and discovery. Leaders of the major model labs, including OpenAI, DeepMind, Anthropic, and research teams at Microsoft and Google, have begun to describe scientific progress as the natural extension of AI’s trajectory. The same systems now design proteins, generate hypotheses, and simulate experiments that used to require entire teams. The language is bold (e.g., curing disease, ending scarcity), and the ambitions may sound extravagant, but the intent to transform the practice of scientific discovery is clear. The scientific process of searching, testing, and refining is increasingly mediated through computation.
For scientists, this shift lands close to home. Work that previously began in conversation, field notes, or a half-formed question is now increasingly co-authored by algorithms that suggest, filter, or even decide what to test next. The researcher’s task becomes less about stepwise control and more about initiation and steering, using intuition to sense which paths are promising, which are distractions, and which are artifacts of the model’s assumptions or its lack of contextual awareness. Intuition, long the hidden partner of scientific reasoning, is re-surfacing as a means to navigate and interpret what computation reveals.
This near-term future will test how adaptable our institutions are. Grant reviews, peer feedback cycles, and academic training have been structured around human reasoning as we have defined it, emphasizing analytic skill, incremental progress, safe gains, and predictable timelines. Yet the pace and rhythm of discovery are moving outside those norms. Anima Anandkumar, the Bren Professor of Computing and Mathematical Sciences at Caltech, has noted that some AI-driven simulations now operate over a million times faster than traditional methods. That kind of speed alters the balance between reflection and reaction, placing a higher value on knowing when to trust a model’s pattern recognition, when to question it, and when to pause. Dario Amodei, CEO of Anthropic, describes this moment as the “compressed twenty-first century,” a phase in which centuries of potential progress may occur in a fraction of the time. Across fields, this compression of discovery is creating new feedback loops between human judgment and computational systems.
Recent advances show that pairing intuitive judgment with computational systems improves real-world performance and speeds progress. In brain–computer interface research, participants learn to move cursors or prosthetic limbs through iterative feedback, gradually aligning intention and control until the action feels natural (Nature, 2023). Neuralink and the Danish company Corti use similar adaptive co-training loops, where both human and system adjust to each other’s signals over time. Robotics programs at Boston Dynamics and the Toyota Research Institute have developed shared-control systems in which human operators guide robots not through explicit commands but through intuitive correction and embodied feedback. In medicine, AI-assisted systems in radiology and surgery are refining professionals’ intuitive pattern recognition by surfacing anomalies or blind spots in real time, while in aviation, pilots have calibrated human and automated responses through repeated exposure, developing a sensory feel for when to intervene. Research on expertise supports this picture. Chi (2020) and Ericsson and Pool (2016) describe how tacit knowledge strengthens through deliberate practice with feedback, precisely the kind of loop that human–AI collaboration now provides. Together these examples show how intuitive and computational processes complement one another, achieving forms of precision and adaptability that neither could accomplish alone. What we are seeing is intuition evolving into a trainable skill, shaped by repeated feedback loops rather than by flashes of inspiration. Institutions built for slower cycles of discovery and education will need to adapt to a much faster, more iterative rhythm of learning.
Rethinking How We Educate for Intuition
If intuition can be trained, the question becomes how. The evidence from research and practice points to a shared principle: intuition develops through feedback. Yet most of our educational systems still privilege analytic demonstration over perceptual refinement, educating students to reproduce established reasoning rather than cultivate their own. In many programs, traditional pedagogy is not merely insufficient for intuitive development; it is counterproductive, rewarding certainty over exploration, penalizing ambiguity, and outsourcing problem framing through pre-defined problem sets. These habits do more than neglect intuition; they build barriers to it, conditioning students to distrust their own sense of pattern or possibility, and that conditioning can take years to unlearn, if it is unlearned at all. Because most courses present problems fully specified, students are spared the more uncertain and generative work of identifying what the problem is, how it might be framed, and why it matters. As a result, learners practice the execution of reasoning far more often than the perception of when reasoning holds together, leaving little room for the intuitive sensitivity that complex systems now demand. Education, at every level, will need to recognize intuition as a faculty that can be practiced.
The frontier ahead lies in how we train intuition to match the speed and scale of computation. The computational systems we build will keep learning at extraordinary pace; our challenge is to refine how we learn with them. Each act of collaboration, correction, and recalibration is part of that training. The goal is to evolve intuition until it can operate alongside computation without losing the depth of human understanding, grounded in context, coherence, and care.
Anandkumar, A. (2023). AI-powered simulations and the future of computation. California Institute of Technology. [Public lecture and commentary; paraphrased from talks and interviews].
Amodei, D. (2024). Machines of Loving Grace. Anthropic Research Commentary.
Boston Dynamics. (2023). Shared Control and Human–Robot Interaction Systems. [Research reports and demonstrations].
Chi, M. T. H. (2020). Thinking, Learning, and Problem Solving. New York: Routledge.
Corti. (2023). Human–AI Co-Training for Real-Time Clinical Decision Support. Copenhagen: Corti Research Division.
Ericsson, A., & Pool, R. (2016). Peak: Secrets from the New Science of Expertise. New York: Houghton Mifflin Harcourt.
Kahneman, D. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Mercier, H., & Sperber, D. (2017). The Enigma of Reason. Cambridge, MA: Harvard University Press.
Mitchell, M. (2020). Artificial Intelligence: A Guide for Thinking Humans. New York: Farrar, Straus and Giroux.
Nature. (2023). Iterative Training and Intuitive Control in Brain–Computer Interfaces. Nature Neuroscience, 26(8), 1132–1139.
Neuralink. (2023). Neural Co-Adaptation and Feedback in Human–Machine Interfaces. Neuralink Technical White Paper Series.
Papert, S. (1980). Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books.
Polanyi, M. (1966). The Tacit Dimension. Garden City, NY: Doubleday.
Shanahan, M. (2023). Large Language Models as Compressed Mirrors of Human Discourse. DeepMind Research Blog. https://deepmind.google/discover/blog
Toyota Research Institute. (2023). Shared Autonomy and Human–Robot Collaboration Research. Cambridge, MA.
Wing, J. (2006). Computational Thinking. Communications of the ACM, 49(3), 33–35.
Wolfram, S. (2002). A New Kind of Science. Champaign, IL: Wolfram Media.