Neurosculptor

Fay Cai

Advisor: Tiri Kananuruk

Neurosculptor is an interactive installation that transforms everyday behavior into a symbolic simulation of brain activity, revealing how our actions shape—and can potentially enhance—our cognitive architecture through light, motion, and real-time feedback.

A sculptural brain installation made of illuminated nodes and fiber-optic lines.

Project Description

What if you could see your brain respond to your habits—moment by moment?

Neurosculptor is an interactive installation that transforms everyday digital behavior into a living simulation of brain activity. Informed by cognitive neuroscience and symbolic modeling, the system translates user interactions into psychobiologically grounded changes in neurotransmitter levels, brain region activation, and large-scale network dynamics. It is powered by a symbolic behavior-to-brain-state model inspired by cognitive architectures like ACT-R, made tangible through light, motion, and feedback loops.
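
To make this mapping concrete, the sketch below shows one way a symbolic behavior-to-brain-state update can be expressed in Python, the project’s language. The event names, regions, and rule weights here are hypothetical placeholders for illustration, not the model’s actual parameters.

```python
# Illustrative sketch only: a symbolic behavior-to-brain-state update in the
# spirit of the model described above. All names and weights are hypothetical.

BRAIN_STATE = {
    "dopamine": 0.5,           # neurotransmitter levels, normalized to 0..1
    "cortisol": 0.3,
    "prefrontal_cortex": 0.4,  # regional activation
    "amygdala": 0.2,
}

# Each behavior event maps to symbolic rules: (target quantity, delta).
BEHAVIOR_RULES = {
    "focused_reading":    [("prefrontal_cortex", +0.10), ("dopamine", +0.02)],
    "notification_check": [("amygdala", +0.08), ("cortisol", +0.05),
                           ("prefrontal_cortex", -0.06)],
}

def apply_behavior(state, behavior):
    """Apply one event's rules, clamping every quantity to [0, 1]."""
    for target, delta in BEHAVIOR_RULES.get(behavior, []):
        state[target] = min(1.0, max(0.0, state[target] + delta))
    return state

def decay(state, rate=0.01):
    """Between events, relax every quantity toward a resting baseline."""
    for key in state:
        state[key] += (0.3 - state[key]) * rate
    return state
```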

At its core, Neurosculptor invites us to rethink our relationship with technology—not as passive consumers, but as co-authors of our own neural architecture. Each signal, gesture, and moment of attention becomes a stroke in the evolving sculpture of the mind.

Illuminated nodes represent key brain regions involved in attention, emotion, and decision-making—symbolizing gray matter, the brain’s processing centers. Fiber-optic connections mimic white matter, the pathways that carry signals between regions. The result is a kinetic “brainscape” that responds and adapts in real time, revealing how behavior sculpts biology.
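
One way to picture this structure in code is as a small weighted graph: gray-matter nodes hold activation values, white-matter edges pass a fraction of that activation along, and each node’s activation becomes a light intensity. The sketch below uses made-up region names and weights purely for illustration.

```python
# Hypothetical sketch of the "brainscape" mapping: nodes are gray-matter
# regions, weighted edges are white-matter pathways (fiber optics).

REGIONS = {"PFC": 0.4, "ACC": 0.3, "amygdala": 0.2}  # node activations, 0..1

# (source, target, weight): symbolic white-matter connections.
PATHWAYS = [("PFC", "ACC", 0.6), ("ACC", "amygdala", 0.4)]

def propagate(regions, pathways, gain=0.1):
    """One spreading-activation step: each edge carries weighted activation."""
    updated = dict(regions)
    for src, dst, weight in pathways:
        updated[dst] = min(1.0, updated[dst] + gain * weight * regions[src])
    return updated

def to_brightness(activation, max_duty=255):
    """Map a 0..1 activation to an 8-bit LED brightness value."""
    return int(activation * max_duty)
```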

Rooted in research on attention, neuroplasticity, and cognitive architecture—from Cal Newport to Lisa Feldman Barrett—Neurosculptor reflects an ongoing inquiry: How might we not only interpret the brain, but enhance it?

It is both a scientific metaphor and a poetic prototype—an invitation to make the invisible visible, and to reclaim authorship over our cognitive future.

Technical Details

The project combines Python-based symbolic modeling with real-time sensor input (touch) and multimodal output (lights, animation, fiber optics). It runs on a Raspberry Pi, which handles the data processing and drives the physical feedback in real time. The system forms a closed-loop architecture that simulates cognitive dynamics and renders them through an interactive, embodied interface.
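
As a rough sense of what this closed loop looks like in practice, here is a minimal Python sketch assuming a touch sensor read through gpiozero on GPIO 17 and a single PWM-dimmed LED on GPIO 18 standing in for one illuminated node. The pin numbers and update rule are assumptions, not the installation’s actual wiring.

```python
# Minimal closed-loop sketch for a Raspberry Pi (assumed wiring, not the
# installation's): touch input raises a region's activation, the activation
# decays toward a baseline, and the result is rendered as LED brightness.

from time import sleep
from gpiozero import Button, PWMLED

touch = Button(17)    # touch/contact sensor, read as a button press
node = PWMLED(18)     # one "brain region" node, PWM-dimmable

activation = 0.3      # symbolic activation for this region

while True:
    if touch.is_pressed:                       # sensed behavior
        activation = min(1.0, activation + 0.05)
    activation += (0.3 - activation) * 0.02    # decay toward baseline
    node.value = activation                    # render state as light
    sleep(0.05)                                # ~20 Hz update loop
```

Keeping sensing, model update, and rendering in one short loop is what gives the installation its real-time, closed-loop character.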

Research/Context

Neurosculptor emerges at the intersection of cognitive psychology, neuroscience, and symbolic systems design. The project is grounded in an urgent question: How is technology reshaping the way we think, focus, and feel—and what might it look like to reclaim agency over that process?

Drawing from foundational works like Deep Work by Cal Newport and The Shallows by Nicholas Carr, this project is informed by a growing body of research on digital distraction, attention fragmentation, and cognitive erosion in a hyperconnected world. Daniel Kahneman’s Thinking, Fast and Slow offers a theoretical lens through which to model dual-process cognition, while Lisa Feldman Barrett’s How Emotions Are Made helps integrate emotional dynamics into a symbolic cognitive framework.

At the neural level, the project incorporates insights from key brain network models: the Default Mode Network (DMN), Salience Network (SN), and Central Executive Network (CEN)—informed by work from Raichle, Menon & Uddin, and Barrett & Satpute. It also draws from Karl Friston’s Free-Energy Principle and Poeppel’s research on brain signal recording to understand adaptive mental processing and real-time neural feedback.

Technically and conceptually, Neurosculptor builds upon symbolic cognitive architectures such as ACT-R and Newell’s Unified Theories of Cognition, while exploring newer frontiers in neuro-symbolic AI (Lake et al., 2017). These ideas are synthesized into a physical system that simulates brain-like processes through sensor input, symbolic translation, and light-based output.

This work extends my earlier research, The Evolution of Cognitive Representation: The Visualization of the Mind (2024), and is shaped by my experience studying cognitive neuroscience at NYU, including Dr. David Poeppel’s Ph.D. panel course.