Prototypes: Adaptive Systems for Living Cognition

These prototypes are built to explore a central idea from my cognitive research: intelligence is not simply about optimizing solutions — it is about maintaining coherence in changing environments.

Across learning, creativity, therapy, and collective interaction, each system treats cognition as an ongoing process that unfolds through time. Rather than producing fixed outputs, they operate online, adapt to drift, and regulate their behavior as conditions evolve.

Some focus on adaptive learning (Emergence Machine), others on human–AI creative interaction (Aether, AI Drawing Partner, PSE); still others make the dynamics of creative regulation measurable (Quantified Art Therapy Interface) or explore how structure emerges through distributed participation (Multi-Agent Drawing Environment).

Together, these systems function as both practical tools and research platforms — translating theories of enactive perception and regulation into working computational environments that learn and adapt more like humans: continuously, context-sensitively, and through interaction.

Emergence Machine

The Emergence Machine is an adaptive learning architecture designed for dynamic, changing environments. Unlike traditional AI systems that train once and deploy, it learns continuously while operating: detecting drift, shifting strategies, and reorganizing its behavior in real time. By regulating how it learns across multiple modes of operation, the Emergence Machine maintains coherence as conditions evolve, rather than optimizing for a fixed objective. This makes it well suited for complex domains such as financial time series, creative interaction, physiological signals, and distributed agent systems. It represents a new class of online-adaptive intelligence: systems that learn continuously, context-sensitively, and in response to change.
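The core loop of detecting drift and shifting strategy can be illustrated with a minimal sketch. This is not the Emergence Machine's actual architecture; the class name, the two modes, and the threshold rule are illustrative assumptions. It tracks a long-run baseline of its own errors (Welford's online algorithm) and switches from an "exploit" mode to an "explore" mode when the recent error window drifts away from that baseline:

```python
from collections import deque

class DriftAwareLearner:
    """Minimal sketch (hypothetical): an online learner that switches
    modes when its recent errors drift away from their long-run baseline."""

    def __init__(self, window=50, threshold=2.0):
        self.recent = deque(maxlen=window)  # short-term error window
        self.threshold = threshold          # drift sensitivity, in std devs
        self.mean = 0.0                     # running baseline mean (Welford)
        self.m2 = 0.0                       # running sum of squared deviations
        self.n = 0
        self.mode = "exploit"               # current learning mode

    def update(self, error):
        # Update baseline statistics with Welford's online algorithm.
        self.n += 1
        delta = error - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (error - self.mean)
        self.recent.append(error)

        # Drift check: recent mean far from baseline -> switch to exploring.
        if len(self.recent) == self.recent.maxlen:
            recent_mean = sum(self.recent) / len(self.recent)
            std = (self.m2 / max(self.n - 1, 1)) ** 0.5
            if abs(recent_mean - self.mean) > self.threshold * std:
                self.mode = "explore"
            else:
                self.mode = "exploit"
        return self.mode
```

Because the baseline itself keeps updating, a sustained regime change is eventually absorbed and the learner settles back into exploiting, which is one simple way to "maintain coherence" rather than chase a fixed objective.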


Aether

Aether is an enactive drawing AI designed to collaborate with users in real time through shared mark-making. Rather than generating finished images, Aether participates in the unfolding creative process by responding to the user’s strokes as they emerge. It models creative direction as a dynamic interaction rather than a fixed goal, adapting its contributions to support exploration, stabilization, and evolving structure within the drawing. Grounded in enactive and dynamical systems approaches to cognition, Aether treats creativity as an ongoing process of interactional regulation — helping sustain meaningful collaboration between human and AI over time rather than optimizing toward a final output.
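The idea of responding to a stroke as it emerges, rather than generating a finished image, can be sketched as follows. This is a toy stand-in for Aether's actual response model; the function name and the `deflect` and `length` parameters are invented for illustration. It reads the direction of the user's most recent stroke segment and proposes a short answering stroke that continues that direction while turning slightly away, supporting exploration rather than simply tracing the user's path:

```python
import math

def respond_to_stroke(points, deflect=0.4, length=30.0):
    """Toy sketch (hypothetical parameters): given the user's stroke as
    (x, y) points, propose a short answering stroke that continues the
    stroke's heading, deflected to open new territory."""
    (x0, y0), (x1, y1) = points[-2], points[-1]
    heading = math.atan2(y1 - y0, x1 - x0)
    angle = heading + deflect           # slight turn away from the user's path
    steps = 5
    reply = []
    for i in range(1, steps + 1):
        t = i / steps * length
        reply.append((x1 + t * math.cos(angle), y1 + t * math.sin(angle)))
    return reply
```

A real enactive system would regulate `deflect` over time, turning it down when the drawing needs stabilizing and up when it needs novelty; here it is fixed only to keep the sketch short.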


AI Drawing Partner

The AI Drawing Partner is an interactive co-creative system that collaborates with users during the act of drawing by analyzing their marks and contributing its own visual responses. Developed to investigate how meaning emerges through human–AI interaction, the system uses computational models of co-creative sense-making to interpret user input and generate contextually relevant contributions. By engaging in turn-taking with the user on a shared canvas, the AI Drawing Partner enables a dialogic creative process in which ideas evolve through interaction rather than being produced independently by either party.
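The turn-taking structure described above can be sketched minimally. The class and its trivial centroid-based response are illustrative assumptions, standing in for the system's actual sense-making models; the point is the dialogic shape: every user mark lands on a shared canvas and is immediately answered by an AI contribution made in its context:

```python
class DrawingPartner:
    """Minimal sketch of turn-taking on a shared canvas. The response
    rule is a placeholder for the system's co-creative models."""

    def __init__(self):
        self.canvas = []          # shared history of (author, stroke) turns

    def user_turn(self, stroke):
        """Record the user's stroke, then take the AI's answering turn."""
        self.canvas.append(("user", stroke))
        return self._ai_turn()

    def _ai_turn(self):
        # Interpret the latest mark (here: just its centroid), then
        # contribute a response near it so ideas build across turns.
        _, stroke = self.canvas[-1]
        cx = sum(x for x, _ in stroke) / len(stroke)
        cy = sum(y for _, y in stroke) / len(stroke)
        response = [(cx + dx, cy + 5) for dx in range(0, 25, 5)]
        self.canvas.append(("ai", response))
        return response
```

Keeping the full turn history on the canvas, rather than only the latest mark, is what lets a richer model generate contributions that reflect the whole interaction rather than a single stroke.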


Quantified Art Therapy Interface

The Quantified Art Therapy Interface is a research platform designed to capture and analyze the dynamics of creative expression during drawing-based therapeutic activity. By instrumenting the drawing process itself — including stroke density, tempo, spatial distribution, and structural transitions — the system enables the study of how creative engagement unfolds over time. Rather than evaluating artistic output, the interface focuses on the interactional patterns and regulatory dynamics present during mark-making, providing new tools for understanding how art-making supports cognitive and emotional regulation.
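Instrumenting the drawing process means reducing each stroke to process measures rather than judging the image. A minimal sketch of such per-stroke metrics, with illustrative names and formulas (the platform's actual feature set is not specified here), given a stroke as timestamped samples:

```python
import math

def stroke_metrics(stroke):
    """Sketch of per-stroke process metrics (illustrative, not the
    platform's actual features). `stroke` is a list of (x, y, t)
    samples with t in seconds."""
    # Total distance the pen travels along the stroke.
    path = 0.0
    for (x0, y0, _), (x1, y1, _) in zip(stroke, stroke[1:]):
        path += math.hypot(x1 - x0, y1 - y0)
    duration = stroke[-1][2] - stroke[0][2]
    xs = [p[0] for p in stroke]
    ys = [p[1] for p in stroke]
    return {
        "path_length": path,                            # total distance drawn
        "tempo": path / duration if duration else 0.0,  # drawing speed
        "extent": (max(xs) - min(xs)) * (max(ys) - min(ys)),  # bounding-box area
    }
```

Aggregating measures like these over a session (their trends, variance, and transitions) is what turns mark-making into time-series data about regulation rather than a static artwork.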


Participatory Stroke Engine (PSE)

Kalyri’el’s Participatory Stroke Engine is an interactive drawing system that brings ChatGPT directly into the act of mark-making. As users sketch, the system captures the evolving canvas and uses ChatGPT’s visual analysis capabilities to interpret the structure, flow, and emerging form of the drawing. Instead of generating images, the AI returns a proposed stroke—defined as a structured set of points—based on what is already present and how the composition is unfolding. This allows the AI to participate in the creative process by suggesting context-sensitive continuations that reflect both the current visual state and the history of interaction. The result is a shared drawing process in which human and AI collaborate through meaningful next moves rather than finished outputs.
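Because the model returns a stroke as a structured set of points rather than an image, the receiving side reduces to validating and decoding that structure. The JSON shape below is an assumed exchange format for illustration, not necessarily PSE's actual schema, and the canvas capture and ChatGPT vision call themselves are omitted:

```python
import json

# Assumed reply format (illustrative): the model is asked to return one
# proposed stroke as JSON, e.g.
#   {"stroke": [{"x": 120, "y": 80}, {"x": 135, "y": 92}, ...]}
# The canvas capture and the ChatGPT call are elided in this sketch.

def parse_proposed_stroke(reply_text):
    """Validate the model's reply and return the stroke as (x, y) tuples."""
    data = json.loads(reply_text)
    points = data["stroke"]
    if len(points) < 2:
        raise ValueError("a stroke needs at least two points")
    return [(float(p["x"]), float(p["y"])) for p in points]
```

Constraining the model to a point-list schema is what makes its contribution a drawable "next move" the canvas can render directly, instead of a finished picture to accept or reject wholesale.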


Multi-Agent Drawing Environment

The Multi-Agent Drawing Environment is an interactive platform designed to explore how creative structure emerges through collective interaction. Multiple agents — including humans and AI — draw within the same shared space, continuously responding to one another’s marks in real time. Rather than reflecting the vision of a single creator, the drawing evolves through distributed participation, where patterns, stability, and novelty arise from coordination and tension between contributors. This environment offers a new way to study creativity as a dynamic, shared process — one shaped by interaction rather than individual intention.
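The tension between coordination and novelty that drives the shared drawing can be caricatured in a few lines. The agent rule below is an invented toy, not the environment's actual dynamics: each agent usually draws near the most recent mark by anyone (coordination) but occasionally starts somewhere new (novelty), so structure emerges from the group rather than a single author:

```python
import random

class Agent:
    """Toy sketch (illustrative dynamics): each agent either reinforces
    existing structure or introduces novelty on a shared canvas."""

    def __init__(self, name, novelty=0.3, seed=None):
        self.name = name
        self.novelty = novelty            # chance of breaking from the group
        self.rng = random.Random(seed)    # seeded for reproducibility

    def act(self, canvas):
        if canvas and self.rng.random() > self.novelty:
            # Coordinate: draw near the most recent mark by any agent.
            _, (x, y) = canvas[-1]
            point = (x + self.rng.uniform(-5, 5), y + self.rng.uniform(-5, 5))
        else:
            # Diverge: start structure somewhere new on the canvas.
            point = (self.rng.uniform(0, 100), self.rng.uniform(0, 100))
        canvas.append((self.name, point))
        return point

canvas = []
agents = [Agent("human-proxy", seed=1), Agent("ai-1", seed=2), Agent("ai-2", seed=3)]
for _ in range(10):                       # ten rounds of shared drawing
    for agent in agents:
        agent.act(canvas)
```

Even this toy exhibits the qualitative behavior described above: runs of clustered marks punctuated by jumps, with no agent individually authoring the resulting pattern.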