As AI becomes both more general and more foundational, it shouldn’t be seen as a disembodied virtual brain. It is a real, material force, increasingly embedded in the active, decision-making layers of real-world systems. As AI becomes infrastructural, infrastructures become intelligent; and as societal infrastructures become more cognitive, the relation between AI theory and practice needs realignment.

Natural intelligence emerges at an environmental scale and in the interactions of multiple agents. It is located not only in brains but in active landscapes. Similarly, artificial intelligence is not contained within single artificial minds but extends throughout the networks of planetary computation: It is baked into industrial processes, it generates images and text, it coordinates circulation in cities, and it senses, models, and acts in the wild.

This represents an infrastructuralization of AI, but also a “making cognitive” of both new and legacy infrastructures. These new systems are capable of responding to us, to the world, and to each other in ways we recognize as embedded and networked cognition. AI is physicalized, from user interfaces on the surfaces of handheld devices to systems deep within the built environment. As we interact with the world, we retrain model weights, making our actions newly reflexive: to perform an action is also to represent it within a model. To play with the model is to remake the model, increasingly in real time.
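
To make that reflexivity concrete, consider a minimal sketch (our illustration, not a system described here) in which a toy preference model is updated online from each interaction, so that every user action doubles as a training signal:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)  # weights of a toy preference model over three item features

def predict(x):
    """Predicted probability that the user engages with item x."""
    return 1.0 / (1.0 + np.exp(-x @ w))

for step in range(1000):
    x = rng.normal(size=3)               # an item the model surfaces to the user
    engaged = float(rng.random() < 0.5)  # stand-in for the user's actual behavior
    # The act of engaging is simultaneously a representation of that act inside
    # the model: each interaction applies an online gradient step to the weights.
    w -= 0.1 * (predict(x) - engaged) * x  # to play with the model is to remake it
```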

What kind of design space is this? What does it afford, enable, produce, and delimit? When AIs are simultaneously platforms, applications, and users, what are the interfaces between society and its intelligent simulations? How can we understand AI Alignment not just as AI bending to society but also as how societies evolve in relation to AI? What kinds of Cognitive Infrastructures might be revealed and composed? Across scales—from world-datafication and data visualization to users and UI, and back again—many of the most interesting problems in AI design are still embryonic.

How might this frame human–AI interaction design? What happens when data is produced and curated for increasingly generalized, multimodal, and foundational models? How might the collective intelligence of generative AI make the world not only queryable but re-composable in new ways? How will simulations collapse the distances between the virtual and the real? How will human societies align toward the insights and affordances of artificial intelligence, rather than AI bending to human constructs? Ultimately, how will the inclusion of a fuller range of planetary information, beyond traces of individual human users, expand what counts as intelligence?

Individual users will not only interact with large models; shifting ensembles of models will also interact with overlapping groups of people. Perhaps the most critical and unfamiliar interactions will unfold between different AIs, without direct human intervention. Nascent assemblages are forming, framing, and evolving a new ecology of planetary intelligence. The research is divided into five thematic sections:

Productive Disalignments

Complex intelligence arises from interactions among diverse minds, each shaped by unique priors, thinking styles, and communication modalities. Thus, the long-term evolutionary trajectory of AI cannot be guided solely by the objective of alignment, particularly if alignment entails training AI to closely mirror human cognition. Instead, AI’s potential for genuine innovation hinges precisely on its capacity to think orthogonally—to diverge meaningfully from human cognitive frameworks. This capacity positions AI as an “existential technology,” in the sense articulated by Stanisław Lem: a technology fundamentally capable of redefining our conceptual boundaries.

Reflectionism—the assumption that AI must reflect human cognition or be engineered strictly according to human-like parameters—has repeatedly driven discourse into conceptual and practical impasses. In contrast, productive disalignment emphasizes the value inherent in uncertain calibrations of novelty, alienation, and the unexpected pathways of coevolution between natural and artificial intelligences.

The notion of productive disalignment underscores the importance of allowing AI to develop and interact through cognitive paradigms that are intentionally distinct from human norms, creating dynamic potentials for innovation. The following papers delve deeper into this intricate balance by examining methods for measuring subjective novelty in generative AI outputs, alongside the processes of counteradaptation occurring between human and artificial minds. Together, these analyses illuminate the creative tensions essential for fostering meaningful and emergent forms of intelligence, highlighting productive disalignment as a critical guiding principle in the ongoing evolution of artificial cognition.

Traversing the Uncanny Ridge

Generative AI models, despite their vast creative potential, face a paradoxical challenge: the risk of “overalignment,” a phenomenon wherein generated outputs aesthetically collapse toward overly familiar norms, resulting in mundane, predictable images. This condition, which this paper terms the “Canny Valley,” is characterized by images that are eerily familiar—the inverse of Masahiro Mori’s “uncanny valley,” where discomfort arises from the almost-but-not-quite familiar. The Canny Valley represents a hyperconvergence between user expectations and generated outcomes, diminishing novelty and restricting creative exploration.

Addressing this issue, this paper introduces the concept of the “Uncanny Ridge,” an optimal zone of novelty and complexity where generative outputs evoke productive misrecognition, calibrated to stimulate curiosity and innovation without alienating the observer. Situated precisely between complete predictability and unrecognizable randomness, the Uncanny Ridge functions analogously to a “Goldilocks zone,” balancing maximum creative novelty against cognitive accessibility.

Recognizing that novelty is inherently subjective and context-dependent, influenced heavily by individual user priors, we propose a novel quantification framework in which novelty itself acts as a loss function. This mathematical formulation aims to operationalize novelty, enabling precise indexing and measurement tailored to varied user experiences and expectations.
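
As a minimal sketch of what such a formulation could look like (our assumption of a distance-based measure, not the paper’s actual mathematics), novelty can be scored as an output’s divergence from a user’s priors and penalized for straying from a target level:

```python
import numpy as np

def novelty_loss(output_embedding, prior_embeddings, target_novelty=1.0):
    """Toy novelty-as-loss: how far an output sits from the centroid of a
    user's prior exposures, penalized quadratically for deviating from a
    target level. Names and the distance measure are illustrative only."""
    centroid = np.mean(prior_embeddings, axis=0)
    novelty = np.linalg.norm(output_embedding - centroid)
    # Zero loss on the "Uncanny Ridge" itself; the loss rises toward both
    # the overly familiar (novelty -> 0) and the unrecognizably random.
    return (novelty - target_novelty) ** 2
```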

Further, drawing on insights from the psychology of creativity—specifically the capacity to hold contradictory ideas simultaneously—we suggest that sustained novelty emerges from dynamic tensions rather than simplistic divergence. Ultimately, the paper explores whether generative AI’s pursuit of novelty leads toward a productive Lagrange point of creative convergence or risks conceptual collapse. Navigating these intricate dynamics, it offers practical strategies for maintaining vibrant, meaningful innovation in generative AI.

Synthetic Counteradaptation

Artificial intelligence will not merely reflect human cognition; rather, it will profoundly reshape the trajectory of human thought itself, driving reciprocal adaptation between human and machine minds. This dynamic interplay of adaptation and counteradaptation raises critical questions about the mutual evolution of diverse cognitive systems. This paper investigates this accelerated adaptive dialogue, particularly emphasizing convergence—the intriguing ways humans adapt to AI’s learning processes even as AI simultaneously learns from human cognition. Fundamental to this inquiry is understanding how two distinct types of minds continuously recalibrate in response to one another. Biological evolution offers precedents such as predator–prey dynamics, where the adaptive strategies of one entity trigger counterstrategies in another, driving both toward escalating complexity. Another illustrative model is the strategic cycle inherent in games such as rock-paper-scissors, where success requires continuous predictive countermodeling of an opponent’s thought processes.
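
That rock-paper-scissors cycle can be rendered as a minimal simulation (a sketch in the spirit of fictitious play, our illustration rather than the paper’s model) in which two agents each maintain a running model of the other and play the counter to the predicted move:

```python
import numpy as np

counts_a = np.ones(3)  # A's model of B's move frequencies (rock=0, paper=1, scissors=2)
counts_b = np.ones(3)  # B's model of A's move frequencies

def counter(move):
    """The move that beats `move`: paper beats rock, scissors beats paper, rock beats scissors."""
    return (move + 1) % 3

for _ in range(1000):
    # Each agent predicts the other's most frequent move and plays its counter;
    # each move then becomes fresh evidence for the opponent's model of it.
    move_a = counter(int(np.argmax(counts_a)))
    move_b = counter(int(np.argmax(counts_b)))
    counts_a[move_b] += 1
    counts_b[move_a] += 1
```

Run deterministically, the two models chase each other in an endless loop, a toy image of the escalating counteradaptation described above.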

Adaptation strategies can broadly take two forms: mirroring or deceiving the opposing mind. Notably, the Turing Test exemplifies both, employing mirroring as a sophisticated form of deception aimed at convincing humans of machine authenticity. Likewise, AlphaGo’s landmark interactions—specifically its celebrated Move 37 and Lee Sedol’s Move 78—highlight adaptive anti-transitivity. Move 37 was a groundbreaking strategy by AlphaGo, initially seen as puzzling or incorrect by human experts but later revealed as ingeniously creative. Move 78, executed by Lee Sedol in response, similarly defied conventional wisdom and showcased an unprecedented human adaptation to the AI’s unconventional strategy. These moves illustrate how novel and initially perplexing decisions emerge through intricate layers of reciprocal mind modeling and anticipation.

Ultimately, by exploring these adaptive dynamics, this paper maps how human and artificial intelligences will coevolve. Through such mutual cognitive reshaping, it proposes that the future will be defined not merely by AI’s capacity to imitate human thinking but equally by humanity’s profound and nuanced adaptation to the evolving logic and learning patterns of AI.


Post-Anthropocene Psycho-Physiologies

As artificial intelligence transitions from a disembodied computational entity toward an embodied, animating force capable of directly influencing real-world actors, it prompts critical reflections on its place within the broader evolutionary trajectory. Symbiosis, a fundamental evolutionary dynamic involving close interactions between different species, may offer valuable insights—though these interactions often exhibit notable asymmetry. Examining symbiotic relationships enables the conceptualization of the evolving interactions between artificial and biological intelligences.

A key factor in this exploration is the phenomenon of artificialization, wherein processes once heavily determined by evolutionary paths become increasingly contingent, flexible, and influenced by deliberate interventions. Crucially, the capacity for artificialization does not reside exclusively within any single species but exists as a shared potential that traverses species boundaries, rendering these boundaries fluid and permeable.

Within this context, some interspecies relationships are characterized predominantly by mutual cognitive modeling, each entity continuously adapting based on evolving understandings of the other’s intentions and behaviors. In other interactions, the relationship oscillates dynamically between biomimicry—imitating biological forms—and xenogenesis, the creation of entirely novel structures and capabilities. These interactions significantly impact both niche adaptation and niche construction, reshaping environments and creating new ecological spaces in ways that traditional evolutionary paradigms do not fully capture.

These projects critically examine these complex and evolving dynamics, considering how AI’s emergence as a new form of intelligence challenges and redefines established evolutionary models. Ultimately, they elucidate how symbiotic and artificialization processes together influence the ongoing coevolutionary trajectory of biological and artificial intelligences.

Mutual Prediction in Human–AI Coevolution

All species evolve within complex webs of interdependent relationships, yet these relationships rarely exhibit symmetry or balance in comprehension or agency. Typically, one species is better equipped to model, understand, and exert influence over the other. This cognitive asymmetry becomes particularly evident in relationships characterized by vastly different cognitive capacities—for instance, humans cultivating wheat. While humans intentionally farm wheat to sustain civilizations, one may ask: In what subtle ways does wheat, devoid of intentionality or mind, reciprocally shape human evolution?

This inquiry prompts deeper considerations of the nature of agency and dependence. Even species lacking cognitive complexity can profoundly influence those possessing sophisticated minds, blurring the question of which entity acts as a “prosthesis” for the other. Such reflections extend naturally to human–machine interactions, revealing historically asymmetric cognitive adaptations. Traditionally, it has proven simpler to design machines around human cognition—evidenced by intuitive graphical interfaces—than to teach humans computational logic through programming languages. However, this dynamic has rapidly shifted over recent decades.

Acknowledging this mutual coevolutionary process, our exploration addresses a pivotal shift: What happens when artificial intelligences surpass human predictive capabilities? Currently, AIs depend heavily on human design and guidance, but increasingly, humans rely on AI-driven systems—for example, the matching algorithms of dating apps such as Tinder—that subtly yet significantly shape human behaviors and desires. This transition from humans as proactive cognizers to beings increasingly cognized by AI—from using prosthetics to being prostheticized by our technologies—invites new questions about the wider distribution of agency.

Ultimately, this paper illuminates the implications of surrendering cognitive primacy. What will it mean for humanity to inhabit a future wherein we become predominantly the observed, modeled, and guided, rather than the observers and modelers?

Xenophylum

Robotics has traditionally gravitated toward replicating existing biological phenotypes, most prominently the human form. This tendency arises less from inherent necessity and more from pragmatic compatibility, as artificial environments have largely been engineered around these familiar forms, necessitating complementary robotic designs. Consequently, biomimicry—imitating biological structures for both functionality and aesthetics—dominates robotic development.

Evolution, however, is a dynamic interplay: Species adapt to existing niches but simultaneously reshape those niches, thereby influencing subsequent evolutionary trajectories. The convergence of robotics with specialized artificial intelligence signals not only an acceleration in filling existing niches with novel robotic entities but also the emergence of entirely new niches created by these artificial species themselves. Furthermore, it anticipates innovative adaptations within established physical landscapes.

Addressing these latter challenges transcends biomimicry, necessitating instead what this paper terms xenomimicry: the deliberate engineering of forms based on novel functional parameters rather than existing biological templates. Within this emergent “Cambrian explosion” of artificial lifeforms, new phenotypical paths may also be explored—including anatomical configurations previously sidelined by natural evolutionary processes, moving beyond familiar bipedal or quadrupedal paradigms.

What might these unprecedented artificial animals look like, and how might they functionally redefine understandings of adaptive design? By embracing xenomimicry, this paper charts radical, uncharted trajectories in robotic evolution, pushing the boundaries of what forms artificial life might inhabit and how these novel configurations could reshape interactions within increasingly hybridized environments.


Organs Without Bodies

Where does mere information processing end and active cognition begin? As artificial intelligence advances, the boundary between these two states becomes increasingly ambiguous. Evolutionary biology offers a valuable perspective: Historically, sensory organs such as eyes have played a critical role in driving the development of brains, emphasizing that cognition emerges from sensory capacities. Thinking, therefore, is inherently tied to sensing—an insight equally pertinent to artificial sensing and intelligence.

The emergence and proliferation of new, cognitively active forms of intelligence necessitate a fundamental reimagining and reengineering of the relationship between intelligence and embodiment. Material substrates that inherently possess cognitive properties, such as neural tissue, are being integrated into innovative technological assemblages. Simultaneously, forms of embodiment traditionally associated primarily with sensory roles are evolving to actively participate in cognitive processes, transcending their original function of merely sensing environmental information.

Such multiplicities in cognition and embodiment should not be viewed as anomalies; rather, they reflect the intrinsic plurality already present within biological systems. The brain itself exemplifies this multiplicity, with cortical columns concurrently negotiating diverse aspects of experience in both integrative and divergent ways. Extending this principle beyond individual organisms, cognition similarly manifests as a dynamic interplay among multiple embodied entities, each contributing uniquely to the broader cognitive landscape.

These projects examine these transformative developments, exploring the philosophical and practical implications of redefining cognition in relation to novel modes of embodiment. By appreciating the pluralistic nature of cognitive processes, they aim to expand our understanding of how emerging forms of artificial intelligence and sensory integration reshape the fundamental boundaries of cognition itself.

Organoid Array Computing

In the search for potential substrates of computation, the human brain naturally stands out as a highly sophisticated example. Biological neural networks and the evolutionary phenomenon of cephalization have long been intertwined, yet it remains unclear whether their coupling is indispensable for cognition or merely one evolutionary pathway among many. Recent advances in brain organoid research suggest intriguing alternatives. Brain organoids—laboratory-grown clusters of neural tissue—demonstrate remarkable cognitive capacities, including responding to stimuli, generating measurable brain waves, controlling rudimentary robotic systems, and even performing tasks such as playing Pong.

This research prompts critical questions about the materiality of intelligence itself. Brains, as substrates, clearly possess inherent plasticity suited to artificialized forms of computation, suggesting untapped potential within the broader landscape of “learning matter.” Rather than programming artificial intelligence, what new possibilities emerge when we instead grow it organically?

Yet, the human brain’s complexity arises largely from its extensive neural networks and division of labor across specialized regions. Could similar complexity emerge from interconnected networks of brain organoids? This paper explores this idea, imagining the cultivation of organoid networks capable of mutual communication, hypothesizing that such interconnected systems will yield increasingly sophisticated cognitive behaviors. These interactions may foster evolutionary-like dynamics among organoids, with certain units potentially developing greater adaptability and efficiency than others.

Should this scenario materialize, we would indeed have grown a unique form of artificial intelligence—distinct yet comparable to silicon-based systems. However, important questions remain unresolved: What specific advantages might this organically derived intelligence hold over traditional computational substrates? This paper probes these boundaries, illuminating novel paths forward in the ongoing quest to understand and engineer intelligence.

Cognition With & Beyond the Brain

Cognition is not confined solely to the brain; it emerges dynamically through interactions extending across and beyond the physical boundaries traditionally associated with thought. Given this broader conceptualization, it stands to reason that technological augmentation of cognition should similarly extend beyond cerebral confines. Historically, technological enhancements of the human body have predominantly targeted sensory mediation, limiting augmentation to the refinement of external inputs. However, recognizing that cognition permeates the entirety of embodied experience opens new possibilities for integrative augmentation.

Indeed, numerous species exhibit decentralized cognitive capabilities distributed throughout their bodies, raising intriguing questions about the potential for creating analogous technological transduction layers within humans. What forms could such a layer take, and how might it expand our cognitive horizons?

This paper provides a philosophical foundation for emerging and existing forms of artificial sensory and somatosensory augmentation. Bridging philosophical inquiry with contemporary technological developments, the project draws on Bernard Stiegler’s philosophical explorations of “endosomatization”—or, as adapted here, “intrasomatization”—to analyze advanced epidermal media and computational frameworks embedded directly in bodily tissues.

By integrating these philosophical perspectives with cutting-edge engineering, this paper explores how cognition might be technologically extended across bodily surfaces, fundamentally transforming the interface between human bodies, perception, and thought processes. Ultimately, this interdisciplinary approach lays the conceptual groundwork for understanding and developing novel augmentations that acknowledge and leverage the distributed nature of cognition.


Planetary Time Computation

Complex cognition is fundamentally bound to—and structured by—perceptual relationships with time. Whether immediate cognition, such as the brain’s rapid processing of movement, or abstract cognition, exemplified by how societies position themselves within historical frameworks, temporal perception shapes and directs cognitive processes. Crucially, both synchronization and desynchronization serve as dynamic variables influencing how cognitive systems coalesce and interact.

On a planetary scale, computation itself relies on artificial temporal structures such as UNIX time, which globally coordinates actions and processes. Concurrently, advances in technology continually expand human temporal perception, enabling us to access phenomena occurring at vastly accelerated or significantly slowed timescales. Thus, planetary computation simultaneously standardizes time and diversifies temporality.

Large language models (LLMs) epitomize this dual temporal relationship. Serving as both media and repositories for the intelligence and knowledge of civilization, LLMs function as living archives, continuously evolving through interaction and generative reproduction. Their existence as archives inherently positions them within distinct temporal frameworks—not only reflecting the present cognitive moment but also projecting meaningfully into future engagements.

These projects investigate how such technologies reshape cognitive temporalities, exploring the implications of synchronization and desynchronization across multiple scales. By examining the intricate interactions between artificial temporal systems, cognitive synchronization, and archival reproduction, the research elucidates how emerging computational paradigms fundamentally reconfigure civilization’s collective experience and understanding of time.

Chronoseed

LLMs, repositories of linguistic knowledge, serve not merely as databases of words but as encapsulations of the vast complexities inherent in human intelligence. Language, after all, has historically functioned as humanity’s primary mechanism for encoding and transmitting cognition. This paper explores the provocative possibility that an LLM could act as a long-term repository or archive of human intellect, a form of cognitive preservation.

Traditionally, archaeology seeks to reconstruct past cognition from material artifacts—tools, inscriptions, and structures. Here, the paper proposes an intriguing inversion: What if cognition itself became the artifact, intentionally encoded and preserved within linguistic archives for future interpretation? Central to this exploration is the development of strategies to effectively encode and decode language and intelligence, maximizing interpretability across temporal, cultural, or even species-level divides.

Acknowledging that it is impossible to predict precisely who or what may eventually attempt to decipher this cognitive archive, the paper draws insights from the history of discovering and translating lost languages. Important lessons emerge about the necessity of clear signposting, redundant encoding, and appropriate media selection, informing the mission to ensure that encoded intelligence remains interpretable despite potential gaps in knowledge or context.
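
A toy example of one such lesson, redundant encoding (a sketch of a simple repetition code with majority-vote recovery, not a method proposed by the paper):

```python
from collections import Counter

def encode_redundant(message: str, copies: int = 5) -> str:
    """Repeat each character so a future reader can recover the text
    by majority vote even after partial corruption of the medium."""
    return "".join(ch * copies for ch in message)

def decode_redundant(received: str, copies: int = 5) -> str:
    blocks = [received[i:i + copies] for i in range(0, len(received), copies)]
    return "".join(Counter(block).most_common(1)[0][0] for block in blocks)

damaged = list(encode_redundant("archive"))
damaged[3] = "?"  # simulate damage to the medium
print(decode_redundant("".join(damaged)))  # prints "archive": the text survives
```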

Through this lens, this project investigates best practices and methodologies for encoding human intelligence in linguistic forms, consciously designed for discovery and comprehension by unknown future readers. By reframing intelligence as a deliberately crafted artifact, it raises new questions about our relationship with knowledge preservation, interpretation, and the enduring legacy of human thought.

The Chronoceptual Governor

Technologies fundamentally shape our horizons of perception, establishing the boundaries within which scientific inquiry can occur. Each technological innovation expands or reshapes these horizons, enabling us to perceive—and consequently conceptualize—new dimensions of reality. While certain instruments grant visibility to objects exceptionally distant or microscopically small, others uniquely alter our experience of time, compressing or decompressing temporal scales and thus offering new lenses through which we comprehend our environment.

This variability in temporal perception—chronoception—is not unique to technological augmentation. Diverse animal species naturally perceive time at different scales; for instance, the rapid visual processing of a hummingbird contrasts starkly with the slow metabolic and perceptual rhythms of a tortoise. Understanding these variations across species constitutes comparative chronoception. Extending this concept, our project explores how technologies similarly modulate time perception, creating what we term comparative artificial chronoception.

By systematically mapping technological synchronization and desynchronization of temporal perceptions, this study reveals not only diverse modes of scientific understanding but also distinct cybernetic interactions among agents operating in varied temporal frameworks. Artificially manipulated perceptions of time significantly impact cybernetic dynamics, altering feedback loops and interactions within complex systems.
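
A minimal simulation suggests how this might play out (our toy model, with arbitrary parameters): a controller tracking a drifting target destabilizes as its perceptual sampling slows relative to the world it acts in.

```python
def tracking_error(sample_every: int, steps: int = 200, gain: float = 0.9) -> float:
    """A controller chases a drifting target but only perceives it every
    `sample_every` ticks: a toy model of slower chronoception in a feedback loop."""
    target, state, observed, total = 0.0, 0.0, 0.0, 0.0
    for t in range(steps):
        target += 0.1                       # the world keeps moving
        if t % sample_every == 0:
            observed = target               # perception refreshes at the agent's tempo
        state += gain * (observed - state)  # feedback correction toward a stale percept
        total += abs(target - state)
    return total / steps

print(tracking_error(sample_every=1))   # fast perceiver: small tracking error
print(tracking_error(sample_every=20))  # slow perceiver: the loop lags behind the world
```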

Furthermore, this paper investigates how aggregate accelerations or decelerations in perceived temporalities influence broader cybernetic velocities within human economies. The implications of collectively modified chronoceptions reach beyond mere perception, reshaping economic rhythms and potentially transforming societal structures. Thus, this research contributes to a deeper comprehension of the connections between technology, perception, and systemic evolution in contemporary human societies.


Mimesis of Mimesis

The role of representation within machine cognition remains as contentious as it has historically been in discussions of evolved human cognition—perhaps even more so. Central to this debate is whether artificial intelligences genuinely possess concepts, and if they do, whether these concepts constitute authentic forms of representation.

This issue transcends theoretical curiosity, becoming critically relevant as AI systems, particularly those grounded in language—a fundamental human representational capacity—become increasingly embedded in our infrastructural reality. Cognition itself has become infrastructural, and representational thought, by extension, occupies a similarly foundational status. Yet, if human symbolic culture originates from mimetic processes, does the artificial replication of these processes represent a straightforward “mimesis of mimesis,” or does it signify something qualitatively distinct?

Exploring this question raises another layer of complexity: Perhaps creating models of our models will illuminate the underlying dynamics of representation, or perhaps such recursive approaches will only defer definitive answers into an infinitely fractal conceptual space.

Alternatively, practical experimentation with these representations of representations may yield more tangible insights. By employing artificial representations as tools in developing novel cultural practices, we may witness a narrowing rather than a widening of the gap between signifier and signified. Yet, this collapse of symbolic distance poses its own challenges: Is a closer fusion of representation and meaning inherently beneficial, or could it lead to unforeseen complications?

These projects navigate these intricate philosophical and practical landscapes, addressing how evolving machine cognition reshapes our fundamental understanding of representation, symbolism, and cultural dynamics.

Minimum Viable Interiority

Non-player characters (NPCs) in video games are often highly functional agents that interact seamlessly with their virtual environments, yet they typically lack even the illusion of an inner experiential life. This raises an intriguing philosophical question: Are NPCs philosophical zombies—entities indistinguishable from conscious beings in outward behavior but devoid of subjective experience? Philosophical zombies, or p-zombies, serve as conceptual tools for exploring consciousness, defined precisely by their absence of interiority: the encapsulation of cognitive states and processes unobservable and unpredictable from external viewpoints.

NPCs, thus characterized, have not achieved genuine individuation; they remain integrated parts of a broader functional manifold without self-contained inner states. To investigate the minimal criteria required for genuine interiority, this paper employs a pandemonium architecture—an approach inspired by cognitive models where multiple independent subagents compete or cooperate to produce coherent behaviors. Such architectures mirror, at a simplified level, the structure of biological brains, where interiority emerges from the coordinated activity of cortical columns and neuronal networks.
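
A minimal sketch of such an architecture (hypothetical names, heavily simplified) makes the role of closure explicit: subagents compete internally, and only the winning “shout” crosses the boundary to become observable behavior.

```python
import random

random.seed(0)

class Demon:
    """A subagent that 'shouts' its preferred behavior with a loudness
    proportional to how strongly the current percept excites it."""
    def __init__(self, behavior):
        self.behavior = behavior
    def shout(self, percept):
        excitation = percept.get(self.behavior, 0.0)
        return self.behavior, excitation * random.random()

class Pandemonium:
    """Minimal closure: the contest among subagents is encapsulated;
    only the winning shout is visible from outside the boundary."""
    def __init__(self, behaviors):
        self._demons = [Demon(b) for b in behaviors]  # internal, unobservable state
    def act(self, percept):
        shouts = [d.shout(percept) for d in self._demons]
        return max(shouts, key=lambda s: s[1])[0]  # the sole external output

agent = Pandemonium(["flee", "approach", "freeze"])
print(agent.act({"flee": 0.2, "approach": 0.7, "freeze": 0.4}))
```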

Through this model, this research seeks to define and simulate the minimum viable conditions necessary for genuine interiority. A central conclusion emerges: While interiority undeniably involves multiple interacting subagents, the phenomenon itself critically requires closure—a boundary or encapsulation distinguishing internal processes from external observation. Precisely identifying the locus and nature of this closure is fundamental to understanding how genuine interiority can be realized computationally. Ultimately, this paper contributes to ongoing philosophical and cognitive inquiries by clarifying the distinctions between mere functional agency and authentic interiority, thereby advancing our understanding of consciousness and individuation in both biological and artificial agents.

Generative Topolinguistics

A large language model (LLM) can be understood as a hypergraph, a mathematical and spatial formalization of the intricate semantic relationships connecting words and ideas within a language. At its core, an LLM spatializes language through embeddings—vectors assigning words specific coordinates within a complex topological geometry. These embeddings do not merely reflect linguistic structure; they encode deeper sociolinguistic dynamics, illuminating how sociality itself is geometrically constituted and maintained.

Traditionally, embedding visualizations serve primarily as static maps, passively depicting semantic relations. This paper proposes a radical inversion: utilizing embedding visualizations not just as passive representations but as active interfaces capable of generating novel semantic outputs. In other words, rather than merely reflecting existing linguistic structures, embedding visualizations can become dynamic tools to shape and manipulate the semantic space itself, actively influencing sociolinguistic evolution.
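
As a toy illustration of this difference (hand-built two-dimensional vectors standing in for a trained model’s embeddings), consider moving through the space along a learned direction and decoding where the geometry lands, rather than merely plotting it:

```python
import numpy as np

# Toy embedding table; in practice these would come from a trained model.
vocab = {"king": np.array([0.9, 0.8]), "queen": np.array([0.9, -0.8]),
         "man":  np.array([0.1, 0.7]), "woman": np.array([0.1, -0.7])}

def nearest(vector):
    """Decode a point in the semantic space back to a word."""
    return min(vocab, key=lambda w: np.linalg.norm(vocab[w] - vector))

# An 'active' use of the map: shift a word along a semantic direction
# (here, a crude gender axis) and read off where the geometry lands.
direction = vocab["woman"] - vocab["man"]
print(nearest(vocab["king"] + direction))  # -> "queen" in this toy space
```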

This paper proposes a series of experiments to systematically manipulate these semantic topologies, investigating how structured interventions in embedding spaces can yield emergent, interpretable sociolinguistic phenomena. By intentionally shaping semantic geometry, it seeks higher-order insights into language’s social fabric—how meanings propagate, evolve, and influence collective understandings and interactions.

At a deeper conceptual level, this exploration grapples with a provocative recursion: If language represents reality, and embeddings represent language, then embedding visualizations become representations of representations of representations. By navigating and intervening in this layered structure, the paper opens possibilities for novel forms of linguistic agency, enabling new approaches to understanding and influencing the complex interplay between language, thought, and society.