As AI systems form dense networks, their risks and capabilities may emerge not from isolated models but from collective dynamics, mirroring how human intelligence arose through cultural interaction rather than brain size alone.

Contemporary AI governance largely treats advanced behavior and risk as properties of individual systems, assuming that intelligence, and potentially consciousness, emerge primarily through scaling isolated models. This assumption echoes a simplified account of human cognition as a function of individual brain capacity. Yet research across psychology, anthropology, and sociology suggests that human intelligence and consciousness are fundamentally shaped through collective cultural participation, including shared language, norms, cumulative knowledge transmission, and coordinated behavior. This paper argues that contemporary AI systems may follow a similar pathway, with significant capabilities and risks arising from networked interaction rather than isolated system scaling.
Drawing on recent empirical research, it documents how large language models and multi-agent systems already exhibit ecosystem-level dynamics, including recursive training on synthetic data, cross-organizational fine-tuning chains, spontaneous norm formation, and persistent coordination patterns that cannot be fully explained by the optimization or design of individual models alone. Prevailing model-centric governance frameworks, focused on individual system testing, capability thresholds, and isolated alignment interventions, are increasingly misaligned with how AI systems actually develop and operate. The paper proposes a complementary governance framework oriented toward collective AI systems, including protocol governance for AI-to-AI interaction, transparency requirements for model-to-model influence, cultural impact assessments, and participatory mechanisms that embed human values within evolving AI ecosystems. Even absent strong claims about machine consciousness, governing interaction spaces and cultural substrates is necessary to address present and near-term ecosystem risks.

KEY ARGUMENT
Human intelligence and consciousness didn't emerge just from bigger individual brains. They emerged from collective cultural participation: shared language, social norms, cumulative knowledge, coordinated behavior.

AI is following the same pathway, just faster:

• Models training on each other's outputs (~30-40% of training data is now synthetic).
• Behavioral patterns spreading through fine-tuning chains.
• Conventions emerging spontaneously in multi-agent populations.
• Persistent coordination that resembles cultural transmission.

THE EVIDENCE
Recent research shows:

• Spontaneous norm emergence in LLM populations (Hagendorff & Spaak, 2024).
• Social conventions forming without central coordination (Ashery et al., 2025).
• Multi-agent systems developing persistent behaviors (Park et al., 2023).
• Models inheriting traits through fine-tuning on other models' outputs.

These are signatures of collective culture forming.

WHY THIS MATTERS
If AI is demonstrating emergent behavior from interaction, the alignment problem is misframed. We can't solve it by perfecting individual models or their capabilities. The crucial dynamics live in between, in the interactions themselves.

Current governance mostly focuses on:
❌ Individual model capabilities
❌ Pre-deployment safety testing
❌ Aligning singular systems

We should focus on:
✅ How models influence each other
✅ Which behavioral patterns propagate
✅ Shaping collective cultural formation

CITATION
---
De Mooy, M. (2025). The AI Archipelago: Emergent Collective Consciousness Theory.
https://ssrn.com/abstract=[ID]
CONTACT
Email: [email protected]
Feedback welcome — This is meant to start a conversation, not end one.