
Emergent Causality & the Foundation of Consciousness

posted on 2023-04-28, 20:03, authored by Michael Timothy Bennett

Accepted for full oral presentation at the 16th Conference on Artificial General Intelligence, taking place in Stockholm, 2023.

To make accurate inferences in an interactive setting, an agent must not confuse passive observation of events with having intervened to cause those events. The do operator formalises interventions so that we may reason about their effects. Yet there exist Pareto optimal mathematical formalisms of general intelligence in an interactive setting which, despite presupposing no explicit representation of intervention, make maximally accurate inferences. We examine one such formalism. We show that, in the absence of a do operator, an intervention can be represented by a variable. We then argue that variables are abstractions, and that the need to explicitly represent interventions in advance arises only because we presuppose these sorts of abstractions. The aforementioned formalism avoids this, and so, initial conditions permitting, representations of relevant causal interventions will emerge through induction.
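The claim that an intervention can be represented by a variable can be illustrated with a standard construction sometimes called an intervention (or regime) indicator. The sketch below is an illustration of that general idea, not the paper's formalism: a toy structural model with a hidden confounder, in which conditioning on an explicit intervention variable reproduces the do operator. All names (`U`, `X`, `Y`, `'idle'`, `'set1'`) are hypothetical.

```python
# Toy structural model (illustrative assumption, not the paper's formalism):
#   U ~ Bernoulli(0.5) is a hidden confounder
#   X := U             (X copies the confounder)
#   Y := X AND U       (Y depends on both)

def p_y1_given_observe_x1():
    # Ordinary conditioning on passively observing X = 1.
    num = den = 0.0
    for u in (0, 1):
        pu = 0.5
        x = u  # X follows its usual mechanism
        if x == 1:
            den += pu
            num += pu * (x & u)
    return num / den

def p_y1_given_do_x1():
    # Pearl's do(X=1): sever X's mechanism and clamp X to 1.
    total = 0.0
    for u in (0, 1):
        pu = 0.5
        x = 1  # mechanism replaced by the intervention
        total += pu * (x & u)
    return total

def p_y1_given_intervention_variable(i):
    # Augmented model: an explicit intervention variable I taking values
    # 'idle', 'set0', 'set1'. X follows its usual mechanism only when
    # I == 'idle'; otherwise X is clamped. Ordinary conditioning on I
    # then reproduces the do operator, with no do operator in sight.
    total = den = 0.0
    for u in (0, 1):
        pu = 0.5
        x = u if i == 'idle' else (1 if i == 'set1' else 0)
        den += pu
        total += pu * (x & u)
    return total / den

print(p_y1_given_observe_x1())                 # 1.0: observing X=1 reveals U=1
print(p_y1_given_do_x1())                      # 0.5: intervening leaves U alone
print(p_y1_given_intervention_variable('set1'))  # 0.5: matches do(X=1)
```

The observational and interventional answers differ (1.0 versus 0.5) because observing X=1 carries information about the confounder while setting X=1 does not; the augmented model recovers the interventional answer purely by conditioning on the intervention variable.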

These emergent abstractions function as representations of one's self and of any other object, inasmuch as the interventions of those objects impact the satisfaction of goals. We argue that this explains how one might reason about one's own identity and intent, about those of others, about one's own as perceived by others, and so on. In a narrow sense this describes what it is to be aware, and constitutes a mechanistic explanation of aspects of consciousness.


Submitting Author's Institution: The Australian National University

Submitting Author's Country: Australia