TechRxiv

Enactivism & Objectively Optimal Super-Intelligence

preprint
posted on 2023-03-14, 04:21, authored by Michael Timothy Bennett

Software's effect upon the world hinges upon the hardware that interprets it. This tends not to be an issue, because we standardise hardware. AI is typically conceived of as a software "mind" running on such interchangeable hardware. This formalises mind-body dualism, in that a software "mind" can be run on any number of standardised bodies. While this works well for simple applications, we argue that this approach is less than ideal for the purposes of formalising artificial general intelligence (AGI) or artificial super-intelligence (ASI). The general reinforcement learning agent AIXI is Pareto optimal. However, this claim regarding AIXI's performance is highly subjective, because that performance depends upon the choice of interpreter. We examine this problem and formulate an approach based upon enactive cognition and pancomputationalism to address the issue. Weakness is a measure of plausibility, a "proxy for intelligence" unrelated to compression or simplicity. If hypotheses are evaluated in terms of weakness rather than length, then we are able to make objective claims regarding performance (how effectively one adapts, or "generalises", from limited information). Subsequently, we propose a definition of AGI which is objectively optimal given a "vocabulary" (body, etc.) in which cognition is enacted, and of ASI as that which finds the optimal vocabulary for a purpose and then constructs an AGI.
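The contrast between weakness and length can be illustrated with a toy sketch. The representation below is an assumption made for illustration only, not the paper's formalism: a "statement" is modelled as an input-output pair, a hypothesis as the finite set of statements it entails (its extension), and weakness as the size of that extension. Under these assumptions, hypothesis selection prefers the consistent hypothesis entailing the most statements, rather than the shortest description as in compression-based approaches.

```python
# Toy sketch only. Assumptions (not from the paper): statements are
# (input, output) pairs; a hypothesis is the set of statements it
# entails (its extension); weakness is the size of that extension.

def weakness(hypothesis: set) -> int:
    """Number of statements the hypothesis entails."""
    return len(hypothesis)

def consistent(hypothesis: set, data: set) -> bool:
    """A hypothesis is consistent if it entails every observed statement."""
    return data <= hypothesis

def weakest_consistent(hypotheses: list, data: set):
    """Among consistent hypotheses, select the weakest (largest extension)."""
    candidates = [h for h in hypotheses if consistent(h, data)]
    return max(candidates, key=weakness) if candidates else None

# Example: two hypotheses fit the observed data; the weaker one, which
# entails more unobserved statements, is preferred.
data = {(0, 0), (1, 1)}
h_narrow = {(0, 0), (1, 1)}                # entails only what was observed
h_weak = {(0, 0), (1, 1), (2, 2), (3, 3)}  # entails additional statements
assert weakest_consistent([h_narrow, h_weak], data) == h_weak
```

The intuition this sketch captures is that a weaker hypothesis constrains the world less, and so is more likely to remain correct on statements not yet observed; see the paper for the formal definitions and the optimality claims.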

History

Email Address of Submitting Author

michael.bennett@anu.edu.au

ORCID of Submitting Author

0000-0001-6895-8782

Submitting Author's Institution

The Australian National University

Submitting Author's Country

  • Australia