Enactivism & Objectively Optimal Super-Intelligence
Michael Timothy Bennett
The Australian National University

Corresponding Author: [email protected]


Abstract

Software’s effect upon the world hinges upon the hardware that interprets it. This tends not to be an issue, because we standardise hardware. AI is typically conceived of as a software mind running on such interchangeable hardware. The hardware interacts with an environment, and the software interacts with the hardware. This formalises mind-body dualism, in that a software mind can be run on any number of standardised bodies. While this works well for simple applications, we argue that this approach is less than ideal for the purposes of formalising artificial general intelligence (AGI) or artificial super-intelligence (ASI).
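The dependence of software upon its interpreter can be made concrete with a minimal Python sketch. This is our illustration, not the paper's: PROGRAM, interpreter_a, and interpreter_b are hypothetical names, and the two functions stand in for two choices of hardware that read the same string differently.

```python
# A minimal sketch (our illustration, not the paper's): the same
# software artefact has different effects depending on which
# hypothetical "hardware" interprets it.

PROGRAM = "1011"  # an arbitrary program, fixed as a string of bits

def interpreter_a(bits: str) -> int:
    """Hardware A reads the bits as a binary integer."""
    return int(bits, 2)

def interpreter_b(bits: str) -> int:
    """Hardware B reads the bits as a unary tally of ones."""
    return bits.count("1")

print(interpreter_a(PROGRAM))  # 11
print(interpreter_b(PROGRAM))  # 3
```

Any claim about what PROGRAM "does" is relative to the interpreter, which is the subjectivity the next paragraph ascribes to AIXI's optimality.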
The general reinforcement learning agent AIXI is Pareto optimal. However, this claim regarding AIXI’s performance is highly subjective, because that performance depends upon the choice of interpreter. We examine this problem and address it with an approach based upon enactive cognition and pancomputationalism. Weakness is a measure of simplicity, a “proxy for intelligence” unrelated to compression. If hypotheses are evaluated in terms of weakness, rather than length, we are able to make objective claims regarding performance. Subsequently, we propose objectively optimal notions of AGI and ASI such that the former is computable and the latter anytime computable (though impractical).
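A toy Python sketch can illustrate the contrast, under our own assumptions rather than the paper's formalism: hypotheses are rules over (situation, decision) pairs, the weakness of a rule is the number of pairs it accepts, and "length" is simply the size of the rule's source text in this hypothetical language.

```python
# A toy sketch (our illustration, not the paper's formalism) contrasting
# hypothesis selection by weakness with selection by description length.

DOMAIN = [(s, d) for s in "abcd" for d in (0, 1)]
OBSERVED = [("a", 1)]  # hypothetical training data

RULES = {
    "d == 1": lambda s, d: d == 1,
    "s == 'a'": lambda s, d: s == "a",
    "d == 1 or s == 'a'": lambda s, d: d == 1 or s == "a",
}

def consistent(rule):
    """A rule fits if it accepts every observed pair."""
    return all(rule(s, d) for s, d in OBSERVED)

def weakness(rule):
    """Weakness = size of the rule's extension over the domain."""
    return sum(1 for s, d in DOMAIN if rule(s, d))

fitting = {src: rule for src, rule in RULES.items() if consistent(rule)}
by_length = min(fitting, key=len)
by_weakness = max(fitting, key=lambda src: weakness(fitting[src]))

print("shortest consistent rule:", by_length)   # 'd == 1'
print("weakest consistent rule:", by_weakness)  # "d == 1 or s == 'a'"
```

The shortest consistent rule and the weakest consistent rule differ here, and only length would change if the rules were rewritten in another language, which is why weakness supports objective claims where length does not.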
Submitted to TechRxiv: 19 Apr 2024
Published in TechRxiv: 26 Apr 2024