The concept of intelligent software is flawed: the behaviour of software depends upon the hardware that interprets it, which undermines claims regarding the behaviour of theorised software superintelligence. Here we characterise this problem as "computational dualism", where instead of mental and physical substance we have software and hardware. We argue that to make objective claims regarding performance, we must avoid computational dualism. We propose an alternative based upon pancomputationalism, wherein every aspect of the environment is a relation between irreducible states. We formalise systems as behaviour (inputs and outputs), and cognition as embodied, embedded, extended and enactive. The result is cognition formalised as a part of the environment, rather than as a disembodied policy interacting with the environment through an interpreter. This allows us to make objective claims regarding intelligence, which we argue is the ability to "generalise", identify causes and adapt. We then propose objective upper bounds for intelligent behaviour. This suggests AGI will be far safer, but more limited, than theorised.