Abstract
In this paper, we address reports of “selfless” experiences from the perspective of active inference and predictive processing. Our argument builds upon grounding self-modelling in active inference as action planning and precision control within deep generative models – thus establishing a link between computational mechanisms and phenomenal selfhood. We propose that “selfless” experiences can be interpreted as (rare) cases in which normally congruent processes of computational and phenomenal self-modelling diverge in an otherwise conscious system. We discuss two potential mechanisms – within the Bayesian mechanics of active inference – that could lead to such a divergence by attenuating the experience of selfhood: “self-flattening” via reduction in the depth of active inference and “self-attenuation” via reduction of the expected precision of self-evidence.