By accepting the fundamental, unequivocal logical fact that our experiential existence is necessarily, entirely mental in nature, and accepting the unambiguous scientific evidence that supports this view, we can move on to the task of developing a functioning and useful theory of mental reality. I will attempt to roughly outline such a theory here, with the caveat that trying to express such a theory in language that is thoroughly steeped in external, physical-world ideology is at best difficult. Another caveat: even though the categorical nature of the theory probably cannot be disproved (mental reality would account for all possible experiences), some models might prove more useful and thus be better models.
IMO, the phrase “we live in a mental reality,” once properly understood, is realized as a self-evident truth. Self-evident truths cannot be “disproved.”
For any particular theory to even get off the ground, there must be a structure that can organize it into something comprehensible, testable (for usefulness), and which corresponds to current experience while making predictions and retrodictions.
There are at least two indisputable structures of mind and how it generates experience: logic and mathematics. These may be two different ways of expressing the same universal principle of mind. In this model, this "mathlogical" principle is that which takes a set of information and processes it into experience. I'm going to simplify the term and say it this way: experience is the algorithmic expression of a data set.
The data set that the algorithm processes can be roughly stated as that set of data which represents the mental structures we identify as individuals. No two individuals are composed of the exact same identity set, or they would be the same person; this follows the logical principle of identity.
And so, no two people experience the exact same thing, even though the algorithm follows the same rules for expression. Two individuals can be connected to some, or even much, of the same data, but not all of it. Note: there are infinite varieties of data sets because there is infinite information available that can be arranged in an infinite number of ways.
Innumerable individuals can have included in their individual data sets large blocks of arranged information which they are, essentially, sharing. The algorithmic expression of such data blocks, even with innumerable individual variances of data not contained in the shared data block, could result in what we observe as a shared, external, physical world. In fact, it may be that the "external physical world" is a data block that acts as a filter through which other individual information is processed, at least to a large degree.
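To make the structure of this idea concrete, here is a purely illustrative toy sketch, not the model itself. All the names (`express`, `shared_world`, the sample data) are my own inventions; the only point it demonstrates is that one fixed rule applied to overlapping data sets yields overlapping results alongside individual variance.

```python
# Toy illustration: individuals as data sets that share a common
# "world" block, all processed by one fixed, deterministic rule.

def express(data):
    """Stand-in for the 'mathlogical' algorithm: the same rule is
    applied identically to any data set; output depends only on the data."""
    return sorted(data)

shared_world = {"gravity", "sunlight", "time"}     # hypothetical shared data block
alice = shared_world | {"alice-memory"}            # individual variance
bob = shared_world | {"bob-memory"}

# Identity principle: distinct individuals have distinct data sets.
assert alice != bob

# The same algorithm over overlapping data yields overlapping experience:
common = set(express(alice)) & set(express(bob))
assert shared_world <= common
```

The design point is that `express` never branches on who the individual is; all variation in output comes from variation in the input set, which mirrors the claim that the algorithm's rules are invariant across individuals.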
And so, we experience what seems to be a consistent, shared “world” that is governed by logic and math. However, the model is fundamentally incomplete unless we bring in another fundamental quality of experience: free will.
In this model, free will is precisely defined as the capacity to unilaterally, free of both the data and the algorithmic process, direct one’s attention. It is absolutely free and unfettered, and as such it is also ineffable. Free will represents a single variable in the algorithm. Although this variable cannot change the principles by which the algorithm processes the data into experience, the variable establishes what information is included in the data set the algorithm is procedurally processing into experience.
Usually, people use their free will capacity in no other way than to provide an experience-sustaining feedback loop. We focus our attention on the current expression of the data set and largely limit our attention to that which is logically implied by what the algorithm is already producing. We’re usually trapped in our own feedback loop because we identify with the algorithmic expression we experience as the very definition of what is real. Oddly, as a result of confusing cause and effect, we erroneously think that our experience is caused by what we experience, when that can’t possibly be the case. It’s logically absurd.
In this model, we actually have the free will capacity to put our attention on any information, even if it is "outside" of our current identity data set and outside of what we're experiencing as "shared physical reality." We can set this variable of the algorithm to refer back to any information we want out of the infinite information available. We call this capacity our "imagination."
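The role of attention as the single free variable can be sketched the same way. Again, this is only an illustrative toy under my own assumed names (`experience`, `available`, the sample labels): attention selects which information enters the data set, while the processing rule itself never changes.

```python
# Toy sketch of the "attention variable": attention picks the inputs,
# not the rule that processes them.

def express(data):
    """Fixed rule, as before: the algorithm is invariant."""
    return sorted(data)

# A stand-in pool of available information (arbitrary labels).
available = {"current-world", "implied-next-step", "novel-idea"}

def experience(attention):
    """Attention is the one free variable: a selector over information."""
    return express({x for x in available if attention(x)})

# Feedback loop: attend only to what the current expression already implies.
habit = experience(lambda x: x in {"current-world", "implied-next-step"})

# Imagination: attend to information outside the habitual set.
imagined = experience(lambda x: x == "novel-idea")

assert habit == ["current-world", "implied-next-step"]
assert imagined == ["novel-idea"]
```

Note that `express` is identical in both cases; only the selector differs, which is the sketch's analogue of attention changing the data set without changing the algorithm.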
Boom! Mental reality without a trace of solipsism.