Given that many interpretations of probability have been considered and rejected*, it is not clear that a simple interpretation of probability exists. For example, the rhetorical flourishes of the simple interjection “Fat chance!” are unlikely to be captured in a rational* discussion. Furthermore, even if such an interpretation did exist, it is not at all obvious that a paper should be written about it. For example, a study of monkeys, which share 93% of their DNA with humans*, shows that they do not use writing in the wild. Monkeys are nonetheless able to learn simple mathematics* and English*. We thus defer discussion of the subject by treating the paper as a preliminary mere-exposure experiment*; it does not need to be approved in advance because essays fall under normal educational practices*.
Our interpretation may be termed immediacy. Consider the statements “Alex is wearing a red sweater” and “Alex is wearing a blue sweater”, and assume that the former was observed yesterday while the latter is being observed right now. Qualitatively, these perceptions are different. We can interact with the blue sweater, for example by splashing paint on it; we term this “near”. In contrast, the red sweater exists only in our mind; we may have recollected incorrectly*, for example if it was Alice who wore the red sweater and later swapped desks with Alex; we term this “far”. The far reality is mentally constructed, with no direct perceptual connection.
Far reality is fragile. Consider a child with no experience of elevators, taking their first ride alone. The door closes on their parents, a number changes, the elevator beeps, and when the door opens again a new space is presented, with complete strangers and no parents in sight. Confusion results, only resolved by introducing a notion of “floor” and its corresponding sensations and notifications of vertical movement.
A reasonable reaction is to ask how to avoid the possibility of being confused. Unfortunately, we run into the problem of incompleteness*, and in particular the problem of other humans*. Although a problem may not necessarily have a solution*, it may have an approximation; furthermore, we can categorize the techniques used to approximate the solutions, in this paper termed “mental constructions”.
What are mental constructions? Again, an example: “The visual system, although highly complex*, in its most basic elements appears to function as a recording and processing device*.” The previous sentence is simply a reflection of the dominance of the computational paradigm*. A better model than simple computation is the act of writing itself, wherein letters are sequentially produced on a page by some complex process with memory*. We of course lose aspects of humanity when we attempt to describe it in writing, but it is futile to expect to solve these problems via more writing*, so we stop there. Thus, for our purposes, mental constructions are recurrent processes that result in the production of writing. Finally, then, we arrive at our interpretation of probability: given several possible processes with which to produce writing, select one (for this paper, a randomized variant of the autofocus system*).
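To make the selection step concrete, here is a minimal sketch, assuming a randomized autofocus-style scan in which each candidate “stands out” with some fixed probability; the candidate process names, the 0.3 threshold, and the function select_process are illustrative assumptions, not the procedure actually followed for this paper.

```python
import random

# Hypothetical candidate processes with which to produce writing.
CANDIDATE_PROCESSES = [
    "free-associate from the previous sentence",
    "expand on the most recent reference",
    "introduce a new obscure concept",
]

def select_process(processes, rng=None, p_stand_out=0.3):
    """Randomized autofocus-style scan: cycle through the list in order
    and return the first item that 'stands out' (assumed to happen with
    probability p_stand_out per item). Terminates with probability 1."""
    rng = rng or random.Random()
    while True:
        for process in processes:
            if rng.random() < p_stand_out:
                return process

if __name__ == "__main__":
    print(select_process(CANDIDATE_PROCESSES))
```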
One particular strength of our interpretation is its similarity to the etymology of probability*. The word probability originates from the Proto-Indo-European root *pro-bʰwo-, meaning “being in front”. This came to be associated with the Latin probus, meaning good, noble, and virtuous. After morality became a common good*, there were then techniques of probo to test and certify what was good. Hypothetically this then became the Latin probābilis, meaning provable and credible, and finally probability as a measure of provability. In our case, provability is determined by a measure of verbosity; in particular, we measure proof as an estimate of the amount of processing done, via an analysis of the number of references and obscure concepts used.
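The verbosity measure can be sketched as a simple count, assuming that footnote-style asterisks mark references and that a small list of obscure concepts is available; both assumptions, and the name provability_estimate, are illustrative rather than the paper's actual procedure.

```python
import re

# A rough proxy for "provability as verbosity": count footnote-style
# reference markers and occurrences of (hypothetically) obscure concepts.
OBSCURE_CONCEPTS = {"homoiconicity", "mere-exposure", "axiom of choice"}

def provability_estimate(text):
    references = len(re.findall(r"\*", text))  # asterisk reference markers
    concepts = sum(text.lower().count(c) for c in OBSCURE_CONCEPTS)
    return references + concepts

print(provability_estimate("Homoiconicity* follows from the Axiom of Choice*."))  # 4
```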
Another strength is its self-referential nature, more precisely its homoiconicity*; by writing a paper interpreting probability using a probabilistic paper-writing process, we achieve a similar representation of both paper and probability. Our interpretation thus covers a vast conceptual space in a small amount of time by repeatedly applying foundational tools, particularly the Axiom of Choice*. Although this might appear to be a weakness, in that the two are mutually recursive and could thus lead to an infinite regress, there are well-established base cases: an empty paper, and not writing a paper at all. Furthermore, the homoiconic property ensures that the presented concepts can be reproduced by reproducing the paper.
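As a toy rendering of why the base cases block infinite regress, consider the following sketch; the function names, the depth parameter, and the string encodings are assumptions made purely for illustration.

```python
# Mutual recursion between "paper" and "probability", bottoming out at
# the stated base cases (an empty paper, or not writing a paper at all).
def write_paper(depth):
    if depth == 0:
        return ""                    # base case: the empty paper
    return "interpretation(" + interpret_probability(depth - 1) + ")"

def interpret_probability(depth):
    if depth == 0:
        return "no paper"            # base case: not writing a paper
    return "paper(" + write_paper(depth - 1) + ")"

print(write_paper(3))  # interpretation(paper(interpretation(no paper)))
```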
A final strength of our interpretation is its synthesis of concepts present in frequentist and Bayesian probability. Through its emphasis on writing, it mimics the frequentist emphasis on enumerating possibilities. However, whereas a frequentist approach might focus on the space of possibilities, e.g. H and T in a coin flip, we fix the choice of possibility space (a sequence of English characters) and instead focus on the choice of process. Furthermore, we leave the general method of choice purposely ambiguous, thus implicitly introducing subjectivist notions of probability. But unlike Bayesian probability, we avoid the need to enumerate all possible choices of process by introducing a stopping procedure (detecting the presence of self-reference). Our interpretation of probability is thus strictly constructionist* and does not introduce problems of computability or practicality.
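The stopping procedure can be sketched as follows, under the assumption that self-reference is detected by a crude keyword check; both the check and the example draft are illustrative, not the detection method actually used.

```python
# A minimal sketch of the stopping procedure: produce sentences until one
# refers to the text itself, approximated here by a keyword check.
def is_self_referential(sentence):
    lowered = sentence.lower()
    return "this paper" in lowered or "this sentence" in lowered

def write_until_self_reference(sentences):
    produced = []
    for sentence in sentences:
        produced.append(sentence)
        if is_self_referential(sentence):
            break                    # the stopping procedure triggers here
    return produced

draft = [
    "Probability has many interpretations.",
    "Immediacy distinguishes near from far.",
    "This paper is itself such a process.",   # stop after this sentence
    "Later possibilities are never enumerated.",
]
print(write_until_self_reference(draft))
```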
Our interpretation does have a flaw, though, in that it does not guarantee optimality in any sense. For example, it does not preclude the possibility of accepting a Dutch book. Although it heavily emphasizes writing, references, and consideration of the means of production*, it addresses the possibility of an unconsidered approach only reactively. Furthermore, it does not necessarily produce agreement among different individuals*; they may make different choices of process. We submit three responses: first, that optimality is a theoretical concern* of no particular relevance to applied mathematics*; second, that our approach is optimal in terms of a suitably defined global measure of cognitive costs*; and third, that one can always postulate an independent third party that would use the process of considering and synthesizing all other processes*, but such a third party does not necessarily exist in practice.
Although any experiment is subject to uncertainty, we have attempted to minimize it through the use of a controlled environment and hypothetical situations. The most pressing concern is that the experiment could fail to produce any measurable effect. In particular, its focus on references and concepts makes it conceptually difficult to understand, and thus it may be rejected altogether. Furthermore, the use of randomized concepts may make transitions in the paper appear stilted or disjointed. Time constraints forced the summarization, omission, and consequent misunderstanding of many concepts that require an interpreting paper in their own right. Nonetheless, we look forward to seeing the results.