# Quantum measure theory generalizes classical probability theory

There is a quantum measure theory (an extension of the mathematical discipline called “measure theory”) that goes as follows:

If $M$ is a quantum measure and $\Omega$ is the universe set then:

1. $M(\varnothing) = 0$,
2. $M(\Omega) = 1$,
3. For any pairwise disjoint sets $A$, $B$ and $C$ (measurable in the quantum sense): $M(A \cup B \cup C) = M(A \cup B) + M(B \cup C) + M(A \cup C) - M(A) - M(B) - M(C)$

Notice that if $A$ and $B$ are disjoint sets, then in some quantum experiments $M(A \cup B)$ cannot always be obtained from the measures of the isolated pieces $A$ and $B$, as it can in classical measure theory. In these cases, we must compute the measure of the set $A \cup B$ directly. Naturally, if

$M(A \cup B) = M(A) + M(B)$

for all disjoint measurable sets $A$ and $B$, then the usual probability measure emerges; but this is not the case in quantum experiments. Axiom 3 is called grade-2 additivity.
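In fact, when $M$ is built from a wave function via $M(S) = \lvert \sum_{\omega \in S} \psi(\omega) \rvert^2$, grade-2 additivity holds exactly while ordinary additivity fails because of interference. A minimal numerical sketch (the amplitude values below are arbitrary assumptions):

```python
# Hypothetical total amplitudes over three disjoint outcome sets A, B, C,
# assuming the quantum measure M(S) = |sum of amplitudes over S|^2.
a = 0.6 + 0.3j   # total amplitude over A
b = -0.2 + 0.5j  # total amplitude over B
c = 0.4 - 0.1j   # total amplitude over C

def M(*amps):
    """Quantum measure of a union of disjoint sets with the given amplitudes."""
    return abs(sum(amps)) ** 2

lhs = M(a, b, c)
rhs = M(a, b) + M(b, c) + M(a, c) - M(a) - M(b) - M(c)
print(abs(lhs - rhs) < 1e-12)        # grade-2 additivity holds: True
print(abs(M(a, b) - (M(a) + M(b))))  # ordinary additivity fails: nonzero interference term
```

The identity holds for any complex amplitudes, since the cross terms $2\,\mathrm{Re}(a\bar b + b\bar c + a\bar c)$ appear identically on both sides.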

There is a connection between $M$ and the wave function. For more on this, just google “quantum measure theory”.

Best,
Alexandre Patriota

# Why is studying measure theory important to statisticians?

Measure theory, like many other branches of mathematics, is very important for formalizing and understanding the theory of statistics more deeply, mainly for theoreticians. It deals with how to measure parts of a set of interest.

In statistics, we always use random variables to make inferences about the unknown quantities of interest (parameters). These random variables are just functions that transport the elements of an abstract set to the real line (i.e., $X: \Omega \to \mathbb{R}$, where $X$ is a random variable, $\Omega$ is an abstract set and $\mathbb{R}$ is the set of real numbers), since it is much easier to work with the real line than with an abstract set. The probability space is the triplet

$(\Omega,\mathcal{F},\mu)$,

where $\mathcal{F}$ is a collection of subsets of $\Omega$ and $\mu$ is a set function that assigns a probability to each set in $\mathcal{F}$, i.e., $\mu(\varnothing) = 0$, $\mu(\Omega) = 1$ and, if $A, B \in \mathcal{F}$ are disjoint, then $\mu(A \cup B) = \mu(A) + \mu(B)$ (the measure is actually countably additive, but for ease of presentation I consider only finite additivity).
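These axioms are easy to check on a finite example. A minimal sketch, assuming a fair six-sided die as the probability space (the random variable $X$ is just an illustrative real-valued function on $\Omega$):

```python
from fractions import Fraction

# A finite probability space: a fair die (an illustrative assumption).
Omega = frozenset(range(1, 7))

def mu(A):
    """Probability measure: each of the six outcomes gets mass 1/6."""
    return Fraction(len(A), len(Omega))

def X(omega):
    """A random variable: a real-valued function on Omega."""
    return float(omega)

A = frozenset({1, 2})
B = frozenset({5, 6})  # disjoint from A
print(mu(frozenset()))             # 0
print(mu(Omega))                   # 1
print(mu(A | B) == mu(A) + mu(B))  # finite additivity: True
```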

The statistical model is closely related to the probability space; the difference lies in the third element of the triplet $(\Omega,\mathcal{F},\mu)$. When the probability measure $\mu$ is unknown, we may consider a family of probability measures, say $\mathcal{P}$, that possibly fit the observed data adequately. The statistical model is then defined as

$(\Omega,\mathcal{F},\mathcal{P})$           (*)

The inferential process is the procedure of finding a subfamily (possibly a single element) of $\mathcal{P}$ that contains all the “best” cases according to some criterion. Knowing measure theory, one can propose coherent methodologies for estimation, prediction and so on. Note that any statistical model can be written in the form (*).
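As a concrete sketch of (*) and the inferential process: take $\mathcal{P}$ to be the family of Bernoulli measures indexed by $\theta$, and use the likelihood as the selection criterion (both the family and the criterion are illustrative assumptions, as is the sample below):

```python
import math

# Hypothetical observed sample; the model P is the family {Bernoulli(theta)}.
data = [1, 0, 1, 1, 0, 1, 1, 0]

def log_likelihood(theta, xs):
    """Log-probability of the sample under the measure indexed by theta."""
    return sum(math.log(theta if x == 1 else 1 - theta) for x in xs)

# Search a grid of candidate measures for the one best supported by the data.
grid = [i / 200 for i in range(1, 200)]
theta_hat = max(grid, key=lambda t: log_likelihood(t, data))
print(theta_hat)  # 0.625, the sample mean
```

Here the “best” subfamily singled out by the criterion is the single measure $\mathrm{Bernoulli}(0.625)$; other criteria (Bayesian posteriors, confidence regions) select subfamilies in the same spirit.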

Many controversies in statistical hypothesis testing would be avoided if all the involved quantities were formally defined; for instance, the informal definition of p-values has provoked many fruitless discussions (see Patriota, 2013).

When you are proposing a new statistical methodology, you have to keep in mind the main theorems and lemmas of measure theory in order to give your proposal a solid theoretical base.

References:

Patriota, A. G. (2013). A classical measure of evidence for general null hypotheses. *Fuzzy Sets and Systems*, in press. http://dx.doi.org/10.1016/j.fss.2013.03.007