# e-value and s-value: possibility rather than probability measures

Let $H$ be a statistical null hypothesis (a statement about the probability distribution of the observable data $x$). In classical statistics, a p-value can be employed to test this null hypothesis (see, for instance, this post). In the Bayesian paradigm, the posterior distribution is used; however, if $H$ is a sharp hypothesis (i.e., it is formed by a set of measure zero), then the posterior probability of $H$ given the observed data is zero. Let $\pi(\cdot|x)$ be a posterior probability measure; it is clear that the following implication is false:

$\pi(H|x) = 0 \;\Rightarrow\;$ “$H$ is impossible to occur, given $x$”.

That is, zero probability does not mean impossibility of the null hypothesis. In order to measure the possibility or impossibility of a hypothesis, one needs other measures, e.g., the e-value and the s-value. The former is built under the Bayesian paradigm and the latter under the classical one.

The e-value and the s-value (notation: $ev(\cdot|x)$ and $s(\cdot|x)$, respectively) have the same behavior: they are possibility measures rather than probability measures. They provide a degree of contradiction between the observed data $x$ and the null hypothesis $H$ and have the following interpretations:

1. $s(H|x) = 1 \Rightarrow$ “$x$ does not contradict $H$”,
2. $s(H|x) = 0 \Rightarrow$ “$x$ fully contradicts $H$”,
3. $s(H'|x) < s(H''|x) \Rightarrow$ “$x$ contradicts $H'$ more than it contradicts $H''$”.
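As a toy illustration of these interpretations, the e-value for a sharp hypothesis $H: \theta = \theta_0$ has a closed form when the posterior is normal: the tangent set (the points with posterior density above that of $\theta_0$) is a symmetric interval around the posterior mode, and $ev(H|x)$ is the posterior mass outside it. The sketch below covers only this special case; the function name `e_value_normal` is mine, not from the references.

```python
from scipy.stats import norm

def e_value_normal(theta0, mu, sigma):
    """e-value for the sharp hypothesis H: theta = theta0 under a
    N(mu, sigma^2) posterior.  The tangent set collects the points with
    posterior density above that of theta0; ev(H|x) is the posterior
    mass outside it, here a two-tailed normal probability."""
    d = abs(theta0 - mu) / sigma
    return 2.0 * norm.sf(d)          # P(|theta - mu| >= |theta0 - mu|)

# theta0 at the posterior mode -> no contradiction at all
print(e_value_normal(0.0, 0.0, 1.0))            # 1.0
# theta0 three posterior sds away -> strong contradiction
print(round(e_value_normal(3.0, 0.0, 1.0), 4))  # 0.0027
```

Note that $ev(H|x) = 1$ here even though $\pi(H|x) = 0$: a possibility of 1 coexists with a probability of 0.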

It is possible to have $s(H|x) = ev(H|x) = 1$ and $\pi(H|x) = 0$ for the very same data and hypothesis. This just means that the observed data bring information that does not contradict a hypothesis formed by a set of measure zero. For the s-value, if the maximum likelihood estimate lies in the null set, then $s(H|x) = 1$; for the e-value, if the mode of the posterior density lies in the null set, then $ev(H|x) = 1$. It is straightforward to show that either $s(H|x) = 1$ or $s(\neg H|x) = 1$, where $\neg H$ is the negation of $H$; the same holds for the e-value.
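The claim that either $s(H|x) = 1$ or $s(\neg H|x) = 1$ can be checked numerically with a possibility measure built from the relative likelihood, $\sup_{\theta \in H} L(\theta; x)/L(\hat{\theta}; x)$, which behaves in the spirit of the s-value for regular models (it is not the exact definition from the references). A grid sketch for a normal mean with known variance, with all numbers purely illustrative:

```python
import numpy as np

def rel_likelihood(theta, xbar, sigma, n):
    """Relative likelihood L(theta)/L(mle) for a N(theta, sigma^2) sample
    of size n with sample mean xbar; the MLE of theta is xbar itself."""
    return np.exp(-n * (theta - xbar) ** 2 / (2.0 * sigma ** 2))

# illustrative numbers: sharp null H: theta = 0, observed sample mean 0.8
xbar, sigma, n, theta0 = 0.8, 1.0, 25, 0.0
grid = np.linspace(-5.0, 5.0, 100001)   # grid over the parameter space
in_H = np.isclose(grid, theta0)         # the null set {theta0}

s_H     = rel_likelihood(grid[in_H], xbar, sigma, n).max()
s_not_H = rel_likelihood(grid[~in_H], xbar, sigma, n).max()
# s(¬H) is 1 because the MLE xbar lies in ¬H; s(H) is small because
# the data contradict the sharp null
print(s_H, s_not_H)
```

Since the maximum likelihood estimate always falls in $H$ or in $\neg H$, at least one of the two values equals 1, matching the statement above.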

In order to accept/reject a hypothesis $H$ (assuming that the universe of hypotheses is closed), one should also compute the s/e-value for the negation of $H$; that is:

4. if $s(H|x) = 1$ and $s(\neg H|x) = a$, one can accept $H$ if $a$ is sufficiently small;
5. if $s(H|x) = b$ and $s(\neg H|x) = 1$, one can reject $H$ if $b$ is sufficiently small;
6. if $a$ (or $b$) is not sufficiently small, then more data are necessary to reach a decision.
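Items 4–6 can be wired into a tiny decision helper. The threshold 0.05 and the function name are my own illustrative choices, not prescribed by the references:

```python
def decide(s_H, s_not_H, threshold=0.05):
    """Toy decision rule following items 4-6: at least one of the two
    possibility values is always 1, and the other one, if small enough,
    drives the decision."""
    if s_H >= 1.0 and s_not_H <= threshold:
        return "accept H"
    if s_not_H >= 1.0 and s_H <= threshold:
        return "reject H"
    return "collect more data"

print(decide(1.0, 0.01))    # accept H
print(decide(0.002, 1.0))   # reject H
print(decide(1.0, 0.40))    # collect more data
```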

By this prescription, one will never accept a hypothesis formed by a set of Lebesgue measure zero (for both the s- and e-values).

# References:

Pereira, C. A. B., Stern, J. M., Wechsler, S. (2008). Can a significance test be genuinely Bayesian? Bayesian Analysis, 3(1), 79-100.

Patriota, A. G. (2013). A classical measure of evidence for general null hypotheses. Fuzzy Sets and Systems, 233, 74-88.