My response to “What do you learn from p=.05? This example from Carl Morris will blow your mind”:

Carl Morris presented three hypothetical scenarios with different sample sizes in an election race between two candidates, Mr. Allen and Mr. Backer. A sample of n voters is taken and Y denotes the number of voters favoring Allen. The hypothesis of interest is H_0: \theta \leq 0.5 against H_1: \theta > 0.5, where \theta is the proportion of the electorate favoring Allen. The three scenarios are

  1. Y = 15 and n = 20,
  2. Y = 115 and n = 200,
  3. Y = 1046 and n = 2000.

The p-values are about 0.021 in all three scenarios, and the confidence intervals are:

  1. [0.560,0.940],
  2. [0.506,0.640],
  3. [0.501,0.545].
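
As a quick sanity check, here is a minimal sketch in Python (assuming a one-sided exact binomial test with the null boundary at \theta = 0.5 and Wald-type 95% intervals, which is my reading of the example) that approximately reproduces the quoted p-values and intervals:

    # Reproduce the three scenarios: one-sided binomial p-values and Wald 95% intervals.
    from scipy.stats import binom, norm

    scenarios = [(15, 20), (115, 200), (1046, 2000)]
    for y, n in scenarios:
        # One-sided p-value: P(Y >= y) evaluated at the null boundary theta = 0.5.
        p_value = binom.sf(y - 1, n, 0.5)
        # Wald 95% confidence interval for theta.
        theta_hat = y / n
        half_width = norm.ppf(0.975) * (theta_hat * (1 - theta_hat) / n) ** 0.5
        print(f"Y={y:4d}, n={n:4d}: p = {p_value:.3f}, "
              f"CI = [{theta_hat - half_width:.3f}, {theta_hat + half_width:.3f}]")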

He asked which of the three scenarios is most encouraging for candidate Allen; see the article. Andrew Gelman discussed this example on his blog.

I argue here that comparing observed confidence intervals with observed p-values is not appropriate, since confidence intervals are random intervals and, as such, are subject to random variability. Their observed values alone do not mean much without some measure of their dispersion; it is like comparing the observed values of two estimators without considering their standard errors or other measures of precision. P-values can also be regarded as random variables, so identical observed p-values could be compared together with a measure of their variability.

For instance, let H_0: \theta \in M_0 be the null hypothesis. A p-value is defined by

p(T(x),M_0) = \sup_{\theta \in M_0} P_{\theta}(T(X) \geq T(x)),

where T(x) is the observed value of the test statistic T(X), X = (X_1, \ldots, X_n) is the random sample, and P_\theta is the joint probability measure of the statistical model.
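
To make the definition concrete, here is a small sketch (assuming the binomial model above, with T(X) = Y and M_0 = \{\theta : \theta \leq 0.5\}) that approximates the supremum by a grid search over M_0; for this one-sided null the supremum is attained at the boundary \theta = 0.5:

    # p-value as a supremum over the null set M_0 = {theta : theta <= 0.5},
    # approximated by a grid, for Y ~ Binomial(n, theta) and T(X) = Y.
    import numpy as np
    from scipy.stats import binom

    def p_value(t_obs, n, grid_size=1001):
        grid = np.linspace(0.0, 0.5, grid_size)    # grid over the null set M_0
        tail_probs = binom.sf(t_obs - 1, n, grid)  # P_theta(Y >= t_obs) for each theta
        return tail_probs.max()                    # supremum over the grid

    print(p_value(15, 20), p_value(115, 200), p_value(1046, 2000))  # all about 0.021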

Define p(x) := p(T(x), M_0); then p(X) is a random variable whose distribution depends on M_0, \theta and n (and, of course, on the adopted statistical model).

It is possible to compute, e.g., E_\theta(p(X)^k) = m(k, \theta). Then, by plugging in the estimate \hat{\theta} of \theta, we get one possible measure of variability,

m(2, \hat{\theta}) - m(1, \hat{\theta})^2.

Other measures of variability can be constructed by the same method.
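
As an illustration of this plug-in idea (a sketch under the same binomial model, with the p-value computed at the null boundary \theta = 0.5), the moments m(k, \theta) = E_\theta(p(X)^k) can be obtained exactly by enumerating the possible values of Y, and the variability measure m(2, \hat{\theta}) - m(1, \hat{\theta})^2 evaluated at \hat{\theta} = Y/n for each scenario:

    # Plug-in variability of the p-value: m(2, theta_hat) - m(1, theta_hat)^2,
    # computed exactly by enumeration for the one-sided binomial p-value.
    import numpy as np
    from scipy.stats import binom

    def p_value_moment(k, theta, n):
        """m(k, theta) = E_theta[p(X)^k], where p(y) = P_{0.5}(Y >= y)."""
        y = np.arange(n + 1)
        p_vals = binom.sf(y - 1, n, 0.5)   # p-value attached to each possible y
        weights = binom.pmf(y, n, theta)   # P_theta(Y = y)
        return np.sum(weights * p_vals ** k)

    for y_obs, n in [(15, 20), (115, 200), (1046, 2000)]:
        theta_hat = y_obs / n
        m1 = p_value_moment(1, theta_hat, n)
        m2 = p_value_moment(2, theta_hat, n)
        print(f"n={n:4d}: estimated mean of p(X) = {m1:.4f}, "
              f"plug-in variance = {m2 - m1**2:.5f}")

Under this view, the three scenarios can be compared not only through their identical observed p-values but also through the estimated variability of p(X).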

Notice that if a problem occurs at the first theory level, you go to a meta-theory level to solve it; if a problem occurs at the meta-theory level, you go to a meta-meta-theory level, and so on.

It is too easy to find “apparent holes” in classical statistical theory, since it is a language with a huge number of concepts that go far beyond probabilistic knowledge. Unfortunately, the general recipe is: “if it appears to be probabilistically incoherent, it must be incoherent in a broad sense and should be avoided.” This recipe is intellectually weak. If you do not use an appropriate language to treat these concepts, which require other, non-probabilistic tools, you are doomed to interpret the classical concepts in a very narrow way, as seems to be the rule nowadays.
