**Causal Inference from Multiple Studies**

**Answers**


1.

a. We don't think so. Any company that initiated such a testing program with a
painkiller no more effective than the competitor's would have, assuming the
studies are done independently, a 1 − (0.95)^12 ≈ 46% chance of at least
one study yielding statistical significance just by chance. If the FDA approves
the claim, it is doing so essentially on the basis of a test at α ≈ 50% rather
than 5%. That's not strong enough evidence for us, and we hope it isn't for
you.
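The multiple-testing arithmetic above can be checked directly; this is a minimal sketch of the calculation, with the numbers (twelve studies, a 5% level) taken from the answer:

```python
# Probability that at least one of twelve independent studies,
# each tested at the 5% level, is "significant" purely by chance.
alpha = 0.05
n_studies = 12

p_none = (1 - alpha) ** n_studies        # all twelve come out non-significant
p_at_least_one = 1 - p_none              # complement: one or more false positives

print(round(p_at_least_one, 2))          # prints 0.46
```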

b. Could be. Then either the company's product really is more effective, or a
somewhat unusual event (a 1-in-25 chance) has occurred. In other words, p = .04
can then be treated as a genuine p-value.

c. Obviously, it would severely compromise the process.

2. In the first instance, no, because we would expect one of the twenty tests,
on average, to be statistically significant due to chance alone, and it is not
unlikely that two would be. In the second instance, yes, because statistical
significance has been demonstrated on both of two measures chosen in advance as
primary for reaching the conclusion. Had a random two measures popped up as
statistically significant, it is extremely improbable that they would have
been the two singled out before any data were collected. The other
eighteen measures are essentially supplementary information to the two that
are of primary interest for substantive reasons. Of course, if the investigator
concealed the existence of the eighteen non-significant tests and emphasized
only the two that had been selected, it would strongly affect our interpretation,
and appropriately so. For this reason, such an approach to publication is
scientifically and ethically improper.
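The claim that two chance significances among twenty tests is "not unlikely" can be made concrete with a binomial calculation; this is a sketch under the assumption that the twenty tests are independent, each at the 5% level:

```python
from math import comb

# Chance that at least two of twenty independent tests at the 5%
# level come out "significant" purely by chance: Binomial(20, 0.05).
alpha, n = 0.05, 20

p0 = (1 - alpha) ** n                                # exactly zero significant
p1 = comb(n, 1) * alpha * (1 - alpha) ** (n - 1)     # exactly one significant
p_two_or_more = 1 - p0 - p1

print(round(p_two_or_more, 2))                       # prints 0.26
```

So roughly a one-in-four chance, which is indeed "not unlikely"; by contrast, two measures prespecified as primary both reaching significance by chance has probability only 0.05 × 0.05 = 0.0025.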

3. c

4. d

5. e

6. d

7. a

8. See FFW, Ch. 11.

9. e

10. a