What does this tell us about areas where we have no or few clinical trials to correct against? Toxicology is a prime example: regulatory decisions for products traded at $10 trillion per year are taken solely on the basis of such testing. Are we sorting out the wrong candidate substances? Aspirin would today likely fail the preclinical stage. Rats and mice predict each other for complex endpoints with only 60% concordance, and together they predicted only 43% of the clinical toxicities observed later. New approaches under the banner of Toxicology for the 21st Century are currently emerging, which rely on molecular pathways of human toxicity.
For drug efficacy testing, doubts about animal models are also increasing: a National Academy of Sciences panel recently analyzed the suitability of animal models for assessing the human efficacy of countermeasures to bioterrorism. It could neither identify suitable models nor did it recommend their development; instead, it called for the establishment of other human-relevant tools. In line with this, about $200 million has been made available by NIH, FDA, and DoD agencies over the last year to start developing a human-on-a-chip approach.
There is no reason to assume that other preclinical animal research is more predictive than that carried out in the drug industry. Begley and Ellis recently reported that only 6 out of 53 landmark studies in cancer could be reproduced by industry, and similarly, Bayer reported one year earlier a reproducibility rate of only about 25%. The conclusion: publish less but of better quality, and do not take animal studies at face value.