Surgical Neurology

Volume 67, Issue 2, February 2007, Pages 211-214

Editorial
Part I: Public health, social science, and the scientific method

https://doi.org/10.1016/j.surneu.2006.11.013

Introduction

During the years 2002 to 2004, I served on the Injury Research Grant Review Committee (more recently renamed the “Initial Review Group”) of the Centers for Disease Control and Prevention (CDC)—more specifically, the National Center for Injury Prevention and Control (NCIPC).

I participated not only in the major meetings in Atlanta but also in on-site reviews and inspections of Injury Research Centers, reviewing thousands of pages of grant applications requesting funding for medical and public health research proposals. I have deliberately let some time elapse before writing this analysis, so that I could take a step back and write from a distance, objectively.

I should also inform the reader that I must write in generalities: CDC rules forbid me to disclose specific details of any grant proposal requesting funding, or to discuss the content of the review of any specific grant application in which I participated or that came to my knowledge while working at the CDC as a grant reviewer. This secrecy seems, in retrospect, even more stringent than the rules in place at Los Alamos during the Manhattan Project! My discussion will therefore necessarily lack specific examples to illustrate the thread of my arguments. Nevertheless, I ask you to bear with me, gentle reader, for enough general material discussing points of scientific interest will, I think, make it worth your while—that is, if you have an interest in the present interrelationship between public health, social science, and the purported relationship these disciplines bear today to medicine, including neuroscience, and to the scientific method.

Before proceeding, as a further introduction, I would like to quote several excerpts from a magnificent article entitled “Statistical Malpractice” by Bruce G. Charlton, MD, of the University of Newcastle upon Tyne. It is perhaps no coincidence that Dr Charlton is associated with the same great university that gave us Dr John Snow, the illustrious physician who in 1849 successfully applied the scientific method to epidemiology. (In the process, Dr Snow proved that cholera is a waterborne disease. This discovery led to the conquest of epidemic diseases such as dysentery and typhoid fever.) Dr Charlton's comments that follow cite the growing misuse of pure epidemiology and statistics as science. As my narrative unfolds, the relevance of these momentous passages will become obvious.


Congressional authorization

Perhaps the biggest problem of all has been created and promoted by Congress in allocating ever-increasing amounts of taxpayer dollars to public health “research” in the area of injury control that, frankly, in many dismaying instances is of questionable scientific validity and even more questionable cost-effectiveness. Oversight, accountability, and clear demonstration of cost-effectiveness have been sorely lacking. And yet, the Department of Health and Human Services shares some of the blame as

Simple statistical tools frequently missing

From the scientific point of view, the most troubling trend is the misuse, or outright nonuse, of the very simple but very helpful traditional statistical tools in the statistician's armamentarium. I refer to the useful methodology of relative risks (RRs), confidence intervals (CIs), and the increasingly ignored P values. These traditional statistical parameters are essential in determining the strength of statistical associations. These tools are actually tough tests that are applied to statistical studies
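The three tools named above can be illustrated together in a few lines. This is a minimal sketch, not drawn from any CDC proposal: the counts are invented, and it assumes the standard log-transform (Katz) approximation for the confidence interval of a relative risk.

```python
import math

# Hypothetical illustration: relative risk with a 95% CI, using the
# log-transform (Katz) approximation for the standard error of ln(RR).
def rr_with_ci(a, n1, b, n2, z=1.96):
    """a/n1 = cases/total in the exposed group; b/n2 = in the unexposed group."""
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

rr, lo, hi = rr_with_ci(30, 1000, 10, 1000)
print(f"RR={rr:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")  # RR=3.00, 95% CI=(1.47, 6.10)
```

By the usual convention, the association is taken as significant at the 5% level only when the interval excludes 1.0; here it does, but with invented counts that proves nothing — which is precisely the point about statistics not being science.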

Fishing expeditions in search of social problems

Not infrequently, I found it difficult to discern in these ever-proliferating public health (social) proposals the strong statistical associations that lead to groundbreaking scientific research. Fishing expeditions in hypothesis searching, and solutions in search of social problems, are frequent, whereas hypothesis testing is poorly formulated. One universally ignored reason for this misuse of statistics is that epidemiology is best applied to rare diseases that occur at high rates in a defined

Premature disclosure of “scientific findings”

It is no wonder that the media pick up on these reports prematurely and sensationalize conclusions that often contradict one another as soon as they are published, sometimes in the same issue of the same journal! Disconcertingly, we have learned from researchers and the media that coffee can cause cancer as well as prevent it, that silicone breast implants are harmless and that they are not, and so on! We continue to be bombarded with prematurely reported, headline-grabbing studies, day

Relative risk

Although RRs do not establish cause-and-effect relationships, the RR is an invaluable statistical tool. Relative risk is used to determine whether there is a difference in the rate of a disease process or injury between 2 given populations. An RR of 1.0 signifies no difference. An RR of 2.0 means that the exposed population has twice the rate of disease (a 100% increase; a positive association) as compared with the other population. Statistics is not science, and a 100% increase in this context is a very small
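The arithmetic above can be made concrete. A minimal sketch, with counts invented purely for illustration:

```python
# Hypothetical counts, chosen only to illustrate the definition of RR.
def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    # RR = rate of disease in the exposed group / rate in the unexposed group
    return (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)

# 20 cases per 1000 exposed vs 10 per 1000 unexposed:
# the exposed population has twice the rate, i.e., a 100% increase.
print(relative_risk(20, 1000, 10, 1000))  # 2.0
```

An RR of 1.0 would fall out of equal rates in both groups, e.g., `relative_risk(10, 1000, 10, 1000)`.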

References (8)

  • Arnett JC. Book review: Junk science judo by J. Steven Milloy (ed). Med Sentin...
  • Brock TD. Biology of microorganisms.
  • Charlton BG. Statistical malpractice. J R Coll Physicians Lond. 1996.
