Focus on supporting stable, long-term gains.
From editor Jonathan Gitlin at Ars Technica, responding to the recent U.S. State of the Union (SOTU) address:
Science needs steady, sustainable, boring growth, not flashy, ill-formed initiatives.
History shows that, done correctly, lofty scientific and engineering challenges can work: the actual moonshot, for example, or the Human Genome Project. Both had one thing in common: a clear and well-defined goal from the start. “Before 1970, fly someone to the Moon and return them safely.” “Sequence the entire human genome.”
Nebulous concepts like “end all cancer” get good applause—curing all cancers is right up there with sunshine and puppies. But such concepts are effectively meaningless.
Hey, reality check: There isn’t just one thing called “cancer,” as we have learned. It is hundreds of different bad things rogue cells can start to do.
Cancer is more like chronic street crime than war, and battle strategies should reflect that.
Stop giving the system more money than it can safely absorb
So what’s wrong with this idea, and why am I coming off like a cranky old man shouting at the clouds? For one thing, history has shown us that giving science a large slug of cash in a very short amount of time has horrible—some might say disastrous—consequences. This was plain to see after the NIH budget got doubled between 1998 and 2003 (something I and my colleagues wrote about extensively here at Ars). It was even more obvious once the two-year bolus of money from the American Recovery and Reinvestment Act (2009-2011) was spent.
Think about the way a sudden influx of nutrients causes algae to bloom and then die off in rivers and oceans, leaving dead zones behind. Rapid injections of cash into the research enterprise create intense periods where there’s lots of money available for lots of new scientists to get hired. But once those initial grants run out, there is no more funding to support them.
As a result of the past booms in funding, you will find empty lab after empty lab in research institutes and universities all over the land. We’ve trained far more scientists than we have money to sustainably support. More.
Part of the boom in science-gone-wrong (retractions, etc.) may be fuelled by the desperation that boom-and-bust cycles create.
The academic pecking order is based on the number of papers a scientist gets published in high impact factor journals, that is, journals whose papers are heavily cited by other scientists. And yet, “The vast majority of scientific publications are never cited. There are something like 30,000 [published papers] a week.” How many of those can be first rate? How much second- and third-rate science is being funded? And how can we know?
The surge of publications is a direct result of the tsunami of money that washed over the industry between 1998 and 2003 with the doubling of the National Institutes of Health (NIH) budget. “It was a gold rush,” says Oransky. “What scientists did was grow their labs so they can produce more papers so they can get more grants.”
But, like a real gold rush, the bonanza couldn’t go on forever. So, asks Oransky, “What happens to all those folks when the doubling stops?” NIH funding has currently leveled out at between $30 billion and $40 billion a year, depending on how you count it. In an era of stagnant budgets, competition for limited funding has become cutthroat. That has led some scientists to take shortcuts. And that, in turn, has caused retractions to soar.
Besides which, downturns mean there is less funding for the much-needed replication studies that could separate the gold from the lead and enable more focused gains sooner.
In my final year at NIH, I saw all the consequences all too well. Colleagues lost weeks of time to planning meetings at a time when we were already understaffed for the day-to-day challenge of keeping the wheels on the science bus. All the while, funding rates for NIH grants dropped into the single digits, and labs closed up shop as scientists gave up on their dreams and went to work in more stable careers.
See also: Do researchers own their data? A lack of transparency hinders research.
Missing mice produce questionable data (Researchers: Of those that did report numbers, around 30% (53 experiments) reported they had dropped rodents from their study analysis, but only 14 explained why.)
Mice studies often meaningless for humans? Researcher: “Animal models are limited in their ability to mimic the extremely complex process of human carcinogenesis, physiology and progression.”
January 18, 2016
Hat tip: Stephanie West Allen at Brains on Purpose