Healthy Soils Article #3
Where Is All the Cutting-Edge Research on Soil Health?
Some of our best minds in agriculture agree that what we are learning today about soil is revolutionary and that we are only beginning to understand its potential to address numerous challenges we face in agriculture and beyond.
Even with this limited understanding, we do know that healthy soil provides many benefits: drought resilience, reduced flooding, improved water quality, mitigation of weather extremes, increased nutritional value in food, long-term sustainability of food production, and more. So, given its significance, why is so little research on the best techniques to build healthy soils coming from our universities, particularly from agricultural land-grant colleges?
The first main reason is found by following the money. The trail begins in 1862, when the federal government created land-grant universities by deeding tracts of land to every state to pursue agricultural research. For more than a century, these public universities helped develop better seeds, new plant varieties, and improved farming tools that focused on farmers’ needs and interests. But starting in the 1980s, the Bayh-Dole Act and other federal policies began encouraging land-grant schools to partner with the private sector on agricultural research. Large companies such as Monsanto and Syngenta responded with enthusiasm, providing private funds to these public institutions – and reaping the benefits that included not only patented seeds, but the loyalty of researchers indebted to them on all fronts.
By the early 1990s, industry funding surpassed USDA funding of agricultural research at land-grant universities. In 2009, corporations, trade associations, and foundations (often related to corporate interests) provided $822 million for research at land-grants – 27% more than the research dollars provided by the U.S. Department of Agriculture, the leading source of government funding.
It’s not surprising that privately funded research tends to produce published papers that favor the funder’s interests. After all, a study that doesn’t produce the desired results can either be manipulated (more on this below) or simply never published, so that no one outside the researcher and the corporation ever knows about it.
For example, in 2020, the group U.S. Right to Know obtained documents through public records requests that showed close ties between Monsanto and professors who promote genetically engineered crops and their associated herbicides and pesticides. According to documents obtained by Right to Know, a senior Monsanto executive sought out several top agriculture experts from universities in 2013, offering assistance (although not money) for them to author papers. The executive provided specific topics and headlines, and Monsanto hired a public relations firm to set deadlines for the professors and then promote the articles, all of which touted the benefits of GMOs. The link to Monsanto was not disclosed in several of those resulting articles.
Although it should be easy to know who funded or solicited a study, it often isn’t. While scientific journals normally require study authors to disclose potential conflicts of interest (COIs), this requirement often is not stringently enforced. Non-disclosure is a common occurrence, according to a study led by Johan Diels at the Biotechnology College of the Portuguese Catholic University comparing industry-funded and non-industry-funded studies on genetically engineered crops. Out of 94 studies examined, 49 did not identify funding sources. In 41 of the studies, at least one of the authors had industry ties. And out of 44 studies determined to have financial or professional COIs, 43 produced results favorable to the sponsor.
Another review looked at the frequency of declared COIs in research on genetically engineered or modified (GM) crops and found that “ties between researchers and the GM crop industry were common, with 40% of the articles considered displaying conflicts of interest.” The review also found that “compared to the absence of COI, the presence of a COI was associated with a 50% higher frequency of outcomes favorable to the interests of the GM crop company.”
A 2012 report from Food & Water Watch documented numerous conflicts of interest and the impacts of these research dollars. And in a movement that the non-profit watchdog calls “branding the campus,” the report cites the influence of corporations on universities when donations toward buildings result in corporate names on prominent display: the Monsanto Student Services Wing at Iowa State University’s main agriculture building, Monsanto Auditorium at University of Missouri, and research laboratories at Purdue University, named after Kroger and ConAgra, for example. This branding normalizes the idea that the universities should look to the companies for financial support and serves to remind professors every day that much of their funding depends on staying in the companies’ good graces.
Moreover, the privately controlled research dollars not only encourage pro-industry research but discourage any independent research that might be critical of the industrial model. Industry funding also discourages research into broader issues, such as the negative environmental, health, and socio-economic impacts of the corporations’ approach. For example, an analysis of agricultural research in the broadest sense – covering not only the development of agricultural methods but also research on the impacts of policies and programs – revealed that only a minority of studies even considered social impacts, and very few (2%) looked at environmental impacts.
As if the lack of funding for research into the problems with industrial agriculture weren’t enough, the industry can resort to outright harassment and intimidation. A 2019 article on The Counter website reported on the harassment of a researcher who had been studying the health impacts of industrial-scale hog operations for the University of North Carolina.
Attorneys from the state’s powerful Pork Council demanded the researcher hand over his data about health problems in residents living near the facility. Fearing that community members he’d interviewed would face retribution if their names were revealed, he refused. But university attorneys told him the data belonged to the state, not to him, and that he could be arrested. He eventually turned over heavily redacted documents but, The Counter reported, “went to his deathbed” still experiencing harassment.
A general rule of academia is that the amount of funding brought in by a professor has a significant influence on their salary and opportunity for tenure. In other words, a professor who angers the big industry players not only risks losing funding for specific projects, but risks their entire career. At an agricultural workshop in 2001, one of the instructors shared a disturbing personal experience. She earned her master’s in agriculture at a land-grant university in the mountain states. Her master’s project was testing several herbicides to see which resulted in the greatest increase in crop production. Partway through, she told her supervising professor that, candidly, none of them was producing good results. His response was that she could not write up those findings because it would jeopardize other funding he had, as well as potentially put the funding of other professors at risk – the department as a whole relied heavily on the support of the companies that made those herbicides. He told her to do whatever she needed to do with her statistical analysis to find some sort of positive result for one of the chemicals.
Which brings us to the second major reason that there is so little research into soil health: the way statistics are used to determine what is and is not a valid, publishable result. If a researcher can’t get “statistically significant” results, then he or she can’t publish the study – and both their academic status and their funding suffer.
For most people, that sounds reasonable because they believe that this protects the quality of the research. But is that so? Researchers use something called the “P value” to determine whether or not the result of an experiment is “statistically significant.” This method is demonstrably flawed, but research (and by extension, professors’ careers) continues to live or die based on this flawed system.
Consider one of the most basic problems: The P value relies heavily on the sample size. When the sample size is small, the statistical formulas can easily miss a real impact because of lack of what is called “statistical power.” This is easier to envision in the medical arena. If a study testing a new medicine has only a few hundred people, then even a doubling of risk can be missed. By the same token, when there is a huge sample, even a tiny difference can be statistically significant – even if it’s not meaningful in real life.
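The sample-size problem can be seen in a small simulation. The sketch below is illustrative only: the yield numbers are invented, and a simple normal approximation stands in for a full t-test. It estimates how often a two-group comparison reaches the p < 0.05 threshold ("statistical power") for a small versus a huge sample:

```python
import math
import random

def p_value(control, treated):
    """Two-sided P value for a difference in means (normal approximation)."""
    mc = sum(control) / len(control)
    mt = sum(treated) / len(treated)
    vc = sum((x - mc) ** 2 for x in control) / (len(control) - 1)
    vt = sum((x - mt) ** 2 for x in treated) / (len(treated) - 1)
    z = abs(mt - mc) / math.sqrt(vc / len(control) + vt / len(treated))
    return math.erfc(z / math.sqrt(2))

def power(n, true_effect, sd=10.0, trials=200, seed=1):
    """Fraction of simulated experiments that reach p < 0.05."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # baseline yield 100, with a real improvement of `true_effect`
        control = [rng.gauss(100.0, sd) for _ in range(n)]
        treated = [rng.gauss(100.0 + true_effect, sd) for _ in range(n)]
        if p_value(control, treated) < 0.05:
            hits += 1
    return hits / trials

# A real 5% yield gain is usually missed with only 10 plots per group...
small_sample = power(n=10, true_effect=5.0)
# ...while a trivial 1% difference is almost always "significant" with 5,000.
huge_sample = power(n=5000, true_effect=1.0)
print(small_sample, huge_sample)
```

With these (made-up) numbers, the small study detects the genuine 5% gain only a minority of the time, while the huge study reliably stamps "significant" on a difference too small to matter in the field.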
Consider again the problem posed to the graduate student who found that none of the herbicides she was studying was of much use. By increasing the sample size and playing with different formulas, she would probably be able to find some “statistically significant” improvement by at least one of them, even if it wouldn’t really matter to a farmer in real world conditions. (See sidebar for a more detailed explanation of P values and the problems with them.)
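The "one time out of 20" problem compounds quickly when many chemicals are tested in one project. A minimal simulation (invented yield data, normal approximation in place of a t-test) shows how often a screen of 20 completely ineffective herbicides still produces at least one "statistically significant" winner:

```python
import math
import random

def p_value(control, treated):
    """Two-sided P value for a difference in means (normal approximation)."""
    mc = sum(control) / len(control)
    mt = sum(treated) / len(treated)
    vc = sum((x - mc) ** 2 for x in control) / (len(control) - 1)
    vt = sum((x - mt) ** 2 for x in treated) / (len(treated) - 1)
    z = abs(mt - mc) / math.sqrt(vc / len(control) + vt / len(treated))
    return math.erfc(z / math.sqrt(2))

rng = random.Random(7)
screens_with_false_positive = 0
for _ in range(200):
    # Screen 20 herbicides, none of which has any real effect:
    # treated and control yields come from the same distribution.
    pvals = []
    for _ in range(20):
        control = [rng.gauss(50.0, 5.0) for _ in range(15)]
        treated = [rng.gauss(50.0, 5.0) for _ in range(15)]
        pvals.append(p_value(control, treated))
    if min(pvals) < 0.05:
        screens_with_false_positive += 1

# With 20 tests at the 0.05 threshold, roughly 1 - 0.95**20 (about 64%)
# of screens crown a "significant" herbicide even though none works.
rate = screens_with_false_positive / 200
print(rate)
```

In other words, simply testing enough chemicals, subgroups, or formulas makes a "positive" finding the likely outcome even when nothing is there.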
These problems with statistical analysis are an issue in any field of study, but agriculture poses particular problems. The classic structure of a study most likely to yield statistically significant results holds everything constant except for one variable. Looking at even two variables makes it much less likely that the statistical formula will produce a “statistically significant,” publishable result, and changing three or four things at one time makes publication in a scientific journal all but impossible.
Yet agricultural systems are complex. If you want to study the benefits of a regenerative approach to soil health, then you first need to implement healthy soil methods. That means reducing or eliminating tillage, planting cover crops in off-seasons or otherwise ensuring that the soil remains covered, diversifying what crops are grown, and incorporating livestock (either through manure and compost applications or, preferably, through rotational grazing). Yet each of these is its own variable. So, the classic research studies only take one step, which often is not enough to provide a large enough effect to produce statistical significance. To truly study healthy soils, we need experiments that vary multiple factors at one time – and a willingness to look at the results even if they aren’t “statistically significant.”
The third major reason there is so little research into soil health is that the complexity of the issues poses problems beyond statistics. In agriculture, as in all areas of inquiry, people like simple answers. Whether they are academics, politicians, or regulators, people are naturally drawn to simple “solutions” and wary of complex ones. It is simpler to develop a product that kills all insects (and simpler to monetize one) than to research how each of the microbes and trace compounds that make up our soil is affected by the multiple regenerative approaches mentioned above.
Moreover, it is extraordinarily difficult to change people’s existing understandings. But that is what we must do. We face myriad complex problems: an epidemic of chronic health issues, climate change, massive economic losses from floods and droughts, and the very real threat that our soil will not be able to sustain food production by the end of this century. There is a great deal of knowledge already available about how to use healthy soil practices to address all these issues. But more research is needed, both to identify the best methods and refine our current knowledge, and to provide the data necessary to drive policy changes on a major scale.
The first step is to renew a commitment to public funding of research, so that the results are not driven by big money interests. There must also be a willingness to grapple with complexity, both in the experimental design and the results.
While we work towards those major, fundamental reforms, there are many smaller ones that can be undertaken based on our current level of knowledge. We will tackle those in another article in this series, identifying the policy barriers that currently exist and the policy changes needed to effectively utilize healthy soils to support people, the land, and our economy.
Sidebar: P Values and Their Problems
Scientific studies typically identify a single factor and ask whether it makes a difference to the outcome. Let’s use the example of an experiment to see whether a certain fertilizer affects how productive a crop is. The starting assumption, or “null hypothesis,” is that the factor does not make a difference. The researcher then does the experiment, with both a treated group (where the fertilizer is applied) and a control group (where everything is the same except there is no fertilizer).
The data on how both groups grow is gathered and then run through mathematical formulas designed to determine whether the observed difference could plausibly be the result of chance. The number those formulas produce is the “P value.” If the P value is less than 0.05 (5%), the result is declared statistically significant: if the fertilizer truly had no effect, a difference this large would arise by chance less than 5% of the time. This is often loosely restated as “there is a 95% chance the difference was due to the fertilizer” – but that restatement is not actually what the P value measures.
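The procedure just described can be sketched as a permutation test, one simple way to compute such a P value: shuffle the “treated” and “control” labels many times and ask how often chance alone produces a difference as large as the one observed. The plot yields below are invented purely for illustration:

```python
import random

def permutation_p(control, treated, iters=10000, seed=42):
    """How often does randomly re-labelling the plots produce a mean
    difference at least as large as the one actually observed?"""
    rng = random.Random(seed)
    observed = abs(sum(treated) / len(treated) - sum(control) / len(control))
    pooled = control + treated
    n = len(control)
    extreme = 0
    for _ in range(iters):
        rng.shuffle(pooled)  # chance alone decides which plots are "treated"
        diff = abs(sum(pooled[n:]) / len(treated) - sum(pooled[:n]) / n)
        if diff >= observed:
            extreme += 1
    return extreme / iters

# Invented plot yields for illustration (bushels per acre).
control = [52, 48, 50, 47, 53, 49, 51, 50]
treated = [56, 54, 57, 52, 58, 55, 53, 56]
p = permutation_p(control, treated)
print(p)  # well under 0.05 for these numbers
```

Note that even this cleaner framing only answers “how surprising is this difference if chance alone were at work?” – which, as discussed next, is not the same as proving the fertilizer caused the difference.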
But the P value does not actually tell you whether the null hypothesis is true or false. First, a low P value might mean that the fertilizer works – or it might mean that the researcher witnessed the one time in 20 that a crop yield was unusually big for other reasons. Second, there is a logical flaw in using P values this way: the entire calculation assumes from the outset that the null hypothesis is true (i.e., that the factor being tested has no effect). A number computed under that assumption cannot, by itself, tell you the probability that the assumption is true or false.
This may be why studies have repeatedly shown that scientific conclusions based on P values are frequently false. For example, in one study, researchers reviewed earlier studies that had found statistically significant links between 85 different genetic variants and the risk of a disorder called acute coronary syndrome. The researchers then tested the genes of 811 people who had acute coronary syndrome, and a matched group of healthy people. Only one of the 85 genetic variants appeared substantially more often in those with the syndrome than in the matched group of healthy people. Eighty-four of the genes did not appear any more often than would be expected by chance. The authors concluded that the actual genetic testing provided “no support” that any of the 85 genetic variants actually created a susceptibility to the syndrome.