Sugar industry sought to sugarcoat causes of heart disease

Using records unearthed from library storage vaults, researchers recently revealed that the sugar industry paid nutrition experts from Harvard University to downplay studies linking sugar and heart disease. Although the incident happened in the 1960s, it appears to have helped redirect the scientific narrative for decades.

The documents — which include correspondence, symposium programs and annual reports — show that the Sugar Research Foundation (as it was named at the time) paid professors who wrote a two-part review in 1967 in the New England Journal of Medicine. That report was highly skeptical of the evidence linking sugar to cardiovascular problems but accepting of the role of fat. The now-deceased professors’ overall conclusion left “no doubt” that reducing the risk of heart disease was a matter of reducing saturated fat and cholesterol, according to researchers from the University of California, San Francisco, who published their report online September 12 in JAMA Internal Medicine.

“Why does it matter today? The sugar industry helped deflect the way the research was developing,” says study coauthor Cristin Kearns, a dentist at UCSF’s Institute for Health Policy Studies. The Harvard team’s scientific favoritism had a role in directing research and policy attention toward fat and cholesterol. And in fact, the first dietary guidelines published by the federal government in 1980 said there was no convincing evidence that sugar causes heart disease, stating “the major health hazard from too much sugar is tooth decay.”

Following the publication of the Harvard report, fat and cholesterol went on to hijack the scientific agenda for decades, and even led to a craze of low-fat foods that often added sugar. Kearns points out that it was only in 2015 that dietary guidelines finally made a strong statement to limit sugar. Researchers writing this year in Progress in Cardiovascular Diseases note that current studies estimate that diets high in added sugars carry a three times higher risk of death from cardiovascular disease. (For its part, the Sugar Association says in a statement on its website that “the last several decades of research have concluded that sugar does not have a unique role in heart disease.”)

The level at which the food industry continues to influence nutrition research is still a much-debated topic. The Sugar Association’s statement acknowledged the secret deal occurred, but pointed out that “when the studies in question were published, funding disclosures and transparency standards were not the norm they are today.” Journals now require all authors to list conflicts of interest, especially funding from a source that has a vested interest in the outcome.

That doesn’t mean that trade groups and industry associations no longer have an influence on scientists, says Andy Bellatti, cofounder and strategic director of Dietitians for Professional Integrity, which has campaigned to push the Academy of Nutrition and Dietetics to sever its ties with industry. While a modern researcher could not take corporate money, even for speaking fees, without disclosure, the influences may be more subtle, he says. “We’re not talking about making up data, but perhaps influencing how a research question is framed.”

In a commentary published with the JAMA study, Marion Nestle, a nutrition researcher at New York University, wrote that industry influence has not disappeared. She cited recent New York Times investigations of Coca-Cola–sponsored research and Associated Press stories revealing that a candy trade group sponsored research attempting to show that children who eat sweets have a healthy body weight.

Bellatti says that researchers don’t necessarily want to be cozy with industry, but sometimes turn to commercial sources because non-biased research money is lacking. “The reason the food industry is able to do this is because there is such little public funding for nutrition and disease,” Bellatti says.

For that reason, the scientific community should not reject industry money wholesale, says John Sievenpiper, a physician and nutrition researcher at the University of Toronto. A study of his was once ridiculed on Nestle’s blog because the disclosures covered two full pages. He believes that any scientist who takes industry money should adhere to an even higher standard of openness, including releasing study protocols ahead of time so reviewers can make sure the research question was not changed midstream to favor a certain conclusion.

While many parallels have been made between the food and tobacco industries, Sievenpiper believes those comparisons miss the complicated nature of the human diet. Tobacco is always bad, never good. Sugar, fat, cholesterol and other components of diet are some of both, making research into their effects much more nuanced, he says. And unlike with tobacco, the solution can’t be to never eat them. He believes solutions won’t involve turning single nutrients like fat or sugar into villains, but promoting better overall patterns of eating, like the Mediterranean diet.

Kearns, who has spent the past 10 years looking into the sugar industry’s influence on science, isn’t finished yet. She says her curiosity first arose during a conference on gum disease and diabetes in 2007, when she noticed a lack of scientific discussion of sugar. She started out simply Googling industry influences. The trail eventually led her to scour library archives, until she came across dusty boxes of records from a closed sugar company in Colorado. “The first page I looked at in that archive had a confidential memo,” she says. “I knew I had something no one else had ever talked about before.”

She doesn’t see the research going sour any time soon. “This was their 226th project in 1965,” she says. “There’s a lot more to the story.”

There’s a new way to stop an earthquake: put a volcano in its path

Editor’s note: Science has retracted the study described in this article. The May 3, 2019, issue of the journal notes that a panel of outside experts convened by Kyoto University in Japan concluded in March 2019 that the paper contained falsified data, manipulated images and instances of plagiarism, and that these were the responsibility of lead author Aiming Lin, a geophysicist at Kyoto University. In agreement with the investigation’s recommendation, the authors withdrew the report.

A titanic volcano stopped a mega-sized earthquake in its tracks.

In April, pent-up stress along the Futagawa-Hinagu Fault Zone in Japan began to unleash a magnitude 7.1 earthquake. The rupture traveled about 30 kilometers along the fault until it reached Mount Aso, one of Earth’s largest active volcanoes. That’s where the quake met its demise, geophysicist Aiming Lin of Kyoto University in Japan and colleagues report online October 20 in Science. The quake moved across the volcano’s caldronlike crater and abruptly stopped, the researchers found.

Geophysical evidence suggests that a region of rising magma lurks beneath the volcano. This magma chamber created upward pressure plus horizontal stresses that acted as an impassable roadblock for the seismic slip powering the quake, the researchers propose. This rare meetup, the researchers warn, may have undermined the structural integrity surrounding the magma chamber, increasing the likelihood of an eruption at Aso.

Young planets carve rings and spirals in the gas around their suns

Growing planets carve rings and spiral arms out of the gas and dust surrounding their young stars, researchers report in three papers to be published in Astronomy & Astrophysics. And dark streaks radiating away from the star in one of the planet nurseries appear to be shadows cast onto the disk by the clumps of planet-building material close to the star. This isn’t the first time that astronomers have spied rings around young stars, but the new images provide a peek at what goes into building diverse planetary systems.

The three stars — HD 97048, HD 135344B and RX J1615.3-3255 — are all youthful locals in our galaxy. They sit between 460 and 600 light-years away; the oldest is a mere 8 million years old. All the stars have been studied before. But now three teams of researchers have used a new instrument at the Very Large Telescope in Chile to see extra-sharp details in the planet construction zone around each star.

The new instrument, named SPHERE, was designed to record images, spectra and polarimetry (the orientations of light waves) of young exoplanet families. Flexible mirrors within the instrument adapt to atmospheric turbulence above the telescope, and a tiny disk blocks light from the star, allowing faint details around the star to come into view.

Cell biologists learn how Zika kills brain cells, devise schemes to stop it

SAN FRANCISCO — Cell biologists are learning more about how the Zika virus disrupts brain cells to cause the birth defect microcephaly, in which a baby’s brain and head are smaller than usual. Meantime, several strategies to combat the virus show preliminary promise, researchers reported at the American Society for Cell Biology’s annual meeting. Among the findings:

Brain cell die-off
Zika causes fetal brain cells neighboring an infected cell to commit suicide, David Doobin of Columbia University Medical Center reported December 6. In work with mice and rats, Doobin and colleagues found suggestions that the cells’ death might be the body’s attempt to limit spread of the virus.

The researchers applied techniques they had previously used to investigate a genetic cause of microcephaly to narrow down when in pregnancy the virus is most likely to cause the brain to shrink. Timing of the virus’s effect varied by strain. For one from Puerto Rico, brain cell die-off happened in mice only in the first two trimesters. But a strain from Honduras could kill developing brain cells later into pregnancy. Microcephaly can lead to seizures, mental impairment, delays in speech and movement and other problems.

Enzyme stopper
Disrupting a Zika enzyme could help stop the virus. The enzyme, NS3, causes problems when it gloms on to centrioles, a pair of structures inside cells needed to divvy up chromosomes when cells divide, Andrew Kodani, a cell biologist at Boston Children’s Hospital, reported December 6.

Zika, dengue and other related viruses, known as flaviviruses, all use a version of NS3 to chop joined proteins apart so they can do their jobs. (Before chopping, Zika’s 10 proteins are made as one long protein.) But once NS3 finishes slicing virus proteins, the enzyme moves to the centrioles, where it can mess with their assembly, Kodani and colleagues found. Something similar happens in some genetic forms of microcephaly.

A chemical called an anthracene can help fend off dengue, so Kodani and colleagues tested anthracene on Zika as well. Small amounts of the chemical can prevent NS3 from tinkering with the centrioles, the researchers found. So far the work has only been done in lab dishes.

Protein face-off
Another virulent virus could disable Zika. Work with cells grown in lab dishes suggests a bit of protein, or peptide, from the hepatitis C virus could muck up Zika’s proteins.

The peptide interferes with HSP70, a protein that helps assemble complexes of other proteins, including ones involved in protein production. That peptide and other compounds were already known to inhibit hepatitis C replication, UCLA virologist Ronik Khachatoorian and colleagues had previously discovered. The hepatitis C virus peptide stops Zika virus proteins from being made and hampers assembly of the virus, Khachatoorian reported December 5.

The peptide has not been tested in animals yet.

Coastal waters were an oxygen oasis 2.3 billion years ago

Earth was momentarily ripe for the evolution of animals hundreds of millions of years before they first appeared, researchers propose.

Chemical clues in ancient rocks suggest that 2.32 billion to 2.1 billion years ago, shallow coastal waters held enough oxygen to support oxygen-hungry life-forms including some animals, researchers report the week of January 16 in the Proceedings of the National Academy of Sciences. But the first animal fossils, sponges, don’t appear until around 650 million years ago, following a period of scant oxygen known as the boring billion (SN: 11/14/15, p. 18).

“As far as environmental conditions were concerned, things were favorable for this evolutionary step to happen,” says study coauthor Andrey Bekker, a sedimentary geologist at the University of California, Riverside. Something else must have stalled the rise of animals, he says.

Microbes began flooding Earth with oxygen around 2.3 billion years ago during the Great Oxidation Event. This breath of oxygen enabled the eventual emergence of complex, oxygen-dependent life-forms called eukaryotes, an evolutionary line that would later include animals and plants. Scientists have proposed that the Great Oxidation Event wasn’t a smooth rise, but contained an “overshoot” during which oxygen concentrations momentarily peaked before dropping to a lower, stable level during the boring billion. Whether that overshoot was enough to support animals was unclear, though.

Bekker and colleagues tackled this question using a relatively new way to measure ancient oxygen. Rock weathering can wash the element selenium into the oceans. In oxygen-free waters, all of the selenium settles onto the seafloor. But in water with at least some oxygen, only a fraction of the selenium is deposited. And the selenium that is laid down is disproportionately that of a lighter isotope of the element, leaving atoms of a heavier isotope to be deposited elsewhere.

If ancient coasts contained relatively abundant oxygen, the researchers expected to find more light selenium close to shore and more heavy selenium in deeper, oxygen-deprived waters. Analyzing shales formed under deep waters around the world, the researchers found just such an isotope segregation. These shales had an abundance of the heavier selenium, leading researchers to infer that the lighter version of the element was concentrated closer to shore.
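The mass-balance logic behind that inference can be sketched with a few lines of arithmetic. The numbers below (the input signature, the nearshore deposit fraction, and the isotope offset) are hypothetical illustrations, not values from the study:

```python
# Illustrative two-reservoir mass balance for selenium isotopes.
# In oxygenated coastal water, only a fraction f of incoming selenium is
# deposited nearshore, and that deposit preferentially carries the lighter
# isotope (offset eps, in per mil relative to the deep-water pool).
# Conservation of isotopes gives:
#   delta_input = f * delta_near + (1 - f) * delta_deep
# and with delta_near = delta_deep - eps, this rearranges to:
#   delta_deep = delta_input + f * eps

def deep_water_delta(delta_input, f_nearshore, eps):
    """Heavy-isotope signature (per mil) of selenium reaching deep waters."""
    return delta_input + f_nearshore * eps

# Hypothetical values: riverine input at 0.0 per mil, 60 percent of selenium
# deposited nearshore, lighter isotope favored there by 1.0 per mil.
print(deep_water_delta(0.0, 0.6, 1.0))  # deep-water shales end up isotopically heavy
```

The more selenium that settles in oxygenated nearshore waters, the heavier the residue reaching the deep ocean, which is the pattern the team reports seeing in deep-water shales.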

Oxygen concentrations in coastal waters were at least about 1 percent of present-day levels and “were flirting with the limits of what complex life can survive,” proposes study coauthor Michael Kipp, a geochemist at the University of Washington in Seattle. While the environment was probably suitable for eukaryotes, life hadn’t evolved enough by this point to take advantage of the situation, Kipp proposes. The appearance of eukaryotes in the fossil record would take hundreds of millions of years of further evolution (SN Online: 5/18/16), he says, and the first animals even longer.

Tracking selenium is such a new technique, though, that “interpretations could change as we better understand how it works,” says Philip Pogge von Strandmann, a geochemist at University College London. Currently the method is “tricky,” he says, especially for precisely estimating oxygen concentrations.

Construction of tiny, fluid-filled devices inspired by Legos

Legos have provided the inspiration for small, fluid-ferrying devices that can be built up brick-by-brick.

Tools for manipulating tiny amounts of liquid, known as microfluidic devices, can be used to perform blood tests, detect contaminants in water or simulate biological features like human blood vessels. The devices are easily portable, about the size of a quarter, and require only small samples of liquid.

But fabricating such devices is not easy. Each new application requires a different configuration of twisty little passages, demanding a brand new design that must be molded or 3-D printed.

Scientists from the University of California, Irvine created Lego-style blocks out of a polymer called PDMS. Their bricks contained minuscule channels, half a millimeter wide, that allowed liquid to flow from brick to brick with no leaks. New devices could be created quickly by rearranging standard blocks into various configurations, the scientists report January 24 in the Journal of Micromechanics and Microengineering.

Analysis finds gender bias in peer-reviewer picks

Gender bias works in subtle ways, even in the scientific process. The latest illustration of that: Scientists recommend women less often than men as reviewers for scientific papers, a new analysis shows. That seemingly minor oversight is yet another missed opportunity for women that might end up having an impact on hiring, promotions and more.

Peer review is one of the bricks in the foundation supporting science. A researcher’s results don’t get published in a journal until they successfully pass through a gauntlet of scientific peers, who scrutinize the paper for faulty findings, gaps in logic or less-than-meticulous methods. The scientist submitting the paper gets to suggest names for those potential reviewers. Scientific journal editors may contact some of the recommended scientists, and then reach out to a few more.

But peer review isn’t just about the paper (and scientist) being examined. Being the one doing the reviewing “has a number of really positive benefits,” says Brooks Hanson, an earth scientist and director of publications at the American Geophysical Union in Washington, D.C. “You read papers differently as a reviewer than you do as a reader or author. You look at issues differently. It’s a learning experience in how to write papers and how to present research.”

Serving as a peer reviewer can also be a networking tool for scientific collaborations, as reviewers seek out authors whose work they admired. And of course, scientists put the journals they review for on their resumes when they apply for faculty positions, research grants and awards.

But Hanson didn’t really think of looking at who got invited to be peer reviewers until Marcia McNutt, a geophysicist and then editor in chief of Science, stopped by his office. McNutt — now president of the National Academy of Sciences in Washington, D.C. — was organizing a conference on gender bias, and asked if Hanson might be able to get any data. The American Geophysical Union is a membership organization for scientists in Earth and space sciences, and it publishes 20 scientific journals. If Hanson could associate the gender and age of the members with the papers they authored and edited, he might be able to see if there was gender bias in who got picked for peer review.

“We were skeptical at first,” he says. Hanson knew AGU had data for who reviewed and submitted papers to the journals, as well as data for AGU members. But he was unsure how well the two datasets would match up. To find out, he turned to Jory Lerback, who was then a data analyst with AGU. Merging the datasets of all the scientific articles submitted to AGU journals from 2012 to 2015 gave Lerback and Hanson a total of 24,368 paper authors and 62,552 peer reviews performed by 14,919 reviewers. For those papers, they had 97,083 author-named reviewer suggestions, and 118,873 editor-named reviewer requests.

While women were authors less often and submitted fewer papers, their papers were accepted 61 percent of the time. Male authors had papers accepted 57 percent of the time. “We were surprised by that,” Hanson says. While one reviewer of Hanson and Lerback’s analysis (a peer-reviewer on a paper about peer review surely must feel the irony) suggested that women might be getting higher acceptance rates due to reverse discrimination, Hanson disagrees. He wonders if “[women] are putting more care into the submission of the papers,” he says. Men might be more inclined to take more risks, possibly submitting papers to a journal that is beyond their work’s reach. Women, he says, might be more cautious.

When suggesting reviewers, male authors suggested female reviewers only 15 percent of the time, and male editors suggested women just 17 percent of the time. Female authors suggested women reviewers 21 percent of the time, and female editors suggested them 22 percent of the time. The result? Women made up only 20 percent of reviewers, even though 28 percent of AGU members are women and 27 percent of the published first authors in the study were women.
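A rough arithmetic check shows how those suggestion rates translate into the overall reviewer pool. The sketch below weights each author group's rate by its share of authors; assuming suggestions split by author gender in proportion to the 27 percent female first-author share is an illustrative simplification, not a figure from the analysis:

```python
# Expected share of women among author-suggested reviewers, weighting each
# author group's suggestion rate by that group's (assumed) share of authors.
female_author_share = 0.27       # female first authors, from the study
rate_by_male_authors = 0.15      # women suggested by male authors
rate_by_female_authors = 0.21    # women suggested by female authors

expected_female_reviewer_share = (
    (1 - female_author_share) * rate_by_male_authors
    + female_author_share * rate_by_female_authors
)
print(round(expected_female_reviewer_share, 3))  # about 0.166
```

Under these assumptions, author suggestions alone would put women at roughly 17 percent of the reviewer pool, well below the 28 percent of AGU members who are women and consistent with the 20 percent share actually observed once editor picks are added in.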

The Earth and space sciences have historically had far fewer women than men, but the numbers are improving for younger scientists. “We thought maybe people were just asking older people to review papers,” says Lerback. But when Lerback and Hanson broke out the results by age, they saw that the gender discrepancy knew no age bracket. “It makes us more confident that this is a real issue and not just an age-related phenomenon,” Lerback says. She and Hanson report their analysis in a comment piece in the January 26 Nature.

“It’s incredibly compelling data and one of those ‘why hasn’t someone done this already’ studies,” says Jessi Smith, a social psychologist at Montana State University in Bozeman. While it’s disheartening to see yet another area of bias against women, she notes, “this is good that people are asking these questions and taking a hard look. We can’t solve it if we don’t know about it.”

The time when authors are asked to suggest reviewers might play a role, Smith notes. When scientific papers are submitted online, the field for “suggested reviewers” is often last. “That question for ‘who do you want’ always surprises me,” admits Smith. It’s a very small thing, but she notes that when people are tired and in a hurry, “that’s when these biases take place.”

Knowing that the bias exists is part of the battle, Hanson says, but it’s also time for things to change. He’s pushing for more diversity in the journals’ editorial pool — for gender, nationality and underrepresented minorities. It’s also helpful to remember that suggesting people for peer review isn’t just about the author’s work. “Even small biases, if they happen repeatedly, can have career-related effects,” he says.

Peer-reviewer selection is something that hasn’t gotten a lot of scientific attention. “I think getting a dialog going for everyone to be aware of opportunity gaps, [asking] ‘what can I do to make sure I’m not contributing to this’… I think that will be important to keep editors and authors aware and accountable,” Lerback says. She’s now experiencing the peer-review life firsthand as a graduate student at the University of Utah in Salt Lake City. While there, she’s starting a group to discuss bias — to make sure that the conversations keep going.

Physically abused kids learn to fail at social rules for success

Physical abuse at home doesn’t just leave kids black and blue. It also bruises their ability to learn how to act at school and elsewhere, contributing to abused children’s well-documented behavior problems.

Derailment of a basic form of social learning has, for the first time, been linked to these children’s misbehavior years down the line, psychologist Jamie Hanson of the University of Pittsburgh and colleagues report February 3 in the Journal of Child Psychology and Psychiatry. Experiments indicate that physically abused kids lag behind their nonabused peers when it comes to learning to make choices that consistently lead to a reward, even after many trials.

“Physically abused kids fail to adjust flexibly to new behavioral rules in contexts outside their families,” says coauthor Seth Pollak, a psychologist at the University of Wisconsin–Madison. Youth who have endured hitting, choking and other bodily assaults by their parents view the world as a place where hugs and other gratifying responses to good behavior occur inconsistently, if at all. So these youngsters stick to what they learned early in life from volatile parents — rewards are rare and unpredictable, but punishment is always imminent. Kids armed with this expectation of futility end up fighting peers on the playground and antagonizing teachers, Pollak says.

If the new finding holds up, it could lead to new educational interventions for physically abused youth, such as training in how to distinguish safe from dangerous settings and in how to control impulses, Pollak says. Current treatments focus on helping abused children feel safe and less anxious.

More than 117,000 U.S. children were victims of documented physical abuse in 2015, the latest year for which data are available.

“Inflexible reward learning is one of many possible pathways from child maltreatment to later behavior problems,” says Stanford University psychologist Kathryn Humphreys, who did not participate in the new study. Other possible influences on physically abused kids’ disruptive acts include a heightened sensitivity to social stress and a conviction that others always have bad intentions, Humphreys suggests.

Hanson’s team studied 41 physically abused and 40 nonabused kids, ages 12 to 17. Participants came from various racial backgrounds and lived with their parents in poor or lower middle-class neighborhoods. All the youth displayed comparable intelligence and school achievement.

In one experiment, kids saw a picture of a bell or a bottle and were told to choose one to earn points to trade in for toys. Kids who accumulated enough points could select any of several desirable toys displayed in the lab, including a chemistry set and a glow-in-the-dark model of the solar system. Fewer points enabled kids to choose plainer toys, such as a Frisbee or colored pencils.

Over 100 trials, one picture chosen at random by the researchers at the start of the experiment resulted in points 80 percent of the time. The other picture yielded points 20 percent of the time. In a second round of 100 trials using pictures of a bolt and a button, one randomly chosen image resulted in points 70 percent of the time versus 30 percent for the other image.

Both groups chose higher-point images more often as trials progressed, indicating that all kids gradually learned images’ values. But physically abused kids lagged behind: They chose the more-rewarding image on an average of 131 out of 200 trials, compared with 154 out of 200 trials for nonabused youth. The abused kids were held back by what they had learned at home, Pollak suspects.
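The task is a standard probabilistic reward-learning setup, and the group difference can be pictured with a minimal simulation. The agent below and its learning rates are hypothetical, chosen only to illustrate how slower value updating tends to yield fewer high-reward choices over the same number of trials; this is not the researchers' model:

```python
import random

def run_task(learning_rate, n_trials=100, p_good=0.8, p_bad=0.2, seed=0):
    """Simple value-learning agent on a two-choice probabilistic reward task.

    The agent keeps a running value estimate for each picture, picks the
    higher-valued picture (ties broken at random), and nudges the chosen
    picture's value toward the observed outcome. Returns how often the
    agent chose the high-reward picture.
    """
    rng = random.Random(seed)
    values = [0.5, 0.5]       # initial value estimates for the two pictures
    probs = [p_good, p_bad]   # picture 0 pays off 80 percent of the time
    good_choices = 0
    for _ in range(n_trials):
        if values[0] == values[1]:
            choice = rng.randrange(2)
        else:
            choice = 0 if values[0] > values[1] else 1
        reward = 1.0 if rng.random() < probs[choice] else 0.0
        values[choice] += learning_rate * (reward - values[choice])
        good_choices += (choice == 0)
    return good_choices

# A slower learner (smaller learning rate) typically locks onto the
# high-reward picture later, echoing the abused group's lower totals.
fast = sum(run_task(0.3, seed=s) for s in range(50)) / 50
slow = sum(run_task(0.05, seed=s) for s in range(50)) / 50
print(fast, slow)
```

Averaged over many runs, both agents end up favoring the high-reward picture, but the slow updater accumulates fewer high-reward choices within 100 trials, loosely mirroring the 131-versus-154 gap reported in the study.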

If you think the Amazon jungle is completely wild, think again

Welcome to the somewhat civilized jungle. Plant cultivation by native groups has shaped the landscape of at least part of South America’s Amazon forests for more than 8,000 years, researchers say.

Of dozens of tree species partly or fully domesticated by ancient peoples, 20 kinds of fruit and nut trees still cover large chunks of Amazonian forests, say ecologist Carolina Levis of the National Institute for Amazonian Research in Manaus, Brazil, and colleagues. Numbers and variety of domesticated tree species increase on and around previously discovered Amazonian archaeological sites, the scientists report in the March 3 Science.

Domesticated trees are “a surviving heritage of the Amazon’s past inhabitants,” Levis says.

The new report, says archaeologist Peter Stahl of the University of Victoria in Canada, adds to previous evidence that “resourceful and highly developed indigenous cultures” intentionally altered some Amazonian forests.

Southwestern and northwestern Amazonian forests contain the greatest numbers and diversity of domesticated tree species, Levis’ team found. Large stands of domesticated Brazil nut trees remain crucial resources for inhabitants of southwestern forests today.

Over the past 300 years, modern Amazonian groups may have helped spread some domesticated tree species, Levis’ group says. For instance, 17th century voyagers from Portugal and Spain established plantations of cacao trees in southwestern Amazonian forests that exploited cacao trees already cultivated by local communities, the scientists propose.

Their findings build on a 2013 survey of forest plots all across the Amazon led by ecologist and study coauthor Hans ter Steege of the Naturalis Biodiversity Center in Leiden, the Netherlands. Of approximately 16,000 Amazonian tree species, just 227 accounted for half of all trees, the 2013 study concluded.

Of that number, 85 species display physical features signaling partial or full domestication by native Amazonians before European contact, the new study finds. Studies of plant DNA and plant remains from the Amazon previously suggested that domestication started more than 8,000 years ago. Crucially, 20 domesticated tree species — five times more than the number expected by chance — dominate their respective Amazonian landscapes, especially near archaeological sites and rivers where ancient humans likely congregated, Levis’ team says.

Archaeologists, ecologists and crop geneticists have so far studied only a small slice of the Amazon, which covers an area equivalent to about 93 percent of the contiguous U.S. states.

Levis and her colleagues suspect ancient native groups domesticated trees and plants throughout much of the region. But some researchers, including ecologist Crystal McMichael of the University of Amsterdam, say it’s more likely that ancient South Americans domesticated trees just in certain parts of the Amazon. In the new study, only Brazil nut trees show clear evidence of expansion into surrounding forests from an area of ancient domestication, McMichael says. Other tree species may have mainly been domesticated by native groups or Europeans in the past few hundred years, she says.

Most Americans like science — and are willing to pay for it

Americans don’t hate science. Quite the contrary. In fact, 79 percent of Americans think science has made their lives easier, a 2014 Pew Research Center survey found. More than 60 percent of people also believe that government funding for science is essential to its success.

But should the United States spend more money on scientific research than it already does? A layperson’s answer to that question depends on how much that person thinks the government already spends on science, a new study shows. When people find out just how much — or rather, how little — of the federal budget goes to science, support for more funding suddenly jumps.

To see how people’s opinions of public science spending were influenced by accurate information, mechanical engineer Jillian Goldfarb and political scientist Douglas Kriner, both at Boston University, placed a small experiment into the 2014 Cooperative Congressional Election Study. The online survey was given to 1,000 Americans, carefully selected to represent the demographics of the United States. The questions were designed to be nonpartisan, and the survey itself was conducted in 2014, long before the 2016 election.

The survey was simple. First, participants were asked to estimate what percentage of the federal budget was spent on scientific research. Once they’d guessed, half of the participants were told the actual amount that the federal government allocates for nondefense spending on research and development. In 2014, that figure was 1.6 percent of the budget, or about $67 billion. Finally, all the participants were asked if federal spending on science should be increased, decreased or kept the same.

The majority of participants had no idea how much money the government spends on science, and wildly overestimated the actual amount. About half of the respondents estimated federal spending for research at somewhere between 5 and 20 percent of the budget. A quarter of participants estimated that figure at more than 20 percent of the budget — one very hefty chunk of change. The last 25 percent of respondents estimated that 1 to 2 percent of federal spending went to science.

When participants received no information about how much the United States spent on research, only about 40 percent of them supported more funding. But when they were confronted with the real numbers, support for more funding leapt from 40 to 60 percent.

Those two numbers hover on either side of 50 percent, but Kriner notes, “media coverage [would] go from ‘minority’ to ‘majority’” in favor of more funding — a potentially powerful message. What’s more, the support for science was present in Democrats and Republicans alike, Kriner and Goldfarb report February 1 in Science Communication.

“I think it contributes to our understanding of the aspects of federal spending that people don’t understand very well,” says Brendan Nyhan, a political scientist at Dartmouth College in Hanover, N.H. It’s not surprising that most people don’t know how much the government is spending on research. Nyhan points out that most people probably don’t know how much the government spends on education or foreign aid either.

When trying to gather more support for science funding, Goldfarb says, “tell people how little we spend, and how much they get in return.” Science as a whole isn’t that controversial. No one wants to stop exploring the universe or curing diseases, after all.

But when people — whether politicians or that guy in your Facebook feed — say they want to cut science funding, they won’t be speaking about science as a whole. “There’s a tendency to overgeneralize” and use a few controversial or perceived-to-be-wasteful projects to stand in for all of science, Nyhan warns.

When politicians want to cut funding, he notes, they focus on specific controversial studies or areas — such as climate change, stem cells or genetically modified organisms. They might highlight studies that seem silly, such as those that ended up in former Senator Tom Coburn’s “Wastebook” (a mantle now taken up by Senator Jeff Flake). Take those issues to the constituents, and funding might end up in jeopardy anyway.

Kriner hopes their study’s findings might prove useful even for controversial research areas. “One of the best safeguards against cuts is strong public support for a program,” he explains. “Building public support for science spending may help insulate it from budget cuts — and our research suggests a relatively simple way to increase public support for scientific research.”

But he worries that public support may not stay strong if science becomes too much of a political pawn. The study showed that both Republicans and Democrats supported more funding for science when they knew how little was spent. But “if the current administration increases its attacks on science spending writ large … it could potentially politicize federal support for all forms of scientific research,” Kriner says. And the stronger the politics, the more people on both sides of the aisle grow resistant to hearing arguments from the other side.

Politics aside, Goldfarb and Kriner’s data show that Americans really do like and support science. They want to pay for it. And they may even want to shell out some more money, when they know just how little they already spend.