Cells snack on nanowires

Human cells can snack on silicon.

Cells grown in the lab devour nano-sized wires of silicon through an engulfing process known as phagocytosis, scientists report December 16 in Science Advances.

Silicon-infused cells could merge electronics with biology, says John Zimmerman, a biophysicist now at Harvard University. “It’s still very early days,” he adds, but “the idea is to get traditional electronic devices working inside of cells.” Such hybrid devices could one day help control cellular behavior, or even replace electronics used for deep brain stimulation, he says.
Scientists have been trying to load electronic parts inside cells for years. One way is to zap holes in cells with electricity, which lets big stuff, like silicon nanowires linked to bulky materials, slip in. Zimmerman, then at the University of Chicago, and colleagues were looking for a simpler technique, something that would let tiny nanowires in easily and could potentially allow them to travel through a person’s bloodstream — like a drug.
Zimmerman’s team had previously shown that cells could take in silicon nanowires, but no one knew how it worked. So he added the nanowires to different kinds of cells — including human umbilical vein cells, rat nerve cells and mouse immune cells — in laboratory dishes. Under a microscope, Zimmerman says, “you can see the cell grab the wire, wrap a membrane around it, and pull it inside — kind of like a lasso.” Then, the wire travels on molecular tracks through the cell’s interior to settle around the nucleus.
Molecular tests suggested that nanowires entered via phagocytosis, a process by which some cells take in bacteria and cellular junk. During phagocytosis, a cell’s membrane encapsulates the junk, forming a pouch that carries the cargo to a recycling center inside the cell.

Not all cell types swallowed the wires, though. Knowing which types do, and how the wires get inside, is important, Zimmerman says, because it could help predict where the wires would end up in the body.
But there’s still a long way to go from nanowire-loaded cells to working electronic devices, says Mark Reed, a physicist at Yale University. “This is the big question,” he says — because the nanowires aren’t actually hooked up to anything yet.

Meat-eating pitcher plants raise deathtraps to an art

Tricking some bug into drowning takes finesse, especially for a hungry meat eater with no brain, eyes or moving parts. Yet California pitcher plants are very good at it.

Growing where deposits of the mineral serpentine would kill most other plants, Darlingtonia californica survives in low-nutrient soil by being “very meat dependent,” says David Armitage of the University of Notre Dame in Indiana. Leaves he has tested get up to 95 percent of their nitrogen from wasps, beetles, ants or other insects that become trapped inside the snake-curved hollow leaves.
The leaves don’t collect rainwater because a green dome covers the top. Instead, the plant sucks moisture up through its roots and (somehow) releases it into the hollow trap. “People have been doing weird experiments where they feed [a plant] meat and milk and other things to try to trigger it to release water,” Armitage says. Experiments tempting the green carnivore with cheese, beef broth, egg whites and so on suggest there’s some sort of chemical cue.

However the water enters the leaf pool, it starts out clear. As insects drown, the liquid darkens to a murky brown or red and “smells just horrible,” he says. The soupiness comes from bacteria, which help doom prey by lowering the surface tension of the drowning pool, Armitage reports in the November Biology Letters. Ants or other small insects sink below the surface immediately instead of floating at the top.
But first, pitchers lure victims to the pool by repurposing an old plant ploy: free nectar. It’s “highly nitrogen-rich and full of sugars, so it’s delicious — I’ve tasted it,” Armitage says. Pitcher plants sprout blooms, but the trap nectar doesn’t come from the drooping flowers. A roll of tissue near the pitcher mouth oozes the treat.

That nectar-heavy roll curves onto what’s called the fishtail appendage. Mature plants (2 years or older) grow this forked tissue like a moustache at the pitcher mouth. For more than a century, biologists have presumed that this big, red-veined, lickable prong worked as an insect lure. Armitage, however, tested the idea and says it may be wrong.
Clipping fishtails off individual leaves, or even off all the leaves in a small patch, did nothing to shrink the catch compared with fully mustachioed leaves, he reported in the American Journal of Botany in April 2016. The only thing fishtails lure, for the time being at least, are puzzled botanists.

Coastal waters were an oxygen oasis 2.3 billion years ago

Earth was momentarily ripe for the evolution of animals more than a billion years before they first appeared, researchers propose.

Chemical clues in ancient rocks suggest that 2.32 billion to 2.1 billion years ago, shallow coastal waters held enough oxygen to support oxygen-hungry life-forms including some animals, researchers report the week of January 16 in the Proceedings of the National Academy of Sciences. But the first animal fossils, sponges, don’t appear until around 650 million years ago, following a period of scant oxygen known as the boring billion (SN: 11/14/15, p. 18).
“As far as environmental conditions were concerned, things were favorable for this evolutionary step to happen,” says study coauthor Andrey Bekker, a sedimentary geologist at the University of California, Riverside. Something else must have stalled the rise of animals, he says.

Microbes began flooding Earth with oxygen around 2.3 billion years ago during the Great Oxidation Event. This breath of oxygen enabled the eventual emergence of complex, oxygen-dependent life-forms called eukaryotes, an evolutionary line that would later include animals and plants. Scientists have proposed that the Great Oxidation Event wasn’t a smooth rise, but contained an “overshoot” during which oxygen concentrations momentarily peaked before dropping to a lower, stable level during the boring billion. Whether that overshoot was enough to support animals was unclear, though.

Bekker and colleagues tackled this question using a relatively new way to measure ancient oxygen. Rock weathering can wash the element selenium into the oceans. In oxygen-free waters, all of the selenium settles onto the seafloor. But in water with at least some oxygen, only a fraction of the selenium is deposited. And the selenium that is laid down is disproportionately that of a lighter isotope of the element, leaving atoms of a heavier isotope to be deposited elsewhere.
If ancient coasts contained relatively abundant oxygen, the researchers expected to find more light selenium close to shore and more heavy selenium in deeper, oxygen-deprived waters. Analyzing shales formed under deep waters around the world, the researchers found just such an isotope segregation. These shales had an abundance of the heavier selenium, leading researchers to infer that the lighter version of the element was concentrated closer to shore.
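The mass balance behind that inference is simple enough to sketch. Below is a minimal Rayleigh-style fractionation model in Python; the starting composition and fractionation factor are illustrative assumptions, not values from the study. It shows how burying only part of the dissolved selenium, with the buried fraction isotopically light, leaves the selenium remaining in seawater (and eventually deposited in deep-water shales) enriched in the heavy isotope.

```python
import math

def residual_delta(delta0, eps, f):
    """Rayleigh fractionation: isotopic composition (per mil) of the
    selenium still dissolved in seawater once a fraction (1 - f) has
    been buried. eps < 0 means the buried selenium is isotopically
    light relative to the dissolved pool."""
    return delta0 + eps * math.log(f)

# Illustrative numbers only -- not the study's measurements.
delta0 = 0.0   # starting seawater composition, per mil
eps = -1.0     # assumed offset of buried vs. dissolved selenium, per mil

for f in (1.0, 0.8, 0.5, 0.2):
    print(f"fraction left in seawater {f:.1f}: "
          f"residual composition {residual_delta(delta0, eps, f):+.2f} per mil")
```

When everything settles out, as in oxygen-free water, the buried selenium simply matches its source and no isotopic shift is recorded; the heavy-isotope signal in deep-water shales appears only when oxygenated coastal waters strip out part of the selenium first.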

Oxygen concentrations in coastal waters reached at least about 1 percent of present-day levels and “were flirting with the limits of what complex life can survive,” proposes study coauthor Michael Kipp, a geochemist at the University of Washington in Seattle. While the environment was probably suitable for eukaryotes, life hadn’t evolved enough by this point to take advantage of the situation, Kipp proposes. The appearance of eukaryotes in the fossil record would take hundreds of millions of years of further evolution (SN Online: 5/18/16), he says, and the first animals even longer.
Tracking selenium is such a new technique, though, that “interpretations could change as we better understand how it works,” says Philip Pogge von Strandmann, a geochemist at University College London. Currently the method is “tricky,” he says, especially for precisely estimating oxygen concentrations.

Construction of tiny, fluid-filled devices inspired by Legos

Legos have provided the inspiration for small, fluid-ferrying devices that can be built up brick by brick.

Tools for manipulating tiny amounts of liquid, known as microfluidic devices, can be used to perform blood tests, detect contaminants in water or simulate biological features like human blood vessels. The devices are easily portable (about the size of a quarter) and require only small samples of liquid.

But fabricating such devices is not easy. Each new application requires a different configuration of twisty little passages, demanding a brand new design that must be molded or 3-D printed.

Scientists from the University of California, Irvine created Lego-style blocks out of a polymer called PDMS (polydimethylsiloxane). Their bricks contained minuscule channels, half a millimeter wide, that allowed liquid to flow from brick to brick with no leaks. New devices could be created quickly by rearranging standard blocks into various configurations, the scientists report January 24 in the Journal of Micromechanics and Microengineering.

Analysis finds gender bias in peer-reviewer picks

Gender bias works in subtle ways, even in the scientific process. The latest illustration of that: Scientists recommend women less often than men as reviewers for scientific papers, a new analysis shows. That seemingly minor oversight is yet another missed opportunity for women that might end up having an impact on hiring, promotions and more.

Peer review is one of the bricks in the foundation supporting science. A researcher’s results don’t get published in a journal until they successfully pass through a gauntlet of scientific peers, who scrutinize the paper for faulty findings, gaps in logic or less-than-meticulous methods. The scientist submitting the paper gets to suggest names for those potential reviewers. Scientific journal editors may contact some of the recommended scientists, and then reach out to a few more.

But peer review isn’t just about the paper (and scientist) being examined. Being the one doing the reviewing “has a number of really positive benefits,” says Brooks Hanson, an earth scientist and director of publications at the American Geophysical Union in Washington, D.C. “You read papers differently as a reviewer than you do as a reader or author. You look at issues differently. It’s a learning experience in how to write papers and how to present research.”

Serving as a peer reviewer can also be a networking tool for scientific collaborations, as reviewers seek out authors whose work they admire. And of course, scientists put the journals they review for on their resumes when they apply for faculty positions, research grants and awards.

But Hanson didn’t really think of looking at who got invited to be peer reviewers until Marcia McNutt, a geophysicist and then editor in chief of Science, stopped by his office. McNutt — now president of the National Academy of Sciences in Washington, D.C. — was organizing a conference on gender bias, and asked if Hanson might be able to get any data. The American Geophysical Union is a membership organization for scientists in Earth and space sciences, and it publishes 20 scientific journals. If Hanson could associate the gender and age of the members with the papers they authored and edited, he might be able to see if there was gender bias in who got picked for peer review.

“We were skeptical at first,” he says. Hanson knew AGU had data for who reviewed and submitted papers to the journals, as well as data for AGU members. But he was unsure how well the two datasets would match up. To find out, he turned to Jory Lerback, who was then a data analyst with AGU. Merging the datasets of all the scientific articles submitted to AGU journals from 2012 to 2015 gave Lerback and Hanson a total of 24,368 paper authors and 62,552 peer reviews performed by 14,919 reviewers. For those papers, they had 97,083 author-named reviewer suggestions, and 118,873 editor-named reviewer requests.

While women were authors less often and submitted fewer papers, their papers were accepted 61 percent of the time. Male authors had papers accepted 57 percent of the time. “We were surprised by that,” Hanson says. While one reviewer of Hanson and Lerback’s analysis (a peer reviewer on a paper about peer review surely must feel the irony) suggested that women might be getting higher acceptance rates due to reverse discrimination, Hanson disagrees. He wonders whether “[women] are putting more care into the submission of the papers.” Men might be more inclined to take risks, possibly submitting papers to a journal that is beyond their work’s reach. Women, he says, might be more cautious.
When suggesting reviewers, male authors suggested female reviewers only 15 percent of the time, and male editors suggested women just 17 percent of the time. Female authors suggested women reviewers 21 percent of the time, and female editors suggested them 22 percent of the time. The result? Women made up only 20 percent of reviewers, even though 28 percent of AGU members are women and 27 percent of the published first authors in the study were women.

The Earth and space sciences have historically had far fewer women than men, but the numbers are improving for younger scientists. “We thought maybe people were just asking older people to review papers,” says Lerback. But when Lerback and Hanson broke out the results by age, they saw the gender discrepancy knew no age bracket. “It makes us more confident that this is a real issue and not just an age-related phenomenon,” Lerback says. She and Hanson report their analysis in a comment piece in the January 26 Nature.

“It’s incredibly compelling data and one of those ‘why hasn’t someone done this already’ studies,” says Jessi Smith, a social psychologist at Montana State University in Bozeman. While it’s disheartening to see yet another area of bias against women, she notes, “this is good that people are asking these questions and taking a hard look. We can’t solve it if we don’t know about it.”

The time when authors are asked to suggest reviewers might play a role, Smith notes. When scientific papers are submitted online, the field for “suggested reviewers” is often last. “That question for ‘who do you want’ always surprises me,” admits Smith. It’s a very small thing, but she notes that when people are tired and in a hurry, “that’s when these biases take place.”

Knowing that the bias exists is part of the battle, Hanson says, but it’s also time for things to change. He’s pushing for more diversity in the journals’ editorial pool — for gender, nationality and underrepresented minorities. It’s also helpful to remember that suggesting people for peer review isn’t just about the author’s work. “Even small biases, if they happen repeatedly, can have career-related effects,” he says.

Peer-reviewer selection is something that hasn’t gotten a lot of scientific attention. “I think getting a dialog going for everyone to be aware of opportunity gaps, [asking] ‘what can I do to make sure I’m not contributing to this’… I think that will be important to keep editors and authors aware and accountable,” Lerback says. She’s now experiencing the peer-review life firsthand as a graduate student at the University of Utah in Salt Lake City, where she’s starting a group to discuss bias, to make sure the conversations keep going.

Physically abused kids learn to fail at social rules for success

Physical abuse at home doesn’t just leave kids black and blue. It also bruises their ability to learn how to act at school and elsewhere, contributing to abused children’s well-documented behavior problems.

Derailment of a basic form of social learning has, for the first time, been linked to these children’s misbehavior years down the line, psychologist Jamie Hanson of the University of Pittsburgh and colleagues report February 3 in the Journal of Child Psychology and Psychiatry. Experiments indicate that physically abused kids lag behind their nonabused peers when it comes to learning to make choices that consistently lead to a reward, even after many trials.
“Physically abused kids fail to adjust flexibly to new behavioral rules in contexts outside their families,” says coauthor Seth Pollak, a psychologist at the University of Wisconsin–Madison. Youth who have endured hitting, choking and other bodily assaults by their parents view the world as a place where hugs and other gratifying responses to good behavior occur inconsistently, if at all. So these youngsters stick to what they learned early in life from volatile parents — rewards are rare and unpredictable, but punishment is always imminent. Kids armed with this expectation of futility end up fighting peers on the playground and antagonizing teachers, Pollak says.

If the new finding holds up, it could lead to new educational interventions for physically abused youth, such as training in how to distinguish safe from dangerous settings and in how to control impulses, Pollak says. Current treatments focus on helping abused children feel safe and less anxious.

More than 117,000 U.S. children were victims of documented physical abuse in 2015, the latest year for which data are available.

“Inflexible reward learning is one of many possible pathways from child maltreatment to later behavior problems,” says Stanford University psychologist Kathryn Humphreys, who did not participate in the new study. Other possible influences on physically abused kids’ disruptive acts include a heightened sensitivity to social stress and a conviction that others always have bad intentions, Humphreys suggests.

Hanson’s team studied 41 physically abused and 40 nonabused kids, ages 12 to 17. Participants came from various racial backgrounds and lived with their parents in poor or lower middle-class neighborhoods. All the youth displayed comparable intelligence and school achievement.
In one experiment, kids saw a picture of a bell or a bottle and were told to choose one to earn points to trade in for toys. Kids who accumulated enough points could select any of several desirable toys displayed in the lab, including a chemistry set and a glow-in-the-dark model of the solar system. Fewer points enabled kids to choose plainer toys, such as a Frisbee or colored pencils.

Over 100 trials, one picture chosen at random by the researchers at the start of the experiment resulted in points 80 percent of the time. The other picture yielded points 20 percent of the time. In a second round of 100 trials using pictures of a bolt and a button, one randomly chosen image resulted in points 70 percent of the time versus 30 percent for the other image.
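The setup is a classic two-choice probabilistic reward-learning task. As a rough illustration of how a learner comes to favor the better picture, here is a minimal simulation of one 100-trial block; the incremental value-learning rule and its parameters are assumptions for this sketch, not the study’s model of the children’s behavior.

```python
import random

def run_block(n_trials=100, p_good=0.8, p_bad=0.2, lr=0.1, explore=0.1):
    """Simulate a simple value-learning agent on a two-choice task
    like the bell/bottle block (all parameters are assumed)."""
    value = [0.0, 0.0]   # learned value of each picture; index 0 pays off more
    good_choices = 0
    for _ in range(n_trials):
        if random.random() < explore:
            choice = random.randrange(2)   # occasional exploration
        else:
            choice = 0 if value[0] >= value[1] else 1
        p = p_good if choice == 0 else p_bad
        reward = 1.0 if random.random() < p else 0.0
        value[choice] += lr * (reward - value[choice])   # incremental update
        good_choices += (choice == 0)
    return good_choices

random.seed(1)
runs = [run_block() for _ in range(1000)]
print("better picture chosen on", sum(runs) / len(runs),
      "of 100 trials, on average")
```

Slowing the learning rate or raising the exploration rate drags the agent’s score down toward chance, a crude analog of the lag the abused kids showed.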

Both groups chose higher-point images more often as trials progressed, indicating that all kids gradually learned images’ values. But physically abused kids lagged behind: They chose the more-rewarding image on an average of 131 out of 200 trials, compared with 154 out of 200 trials for nonabused youth. The abused kids were held back by what they had learned at home, Pollak suspects.

Too many stinkbugs spoil the wine

How many stressed-out stinkbugs does it take to spoil a batch of wine? More than three per grape cluster, new research says.

Stinkbugs are a pest among vintners because of the bugs’ taste for wine grapes and namesake foul smell. When accidentally harvested with the grapes and fermented during the wine-making process, the live insects can release their stink and ruin the wine (SN: 5/5/07, p. 285). The newly determined threshold is three stinkbugs per grape cluster, researchers from Oregon State University in Corvallis report in the Journal of Agricultural and Food Chemistry. More than that produced red wine that tasted musty, as judged by a consumer panel. Quality tanked with rising levels of the stress compound (E)-2-decenal, which smells like coriander.

White wine lovers can rest easy; stinkbugs don’t seem to affect its flavor because white is processed differently than red.

If you think the Amazon jungle is completely wild, think again

Welcome to the somewhat civilized jungle. Plant cultivation by native groups has shaped the landscape of at least part of South America’s Amazon forests for more than 8,000 years, researchers say.

Of dozens of tree species partly or fully domesticated by ancient peoples, 20 kinds of fruit and nut trees still cover large chunks of Amazonian forests, say ecologist Carolina Levis of the National Institute for Amazonian Research in Manaus, Brazil, and colleagues. Numbers and variety of domesticated tree species increase on and around previously discovered Amazonian archaeological sites, the scientists report in the March 3 Science.
Domesticated trees are “a surviving heritage of the Amazon’s past inhabitants,” Levis says.

The new report, says archaeologist Peter Stahl of the University of Victoria in Canada, adds to previous evidence that “resourceful and highly developed indigenous cultures” intentionally altered some Amazonian forests.

Southwestern and northwestern Amazonian forests contain the greatest numbers and diversity of domesticated tree species, Levis’ team found. Large stands of domesticated Brazil nut trees remain crucial resources for inhabitants of southwestern forests today.

Over the past 300 years, modern Amazonian groups may have helped spread some domesticated tree species, Levis’ group says. For instance, 17th century voyagers from Portugal and Spain established cacao plantations in southwestern Amazonian forests that exploited trees already cultivated by local communities, the scientists propose.

Their findings build on a 2013 survey of forest plots all across the Amazon led by ecologist and study coauthor Hans ter Steege of the Naturalis Biodiversity Center in Leiden, the Netherlands. Of approximately 16,000 Amazonian tree species, just 227 accounted for half of all trees, the 2013 study concluded.
Of that number, 85 species display physical features signaling partial or full domestication by native Amazonians before European contact, the new study finds. Studies of plant DNA and plant remains from the Amazon previously suggested that domestication started more than 8,000 years ago. Crucially, 20 domesticated tree species — five times more than the number expected by chance — dominate their respective Amazonian landscapes, especially near archaeological sites and rivers where ancient humans likely congregated, Levis’ team says.

Archaeologists, ecologists and crop geneticists have so far studied only a small slice of the Amazon, which covers an area equivalent to about 93 percent of the contiguous U.S. states.

Levis and her colleagues suspect ancient native groups domesticated trees and plants throughout much of the region. But some researchers, including ecologist Crystal McMichael of the University of Amsterdam, say it’s more likely that ancient South Americans domesticated trees just in certain parts of the Amazon. In the new study, only Brazil nut trees show clear evidence of expansion into surrounding forests from an area of ancient domestication, McMichael says. Other tree species may have mainly been domesticated by native groups or Europeans in the past few hundred years, she says.

Quantum counterfeiters might succeed

Scientists have created an ultrasecure form of money using quantum mechanics — and immediately demonstrated a potential security loophole.

Under ideal conditions, quantum currency is impossible to counterfeit. But thanks to the messiness of reality, a forger with access to sophisticated equipment could skirt that quantum security if banks don’t take appropriate precautions, scientists report March 1 in npj Quantum Information. Quantum money as a concept has been around since the 1970s, but this is the first time anyone has created and counterfeited quantum cash, says study coauthor Karel Lemr, a quantum physicist at Palacký University Olomouc in the Czech Republic.
Instead of paper banknotes, the researchers’ quantum bills are minted in light. To transfer funds, a series of photons — particles of light — would be transmitted to a bank using the photons’ polarizations, the orientation of their electromagnetic waves, to encode information. (The digital currency Bitcoin is similar in that there’s no bill you can hold in your hand. But quantum money has an extra layer of security, backed by the power of quantum mechanics.)

To illustrate their technique in a fun way, the researchers transmitted a pixelated picture of a banknote — an old Austrian bill depicting famed quantum physicist Erwin Schrödinger — using photons’ polarizations to stand for grayscale shades. In a real quantum money system, each bill would be different and the photon polarizations would be distributed randomly, rather than forming a picture. The polarizations would create a serial number–like code the bank could check to verify that the funds are legit.

A criminal intercepting the photons couldn’t copy them accurately because quantum information can’t be perfectly duplicated. “This is actually the cornerstone of security of quantum money,” says Lemr.
But the realities of dealing with quantum particles complicate matters. Because single photons are easily lost or garbled during transmission, banks would have to accept partial quantum bills, analogous to a dollar with a corner torn off. That means a crook might be able to make forgeries that aren’t perfect, but are good enough to pass muster.
Lemr and colleagues used an optimal cloner, a device that comes as close as possible to copying quantum information, to attempt a fake. The researchers showed that a bank would accept a forged bill if its standard for accuracy wasn’t high enough: to catch the fakes, a bank must demand that more than about 84 percent of the received photons’ polarizations match the original.
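The role of that threshold is easy to see in a toy Monte Carlo. Suppose, per the figure above, each cloned photon matches the genuine polarization with probability about 0.84 (the exact cloning fidelity depends on the states used). A bank that demands a match rate safely above that defeats the forger; a laxer bank waves the fakes through. This sketch is an illustration of the statistics, not the authors’ code:

```python
import random

def bank_accepts(n_photons, clone_fidelity, threshold):
    """One forged bill: each photon's polarization matches the genuine
    bill with probability clone_fidelity; the bank accepts the bill if
    the measured match rate meets its threshold."""
    matches = sum(random.random() < clone_fidelity for _ in range(n_photons))
    return matches / n_photons >= threshold

random.seed(0)
n_bills = 2000
for threshold in (0.80, 0.84, 0.88):
    accepted = sum(bank_accepts(1000, 0.84, threshold) for _ in range(n_bills))
    print(f"bank threshold {threshold:.2f}: "
          f"{100 * accepted / n_bills:.1f}% of forgeries accepted")
```

With 1,000 photons per bill, the forged match rate clusters tightly around 84 percent, so a threshold a few points above that rejects essentially every fake, while one a few points below accepts essentially all of them.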

Previously, this vulnerability “wasn’t explicitly pointed out, but it’s not surprising,” says theoretical computer scientist Thomas Vidick of Caltech, who was not involved in the research. The result, he says, indicates that banks must be stringent enough in their standards to prove the bills they receive are real.

Most Americans like science — and are willing to pay for it

Americans don’t hate science. Quite the contrary. In fact, 79 percent of Americans think science has made their lives easier, a 2014 Pew Research Center survey found. More than 60 percent of people also believe that government funding for science is essential to its success.

But should the United States spend more money on scientific research than it already does? A layperson’s answer to that question depends on how much that person thinks the government already spends on science, a new study shows. When people find out just how much — or rather, how little — of the federal budget goes to science, support for more funding suddenly jumps.

To see how people’s opinions of public science spending were influenced by accurate information, mechanical engineer Jillian Goldfarb and political scientist Douglas Kriner, both at Boston University, placed a small experiment into the 2014 Cooperative Congressional Election Study. The online survey was given to 1,000 Americans, carefully selected to represent the demographics of the United States. The questions were designed to be nonpartisan, and the survey itself was conducted in 2014, long before the 2016 election.

The survey was simple. First, participants were asked to estimate what percentage of the federal budget was spent on scientific research. Once they’d guessed, half of the participants were told the actual amount that the federal government allocates for nondefense spending on research and development. In 2014, that figure was 1.6 percent of the budget, or about $67 billion. Finally, all the participants were asked if federal spending on science should be increased, decreased or kept the same.

The majority of participants had no idea how much money the government spends on science, and most wildly overestimated the actual amount. About half of the respondents estimated federal spending for research at somewhere between 5 and 20 percent of the budget. A quarter of participants put the figure at more than 20 percent of the budget — one very hefty chunk of change. The last 25 percent of respondents estimated that 1 to 2 percent of federal spending went to science.

When participants received no information about how much the United States spent on research, only about 40 percent of them supported more funding. But when they were confronted with the real numbers, support for more funding leapt from 40 to 60 percent.

Those two numbers hover on either side of 50 percent, but Kriner notes, “media coverage [would] go from ‘minority’ to ‘majority’” in favor of more funding — a potentially powerful message. What’s more, the support for science was present in Democrats and Republicans alike, Kriner and Goldfarb report February 1 in Science Communication.
“I think it contributes to our understanding of the aspects of federal spending that people don’t understand very well,” says Brendan Nyhan, a political scientist at Dartmouth College in Hanover, N.H. It’s not surprising that most people don’t know how much the government is spending on research. Nyhan points out that most people probably don’t know how much the government spends on education or foreign aid either.

When trying to gather more support for science funding, Goldfarb says, “tell people how little we spend, and how much they get in return.” Science as a whole isn’t that controversial. No one wants to stop exploring the universe or curing diseases, after all.

But when people — whether politicians or that guy in your Facebook feed — say they want to cut science funding, they won’t be speaking about science as a whole. “There’s a tendency to overgeneralize” and use a few controversial or perceived-to-be-wasteful projects to stand in for all of science, Nyhan warns.

When politicians want to cut funding, he notes, they focus on specific controversial studies or areas — such as climate change, stem cells or genetically modified organisms. They might highlight studies that seem silly, such as those that ended up in former Senator Tom Coburn’s “Wastebook” (a mantle now taken up by Senator Jeff Flake). Take those issues to the constituents, and funding might end up in jeopardy anyway.

Kriner hopes their study’s findings might prove useful even for controversial research areas. “One of the best safeguards against cuts is strong public support for a program,” he explains. “Building public support for science spending may help insulate it from budget cuts — and our research suggests a relatively simple way to increase public support for scientific research.”

But he worries that public support may not stay strong if science becomes too much of a political pawn. The study showed that both Republicans and Democrats supported more funding for science when they knew how little was spent. But “if the current administration increases its attacks on science spending writ large … it could potentially politicize federal support for all forms of scientific research,” Kriner says. And the stronger the politics, the more people on both sides of the aisle grow resistant to hearing arguments from the other side.

Politics aside, Goldfarb and Kriner’s data show that Americans really do like and support science. They want to pay for it. And they may even want to shell out some more money, when they know just how little they already spend.