Cell biologists learn how Zika kills brain cells, devise schemes to stop it

SAN FRANCISCO — Cell biologists are learning more about how the Zika virus disrupts brain cells to cause the birth defect microcephaly, in which a baby’s brain and head are smaller than usual. Meantime, several strategies to combat the virus show preliminary promise, researchers reported at the American Society for Cell Biology’s annual meeting. Among the findings:

Brain cell die-off
Zika causes fetal brain cells neighboring an infected cell to commit suicide, David Doobin of Columbia University Medical Center reported December 6. In work with mice and rats, Doobin and colleagues found suggestions that the cells’ death might be the body’s attempt to limit spread of the virus.

The researchers applied techniques they had previously used to investigate a genetic cause of microcephaly to narrow down when during pregnancy the virus is most likely to cause the brain to shrink. Timing of the virus’s effect varied by strain. For one from Puerto Rico, brain cell die-off happened in mice only in the first two trimesters. But a strain from Honduras could kill developing brain cells later into pregnancy. Microcephaly can lead to seizures, mental impairment, delays in speech and movement and other problems.

Enzyme stopper
Disrupting a Zika enzyme could help stop the virus. The enzyme, NS3, causes problems when it gloms on to centrioles, a pair of structures inside cells needed to divvy up chromosomes when cells divide, Andrew Kodani, a cell biologist at Boston Children’s Hospital, reported December 6.

Zika, dengue and other related viruses, known as flaviviruses, all use a version of NS3 to chop joined proteins apart so they can do their jobs. (Before chopping, Zika’s 10 proteins are made as one long protein.) But once NS3 finishes slicing virus proteins, the enzyme moves to the centrioles, where it can mess with their assembly, Kodani and colleagues found. Something similar happens in some genetic forms of microcephaly.

A chemical called an anthracene can help fend off dengue, so Kodani and colleagues tested anthracene on Zika as well. Small amounts of the chemical can prevent NS3 from tinkering with the centrioles, the researchers found. So far the work has only been done in lab dishes.

Protein face-off
Another virulent virus could disable Zika. Work with cells grown in lab dishes suggests a bit of protein, or peptide, from the hepatitis C virus could muck up Zika’s proteins.

The peptide interferes with HSP70, a protein that helps assemble complexes of other proteins, including ones involved in protein production. That peptide and other compounds were already known to inhibit hepatitis C replication, UCLA virologist Ronik Khachatoorian and colleagues had previously discovered. The hepatitis C virus peptide stops Zika virus proteins from being made and hampers assembly of the virus, Khachatoorian reported December 5.

The peptide has not been tested in animals yet.

Coastal waters were an oxygen oasis 2.3 billion years ago

Earth was briefly ripe for the evolution of animals hundreds of millions of years before they first appeared, researchers propose.

Chemical clues in ancient rocks suggest that 2.32 billion to 2.1 billion years ago, shallow coastal waters held enough oxygen to support oxygen-hungry life-forms including some animals, researchers report the week of January 16 in the Proceedings of the National Academy of Sciences. But the first animal fossils, sponges, don’t appear until around 650 million years ago, following a period of scant oxygen known as the boring billion (SN: 11/14/15, p. 18).
“As far as environmental conditions were concerned, things were favorable for this evolutionary step to happen,” says study coauthor Andrey Bekker, a sedimentary geologist at the University of California, Riverside. Something else must have stalled the rise of animals, he says.

Microbes began flooding Earth with oxygen around 2.3 billion years ago during the Great Oxidation Event. This breath of oxygen enabled the eventual emergence of complex, oxygen-dependent life-forms called eukaryotes, an evolutionary line that would later include animals and plants. Scientists have proposed that the Great Oxidation Event wasn’t a smooth rise, but contained an “overshoot” during which oxygen concentrations momentarily peaked before dropping to a lower, stable level during the boring billion. Whether that overshoot was enough to support animals was unclear, though.

Bekker and colleagues tackled this question using a relatively new way to measure ancient oxygen. Rock weathering can wash the element selenium into the oceans. In oxygen-free waters, all of the selenium settles onto the seafloor. But in water with at least some oxygen, only a fraction of the selenium is deposited. And the selenium that is laid down is disproportionately that of a lighter isotope of the element, leaving atoms of a heavier isotope to be deposited elsewhere.
If ancient coasts contained relatively abundant oxygen, the researchers expected to find more light selenium close to shore and more heavy selenium in deeper, oxygen-deprived waters. Analyzing shales formed under deep waters around the world, the researchers found just such an isotope segregation. These shales had an abundance of the heavier selenium, leading researchers to infer that the lighter version of the element was concentrated closer to shore.
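
The reasoning behind that inference can be made explicit with a toy isotope mass balance. In the sketch below, a fraction of the incoming selenium is buried nearshore with a bias toward the lighter isotope, and the rest, now isotopically heavier, ends up in deep-water shales. This is only an illustration of the logic; the numbers and the simple two-reservoir split are assumptions, not values or models from the study.

```python
# Toy selenium isotope mass balance (illustrative numbers only, not from the study).
# delta_input = f * delta_near + (1 - f) * delta_off, with delta_near = delta_off + offset,
# where "near" is selenium buried in oxygenated nearshore waters (isotopically light,
# so offset < 0) and "off" is selenium deposited in deep-water shales.

def offshore_delta(delta_input=0.0, f_nearshore=0.5, nearshore_offset=-1.0):
    """Isotopic composition (per mil) of selenium reaching deep-water shales."""
    return delta_input - f_nearshore * nearshore_offset

print(offshore_delta())                  # +0.5 per mil: deep shales end up isotopically heavy
print(offshore_delta(f_nearshore=0.0))   # 0.0: with no nearshore burial, no heavy-isotope shift
```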

Oxygen concentrations in coastal waters reached at least roughly 1 percent of present-day levels and “were flirting with the limits of what complex life can survive,” proposes study coauthor Michael Kipp, a geochemist at the University of Washington in Seattle. While the environment was probably suitable for eukaryotes, life hadn’t evolved enough by this point to take advantage of the situation, Kipp proposes. The appearance of eukaryotes in the fossil record would take hundreds of millions more years of evolution (SN Online: 5/18/16), he says, and the first animals even longer.
Tracking selenium is such a new technique, though, that “interpretations could change as we better understand how it works,” says Philip Pogge von Strandmann, a geochemist at University College London. Currently the method is “tricky,” he says, especially for precisely estimating oxygen concentrations.

Construction of tiny, fluid-filled devices inspired by Legos

Legos have provided the inspiration for small, fluid-ferrying devices that can be built up brick-by-brick.

Tools for manipulating tiny amounts of liquid, known as microfluidic devices, can be used to perform blood tests, detect contaminants in water or simulate biological features like human blood vessels. The devices are easily portable, about the size of a quarter, and require only small samples of liquid.

But fabricating such devices is not easy. Each new application requires a different configuration of twisty little passages, demanding a brand new design that must be molded or 3-D printed.

Scientists from the University of California, Irvine created Lego-style blocks out of a silicone polymer called PDMS. Their bricks contained minuscule channels, half a millimeter wide, that allowed liquid to flow from brick to brick with no leaks. New devices could be created quickly by rearranging standard blocks into various configurations, the scientists report January 24 in the Journal of Micromechanics and Microengineering.

Analysis finds gender bias in peer-reviewer picks

Gender bias works in subtle ways, even in the scientific process. The latest illustration of that: Scientists recommend women less often than men as reviewers for scientific papers, a new analysis shows. That seemingly minor oversight is yet another missed opportunity for women that might end up having an impact on hiring, promotions and more.

Peer review is one of the bricks in the foundation supporting science. A researcher’s results don’t get published in a journal until they successfully pass through a gauntlet of scientific peers, who scrutinize the paper for faulty findings, gaps in logic or less-than-meticulous methods. The scientist submitting the paper gets to suggest names for those potential reviewers. Scientific journal editors may contact some of the recommended scientists, and then reach out to a few more.

But peer review isn’t just about the paper (and scientist) being examined. Being the one doing the reviewing “has a number of really positive benefits,” says Brooks Hanson, an earth scientist and director of publications at the American Geophysical Union in Washington, D.C. “You read papers differently as a reviewer than you do as a reader or author. You look at issues differently. It’s a learning experience in how to write papers and how to present research.”

Serving as a peer reviewer can also be a networking tool for scientific collaborations, as reviewers seek out authors whose work they admire. And of course, scientists put the journals they review for on their resumes when they apply for faculty positions, research grants and awards.

But Hanson didn’t really think of looking at who got invited to be peer reviewers until Marcia McNutt, a geophysicist and then editor in chief of Science, stopped by his office. McNutt — now president of the National Academy of Sciences in Washington, D.C. — was organizing a conference on gender bias, and asked if Hanson might be able to get any data. The American Geophysical Union is a membership organization for scientists in Earth and space sciences, and it publishes 20 scientific journals. If Hanson could associate the gender and age of the members with the papers they authored and edited, he might be able to see if there was gender bias in who got picked for peer review.

“We were skeptical at first,” he says. Hanson knew AGU had data for who reviewed and submitted papers to the journals, as well as data for AGU members. But he was unsure how well the two datasets would match up. To find out, he turned to Jory Lerback, who was then a data analyst with AGU. Merging the datasets of all the scientific articles submitted to AGU journals from 2012 to 2015 gave Lerback and Hanson a total of 24,368 paper authors and 62,552 peer reviews performed by 14,919 reviewers. For those papers, they had 97,083 author-named reviewer suggestions, and 118,873 editor-named reviewer requests.

While women were authors less often and submitted fewer papers, their papers were accepted 61 percent of the time. Male authors had papers accepted 57 percent of the time. “We were surprised by that,” Hanson says. While one reviewer of Hanson and Lerback’s analysis (a peer reviewer on a paper about peer review surely must feel the irony) suggested that women might be getting higher acceptance rates due to reverse discrimination, Hanson disagrees. He wonders if “[women] are putting more care into the submission of the papers.” Men might be more inclined to take more risks, possibly submitting papers to a journal that is beyond their work’s reach. Women, he says, might be more cautious.
When suggesting reviewers, male authors suggested female reviewers only 15 percent of the time, and male editors suggested women just 17 percent of the time. Female authors suggested women reviewers 21 percent of the time, and female editors suggested them 22 percent of the time. The result? Women made up only 20 percent of reviewers, even though 28 percent of AGU members are women and 27 percent of the published first authors in the study were women.
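
The arithmetic behind those percentages is straightforward tabulation. The sketch below shows how such rates could be computed from a table of reviewer suggestions; the column names and toy data are stand-ins for illustration, not the AGU dataset.

```python
# Hypothetical tabulation of reviewer-suggestion rates by suggester gender.
# Column names and data are made up; the real analysis used AGU's author,
# reviewer and membership records.
import pandas as pd

suggestions = pd.DataFrame({
    "suggester_gender": ["M", "M", "M", "F", "F", "M", "F", "M"],
    "suggested_gender": ["M", "M", "F", "F", "M", "M", "F", "M"],
})

# fraction of each group's suggestions that go to women
rates = (suggestions
         .assign(suggested_female=suggestions["suggested_gender"].eq("F"))
         .groupby("suggester_gender")["suggested_female"]
         .mean())
print(rates)   # in this toy table: female suggesters pick women ~67% of the time, male ~20%
```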

The Earth and space sciences have historically had far fewer women than men, but the numbers are improving for younger scientists. “We thought maybe people were just asking older people to review papers,” says Lerback. But when Lerback and Hanson broke out the results by age, they saw the gender discrepancy knew no age bracket. “It makes us more confident that this is a real issue and not just an age-related phenomenon,” Lerback says. She and Hanson report their analysis in a comment piece in the January 26 Nature.

“It’s incredibly compelling data and one of those ‘why hasn’t someone done this already’ studies,” says Jessi Smith, a social psychologist at Montana State University in Bozeman. While it’s disheartening to see yet another area of bias against women, she notes, “this is good that people are asking these questions and taking a hard look. We can’t solve it if we don’t know about it.”

The time when authors are asked to suggest reviewers might play a role, Smith notes. When scientific papers are submitted online, the field for “suggested reviewers” is often last. “That question for ‘who do you want’ always surprises me,” admits Smith. It’s a very small thing, but she notes that when people are tired and in a hurry, “that’s when these biases take place.”

Knowing that the bias exists is part of the battle, Hanson says, but it’s also time for things to change. He’s pushing for more diversity in the journals’ editorial pool — for gender, nationality and underrepresented minorities. It’s also helpful to remember that suggesting people for peer review isn’t just about the author’s work. “Even small biases, if they happen repeatedly, can have career-related effects,” he says.

Peer-reviewer selection is something that hasn’t gotten a lot of scientific attention. “I think getting a dialog going for everyone to be aware of opportunity gaps, [asking] ‘what can I do to make sure I’m not contributing to this’… I think that will be important to keep editors and authors aware and accountable,” Lerback says. She’s now experiencing the peer-review life firsthand as a graduate student at the University of Utah in Salt Lake City. While there, she’s starting a group to discuss bias — to make sure that the conversations keep going.

Physically abused kids learn to fail at social rules for success

Physical abuse at home doesn’t just leave kids black and blue. It also bruises their ability to learn how to act at school and elsewhere, contributing to abused children’s well-documented behavior problems.

Derailment of a basic form of social learning has, for the first time, been linked to these children’s misbehavior years down the line, psychologist Jamie Hanson of the University of Pittsburgh and colleagues report February 3 in the Journal of Child Psychology and Psychiatry. Experiments indicate that physically abused kids lag behind their nonabused peers when it comes to learning to make choices that consistently lead to a reward, even after many trials.
“Physically abused kids fail to adjust flexibly to new behavioral rules in contexts outside their families,” says coauthor Seth Pollak, a psychologist at the University of Wisconsin–Madison. Youth who have endured hitting, choking and other bodily assaults by their parents view the world as a place where hugs and other gratifying responses to good behavior occur inconsistently, if at all. So these youngsters stick to what they learned early in life from volatile parents — rewards are rare and unpredictable, but punishment is always imminent. Kids armed with this expectation of futility end up fighting peers on the playground and antagonizing teachers, Pollak says.

If the new finding holds up, it could lead to new educational interventions for physically abused youth, such as training in how to distinguish safe from dangerous settings and in how to control impulses, Pollak says. Current treatments focus on helping abused children feel safe and less anxious.

More than 117,000 U.S. children were victims of documented physical abuse in 2015, the latest year for which data are available.

“Inflexible reward learning is one of many possible pathways from child maltreatment to later behavior problems,” says Stanford University psychologist Kathryn Humphreys, who did not participate in the new study. Other possible influences on physically abused kids’ disruptive acts include a heightened sensitivity to social stress and a conviction that others always have bad intentions, Humphreys suggests.

Hanson’s team studied 41 physically abused and 40 nonabused kids, ages 12 to 17. Participants came from various racial backgrounds and lived with their parents in poor or lower middle-class neighborhoods. All the youth displayed comparable intelligence and school achievement.
In one experiment, kids saw a picture of a bell or a bottle and were told to choose one to earn points to trade in for toys. Kids who accumulated enough points could select any of several desirable toys displayed in the lab, including a chemistry set and a glow-in-the-dark model of the solar system. Fewer points enabled kids to choose plainer toys, such as a Frisbee or colored pencils.

Over 100 trials, one picture chosen at random by the researchers at the start of the experiment resulted in points 80 percent of the time. The other picture yielded points 20 percent of the time. In a second round of 100 trials using pictures of a bolt and a button, one randomly chosen image resulted in points 70 percent of the time versus 30 percent for the other image.

Both groups chose higher-point images more often as trials progressed, indicating that all kids gradually learned images’ values. But physically abused kids lagged behind: They chose the more-rewarding image on an average of 131 out of 200 trials, compared with 154 out of 200 trials for nonabused youth. The abused kids were held back by what they had learned at home, Pollak suspects.
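
To get a concrete feel for the task, the sketch below simulates the two 100-trial blocks with a simple reinforcement-learning agent. The learning rule and learning rate are assumptions made for illustration, not the researchers’ model of the children’s behavior.

```python
# Toy simulation of the probabilistic reward task (an 80/20 block, then a 70/30 block).
# The delta-rule learner below is an illustrative stand-in, not the study's model.
import random

def run_block(p_better, p_worse, n_trials=100, learning_rate=0.2, seed=None):
    rng = random.Random(seed)
    values = [0.5, 0.5]              # estimated value of image 0 (better) and image 1
    payoffs = [p_better, p_worse]
    better_choices = 0
    for _ in range(n_trials):
        # pick the image currently believed to be more valuable (ties broken at random)
        if values[0] == values[1]:
            choice = rng.choice([0, 1])
        else:
            choice = 0 if values[0] > values[1] else 1
        if choice == 0:
            better_choices += 1
        reward = 1.0 if rng.random() < payoffs[choice] else 0.0
        values[choice] += learning_rate * (reward - values[choice])   # nudge estimate toward outcome
    return better_choices

total = run_block(0.8, 0.2, seed=1) + run_block(0.7, 0.3, seed=2)
print(f"higher-paying image chosen on {total} of 200 trials")
```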

If you think the Amazon jungle is completely wild, think again

Welcome to the somewhat civilized jungle. Plant cultivation by native groups has shaped the landscape of at least part of South America’s Amazon forests for more than 8,000 years, researchers say.

Of dozens of tree species partly or fully domesticated by ancient peoples, 20 kinds of fruit and nut trees still cover large chunks of Amazonian forests, say ecologist Carolina Levis of the National Institute for Amazonian Research in Manaus, Brazil, and colleagues. Numbers and variety of domesticated tree species increase on and around previously discovered Amazonian archaeological sites, the scientists report in the March 3 Science.
Domesticated trees are “a surviving heritage of the Amazon’s past inhabitants,” Levis says.

The new report, says archaeologist Peter Stahl of the University of Victoria in Canada, adds to previous evidence that “resourceful and highly developed indigenous cultures” intentionally altered some Amazonian forests.

Southwestern and northwestern Amazonian forests contain the greatest numbers and diversity of domesticated tree species, Levis’ team found. Large stands of domesticated Brazil nut trees remain crucial resources for inhabitants of southwestern forests today.

Over the past 300 years, modern Amazonian groups may have helped spread some domesticated tree species, Levis’ group says. For instance, 17th century voyagers from Portugal and Spain established plantations of cacao trees in southwestern Amazonian forests that exploited cacao trees already cultivated by local communities, the scientists propose.

Their findings build on a 2013 survey of forest plots all across the Amazon led by ecologist and study coauthor Hans ter Steege of the Naturalis Biodiversity Center in Leiden, the Netherlands. Of approximately 16,000 Amazonian tree species, just 227 accounted for half of all trees, the 2013 study concluded.
Of that number, 85 species display physical features signaling partial or full domestication by native Amazonians before European contact, the new study finds. Studies of plant DNA and plant remains from the Amazon previously suggested that domestication started more than 8,000 years ago. Crucially, 20 domesticated tree species — five times more than the number expected by chance — dominate their respective Amazonian landscapes, especially near archaeological sites and rivers where ancient humans likely congregated, Levis’ team says.

Archaeologists, ecologists and crop geneticists have so far studied only a small slice of the Amazon, which covers an area equivalent to about 93 percent of the contiguous U.S. states.

Levis and her colleagues suspect ancient native groups domesticated trees and plants throughout much of the region. But some researchers, including ecologist Crystal McMichael of the University of Amsterdam, say it’s more likely that ancient South Americans domesticated trees just in certain parts of the Amazon. In the new study, only Brazil nut trees show clear evidence of expansion into surrounding forests from an area of ancient domestication, McMichael says. Other tree species may have mainly been domesticated by native groups or Europeans in the past few hundred years, she says.

Quantum counterfeiters might succeed

Scientists have created an ultrasecure form of money using quantum mechanics — and immediately demonstrated a potential security loophole.

Under ideal conditions, quantum currency is impossible to counterfeit. But thanks to the messiness of reality, a forger with access to sophisticated equipment could skirt that quantum security if banks don’t take appropriate precautions, scientists report March 1 in npj Quantum Information. Quantum money as a concept has been around since the 1970s, but this is the first time anyone has created and counterfeited quantum cash, says study coauthor Karel Lemr, a quantum physicist at Palacký University Olomouc in the Czech Republic.
Instead of paper banknotes, the researchers’ quantum bills are minted in light. To transfer funds, a series of photons — particles of light — would be transmitted to a bank using the photons’ polarizations, the orientation of their electromagnetic waves, to encode information. (The digital currency Bitcoin is similar in that there’s no bill you can hold in your hand. But quantum money has an extra layer of security, backed by the power of quantum mechanics.)

To illustrate their technique in a fun way, the researchers transmitted a pixelated picture of a banknote — an old Austrian bill depicting famed quantum physicist Erwin Schrödinger — using photons’ polarizations to stand for grayscale shades. In a real quantum money system, each bill would be different and the photon polarizations would be distributed randomly, rather than forming a picture. The polarizations would create a serial number–like code the bank could check to verify that the funds are legit.

A criminal intercepting the photons couldn’t copy them accurately because quantum information can’t be perfectly duplicated. “This is actually the cornerstone of security of quantum money,” says Lemr.
But the realities of dealing with quantum particles complicate matters. Because single photons are easily lost or garbled during transmission, banks would have to accept partial quantum bills, analogous to a dollar with a corner torn off. That means a crook might be able to make forgeries that aren’t perfect, but are good enough to pass muster.
Lemr and colleagues used an optimal cloner, a device that comes as close as possible to copying quantum information, to attempt a fake. The researchers showed that a bank would accept a forged bill if its standard for accuracy wasn’t high enough. To keep out such forgeries, the bank must demand that more than about 84 percent of the received photons’ polarizations match the original.
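
The verification step can be pictured as a simple threshold check. The toy sketch below stands in for the bank’s test: it compares the polarization settings it receives against the ones recorded when the bill was issued and accepts the bill only if the match rate clears a cutoff. Real verification involves quantum measurements; the states, names and numbers here are assumptions for illustration.

```python
# Toy stand-in for the bank's check on a quantum bill: accept only if enough of the
# received polarizations match the recorded ones. Illustrative only.
def verify_bill(received, recorded, threshold=0.84):
    matches = sum(r == o for r, o in zip(received, recorded))
    return matches / len(recorded) > threshold

recorded = ["H", "V", "D", "A"] * 25          # 100 recorded polarization settings
honest = recorded[:96] + ["H"] * 4            # a genuine bill with a few photons garbled in transit
forged = recorded[:80] + ["?"] * 20           # a cloner's copy reproducing about 80 percent of them

print(verify_bill(honest, recorded))   # True: accepted despite a little transmission loss
print(verify_bill(forged, recorded))   # False: rejected, as long as the cutoff stays strict
```

Set the cutoff too low and near-faithful forgeries slip through; set it too high and genuine bills degraded in transmission get rejected.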

Previously, this vulnerability “wasn’t explicitly pointed out, but it’s not surprising,” says theoretical computer scientist Thomas Vidick of Caltech, who was not involved in the research. The result, he says, indicates that banks must be stringent enough in their standards to prove the bills they receive are real.

Most Americans like science — and are willing to pay for it

Americans don’t hate science. Quite the contrary. In fact, 79 percent of Americans think science has made their lives easier, a 2014 Pew Research Center survey found. More than 60 percent of people also believe that government funding for science is essential to its success.

But should the United States spend more money on scientific research than it already does? A layperson’s answer to that question depends on how much that person thinks the government already spends on science, a new study shows. When people find out just how much — or rather, how little — of the federal budget goes to science, support for more funding suddenly jumps.

To see how people’s opinions of public science spending were influenced by accurate information, mechanical engineer Jillian Goldfarb and political scientist Douglas Kriner, both at Boston University, placed a small experiment into the 2014 Cooperative Congressional Election Study. The online survey was given to 1,000 Americans, carefully selected to represent the demographics of the United States. The questions were designed to be nonpartisan, and the survey itself was conducted in 2014, long before the 2016 election.

The survey was simple. First, participants were asked to estimate what percentage of the federal budget was spent on scientific research. Once they’d guessed, half of the participants were told the actual amount that the federal government allocates for nondefense spending on research and development. In 2014, that figure was 1.6 percent of the budget, or about $67 billion. Finally, all the participants were asked if federal spending on science should be increased, decreased or kept the same.

The majority of participants had no idea how much money the government spends on science, and wildly overestimated the actual amount. About half of the respondents estimated federal spending for research at somewhere between 5 and 20 percent of the budget. A quarter of participants estimated the figure was more than 20 percent of the budget — one very hefty chunk of change. The last 25 percent of respondents estimated that 1 to 2 percent of federal spending went to science.

When participants received no information about how much the United States spent on research, only about 40 percent of them supported more funding. But when they were confronted with the real numbers, support for more funding leapt from 40 to 60 percent.

Those two numbers hover on either side of 50 percent, but Kriner notes, “media coverage [would] go from ‘minority’ to ‘majority’” in favor of more funding — a potentially powerful message. What’s more, the support for science was present in Democrats and Republicans alike, Kriner and Goldfarb report February 1 in Science Communication.
“I think it contributes to our understanding of the aspects of federal spending that people don’t understand very well,” says Brendan Nyhan, a political scientist at Dartmouth College in Hanover, N.H. It’s not surprising that most people don’t know how much the government is spending on research. Nyhan points out that most people probably don’t know how much the government spends on education or foreign aid either.

When trying to gather more support for science funding, Goldfarb says, “tell people how little we spend, and how much they get in return.” Science as a whole isn’t that controversial. No one wants to stop exploring the universe or curing diseases, after all.

But when people — whether politicians or that guy in your Facebook feed — say they want to cut science funding, they won’t be speaking about science as a whole. “There’s a tendency to overgeneralize” and use a few controversial or perceived-to-be-wasteful projects to stand in for all of science, Nyhan warns.

When politicians want to cut funding, he notes, they focus on specific controversial studies or areas — such as climate change, stem cells or genetically modified organisms. They might highlight studies that seem silly, such as those that ended up in former Senator Tom Coburn’s “Wastebook” (a mantle now taken up by Senator Jeff Flake). Take those issues to the constituents, and funding might end up in jeopardy anyway.

Kriner hopes their study’s findings might prove useful even for controversial research areas. “One of the best safeguards against cuts is strong public support for a program,” he explains. “Building public support for science spending may help insulate it from budget cuts — and our research suggests a relatively simple way to increase public support for scientific research.”

But he worries that public support may not stay strong if science becomes too much of a political pawn. The study showed that both Republicans and Democrats supported more funding for science when they knew how little was spent. But “if the current administration increases its attacks on science spending writ large … it could potentially politicize federal support for all forms of scientific research,” Kriner says. And the stronger the politics, the more people on both sides of the aisle grow resistant to hearing arguments from the other side.

Politics aside, Goldfarb and Kriner’s data show that Americans really do like and support science. They want to pay for it. And they may even want to shell out some more money, when they know just how little they already spend.

Trackers may tip a warbler’s odds of returning to its nest

Strapping tiny trackers called geolocators to the backs of birds can reveal a lot about where the birds go when they migrate, how they get there and what happens along the way. But ornithologists are finding that these cool backpacks could have not-so-cool consequences.

Douglas Raybuck of Arkansas State University and his colleagues outfitted some cerulean warblers (Setophaga cerulea) with geolocators and some with simple color tags to test the effects the locators might have on breeding and reproduction. This particular species globe-trots from its nesting grounds in the eastern United States to wintering grounds in South America and back each year. While the backpacks didn’t affect reproduction, birds wearing the devices were less likely than those wearing tags to return to the same breeding grounds the next year. The birds may have gotten off track, cut their trips short or died, possibly due to extra weight or drag from the backpack, the team reports May 3 in The Condor.

The study adds to conflicting evidence that geolocators affect some birds in negative ways, such as altering their breeding biology. At best, potential downsides vary from bird to bird and backpack to backpack. But that shouldn’t stop researchers from using geolocators to study migrating birds, the researchers argue, because the devices pinpoint areas crucial to migrating birds and can aid in conservation efforts.

Flight demands may have steered the evolution of bird egg shape

The mystery of why birds’ eggs come in so many shapes has long been up in the air. Now new research suggests adaptations for flight may have helped shape the orbs.

Stronger fliers tend to lay more elongated eggs, researchers report in the June 23 Science. The finding comes from the first large analysis of the way egg shape varies across bird species, from the almost perfectly spherical egg of the brown hawk owl to the raindrop-shaped egg of the least sandpiper.
“Eggs fulfill such a specific role in birds — the egg is designed to protect and nourish the chick. Why there’s such diversity in form when there’s such a set function was a question that we found intriguing,” says study coauthor Mary Caswell Stoddard, an evolutionary biologist at Princeton University.

Previous studies have suggested many possible advantages for different shapes. Perhaps cone-shaped eggs are less likely to roll out of the nest of cliff-dwelling birds; spherical eggs might be more resilient to damage in the nest. But no one had tested such hypotheses across a wide spectrum of birds.

Stoddard and her team analyzed almost 50,000 eggs from 1,400 species, representing about 14 percent of known bird species. The researchers boiled each egg down to its two-dimensional silhouette and then used an algorithm to describe each egg using two variables: how elliptical versus spherical the egg is and how asymmetrical it is — whether it’s pointier on one end than the other.
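
As a rough illustration of those two descriptors, the sketch below computes a simplified ellipticity and asymmetry from an egg silhouette reduced to a list of widths sampled along its long axis. The study fit silhouettes with a more careful parametric method; the definitions and the made-up profiles here are simplified stand-ins.

```python
# Simplified egg-shape descriptors from a silhouette given as widths along the long axis.
# These definitions are illustrative stand-ins, not the study's exact parameterization.
import numpy as np

def egg_descriptors(widths, length):
    widths = np.asarray(widths, dtype=float)
    ellipticity = length / widths.max() - 1.0              # 0 for a sphere, larger when elongated
    widest_pos = np.argmax(widths) / (len(widths) - 1)      # relative position of the widest point
    asymmetry = abs(widest_pos - 0.5) * 2                   # 0 if symmetric, larger if one end is pointier
    return ellipticity, asymmetry

round_egg = [0.2, 0.6, 0.9, 1.0, 0.9, 0.6, 0.2]     # made-up, nearly spherical profile
pointy_egg = [0.3, 0.8, 1.0, 0.9, 0.7, 0.4, 0.1]    # made-up, tapered toward one end
print(egg_descriptors(round_egg, length=1.05))      # low ellipticity, zero asymmetry
print(egg_descriptors(pointy_egg, length=1.60))     # more elongated, widest point off-center
```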

Next, the researchers looked at the way these two traits vary across the bird family tree. One pattern jumped out: Species that are stronger fliers, as measured by wing shape, tend to lay more elliptical or asymmetrical eggs, says study coauthor L. Mahadevan, a mathematician and biologist at Harvard University.
Mahadevan cautions that the data show only an association, but the researchers propose one possible explanation for the link between flying and egg shape. Adapting to flight streamlined bird bodies, perhaps also narrowing the reproductive tract. That narrowing would have limited the width of an egg that a female could lay. But since eggs provide nutrition for the chick growing inside, shrinking eggs too much would deprive the developing bird. Elongated eggs might have been a compromise, keeping egg volume up without increasing girth, Stoddard suggests. Asymmetry can increase egg volume in a similar way.
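
The volume-versus-girth trade-off is easy to check with an idealized ellipsoid egg, as in the short sketch below. Real eggs are not perfect ellipsoids, so this is only a back-of-the-envelope illustration of the argument.

```python
# Volume of an idealized prolate-spheroid "egg" with a given length and girth (width).
from math import pi

def ellipsoid_volume(length, width):
    return (4 / 3) * pi * (length / 2) * (width / 2) ** 2

width = 3.0                               # hold the girth fixed, in arbitrary units
print(ellipsoid_volume(3.0, width))       # a spherical egg
print(ellipsoid_volume(4.5, width))       # 50 percent longer: 50 percent more volume, same girth
```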

Testing a causal connection between flight ability and egg shape is tough “because of course we can’t replay the whole tape of life again,” says Claire Spottiswoode, a zoologist at the University of Cambridge who wrote a commentary accompanying the study. Still, Spottiswoode says the evidence is compelling: “It’s a very plausible argument.”

Santiago Claramunt, associate curator of ornithology at the Royal Ontario Museum in Toronto, isn’t convinced that flight adaptations played a driving role in the evolution of egg shape. “Streamlining in birds is determined more by plumage than the shape of the body — high-performing fliers can have rounded, bulky bodies,” he says, which wouldn’t give elongated eggs the same advantage over other egg shapes. He cites frigate birds and swifts as examples, both of which make long-distance flights but have fairly broad bodies. “There’s certainly more going on there.”

Indeed, some orders of birds showed a much stronger link between flying and egg shape than others did. And while other factors — like where birds lay their eggs and how many they lay at once — weren’t significantly related to egg shape across birds as a whole, they could be important within certain branches of the bird family tree.