Quantum counterfeiters might succeed

Scientists have created an ultrasecure form of money using quantum mechanics — and immediately demonstrated a potential security loophole.

Under ideal conditions, quantum currency is impossible to counterfeit. But thanks to the messiness of reality, a forger with access to sophisticated equipment could skirt that quantum security if banks don’t take appropriate precautions, scientists report March 1 in npj Quantum Information. Quantum money as a concept has been around since the 1970s, but this is the first time anyone has created and counterfeited quantum cash, says study coauthor Karel Lemr, a quantum physicist at Palacký University Olomouc in the Czech Republic.
Instead of paper banknotes, the researchers’ quantum bills are minted in light. To transfer funds, a series of photons — particles of light — would be transmitted to a bank using the photons’ polarizations, the orientation of their electromagnetic waves, to encode information. (The digital currency Bitcoin is similar in that there’s no bill you can hold in your hand. But quantum money has an extra layer of security, backed by the power of quantum mechanics.)

To illustrate their technique in a fun way, the researchers transmitted a pixelated picture of a banknote — an old Austrian bill depicting famed quantum physicist Erwin Schrödinger — using photons’ polarizations to stand for grayscale shades. In a real quantum money system, each bill would be different and the photon polarizations would be distributed randomly, rather than forming a picture. The polarizations would create a serial number–like code the bank could check to verify that the funds are legit.
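The polarization-for-pixels idea can be sketched in a few lines. This is illustrative only, not the researchers' actual encoding; the four grayscale levels, the angle mapping and the function names here are assumptions:

```python
# Illustrative sketch (not the study's exact scheme): grayscale pixel
# values mapped onto photon polarization angles between 0 and 90 degrees.
def shade_to_polarization(shade, levels=4):
    """Map a grayscale level (0..levels-1) to a polarization angle in degrees."""
    return 90.0 * shade / (levels - 1)

def polarization_to_shade(angle, levels=4):
    """Invert the mapping (what the receiver's detector would infer)."""
    return round(angle * (levels - 1) / 90.0)

pixels = [0, 1, 3, 2]                        # one row of the pixelated banknote
angles = [shade_to_polarization(p) for p in pixels]
print(angles)                                # [0.0, 30.0, 90.0, 60.0]
assert [polarization_to_shade(a) for a in angles] == pixels
```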

A criminal intercepting the photons couldn’t copy them accurately because quantum information can’t be perfectly duplicated. “This is actually the cornerstone of security of quantum money,” says Lemr.
But the realities of dealing with quantum particles complicate matters. Because single photons are easily lost or garbled during transmission, banks would have to accept partial quantum bills, analogous to a dollar with a corner torn off. That means a crook might be able to make forgeries that aren’t perfect, but are good enough to pass muster.
Lemr and colleagues used an optimal cloner, a device that comes as close as possible to copying quantum information, to attempt a fake. The researchers showed that a bank would accept a forged bill if its standard for accuracy wasn’t high enough: to keep forgers out, the bank must demand that more than about 84 percent of the received photons’ polarizations match the original.
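The roughly 84 percent figure lines up with a well-known limit on quantum copying: an optimal universal cloner can reproduce an unknown qubit, such as a photon’s polarization, with fidelity of at most 5/6, about 83.3 percent. A minimal sketch, assuming the bank’s check amounts to a simple average-fidelity threshold (the function below is an illustration, not the study’s verification procedure):

```python
# The optimal universal 1-to-2 qubit cloner copies an unknown
# polarization state with fidelity 5/6. A bank that accepts bills
# matching the original on fewer than ~83.3 percent of photons could
# therefore be fooled by cloned forgeries.
from fractions import Fraction

CLONER_FIDELITY = Fraction(5, 6)  # best possible copy of an unknown qubit

def forgery_accepted(bank_threshold: float) -> bool:
    """Can an optimal-cloner forgery pass the bank's check, on average?"""
    return bank_threshold <= float(CLONER_FIDELITY)

print(float(CLONER_FIDELITY))   # 0.8333...
print(forgery_accepted(0.80))   # True: the threshold is too lax
print(forgery_accepted(0.85))   # False: clones can't reach 85 percent
```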

Previously, this vulnerability “wasn’t explicitly pointed out, but it’s not surprising,” says theoretical computer scientist Thomas Vidick of Caltech, who was not involved in the research. The result, he says, indicates that banks must be stringent enough in their standards to prove the bills they receive are real.

Most Americans like science — and are willing to pay for it

Americans don’t hate science. Quite the contrary. In fact, 79 percent of Americans think science has made their lives easier, a 2014 Pew Research Center survey found. More than 60 percent of people also believe that government funding for science is essential to its success.

But should the United States spend more money on scientific research than it already does? A layperson’s answer to that question depends on how much that person thinks the government already spends on science, a new study shows. When people find out just how much — or rather, how little — of the federal budget goes to science, support for more funding suddenly jumps.

To see how people’s opinions of public science spending were influenced by accurate information, mechanical engineer Jillian Goldfarb and political scientist Douglas Kriner, both at Boston University, placed a small experiment into the 2014 Cooperative Congressional Election Study. The online survey was given to 1,000 Americans, carefully selected to represent the demographics of the United States. The questions were designed to be nonpartisan, and the survey itself was conducted in 2014, long before the 2016 election.

The survey was simple. First, participants were asked to estimate what percentage of the federal budget was spent on scientific research. Once they’d guessed, half of the participants were told the actual amount that the federal government allocates for nondefense spending on research and development. In 2014, that figure was 1.6 percent of the budget, or about $67 billion. Finally, all the participants were asked if federal spending on science should be increased, decreased or kept the same.

Most participants had no idea how much money the government spends on science, and wildly overestimated the actual amount. About half of the respondents estimated federal spending for research at somewhere between 5 and 20 percent of the budget. Another quarter put the figure at more than 20 percent of the budget — one very hefty chunk of change. Only the remaining 25 percent of respondents came close, estimating that 1 to 2 percent of federal spending went to science.

When participants received no information about how much the United States spent on research, only about 40 percent of them supported more funding. But when they were confronted with the real numbers, support for more funding leapt from 40 to 60 percent.

Those two numbers hover on either side of 50 percent, but Kriner notes, “media coverage [would] go from ‘minority’ to ‘majority’” in favor of more funding — a potentially powerful message. What’s more, the support for science was present in Democrats and Republicans alike, Kriner and Goldfarb report February 1 in Science Communication.
“I think it contributes to our understanding of the aspects of federal spending that people don’t understand very well,” says Brendan Nyhan, a political scientist at Dartmouth College in Hanover, N.H. It’s not surprising that most people don’t know how much the government is spending on research. Nyhan points out that most people probably don’t know how much the government spends on education or foreign aid either.

When trying to gather more support for science funding, Goldfarb says, “tell people how little we spend, and how much they get in return.” Science as a whole isn’t that controversial. No one wants to stop exploring the universe or curing diseases, after all.

But when people — whether politicians or that guy in your Facebook feed — say they want to cut science funding, they won’t be speaking about science as a whole. “There’s a tendency to overgeneralize” and use a few controversial or perceived-to-be-wasteful projects to stand in for all of science, Nyhan warns.

When politicians want to cut funding, he notes, they focus on specific controversial studies or areas — such as climate change, stem cells or genetically modified organisms. They might highlight studies that seem silly, such as those that ended up in former Senator Tom Coburn’s “Wastebook” (a mantle now taken up by Senator Jeff Flake). Take those issues to the constituents, and funding might end up in jeopardy anyway.

Kriner hopes their study’s findings might prove useful even for controversial research areas. “One of the best safeguards against cuts is strong public support for a program,” he explains. “Building public support for science spending may help insulate it from budget cuts — and our research suggests a relatively simple way to increase public support for scientific research.”

But he worries that public support may not stay strong if science becomes too much of a political pawn. The study showed that both Republicans and Democrats supported more funding for science when they knew how little was spent. But “if the current administration increases its attacks on science spending writ large … it could potentially politicize federal support for all forms of scientific research,” Kriner says. And the stronger the politics, the more people on both sides of the aisle grow resistant to hearing arguments from the other side.

Politics aside, Goldfarb and Kriner’s data show that Americans really do like and support science. They want to pay for it. And they may even want to shell out some more money, when they know just how little they already spend.

Volcanic eruptions nearly snuffed out Gentoo penguin colony

Penguins have been pooping on Ardley Island off the coast of the Antarctic Peninsula for a long, long time. The population there is one of the biggest and oldest Gentoo penguin (Pygoscelis papua) colonies. But evidence from ancient excrement suggests that these animals didn’t always flourish.

Stephen Roberts of the British Antarctic Survey and colleagues set out to see how the Ardley population responded to past changes in climate to better inform future conservation efforts. The researchers studied the geochemical makeup of lake sediment samples and identified elements from penguin guano. Knowing the fraction of guano in lake sludge over time let the researchers track penguin population changes.

The Gentoo penguin colony was nearly wiped out three times over the 6,700 years that the penguins have occupied Ardley Island. But rather than lining up with changes in temperature or sea ice levels, these population dips corresponded to volcanic ash preserved in the geologic record from big eruptions of a volcano on nearby Deception Island. After each population crash, the colony took 400 to 800 years to recover, the team reports April 11 in Nature Communications.

Readers puzzled by proton’s properties

Proton puzzler
Uncertainty over the proton’s size, spin and life span could have physicists rethinking standard notions about matter and the universe, Emily Conover reported in “The proton puzzle” (SN: 4/29/17, p. 22).

Readers wondered about the diameter (or size) of the proton, which has three fundamental particles called quarks rattling around inside. “Still scratching my head over how combining three dimensionless quarks ends up forming a proton with a ‘diameter,’ ” online reader Down_Home wrote. “Maybe that word doesn’t mean what I think it means.”
The three quarks within the proton are only apparent when the proton is probed with high-energy particles, Conover says. At lower energies, particles “see” the entire proton as one entity. “In that case, the proton just behaves like a sphere of positive charge,” she says. Scientists measure the size of this sphere by looking at how electrons are deflected when they come close to the proton. “Researchers disagree on the sphere’s diameter, which makes for a bit of an identity crisis for the proton,” Conover says.

Blooming Arctic
Nearly 30 percent of ice covering the Arctic Ocean at summer’s peak is thin enough to foster sprawling phytoplankton blooms in the waters below, a recent study estimated. These ice-covered blooms were probably uncommon just 20 years ago, Thomas Sumner reported in “Thinning ice creates undersea greenhouses in the Arctic” (SN: 4/29/17, p. 20).

Several online readers wanted to know how the under-ice blooms get the carbon dioxide they need to photosynthesize.
Others wondered about the blooms’ potential effect on climate. “I’m not sure if this is good or bad news,” reader Witch Daemon wrote. More phytoplankton could mean that the Arctic could store more carbon, but the blooms wouldn’t exist if it weren’t for warming and melting ice, Witch Daemon reasoned.
When phytoplankton are trapped under ice, they absorb CO2 that’s dissolved in the upper ocean, says oceanographer and study coauthor Christopher Horvat of Harvard University.

What the blooms might mean for storing carbon in the ocean is uncertain, Horvat says. But if these under-ice blooms occur in addition to the familiar blooms along the edges of the ice, then there’s a chance that more carbon could be stored away.

Shields up
Most of the gases in Mars’ atmosphere may have been stripped away by solar wind, Ashley Yeager reported in “Extreme gas loss dried out Mars” (SN: 4/29/17, p. 20). The loss of so much gas may explain how the planet morphed from a wet, warm world to a dry, icy one.

Online reader Robert Knox wondered how long it took for the solar wind to strip Mars of its atmospheric gases. “Earth is closer to the sun, so the solar wind is more intense,” Knox wrote. “Why did this not happen to Earth?”

Luckily for us, Earth is protected by a magnetic field, Yeager says. This field deflects the solar wind and prevents it from picking away at the planet’s atmospheric particles. Mars lost most of its global magnetic field about 4.2 billion years ago, which allowed the solar wind to sweep away much of the planet’s atmosphere over a few hundred million years, she says.

Top 10 discoveries about waves

Physics fans are a lot like surfers. Both think waves are really fun.

For surfers, it’s all about having a good time. For physicists, it’s about understanding some of nature’s most important physical phenomena. Yet another detection of gravitational waves, announced June 1, has reinvigorated science fans’ excitement about waves worldwide.

Waves have always been a natural topic of scientific and mathematical interest. They play a part in an enormous range of physical processes, from heat and light to radio and TV, sonograms and music, earthquakes and holograms. (Waves even used to be a common sight in baseball stadiums, but fans got tired of standing up and down, and it was really annoying anyway.)

Many of science’s greatest achievements have been discoveries of new kinds of waves or new insights into wave motion. Identifying just the Top 10 such discoveries (or ideas) is therefore difficult and bound to elicit critical comments from cult members of particular secret wave societies. So remember, if your favorite wave isn’t on this list, it would have been No. 11.

  1. Thomas Young: Light is a wave.
    In the opening years of the 19th century, the English physician Young tackled a long-running controversy about the nature of light. A century earlier, Isaac Newton had argued forcefully for the view that light consisted of (very small) particles. Newton’s contemporary Christiaan Huygens strongly disagreed, insisting that light traveled through space as a wave.

Through a series of clever experiments, Young demonstrated strong evidence for waves. Poking two tiny holes in a thick sheet of paper, Young saw that light passing through created alternating bands of light and darkness on a surface placed on the other side of the paper. That was just as expected if light passing through the two holes interfered just as water waves do, canceling out when crest met trough or enhancing when crests met “in phase.” Young did not work out his wave theory with mathematical rigor and so Newton’s defenders resisted, attempting to explain away Young’s results.
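In modern textbook form (an idealization Young himself never wrote down), the alternating bright and dark bands follow the two-slit intensity formula, where crests meeting in phase give a maximum and crest meeting trough gives darkness. The slit spacing and wavelength below are assumed values for illustration:

```python
import math

def two_slit_intensity(theta, d, wavelength, i0=1.0):
    """Idealized two-slit interference: I = 4*I0*cos^2(pi*d*sin(theta)/lambda)."""
    phase = math.pi * d * math.sin(theta) / wavelength
    return 4 * i0 * math.cos(phase) ** 2

lam = 500e-9   # green light, 500 nanometers
d = 10e-6      # separation of the two holes, 10 micrometers

# Crests meet "in phase": maximum brightness straight ahead.
print(two_slit_intensity(0.0, d, lam))   # 4.0
# Crest meets trough: a dark fringe where the path difference is lambda/2.
dark_angle = math.asin(lam / (2 * d))
print(two_slit_intensity(dark_angle, d, lam))   # ~0.0
```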

But soon Augustin Jean Fresnel in France worked out the math of light waves in detail. And in 1850, when Jean-Bernard-Léon Foucault showed that light travels faster in air than water, the staunchest Newton fans had to capitulate. Newton himself would have acknowledged that light must therefore consist of waves. (Much later, though, Einstein found a way that light could in fact consist of particles, which came to be called photons.)

  2. Michelson and Morley: Light waves don’t vibrate anything.
    Waves are vibrations, implying the need for something to vibrate. Sound vibrated molecules in the air, for instance, and ocean waves vibrated molecules of water. Light, supposedly, vibrated an invisible substance called the ether.

In 1887, Albert A. Michelson and his collaborator Edward Morley devised an experiment to detect that ether. Earth’s motion through the ether should have meant that light’s velocity would depend on its direction. (Traveling with the Earth’s motion, light’s speed wouldn’t be the same as traveling at right angles to the direction of motion.) Michelson and Morley figured they could detect that difference by exploiting the interference phenomena discovered by Young. But their apparatus failed to find any ether effect. They thought their experiment was flawed. But later Einstein figured out there actually wasn’t any ether.

  3. James Clerk Maxwell: Light is an electromagnetic wave.
    Maxwell died in 1879, the year Einstein was born, and so did not know there wasn’t an ether. He did figure out, though, that both electricity and magnetism could be explained by stresses in some such medium.

Electric and magnetic charges in the ether ought to generate disturbances in the form of waves, Maxwell realized. Based on the strengths of those forces he calculated that the waves would travel at the fantastic speed of 310 million meters per second, suspiciously close to the best recent measurements of the speed of light (those measurements ranged from 298 million to 315 million meters per second). So Maxwell, without the benefit of ever having watched NCIS on TV, then invoked Gibbs’ Rule 39 (there’s no such thing as a coincidence) and concluded that light was an example of an electromagnetic wave.

“It seems we have strong reason to conclude that light itself (including radiant heat, and other radiations if any) is an electromagnetic disturbance in the form of waves propagated through the electromagnetic field,” he wrote in 1864. His “other radiations, if any” turned out to be an entire spectrum of all sorts of cool waves, from gamma radiation to radio signals.
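In today’s notation, Maxwell’s suspicious coincidence can be recomputed in a couple of lines; the constants below are modern values, not the electric and magnetic force measurements Maxwell actually worked from:

```python
import math

# Modern form of Maxwell's calculation: electromagnetic waves travel at
# c = 1/sqrt(mu_0 * epsilon_0).
mu_0 = 4 * math.pi * 1e-7       # vacuum permeability, in henries per meter
epsilon_0 = 8.8541878128e-12    # vacuum permittivity, in farads per meter

c = 1 / math.sqrt(mu_0 * epsilon_0)
print(f"{c:.4e} m/s")  # ~2.998e+08, squarely inside the era's 298-315 million range
```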

  4. Heinrich Hertz: Radio waves.
    Not very many people took Maxwell seriously at first. A few, though, known as the Maxwellians, promoted his ideas. One physicist who had faith in Maxwell, or at least in his equations, was Hertz, who performed experiments in his lab in Karlsruhe, Germany, that successfully produced and detected radio waves, eventually to be exploited by propagandists to spread a lot of illogical nonsense on talk radio.

His success inspired much more respect for the equations in Maxwell’s theory, which Hertz found almost magical: “It is impossible to study this wonderful theory without feeling as if the mathematical equations had an independent life and an intelligence of their own, as if they were wiser than ourselves,” Hertz said. His prime experimental success came in 1887, the same year that Michelson and Morley failed to detect the ether. Hertz died in 1894, long before his discovery was put to widespread use.

  5. John Michell: Seismic waves.
    Michell, an English geologist and astronomer, was motivated by the great Lisbon earthquake of 1755 to investigate the cause of earthquakes. In 1760 he concluded that “subterraneous fires” should be blamed, noting that volcanoes — “burning mountains” — commonly occur in the same neighborhood as frequent earthquakes.

Michell noted that “the motion of the earth in earthquakes is … partly propagated by waves, which succeed one another sometimes at larger and sometimes at smaller distances.” He cited witness accounts of quakes in which the ground rose “like the sea in a wave.” Much later seismologists developed a more precise understanding of the seismic waves that shake the Earth, using them as probes to infer the planet’s inner structure.

  6. Wilhelm Röntgen: X-rays.
    When Hertz discovered radio waves, he knew he was looking for the long-wavelength radiation foreshadowed in Maxwell’s equations. But a few years later, in 1895, Röntgen stumbled — by accident — onto radiation at the opposite end of the electromagnetic spectrum.
    Mysterious short-wavelength rays of an unknown type (therefore designated X) emerged when Röntgen shot cathode rays (beams of electrons) through a glass tube. Röntgen suspected that his creation might be a new kind of wave among the many Maxwell had anticipated: “There seems to exist some kind of relationship between the new rays and light rays; at least this is indicated by the formation of shadows,” Röntgen wrote. Those shadows, of course, became the basis for a revolutionary medical technology.

Besides providing a major new tool for observing shattered bones and other structures inside the body, X-rays eventually became essential tools for scientific investigation in astronomy, biology and other fields. And they shattered the late 19th century complacency of physicists who thought they’d basically figured everything out about nature. Weirdly, though, X-rays later turned out to be particles sometimes, validating Einstein’s ideas that light had an alter ego particle identity. (By the way, it turned out that X-rays aren’t the electromagnetic waves with the shortest wavelengths — gamma rays can be even shorter. Maybe they would be No. 11.)

  7. Epicurus: The swerve.
    Not exactly a wave in the ordinary sense, the swerve was a deviation from straight line motion postulated by the Greek philosopher Epicurus around 300 B.C. Unlike Aristotle, Epicurus believed in atoms, and argued that reality was built entirely from the random collisions of an infinite number of those tiny particles. Supposedly, he thought, atoms would all just fall straight down to the center of the universe unless some unpredictable “swerve” occasionally caused them to deviate from their paths so they would bounce off each other and congregate into complex structures.

It has not escaped the attention of modern philosophers that the Epicurean unpredictable swerve is a bit like the uncertainty in particle motions introduced by quantum mechanics. Which has its own waves.

  8. Louis de Broglie: Matter waves.
    In the early 1920s, de Broglie noticed a peculiar connection between relativity and quantum physics. Max Planck’s famous quantum formula related energy to the frequency of a wave motion. Einstein’s special relativity related energy to the mass of a particle. De Broglie thought it would make a fine doctoral dissertation to work out the implications of two seemingly separate things both being related to energy. If energy equals mass (times the speed of light squared) and energy equals frequency (times Planck’s constant), then voilà, mass equals frequency (times some combination of the constants). Therefore, de Broglie reasoned, particles (of mass) ought to also exist as waves (with a frequency).
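De Broglie’s chain of reasoning can be sketched in modern notation with today’s constant values. The 1-percent-of-light-speed electron is an assumed example, chosen because it lands near atomic dimensions:

```python
# De Broglie's reasoning in modern notation:
#   E = m c^2 (relativity) and E = h f (quantum) give f = m c^2 / h,
# and the matter wavelength that follows is lambda = h / (m v).
h = 6.62607015e-34      # Planck's constant, joule-seconds
c = 2.99792458e8        # speed of light, meters per second
m_e = 9.1093837015e-31  # electron mass, kilograms

def de_broglie_wavelength(mass, speed):
    """Wavelength of a (nonrelativistic) matter wave, lambda = h / (m * v)."""
    return h / (mass * speed)

# An electron at 1 percent of light speed has a wavelength comparable to
# atomic spacings -- which is why electrons diffract off crystals like waves.
lam = de_broglie_wavelength(m_e, 0.01 * c)
print(f"{lam:.2e} m")  # ~2.43e-10 m, about the size of an atom
```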

That might have seemed wacky, but Einstein read de Broglie’s thesis and thought it made sense. Soon Walter Elsasser in Germany pointed out that existing experiments supported de Broglie, and in America Clinton Davisson and coworkers demonstrated conclusively that electrons did in fact exhibit wave properties.

De Broglie won the physics Nobel Prize in 1929; Davisson shared the 1937 Nobel with George Thomson, who had conducted similar experiments showing electrons are waves. Which was ironic, because George’s father, J.J. Thomson, won the 1906 Nobel for the work that revealed the existence of the electron as a particle. Eight decades later Ernst Ruska won a Nobel for his design of a powerful microscope that exploited the electron’s wave behavior.

  9. Max Born: Probability waves.
    De Broglie’s idea ignited a flurry of activity among physicists trying to figure out how waves fit into quantum theory. Niels Bohr, for instance, spent considerable effort attempting to reconcile the dual wave-particle nature of both electrons and light. Erwin Schrödinger, meanwhile, developed a full-fledged “wave mechanics” to describe the behavior of electrons in atoms solely from the wave perspective. Schrödinger’s math incorporated a “wave function” that was great for calculating the expected results of experiments, even though some experiments clearly showed electrons to be particles.

Born, a German physicist and good friend of Einstein’s, deduced the key to clarifying the wave function: It was an indicator of the probability of finding the particle in a given location. Combined with Werner Heisenberg’s brand-new uncertainty principle, Born’s realization led to the modern view that an electron is wavelike in the sense that it does not possess a definite location until it is observed. That approach works fine for all practical purposes, but physicists and philosophers still engage in vigorous debates today about the true physical status of the wave function.

  10. LIGO: Gravitational waves.
    Soon after he completed his general theory of relativity, Einstein realized that it implied the possibility of gravitational radiation — vibrations of spacetime itself. He had no idea, though, that by spending a billion dollars, physicists a century later could actually detect those spacetime ripples. But thanks to lasers (which maybe would have been No. 11), the Laser Interferometer Gravitational-Wave Observatory — two huge labs in Louisiana and Washington state — captured the spacetime shudders emitted from a pair of colliding black holes in September 2015.
    That detection is certainly one of the most phenomenal experimental achievements in the history of science. It signaled a new era in astronomy, providing astronomers a tool for probing depths of the universe that remain hidden from view with any of Maxwell’s “other radiations, if any.” For astronomy, gravitational radiation is the wave of the future.

Flight demands may have steered the evolution of bird egg shape

The mystery of why birds’ eggs come in so many shapes has long been up in the air. Now new research suggests adaptations for flight may have helped shape the orbs.

Stronger fliers tend to lay more elongated eggs, researchers report in the June 23 Science. The finding comes from the first large analysis of the way egg shape varies across bird species, from the almost perfectly spherical egg of the brown hawk owl to the raindrop-shaped egg of the least sandpiper.
“Eggs fulfill such a specific role in birds — the egg is designed to protect and nourish the chick. Why there’s such diversity in form when there’s such a set function was a question that we found intriguing,” says study coauthor Mary Caswell Stoddard, an evolutionary biologist at Princeton University.

Previous studies have suggested many possible advantages for different shapes. Perhaps cone-shaped eggs are less likely to roll out of the nest of cliff-dwelling birds; spherical eggs might be more resilient to damage in the nest. But no one had tested such hypotheses across a wide spectrum of birds.

Stoddard and her team analyzed almost 50,000 eggs from 1,400 species, representing about 14 percent of known bird species. The researchers boiled each egg down to its two-dimensional silhouette and then used an algorithm to describe each egg using two variables: how elliptical versus spherical the egg is and how asymmetrical it is — whether it’s pointier on one end than the other.
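As a toy illustration of that two-variable reduction (not the study’s actual algorithm; the width-profile representation and the formulas below are assumptions):

```python
# Illustrative only: reduce an egg silhouette, represented as widths
# sampled along the long axis, to two descriptors in the spirit of the
# study -- how elongated the egg is, and how off-center its widest point is.
def egg_descriptors(length, widths):
    """Return (ellipticity, asymmetry) for an egg of given length and width profile."""
    max_w = max(widths)
    ellipticity = length / max_w            # 1.0 = as round as it is long
    widest_at = widths.index(max_w)
    mid = (len(widths) - 1) / 2
    asymmetry = abs(widest_at - mid) / mid  # 0 = widest point dead center
    return ellipticity, asymmetry

# Nearly spherical egg (brown hawk owl-like): widest in the middle.
print(egg_descriptors(4.0, [2, 3.6, 4.0, 3.6, 2]))    # (1.0, 0.0)
# Elongated, pointy-ended egg (sandpiper-like): widest near the blunt end.
print(egg_descriptors(4.5, [2, 3.0, 2.8, 2.4, 1.5]))  # (1.5, 0.5)
```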

Next, the researchers looked at the way these two traits vary across the bird family tree. One pattern jumped out: Species that are stronger fliers, as measured by wing shape, tend to lay more elliptical or asymmetrical eggs, says study coauthor L. Mahadevan, a mathematician and biologist at Harvard University.
Mahadevan cautions that the data show only an association, but the researchers propose one possible explanation for the link between flying and egg shape. Adapting to flight streamlined bird bodies, perhaps also narrowing the reproductive tract. That narrowing would have limited the width of an egg that a female could lay. But since eggs provide nutrition for the chick growing inside, shrinking eggs too much would deprive the developing bird. Elongated eggs might have been a compromise between keeping egg volume up without increasing girth, Stoddard suggests. Asymmetry can increase egg volume in a similar way.

Testing a causal connection between flight ability and egg shape is tough “because of course we can’t replay the whole tape of life again,” says Claire Spottiswoode, a zoologist at the University of Cambridge who wrote a commentary accompanying the study. Still, Spottiswoode says the evidence is compelling: “It’s a very plausible argument.”

Santiago Claramunt, associate curator of ornithology at the Royal Ontario Museum in Toronto, isn’t convinced that flight adaptations played a driving role in the evolution of egg shape. “Streamlining in birds is determined more by plumage than the shape of the body — high-performing fliers can have rounded, bulky bodies,” he says, which wouldn’t give elongated eggs the same advantage over other egg shapes. He cites frigate birds and swifts as examples, both of which make long-distance flights but have fairly broad bodies. “There’s certainly more going on there.”

Indeed, some orders of birds showed a much stronger link between flying and egg shape than others did. And while other factors — like where birds lay their eggs and how many they lay at once — weren’t significantly related to egg shape across birds as a whole, they could be important within certain branches of the bird family tree.

The moon might have had a heavy metal atmosphere with supersonic winds

The infant moon may have had a thick metal atmosphere, where supersonic winds raised waves in its magma ocean.

That’s the conclusion of a new simulation that calculates how heat from the young sun, the Earth and the moon’s own hot surface could have vaporized lunar metals to give the moon an atmosphere as thick as Mars’. The model, reported online June 22 at arXiv.org, offers a way to test theories of how the moon formed and suggests how researchers could study exoplanets without leaving Earth’s own neighborhood.
Most planetary scientists think the moon formed when a Mars-sized protoplanet slammed into the Earth around 4.5 billion years ago. The collision threw hot, molten material into Earth’s orbit, which coalesced and eventually cooled into the moon.

At first, though, the moon would have been covered in a deep, global ocean of hot liquid rock. The postcollision Earth would have been blisteringly hot as well — upwards of 2000° Celsius — and would have glowed like a red dwarf star.

Prabal Saxena of NASA’s Goddard Space Flight Center in Greenbelt, Md., and colleagues added up the radiation the early moon would have received from that starlike Earth, plus the sun and the magma ocean itself. Previous models had suggested the early moon should have an atmosphere, but the team believes its model is the first to include all those inputs at once, revealing fresh details about how the atmosphere and ocean may have interacted.

All of that radiation would have vaporized volatile atoms in the metal-rich magma ocean and formed an atmosphere about one-tenth the thickness of Earth’s, the model showed. To keep things simple, the team used sodium — an easily vaporized element that is abundant on the moon — to represent all the components that could contribute to an atmosphere.

As long as the molten ocean remained liquid, the atmosphere would have received freshly vaporized sodium atoms from the ocean and sent them whipping through the metallic air. An extreme temperature difference — the side of the moon facing Earth would have been heated to temperatures greater than 1700° and the farside would have chilled to a frigid ‒150° — would have raised winds with speeds over a kilometer per second. The winds would probably have blown waves in the magma ocean.
When the winds reached the twilight zone between hot and cold, the atmosphere would have condensed, leaving a band of sodium snow.

After about 1,000 years, the magma ocean would have cooled enough to solidify into a rocky crust. Without a liquid reservoir to draw from, the entire atmosphere would have collapsed.

“The moon’s atmosphere was like a hard-partying rock star,” Saxena says. “It had a really violent, heavy metal existence, but it rapidly just fell apart.”

Kevin Zahnle of NASA’s Ames Research Center in Moffett Field, Calif., thinks this live fast, die young picture of the lunar atmosphere sounds plausible, and might be testable. “Their story is well within the bounds of the possible,” he says. But he’s not sure all of the model’s assumptions are good ones. Both Earth and the moon would have had to be “exceedingly dry” to avoid developing steamy water atmospheres first, for instance.

One way to test the model would be to look for a ring of extra sodium in the rocks around the transition zone. That would show that the atmosphere really did have an extreme temperature gradient and high winds.

Other models of the moon’s formation — for instance, if it formed from several small impacts instead of a single large one — would lead to a cooler atmosphere, weaker winds and ultimately no sodium snow, Saxena says. Finding that extra sodium could help settle the debate about which kind of impact really happened (SN: 4/15/17, p. 18).

With its proximity to an Earth that glowed like a star, the early moon could also be a good analog for rocky exoplanets orbiting red dwarfs, Saxena says.

“If we can characterize what the early moon looked like, it can tell us about the physical mechanisms that are operating on these close-in extreme exoplanets,” Saxena says.

CRISPR adds storing movies to its feats of molecular biology

Short film is alive and well. Using the trendy gene-editing system CRISPR, a team from Harvard University has encoded images and a short movie into the DNA of living bacteria.

The work is part of a larger effort to use DNA to store data — from audio recordings and poetry to entire books on synthetic biology. Last year, Seth Shipman and his colleagues at Harvard threw CRISPR into the mix when they used the editing system to record molecular data in the DNA of Escherichia coli.

Now, the team is upping its game with images of a human hand and a short movie, a GIF of a galloping horse from iconic turn-of-the-century photographer Eadweard Muybridge’s Human and Animal Locomotion. In the code, the nucleotide bases that form DNA correspond to black-and-white pixel values. The video was encoded frame by frame. Once the team synthesized the DNA, they used CRISPR and two associated proteins, Cas1 and Cas2, to slip the data into the genetic blueprint of E. coli colonies.
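The basic idea of mapping pixel values to DNA bases can be sketched in a few lines. This is an illustrative assumption, not the exact encoding table Shipman's team used — here each of the four bases carries two bits, so four bases encode one 8-bit grayscale pixel:

```python
# Illustrative sketch: encode grayscale pixels as DNA bases and decode them back.
# The 2-bits-per-base mapping below is an assumption for demonstration, not the
# exact scheme used by Shipman and colleagues.

BASES = "ACGT"  # each base stands for one 2-bit value: A=00, C=01, G=10, T=11

def pixels_to_dna(pixels):
    """Map 8-bit grayscale pixels (0-255) to a DNA string, 4 bases per pixel."""
    dna = []
    for p in pixels:
        for shift in (6, 4, 2, 0):          # walk the byte 2 bits at a time
            dna.append(BASES[(p >> shift) & 0b11])
    return "".join(dna)

def dna_to_pixels(dna):
    """Invert the mapping: every 4 bases become one 8-bit pixel value."""
    pixels = []
    for i in range(0, len(dna), 4):
        value = 0
        for base in dna[i:i + 4]:
            value = (value << 2) | BASES.index(base)
        pixels.append(value)
    return pixels

frame = [0, 128, 255]                 # three grayscale pixels of one frame
encoded = pixels_to_dna(frame)        # "AAAAGAAATTTT"
assert dna_to_pixels(encoded) == frame
```

A real system layered more on top of this — addressing information so frames could be reassembled in order, and redundancy to survive the roughly 10 percent of data lost as the bacteria grew.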

After growing the bacteria for several generations, the scientists retrieved the code for the images and film frames and were able to reconstruct the clips. About 90 percent of the encoded information was left intact. Though it’s not a perfect storage system, the results demonstrate CRISPR’s potential for hiding data in the genetic blueprints of bacteria, Shipman and his colleagues write July 12 in Nature.

Baby-led weaning won’t necessarily ward off extra weight

When my younger daughter was around 6 months old, we gave her mashed-up prune. She grimaced and shivered a little, appearing to be absolutely disgusted. But then she grunted and reached for more.

Most babies are ready for solid food around 6 months of age, and feeding them can be fun. One of the more entertaining approaches does not involve a spoon. Called baby-led weaning, it involves allowing babies to feed themselves appropriate foods.

Proponents of the approach say that babies become more skilled eaters when allowed to explore on their own. They’re in charge of getting food into their own mouths, gumming it and swallowing it down — all skills that require muscle coordination. When the right foods are provided (yes to soft steamed broccoli; no to whole grapes), babies who feed themselves are no more likely to choke than their spoon-fed peers.

Some baby-led weaning proponents also suspected that the method might ward off obesity, and a small study suggested as much. The idea is that babies allowed to feed themselves might better learn how to regulate their food intake, letting hunger and fullness guide them to a reasonable calorie count. But a new study that looked at the BMIs of babies who fed themselves and those who didn’t found that babies grew similarly with either eating style.

A clinical trial of about 200 mother-baby pairs in New Zealand tracked two different approaches to eating and their impact on weight. Half of the moms were instructed to feed their babies as they normally would, which for most meant spoon-feeding their babies purees, at least early on. The other half were told to give their babies only breast milk or formula until 6 months of age, and after that, to encourage the babies to feed themselves. These mothers also received breastfeeding support.

At the 1- and 2-year marks, the babies’ average BMI z-scores were similar, regardless of feeding method, researchers report July 10 in JAMA Pediatrics. (A BMI z-score takes age and sex into account.) Baby-led weaning actually produced slightly more overweight babies than the other approach, but the difference was too small to be meaningful. At age 2, 10.3 percent of baby-led weaning babies were considered overweight, compared with 6.4 percent of traditionally fed babies. The two groups of babies seemed to take in about the same energy from food, analyses of the nutritional value and amount of food eaten revealed.
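For readers curious about the statistic: a BMI z-score simply expresses how far a child's BMI sits from the reference average for children of the same age and sex, measured in standard deviations. A minimal sketch, using made-up reference values (real studies pull the mean and standard deviation from published growth-reference tables):

```python
def bmi_z_score(bmi, ref_mean, ref_sd):
    """Standard z-score: distance from the reference mean, in SD units.
    ref_mean and ref_sd come from growth-reference tables for the
    child's exact age and sex (the values used below are hypothetical)."""
    return (bmi - ref_mean) / ref_sd

# Hypothetical reference for a 2-year-old: mean BMI 16.5, SD 1.5
z = bmi_z_score(18.0, ref_mean=16.5, ref_sd=1.5)
# z == 1.0, i.e. one standard deviation above the reference average
```

Because the score is already adjusted for age and sex, averaging z-scores lets researchers compare growth across babies measured at different ages.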

The trial found a few other differences between the two groups. Babies who did baby-led weaning exclusively breastfed for longer, a median of about 22 weeks. Babies in the other group were exclusively breastfed for a median of about 17 weeks. Babies in the baby-led weaning group were also more likely to have held off on solid food until 6 months of age.

While baby-led weaning may not protect babies against being overweight, the study did uncover a few perks of the approach. Parents reported that babies who fed themselves seemed less fussy about foods. These babies also reportedly enjoyed eating more (though my daughter’s prune fake-out face is evidence that babies’ inner opinions can be hard to read). Even so, these data seem to point toward a more positive experience all around when using the baby-led weaning approach. That’s ideal for both experience-hungry babies and the parents who get to savor watching them eat.

Spread of misfolded proteins could trigger type 2 diabetes

Type 2 diabetes and prion disease seem like an odd couple, but they have something in common: clumps of misfolded, damaging proteins.

Now new research finds that a dose of corrupted pancreas proteins induces normal ones to misfold and clump. This raises the possibility that, like prion disease, type 2 diabetes could be triggered by these deformed proteins spreading between cells or even individuals, the researchers say.

When the deformed pancreas proteins were injected into mice without type 2 diabetes, the animals developed symptoms of the disease, including overly high blood sugar levels, the researchers report online August 1 in the Journal of Experimental Medicine.
“It is interesting, albeit not super-surprising” that the deformed proteins could jump-start the process in other mice, says Bruce Verchere, a diabetes researcher at the University of British Columbia in Vancouver. But “before you could say anything about transmissibility of type 2 diabetes, there’s a lot more that needs to be done.”

Beta cells in the pancreas make the glucose-regulating hormone insulin. The cells also produce a hormone called islet amyloid polypeptide, or IAPP. This protein can clump together and damage cells, although how it first goes bad is not clear. The vast majority of people with type 2 diabetes accumulate deposits of misfolded IAPP in the pancreas, and the clumps are implicated in the death of beta cells.

Deposits of misfolded proteins are a hallmark of such neurodegenerative diseases as Alzheimer’s and Parkinson’s as well as prion disorders like Creutzfeldt-Jakob disease (SN: 10/17/15, p. 12).

Since IAPP misfolds like a prion protein, neurologist Claudio Soto of the University of Texas Health Science Center at Houston and his colleagues wondered if type 2 diabetes could be transmitted between cells, or even between individuals. With this paper, Soto says, his group “just wanted to put on the table” that possibility.

The mouse version of the IAPP protein cannot clump — and mice don’t develop type 2 diabetes, a sign that the accumulation of IAPP is important in the development of the disease, says Soto. To study the disease in mice, the animals need to be engineered to produce a human version of IAPP. When pancreas cells containing clumps of misfolded IAPP, taken from an engineered diabetic mouse, were mixed into a dish of healthy human pancreas cells, the misfolded protein triggered the clumping of IAPP in the human cells.
The same was true when non-diabetic mice got a shot made with the diabetic mouse pancreas cells. The non-diabetic mice developed deposits of clumped IAPP that grew over time, and the majority of their beta cells died. While the mice were still alive, more than 70 percent of the animals had blood sugar levels beyond the healthy range.

Soto’s group plans to study whether IAPP could be transmitted in a real-world scenario, such as through a blood transfusion. They’ve already begun transfusing blood from diabetic mice into healthy mice, to see if they can induce the disease. “More work needs to be done to see if this ever operates in real life,” Soto says.

Even if transmission of the misfolded protein occurs only within an individual, “this opens up a lot of opportunities for intervention,” Soto says, “because now you can target the IAPP.”

Verchere also believes IAPP is “a big player” in the progression of type 2 diabetes, and that therapies that prevent the clumps of proteins from forming are needed. Whether or not future research supports the idea that the disease is transmissible, the study is “good for appreciating the potential role of IAPP in diabetes.”