Stars with too much lithium may have stolen it

Something is giving small, pristine stars extra lithium. A dozen newly discovered stars contain more of the element than astronomers can explain.

Some of the newfound stars are earlier in their life cycles than stars previously found with too much lithium, researchers report in the Jan. 10 Astrophysical Journal Letters. Finding young lithium-rich stars could help explain where the extra material comes from without having to tinker with well-accepted stellar evolution rules.

The first stars in the Milky Way formed from the hydrogen, helium and small amounts of lithium that were produced in the Big Bang, so most of this ancient cohort have low lithium levels at the surface (SN: 11/14/15, p. 12). As the stars age, they usually lose even more.
Mysteriously, some aging stars have unusually high amounts of lithium. About a dozen red giant stars — the end-of-life stage for a sunlike star — have been spotted over the last few decades with extra lithium at their surfaces. It’s not enough lithium to explain a different cosmic conundrum, in which the universe overall seems to have a lot less lithium than it should (SN: 10/18/14, p. 15). But it’s enough to confuse astronomers. Red giants usually dredge up material that is light on lithium from their cores, making their surfaces look even more depleted in the element.

Finding lithium-enriched red giants “is not expected from standard models of low-mass star evolution, which is usually regarded as a relatively well-established field in astrophysics,” says astronomer Wako Aoki of the National Astronomical Observatory of Japan in Tokyo. A red giant with lots of lithium must have had a huge amount of the element earlier in its life, or else some fundamental rule of stellar evolution needs a tweak.

Aoki and his colleagues used the Subaru Telescope in Hawaii to find the 12 new lithium-rich stars, all about 0.8 times the mass of the sun. Five of the stars seem to be relatively early in their life cycles — a little older than regular sunlike stars but a little younger than red giants.
That suggests lithium-rich stars somehow picked up the extra lithium early in their lives, though it’s not clear from where. The stars could have stolen material from companion stars, or eaten unfortunate planets (SN: 5/19/01, p. 310). But there are reasons to think they did neither — for one thing, they don’t have an excess of any other elements.

“This is a mystery,” Aoki says.

Further complicating the picture is the possibility that the five youngish stars could be red giants after all, thanks to uncertainties in the measurements of their sizes, says astronomer Evan Kirby of Caltech. Future surveys should check stars that are definitely in the same stage of life as the sun.

Still, the new results are “tantalizing,” Kirby says. “It’s a puzzle that’s been around for almost 40 years now, and so far there’s no explanation that has satisfying observational support.”

Your phone is like a spy in your pocket

Consider everything your smartphone has done for you today. Counted your steps? Deposited a check? Transcribed notes? Navigated you somewhere new?

Smartphones make for such versatile pocket assistants because they’re equipped with a suite of sensors, including some we may never think — or even know — about, sensing, for example, light, humidity, pressure and temperature.

Because smartphones have become essential companions, those sensors probably stayed close by throughout your day: the car cup holder, your desk, the dinner table and nightstand. If you’re like the vast majority of American smartphone users, the phone’s screen may have been black, but the device was probably on the whole time.

“Sensors are finding their ways into every corner of our lives,” says Maryam Mehrnezhad, a computer scientist at Newcastle University in England. That’s a good thing when phones are using their observational dexterity to do our bidding. But the plethora of highly personal information that smartphones are privy to also makes them powerful potential spies.
Online app store Google Play has already discovered apps abusing sensor access. Google recently booted 20 apps from Android phones and its app store because the apps could — without the user’s knowledge — record with the microphone, monitor a phone’s location, take photos, and then extract the data. Stolen photos and sound bites pose obvious privacy invasions. But even seemingly innocuous sensor data can potentially broadcast sensitive information. A smartphone’s movement may reveal what users are typing or disclose their whereabouts. Even barometer readings that subtly shift with increased altitude could give away which floor of a building you’re standing on, suggests Ahmed Al-Haiqi, a security researcher at the National Energy University in Kajang, Malaysia.

These sneaky intrusions may not be happening in real life yet, but concerned researchers in academia and industry are working to head off eventual invasions. Some scientists have designed invasive apps and tested them on volunteers to shine a light on what smartphones can reveal about their owners. Other researchers are building new smartphone security systems to help protect users from myriad real and hypothetical privacy invasions, from stolen PIN codes to stalking.

Message revealed
Motion detectors within smartphones, like the accelerometer and the rotation-sensing gyroscope, could be prime tools for surreptitious data collection. They’re not permission protected — the phone’s user doesn’t have to give a newly installed app permission to access those sensors. So motion detectors are fair game for any app downloaded onto a device, and “lots of vastly different aspects of the environment are imprinted on those signals,” says Mani Srivastava, an engineer at UCLA.

For instance, touching different regions of a screen makes the phone tilt and shift just a tiny bit, but in ways that the phone’s motion sensors pick up, Mehrnezhad and colleagues demonstrated in a study reported online in April 2017 in the International Journal of Information Security. These sensors’ data may “look like nonsense” to the human eye, says Al-Haiqi, but sophisticated computer programs can discern patterns in the mess and match segments of motion data to taps on various areas of the screen.

For the most part, these computer programs are machine-learning algorithms, Al-Haiqi says. Researchers train them to recognize keystrokes by feeding the programs a bunch of motion sensor data labeled with the key tap that produced each movement. A pair of researchers built TouchLogger, an app that collects orientation sensor data and uses the data to deduce taps on smartphones’ number keyboards. In a test on HTC phones, reported in 2011 in San Francisco at the USENIX Workshop on Hot Topics in Security, TouchLogger discerned more than 70 percent of key taps correctly.
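To make that pipeline concrete, here is a minimal sketch of the general recipe, assuming an attacker has already collected short windows of motion-sensor readings labeled with the keys that were tapped. It is not TouchLogger’s code; the synthetic data, summary features and random-forest classifier below are illustrative stand-ins.

```python
# Minimal sketch of keystroke inference from motion-sensor data (not TouchLogger itself).
# Assumes each training example is a short window of accelerometer/gyroscope readings
# recorded while a known key was tapped; the data and feature choices are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for real labeled recordings: 2,000 windows of 50 samples x 6 channels
# (3 accelerometer + 3 gyroscope axes), each labeled with the digit that was tapped.
windows = rng.normal(size=(2000, 50, 6))
labels = rng.integers(0, 10, size=2000)          # digits 0-9 on a number pad

def summarize(window):
    """Collapse a raw sensor window into simple statistical features."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

features = np.array([summarize(w) for w in windows])

X_train, X_test, y_train, y_test = train_test_split(features, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))  # ~10% on random noise; far higher on real taps
```

On real recordings, richer features and per-device calibration would presumably be needed to approach the accuracies reported in these studies.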

Since then, a spate of similar studies has come out, with scientists writing code to infer keystrokes on number and letter keyboards on different kinds of phones. In 2016 in Pervasive and Mobile Computing, Al-Haiqi and colleagues reviewed these studies and concluded that only a snoop’s imagination limits the ways motion data could be translated into key taps. Those keystrokes could divulge everything from the password entered on a banking app to the contents of an e-mail or text message.

A more recent application used a whole fleet of smartphone sensors — including the gyroscope, accelerometer, light sensor and magnetism-measuring magnetometer — to guess PINs. The app analyzed a phone’s movement and how, during typing, the user’s finger blocked the light sensor. When tested on a pool of 50 PINs, the app could discern keystrokes with 99.5 percent accuracy, the researchers reported on the Cryptology ePrint Archive in December.

Other researchers have paired motion data with mic recordings, which can pick up the soft sound of a fingertip tapping a screen. One group designed a malicious app that could masquerade as a simple note-taking tool. When the user tapped on the app’s keyboard, the app covertly recorded both the key input and the simultaneous microphone and gyroscope readings to learn the sound and feel of each keystroke.

The app could even listen in the background when the user entered sensitive info on other apps. When tested on Samsung and HTC phones, the app, presented in the Proceedings of the 2014 ACM Conference on Security and Privacy in Wireless and Mobile Networks, inferred the keystrokes of 100 four-digit PINs with 94 percent accuracy.

Al-Haiqi points out, however, that success rates are mostly from tests of keystroke-deciphering techniques in controlled settings — assuming that users hold their phones a certain way or sit down while typing. How these info-extracting programs fare in a wider range of circumstances remains to be seen. But the answer to whether motion and other sensors would open the door for new privacy invasions is “an obvious yes,” he says.

Tagalong
Motion sensors can also help map a person’s travels, like a subway or bus ride. A trip produces an undercurrent of motion data that’s discernible from shorter-lived, jerkier movements like a phone being pulled from a pocket. Researchers designed an app, described in 2017 in IEEE Transactions on Information Forensics and Security, to extract the data signatures of various subway routes from accelerometer readings.

In experiments with Samsung smartphones on the subway in Nanjing, China, this tracking app picked out which segments of the subway system a user was riding with at least 59, 81 and 88 percent accuracy — improving as the stretches expanded from three to five to seven stations long. Someone who can trace a user’s subway movements might figure out where the traveler lives and works, what shops or bars the person frequents, a daily schedule, or even — if the app is tracking multiple people — who the user meets at various places.
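One plausible way to do that matching, sketched below under the assumption that the attacker holds a library of per-route acceleration “signatures,” is to compare an observed trace against each signature with dynamic time warping. The route names and signals here are made up, and this is not the published system.

```python
# Illustrative sketch of matching a trip's accelerometer trace to stored route
# "signatures" with dynamic time warping; not the published method, and the
# signatures here are synthetic stand-ins.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-time-warping distance between two 1-D traces."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

rng = np.random.default_rng(1)
route_signatures = {                      # hypothetical per-route acceleration profiles
    "line2_stops3_5": rng.normal(size=300),
    "line2_stops5_8": rng.normal(size=450),
    "line1_stops1_4": rng.normal(size=420),
}
observed_trip = route_signatures["line2_stops5_8"] + rng.normal(scale=0.3, size=450)

best = min(route_signatures, key=lambda k: dtw_distance(observed_trip, route_signatures[k]))
print("best-matching route segment:", best)
```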
Accelerometer data can also plot driving routes, as described at the 2012 IEEE International Conference on Communication Systems and Networks in Bangalore, India. Other sensors can be used to track people in more confined spaces: One team synced a smartphone mic and portable speaker to create an on-the-fly sonar system to map movements throughout a house. The team reported the work in the September 2017 Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

“Fortunately there is not anything like [these sensor spying techniques] in real life that we’ve seen yet,” says Selcuk Uluagac, an electrical and computer engineer at Florida International University in Miami. “But this doesn’t mean there isn’t a clear danger out there that we should be protecting ourselves against.”

That’s because the kinds of algorithms that researchers have employed to comb sensor data are getting more advanced and user-friendly all the time, Mehrnezhad says. It’s not just people with Ph.D.s who can design the kinds of privacy invasions that researchers are trying to raise awareness about. Even app developers who don’t understand the inner workings of machine-learning algorithms can easily get this kind of code online to build sensor-sniffing programs.

What’s more, smartphone sensors don’t just provide snooping opportunities for individual cybercrooks who peddle info-stealing software. Legitimate apps often harvest info, such as search engine and app download history, to sell to advertising companies and other third parties. Those third parties could use the information to learn about aspects of a user’s life that the person doesn’t necessarily want to share.

Take a health insurance company. “You may not like them to know if you are a lazy person or you are an active person,” Mehrnezhad says. “Through these motion sensors, which are reporting the amount of activity you’re doing every day, they could easily identify what type of user you are.”

Sensor safeguards
Since it’s only getting easier for an untrusted third party to make private inferences from sensor data, researchers are devising ways to give people more control over what information apps can siphon off of their devices. Some safeguards could appear as standalone apps, whereas others are tools that could be built into future operating system updates.

At the August 2017 USENIX Security Symposium in Vancouver, Uluagac and colleagues proposed a system called 6thSense, which monitors a phone’s sensor activity and alerts its owner to unusual behavior. The user trains this system to recognize the phone’s normal sensor behavior during everyday tasks like calling, Web browsing and driving. Then, 6thSense continually checks the phone’s sensor activity against these learned behaviors.

If someday the program spots something unusual — like the motion sensors reaping data when a user is just sitting and texting — 6thSense alerts the user. Then the user can check if a recently downloaded app is responsible for this suspicious activity and delete the app from the phone.
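The detection step can be thought of as one-class anomaly detection over sensor-usage summaries. The sketch below uses an isolation forest and a made-up feature layout as stand-ins; it is not 6thSense’s own detection model.

```python
# Generic sketch of the idea behind 6thSense-style monitoring: learn what "normal"
# sensor activity looks like, then flag departures from it. The isolation-forest
# detector and the feature layout are assumptions, not the authors' implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

# Each row: per-second usage summary for [accelerometer, gyroscope, light, mic, GPS]
# gathered while the owner does ordinary tasks (calling, browsing, driving).
normal_activity = rng.normal(loc=[1.0, 1.0, 0.5, 0.2, 0.1], scale=0.1, size=(5000, 5))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_activity)

# Later observation: motion sensors streaming heavily while the phone sits idle on a desk.
suspicious = np.array([[5.0, 5.0, 0.5, 0.2, 0.1]])
if detector.predict(suspicious)[0] == -1:
    print("Unusual sensor activity detected; check recently installed apps.")
```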

Uluagac’s team recently tested a prototype of the system: Fifty users trained Samsung smartphones with 6thSense to recognize their typical sensor activity. When the researchers fed the 6thSense system examples of benign data from daily activities mixed in with segments of malicious sensor operations, 6thSense picked out the problematic bits with over 96 percent accuracy.
For people who want more active control over their data, Supriyo Chakraborty, a privacy and security researcher at IBM in Yorktown Heights, N.Y., and colleagues devised DEEProtect, a system that blunts apps’ abilities to draw conclusions about certain user activity from sensor data. People could use DEEProtect, described in a paper posted online at arXiv.org in February 2017, to specify preferences about what apps should be allowed to do with sensor data. For example, someone may want an app to transcribe speech but not identify the speaker.

DEEProtect intercepts whatever raw sensor data an app tries to access and strips that data down to only the features needed to make user-approved inferences. For speech-to-text translation, the phone typically needs sound frequencies and the probabilities of particular words following each other in a sentence.

But sound frequencies could also help a spying app deduce a speaker’s identity. So DEEProtect distorts the dataset before releasing it to the app, leaving information on word orders alone, since that has little or no bearing on speaker identity. Users can control how much DEEProtect changes the data; more distortion begets more privacy but also degrades app functions.
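A toy version of that tunable knob might look like the sketch below, which passes task-relevant features through untouched and adds user-controlled noise to the rest. The feature split and noise model are assumptions for illustration, not DEEProtect’s implementation.

```python
# Sketch of the general idea behind DEEProtect-style data release (not the authors' code):
# keep the features an approved task needs, and add tunable noise to features that would
# support unapproved inferences. The feature split below is a made-up example.
import numpy as np

def release(features, sensitive_idx, distortion=1.0, rng=None):
    """Pass task-relevant features through unchanged; add tunable Laplace noise to the rest."""
    rng = rng or np.random.default_rng()
    out = features.astype(float)
    out[sensitive_idx] += rng.laplace(scale=distortion, size=len(sensitive_idx))
    return out

raw = np.arange(10, dtype=float)            # stand-in for a raw audio feature vector
sensitive = np.array([0, 1, 2, 3, 4, 5])    # e.g., frequency features tied to speaker identity
                                            # remaining indices: e.g., word-order features kept intact

print(release(raw, sensitive, distortion=2.0))   # larger distortion = more privacy, less utility
```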

In another approach, Giuseppe Petracca, a computer scientist and engineer at Penn State, and colleagues are trying to protect users from accidentally granting sensor access to deceitful apps, with a security system called AWare.

Apps have to get user permission upon first installation or first use to access certain sensors like the mic and camera. But people can be cavalier about granting those blanket authorizations, Uluagac says. “People blindly give permission to say, ‘Hey, you can use the camera, you can use the microphone.’ But they don’t really know how the apps are using these sensors.”

Instead of asking permission when a new app is installed, AWare would request user permission for an app to access a certain sensor the first time a user provided a certain input, like pressing a camera button. On top of that, the AWare system memorizes the state of the phone when the user grants that initial permission — the exact appearance of the screen, sensors requested and other information. That way, AWare can tell users if the app later attempts to trick them into granting unintended permissions.
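In spirit, that amounts to binding each grant to a fingerprint of the moment it was made. The sketch below is a toy version of that bookkeeping, with hypothetical app and widget names; it is not the AWare code.

```python
# Toy sketch of the AWare idea (not the actual system): remember the exact context in
# which a sensor permission was granted, and ask again if the same button later comes
# with a different sensor request. All names here are hypothetical.
import hashlib

granted = {}   # maps (app, context fingerprint) -> frozenset of sensors the user approved

def fingerprint(app, widget, screen_layout):
    """Hash the UI state visible to the user when permission is requested."""
    return hashlib.sha256(f"{app}|{widget}|{screen_layout}".encode()).hexdigest()

def request_sensors(app, widget, screen_layout, sensors, ask_user):
    key = (app, fingerprint(app, widget, screen_layout))
    wanted = frozenset(sensors)
    if granted.get(key) == wanted:
        return True                                   # same context, same sensors: no prompt
    if ask_user(app, widget, sensors):                # context changed or sensors added: re-prompt
        granted[key] = wanted
        return True
    return False

# First tap on the camera button: user approves camera access.
request_sensors("photo_app", "camera_button", "viewfinder", {"camera"}, lambda *a: True)
# Later the same button also asks for the mic: the check prompts the user again.
request_sensors("photo_app", "camera_button", "viewfinder", {"camera", "mic"},
                lambda app, w, s: print(f"{app} now wants {sorted(s)} - allow?") or False)
```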

For instance, Petracca and colleagues imagine a crafty data-stealing app that asks for camera access when the user first pushes a camera button, but then also tries to access the mic when the user later pushes that same button. The AWare system, also presented at the 2017 USENIX Security Symposium, would realize the mic access wasn’t part of the initial deal, and would ask the user again if he or she would like to grant this additional permission.

Petracca and colleagues found that people using Nexus smartphones equipped with AWare avoided unwanted authorizations about 93 percent of the time, compared with 9 percent among people using smartphones with typical first-use or install-time permission policies.

The price of privacy
The Android security team at Google is also trying to mitigate the privacy risks posed by app sensor data collection. Android security engineer Rene Mayrhofer and colleagues are keeping tabs on the latest security studies coming out of academia, he says.

But just because someone has built and successfully tested a prototype of a new smartphone security system doesn’t mean it will show up in future operating system updates. Android hasn’t incorporated proposed sensor safeguards because the security team is still looking for a protocol that strikes the right balance between restricting access for nefarious apps and not stunting the functions of trustworthy programs, Mayrhofer explains.

“The whole [app] ecosystem is so big, and there are so many different apps out there that have a totally legitimate purpose,” he adds. Any kind of new security system that curbs apps’ sensor access presents “a real risk of breaking” legitimate apps.

Tech companies may also be reluctant to adopt additional security measures because these extra protections can come at the cost of user friendliness, like AWare’s additional permissions pop-ups. There’s an inherent trade-off between security and convenience, UCLA’s Srivastava says. “You’re never going to have this magical sensor shield [that] gives you this perfect balance of privacy and utility.”

But as sensors get more pervasive and powerful, and algorithms for analyzing the data become more astute, even smartphone vendors may eventually concede that the current sensor protections aren’t cutting it. “It’s like cat and mouse,” Al-Haiqi says. “Attacks will improve, solutions will improve. Attacks will improve, solutions will improve.”

The game will continue, Chakraborty agrees. “I don’t think we’ll get to a place where we can declare a winner and go home.”

New device can transmit underwater sound to air

Don’t expect to play a game of Marco Polo by shouting from beneath the pool’s surface. No one will hear you because, normally, only about 0.1 percent of sound is transmitted from water to the air. But a new type of device might one day help.

Researchers have designed a new metamaterial — a type of material that behaves in ways conventional materials can’t — that increases sound transmission to 30 percent. The metamaterial could have applications for more than poolside play. A future version might be used to detect noisy marine life or listen in on sonar use, say applied physicist Oliver Wright of Hokkaido University in Sapporo, Japan, and a team at Yonsei University in Seoul, South Korea, who describe the metamaterial in a paper accepted to Physical Review Letters.
Currently, detection of underwater sounds happens with hydrophones, which have to be underwater. But what if you wanted to listen in from the surface?

Enter the new device. It’s a small cylinder with a weighted rubber membrane stretched across a metal frame that floats atop the water surface. When underwater sound waves hit the device, its frame and membrane vibrate at finely tuned frequencies to help sound transmit into the air.

“A ‘hard’ surface like a table or water reflects almost 100 percent of sound,” says Wright. “We want to try to mitigate that by introducing an intermediary structure.”
Both water and air resist the flow of sound, a property known as acoustic impedance. Because of its density, water’s acoustic impedance is 3,600 times that of air. The greater the mismatch, the more sound is reflected at a boundary.
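That mismatch is enough to account for the roughly 0.1 percent figure quoted above. For a sound wave hitting the boundary head-on, the textbook intensity transmission coefficient, written here with water’s impedance taken as 3,600 times air’s, works out to about one part in a thousand:

```latex
% Intensity transmission at normal incidence across the water-air boundary,
% with Z_w \approx 3600\,Z_a:
T = \frac{4\,Z_w Z_a}{\left(Z_w + Z_a\right)^{2}}
  \approx \frac{4 \times 3600}{(3600 + 1)^{2}}
  \approx 1.1 \times 10^{-3}
```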
Adding a layer of material one-fourth as thick as the incoming wave’s wavelength can reduce the amount of reflection. This is the principle at work behind the anti-reflective coatings applied to camera lenses and eyeglasses. While visible light has wavelengths of a few hundred nanometers, calling for a coating only around a hundred nanometers thick, audible sound waves can be meters long.
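For reference, the textbook matching recipe behind such coatings is a quarter-wavelength layer whose impedance is the geometric mean of the two media; the new device keeps the impedance-bridging idea but sidesteps the quarter-wavelength thickness requirement:

```latex
% Classic anti-reflection (impedance-matching) layer between media with impedances Z_w and Z_a:
d = \frac{\lambda}{4}, \qquad Z_{\text{layer}} = \sqrt{Z_w\,Z_a}
```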

Even though it’s only one-hundredth the thickness of the sound’s wavelength, instead of the conventional one-fourth, the metamaterial still transmits sound.

“It’s a tour de force of experimental demonstration,” says Oleg Godin, a physicist at the Naval Postgraduate School in Monterey, Calif., who was not involved with the research. “But I’m less impressed by the suggestions and implications about its uses. It’s wishful thinking.”

One major problem that the researchers would have to overcome is the device’s inability to transmit sound that hits the surface at an angle. In the lab, the device is tested in a tube — effectively a one-direction environment. But on the vast surface of a lake or ocean, the device would be limited to transmitting sounds from the small area directly below it. Additionally, the metamaterial is limited to transmitting a narrow band of frequencies. Noise outside that range reflects off the water’s surface as usual.

Still, the scientists are optimistic about the next steps, and even propose that a sheet of these devices could work in concert.

Somewhere in the brain is a storage device for memories

People tend to think of memories as deeply personal, ephemeral possessions — snippets of emotions, words, colors and smells stitched into our unique neural tapestries as life goes on. But a strange series of experiments conducted decades ago offered a different, more tangible perspective. The mind-bending results have gained unexpected support from recent studies.

In 1959, James Vernon McConnell, a psychologist at the University of Michigan in Ann Arbor, painstakingly trained small flatworms called planarians to associate a shock with a light. The worms remembered this lesson, later contracting their bodies in response to the light.
One weird and wonderful thing about planarians is that they can regenerate their bodies — including their brains. When the trained flatworms were cut in half, they regrew either a head or a tail, depending on which piece had been lost. Not surprisingly, worms that kept their heads and regrew tails retained the memory of the shock, McConnell found. Astonishingly, so did the worms that grew replacement heads and brains. Somehow, these fully operational, complex arrangements of brand-spanking-new nerve cells had acquired the memory of the painful shock, McConnell reported.

In subsequent experiments, McConnell went even further, attempting to transfer memory from one worm to another. He tried grafting the head of a trained worm onto the tail of an untrained worm, but he couldn’t get the head to stick. He injected trained planarian slurry into untrained worms, but the recipients often exploded. Finally, he ground up bits of the trained planarians and fed them to untrained worms. Sure enough, after their meal, the untrained worms seemed to have traces of the memory — the cannibals recoiled at the light.

The implications were bizarre, and potentially profound: Lurking in that pungent planarian puree must be a substance that allowed animals to literally eat one another’s memories.

These outlandish experiments aimed to answer a question that had been needling scientists for decades: What is the physical basis of memory? Somehow, memories get etched into cells, forming a physical trace that researchers call an “engram.” But the nature of these stable, specific imprints is a mystery.
Today, McConnell’s memory transfer episode has largely faded from scientific conversation. But developmental biologist Michael Levin of Tufts University in Medford, Mass., and a handful of other researchers wonder if McConnell was onto something. They have begun revisiting those historical experiments in the ongoing hunt for the engram.

Applying powerful tools to the engram search, scientists are already challenging some widely held ideas about how memories are stored in the brain. New insights haven’t yet revealed the identity of the physical basis of memory, though. Scientists are chasing a wide range of possibilities. Some ideas are backed by strong evidence; others are still just hunches. In pursuit of the engram, some researchers have even searched for clues in memories that persist in brains that go through massive reorganization.

Synapse skeptics
One of today’s most entrenched explanations puts engrams squarely within the synapses, connections where chemical and electrical messages move between nerve cells, or neurons. These contact points are strengthened when bulges called synaptic boutons grow at the ends of message-sending axons and when hairlike protrusions called spines decorate message-receiving dendrites.

In the 1970s, researchers described evidence that as an animal learned something, these neural connections bulked up, forming more contact points of synaptic boutons and dendritic spines. With stronger connection points, cells could fire in tandem when the memory needed to be recalled. Stronger connections mean stronger memory, as the theory goes.

This process of bulking up, called long-term potentiation, or LTP, was thought to offer an excellent explanation for the physical basis of memory, says David Glanzman, a neuroscientist at UCLA who has studied learning and memory since 1980. “Until a few years ago, I implicitly accepted this model for memory,” he says. “I don’t anymore.”

Glanzman has good reason to be skeptical. In his lab, he studies the sea slug Aplysia, an organism that he first encountered as a postdoctoral student in the Columbia University lab of Nobel Prize–winning neuroscientist Eric Kandel. When shocked in the tail, these slugs protectively withdraw their siphon and gill. After multiple shocks, the animals become sensitized and withdraw faster. (The distinction between learning and remembering is hazy, so researchers often treat the two as similar actions.)

After neurons in the sea slugs had formed the memory of the shock, the researchers saw stronger synapses between the nerve cells that sense the shock and the ones that command the siphon and gill to move. When the memory was made, the synapse sprouted more message-sending bumps, Glanzman’s team reported in eLife in 2014. And when the memory was weakened with a drug that prevents new proteins from being made, some of these bumps disappeared. “That made perfect sense,” Glanzman says. “We expected the synaptic growth to revert and go back to the original, nonlearned state. And it did.”

But the answer to the next logical question was a curve ball. If the memory was stored in bulked-up synapses, as researchers thought, then the new contact points that appear when a memory is formed should be the same ones that vanish when the memory is subsequently lost, Glanzman reasoned. That’s not what happened — not even close. “We found that it was totally random,” Glanzman says. “Completely random.”
Research from Susumu Tonegawa, a Nobel Prize–winning neuroscientist at MIT, turned up a result in mice that was just as startling. His project relies on sophisticated tools, including optogenetics. With this technique, the scientists activated specific memory-storing neurons with light. The researchers call those memory-storing cells “engram cells.”

In a parallel to the sea slug experiment, Tonegawa’s team conditioned mice to fear a particular cage and genetically marked the cells that somehow store that fear memory. After the mice learned the association, the researchers saw more dendritic spines in the neurons that store the memory — evidence of stronger synapses.

When the researchers caused amnesia with a drug, that newly formed synaptic muscle went away. “We wiped out the LTP, completely wiped it out,” says neuroscientist Tomás Ryan, who conducted the study in Tonegawa’s lab and reported the results with Tonegawa and colleagues in 2015 in Science.

And yet the memory wasn’t lost. With laser light, the researchers could still activate the engram cells — and the memory they somehow still held. That means that the memory was stored in something that isn’t related to the strength of the synapses.

The surprising results suggest that researchers may have been sidetracked, focusing too hard on synaptic strength as a memory storage system, says Ryan, now at Trinity College Dublin. “That approach has produced about 12,000 or so papers on the topic, but it hasn’t been very successful in explaining how memory works.”

Silent memories
The finding that memory storage and synaptic strength aren’t always tied together raises an important distinction that may help untangle the research. The cellular machines that are required for calling up memories are not necessarily the same machines that store memories. The search for what stores memories may have been muddled with results on how cells naturally call up memories.

Ryan, Tonegawa and Glanzman all think that LTP, with its bulked-up synapses, is important for retrieving memories, but not the thing that actually stores them. It’s quite possible to have stored memories that aren’t readily accessible, what Glanzman calls “occult memories” and Tonegawa refers to as “silent engrams.” Both think the concept applies more broadly than in just the sea slugs and mice they study.

Glanzman explains the situation by considering a talented violin player. “If you cut off my hands, I’m not able to play the violin,” he says. “But it doesn’t mean that I don’t know how to play the violin.” The analogy is overly simple, he says, “but that’s how I think of synapses. They enable the memory to be expressed, but they are not where the memory is.”

In a paper published last October in Proceedings of the National Academy of Sciences, Tonegawa and colleagues created silent engrams of a cage in which mice received a shock. The mice didn’t seem to remember the room paired with the shock, suggesting that the memory was silent. But the memory could be called up after a genetic tweak beefed up synaptic connections by boosting the number of synaptic spines specifically among the neurons that stored the memory.

Those results add weight to the idea that synaptic strength is crucial for memory recall, but not storage, and they also hint that, somehow, the brain stores many inaccessible memory traces. Tonegawa suspects that these silent engrams are quite common.

Finding and reactivating silent engrams “tells us quite a bit about how memory works,” Tonegawa says. “Memory is not always active for you. You learn it, you store it,” but depending on the context, it might slip quietly into the brain and remain silent, he says. Consider an old memory from high school that suddenly pops up. “Something triggered you to recall — something very specific — and that probably involves the conversion of a silent engram to an active engram,” Tonegawa says.

But engrams, silent or active, must still be holding memory information somehow. Tonegawa thinks that this information is stored not in synapses’ strength, but in synapses’ very existence. When a specific memory is formed, new connections are quickly forged, creating anatomical bridges between constellations of cells, he suspects. “That defines the content of memory,” he says. “That is the substrate.”

These newly formed synapses can then be beefed up, leading to the memory burbling up as an active engram, or pared down and weakened, leading to a silent engram. Tonegawa says this idea requires less energy than the LTP model, which holds that memory storage requires constantly revved up synapses full of numerous contact points. Synapse existence, he argues, can hold memory in a latent, low-maintenance state.
“The brain doesn’t even have to recognize that it’s a memory,” says Ryan, who shares this view. Stored in the anatomy of arrays of neurons, memory is “in the shape of the brain itself,” he says.

A temporary vessel
Tonegawa is confident that the very existence of physical links between neurons stores memories. But other researchers have their own notions.

Back in the 1950s, McConnell suspected that RNA, cellular material that can help carry out genetic instructions but can also carry information itself, might somehow store memories.

This unorthodox idea, that RNA is involved in memory storage, has at least one modern-day supporter in Glanzman, who plans to present preliminary data at a meeting in April that suggest injections of RNA can transfer memory between sea slugs.

Glanzman thinks that RNA is a temporary storage vessel for memories, though. The real engram, he suggests, is the folding pattern of DNA in cells’ nuclei. Changes to how tightly DNA is packed can govern how genes are deployed. Those changes, part of what’s known as the epigenetic code, can be made — and even transferred — by roving RNA molecules, Glanzman argues. He is quick to point out that his idea, memory transfer by RNA, is radical. “I don’t think you could find another card-carrying Ph.D. neuroscientist who believes that.”

Other researchers, including neurobiologist David Sweatt of Vanderbilt University in Nashville, also suspect that long-lasting epigenetic changes to DNA hold memories, an idea Sweatt has been pursuing for 20 years. Because epigenetic changes can be stable, “they possess the unique attribute necessary to contribute to the engram,” he says.

And still more engram ideas abound. Some results suggest that a protein called PKM-zeta, which helps keep synapses strong, preserves memories. Other evidence suggests a role for structures called perineuronal nets, rigid sheaths that wrap around neurons. Holes in these nets allow synapses to peek through, solidifying memories, the reasoning goes (SN: 11/14/15, p. 8). A different line of research focuses on proteins that incite others to misfold and aggregate around synapses, strengthening memories. Levin, at Tufts, has his own take. He thinks that bioelectrical signals, detected by voltage-sensing proteins on the outside of cells, can store memories, though he has no evidence yet.

Beyond the brain
Levin’s work on planarians, reminiscent of McConnell’s cannibal research, may even prod memory researchers to think beyond the brain. Planarians can remember the texture of their terrain, even using a new brain, Levin and Tal Shomrat, now at the Ruppin Academic Center in Mikhmoret, Israel, reported in 2013 in the Journal of Experimental Biology. The fact that memory somehow survived decapitation hints that signals outside of the brain may somehow store memories, even if temporarily.

Memory clues may also come from other animals that undergo extreme brain modification over their lifetimes. As caterpillars transition to moths, their brains change dramatically. But a moth that had learned as a caterpillar to avoid a certain odor paired with a shock holds onto that information, despite having a radically different brain, researchers have found.

Similar results come from mammals, such as the Arctic ground squirrel, which massively prunes its synaptic connections to save energy during torpor in the winter. Within hours of the squirrel waking up, the pruned synaptic connections grow back. Remarkably, some old memories seem to survive the experience. The squirrels have been shown to remember familiar squirrels, as well as how to perform motor feats such as jumping between boxes and crawling through tubes. Even human babies retain taste and sound memories from their time in the womb, despite a very changed brain.

These extreme cases of memory persistence raise lots of basic questions about the nature of the engram, including whether memories must always be stored in the brain. But it’s important that the engram search isn’t restricted to the most advanced forms of humanlike memory, Levin says. “Memory starts, evolutionarily, very early on,” he says. “Single-celled organisms, bacteria, slime molds, fish, these things all have memory.… They have the ability to alter their future behavior based on what’s happened before.”

The diversity of ideas — and of experimental approaches — highlights just how unsettled the engram question remains. After decades of work, the field is still young. Even if the physical identity of the engram is eventually discovered and universally agreed on, a bigger question still looms, Levin says.

“The whole point of memory is that you should be able to look at the engram and immediately know what it means,” he says. Somehow, the physical message of a memory, in whatever form it ultimately takes, must be translated into the experience of a memory by the brain. But no one has a clue how this occurs.

At a 2016 workshop, a small group of researchers gathered to discuss engram ideas that move beyond synapse strength. “All the rebels came together,” Glanzman says. The two-day debate didn’t settle anything, but it was “very valuable,” says Ryan, who also attended. He coauthored a summary of the discussion that appeared in the May 2017 Annals of the New York Academy of Sciences. “Because the mind is part of the natural world, there is no reason to believe that it will be any less tangible and ultimately comprehensible than other components,” Ryan and coauthors optimistically wrote.

For now, the field hasn’t been able to explain memories in tangible terms. But research is moving forward, in part because of its deep implications. The hunt for memories gets at the very nature of identity, Levin says. “What does it mean to be a coherent individual that has a coherent bundle of memories?” The elusive identity of the engram may prove key to answering that question.