Lakers vs. Warriors final score, results: Golden State forces Game 6 as Anthony Davis suffers head injury

Faced with a do-or-die situation in Game 5, the Warriors came up big.

A 121-106 win at Chase Center kept Golden State's season alive as they successfully avoided elimination at the hands of the Lakers. The series will now head back down to Los Angeles for Game 6, where LeBron James and Co. will have another chance to punch their ticket to the Western Conference Finals.

Stephen Curry led the way for the Warriors with 27 points and eight assists. Andrew Wiggins finished with 25 points and seven rebounds, while Draymond Green had one of his best games of the postseason with 20 points and 10 rebounds.

A bad night got even worse for LA when Anthony Davis was forced to leave the game late in the fourth quarter. He appeared to take a shot to the face from Golden State's Kevon Looney, and TNT's Chris Haynes reported that he was taken down the tunnel in a wheelchair. The Lakers now face an anxious wait to see whether he'll be good to go for a massive Game 6.

The Sporting News was tracking all the key moments as the Warriors defeated the Lakers in Game 5 of the Western Conference semifinals:

Lakers vs. Warriors score
Team Q1 Q2 Q3 Q4 Final
Lakers 28 31 23 24 106
Warriors 32 38 23 28 121
Lakers vs. Warriors live score, updates, highlights from Game 5
12:31 a.m. FINAL — The final buzzer rings out, and we're headed to a Game 6. Curry finishes with 27 points, Wiggins with 25 and Draymond Green with 20.

12:27 a.m. — Darvin Ham has raised the white flag and sent in his reserves. Golden State is going to get the win, and their season is going to continue. An excellent performance by the Warriors tonight with their backs against the wall.

12:24 a.m. — Both teams continue to trade blows, but with time ticking down, the Warriors look like they're going to cruise to a win here. Davis still hasn't returned to the court, and it sounds like he may be dealing with some dizziness and vision difficulties. He'll probably be sidelined for the rest of the game.

12:19 a.m. — Steph with a big shot! A triple from the corner extends the lead back to 14 points! It's Warriors 109, Lakers 95 as we enter the final minutes.

12:16 a.m. — Draymond gets the crowd on its feet with a nice jumper, but Austin Reaves answers at the other end with a three from way downtown! The Lakers have chipped away and the lead is now down to just nine points with 5:25 left.
12:14 a.m. — Davis is having to head down the tunnel and towards the locker room after that injury. TNT's Chris Haynes reported he looked a little shaky on his feet and needed some help to stay upright. Let's hope he's OK.

12:09 a.m. — Gary Payton II finishes with the hoop and harm over LeBron, and the crowd is loving it. To add insult to injury for the Lakers, Anthony Davis appeared to take an elbow to the face on the other end. He appears to be in significant discomfort, and he is forced to head to the bench.

12:03 a.m. — Any momentum the Lakers may have had has quickly vanished early in the fourth quarter. Curry drains a pull-up jumper with the shot clock winding down, then Wiggins converts on a running floater in the lane to stretch the lead back to 15 points. LA is running out of time here.
11:55 p.m. END OF THIRD QUARTER — The Lakers use a mini-run to cut into the lead slightly. LeBron converts on a layup with time winding down in the quarter, and we enter the final frame with Golden State up 93-82. James appeared to land on the foot of Wiggins on that last shot, and he was grimacing a little bit as he walked away. Something to keep an eye on.

11:49 p.m. — With the third quarter winding down, the Warriors are showing no signs of letting up. Curry just blew right by three defenders for an easy layup, and once again Darvin Ham has used a timeout to try and spark something from his team.

11:41 p.m. — How about Draymond Green in this game? He's been sensational so far, racking up 18 points on 6 of 10 shooting from the field. He just converted on another layup to make it 85-70, Warriors.

11:32 p.m. — The Lakers are off to a terrible start in this half, and in the blink of an eye the Warriors have stretched their lead to 18 points! Wiggins caps off a 9-2 Golden State flurry with a one-handed putback slam and Darvin Ham takes a timeout to stop the bleeding. That could be a huge momentum swing in this game.
11:27 p.m. START OF SECOND HALF — And away we go in the third quarter. Can the Warriors hold off the Lakers to stay alive?

11:20 p.m. — Davis leads all scorers with 18 points at the half while Wiggins leads Golden State with 16. James has 17 and Curry has 12, including that buzzer-beater to make it an eleven-point game.

11:11 p.m. END OF FIRST HALF — Stephen Curry lights up Chase Center with a three to beat the buzzer! That's just his second trey of the night, but it sends the Warriors into the locker room with a 70-59 lead! They ended the half on a 16-5 run to take control of Game 5.
11:03 p.m. — We knew a run was coming from one of these teams, and this time it has come from the Warriors! Poole connects from deep, then Wiggins follows it up with a triple of his own. After a Lakers timeout, the home team leads 64-56 with less than two minutes left in the half.

10:56 p.m. — LeBron isn't cooling off, and he drives for a layup then knocks down a three moments later to tie things up at 50 apiece. Back and forth we go.

10:51 p.m. — Andrew Wiggins gets a bucket and a foul, then does it again less than 40 seconds later! That pair of three-point plays puts the Warriors back in the lead by five with seven minutes remaining in the first half.

10:45 p.m. — Now LeBron is starting to get going! He buries a pair of three-pointers to take his tally to 12 points on the night and give the Lakers the lead. After he sinks a pair of free throws, it's 41-40, LA.
10:37 p.m. END OF FIRST QUARTER — Whew, time to catch your breath! A Jordan Poole floater with six seconds left on the clock has made it 32-28 Warriors at the end of the first quarter. They could really use a good performance from him tonight. If the game continues like this, we're in for a treat.

10:35 p.m. — This game has been fast-paced and a lot of fun so far. Davis continues to fill it up and he's up to 13 points as we near the end of the first quarter. But 20-year-old Moses Moody has knocked down a pair of threes to keep Golden State's lead intact. They're up 30-26 with just over a minute left in the period.

10:27 p.m. — But here come the Lakers! Anthony Davis is getting himself involved, and his putback dunk cuts the Warriors' lead to just five points. He has nine points already in the early going.

10:20 p.m. — This has been one heck of a start by the Warriors. Gary Payton II drains a three, Draymond converts on another layup and then Stephen Curry opens his account for the night with a three from way downtown. The home team is out to a 17-5 lead less than five minutes into the game.
10:17 p.m. — Draymond Green is off to a fast start! He buries a three to get the Warriors on the board, and his layup through contact draws a foul and leads to a three-point play. Golden State leads 9-3 early.

10:12 p.m. — And there's the opening tip. We are underway in San Francisco.

10:07 p.m. — Knicks-Heat just wrapped up, meaning Warriors-Lakers is up next on TNT. Can Golden State do what New York did and stave off elimination at home in Game 5?

9:59 p.m. — Steph was doing Steph things in pregame warmups.
9:52 p.m. — No surprises from the Lakers with their starting lineup.
9:46 p.m. — For the second game in a row, Gary Payton II gets the start for Golden State.
What channel is Lakers vs. Warriors on?
Date: Wednesday, May 10
TV channel: TNT
Live streaming: Sling TV
Lakers vs. Warriors will air on TNT. Viewers can also stream the game on Sling TV.


What time is Lakers vs. Warriors tonight?
Date: Wednesday, May 10
Time: 10 p.m. ET | 7 p.m. PT
Lakers vs. Warriors will tip off around 10 p.m. ET (7 p.m. local time) on Wednesday, May 10. The game will be played at the Chase Center in San Francisco.

Lakers vs. Warriors odds
Golden State is a 7.5-point favorite heading into Game 5.

            Warriors   Lakers
Spread      -7.5       +7.5
Moneyline   -350       +260
For the full market, check out BetMGM.

Lakers vs. Warriors schedule
Here is the complete schedule for the second-round series between Los Angeles and Golden State:

Date Game Time (ET) TV channel
May 2 Lakers 117, Warriors 112 10 p.m. TNT
May 4 Warriors 127, Lakers 100 9 p.m. ESPN
May 6 Lakers 127, Warriors 97 8:30 p.m. ABC
May 8 Lakers 104, Warriors 101 10 p.m. TNT
May 10 Game 5 10 p.m. TNT
May 12 Game 6 TBD ESPN
May 14 Game 7* TBD ABC
*If necessary

Mysterious high-energy particles could come from black hole jets

It’s three for the price of one. A trio of mysterious high-energy particles could all have the same source: active black holes embedded in galaxy clusters, researchers suggest January 22 in Nature Physics.

Scientists have been unable to figure out the origins of the three types of particles — gamma rays that give a background glow to the universe, cosmic neutrinos and ultrahigh energy cosmic rays. Each carries a huge amount of energy, from about a billion electron volts for a gamma ray to 100 billion billion electron volts for some cosmic rays.
Strangely, each particle type seems to contribute the same total amount of energy to the universe as the other two. That’s a clue that all three may be powered by the same engine, says physicist Kohta Murase of Penn State.
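
For scale, converting those figures out of particle-physics units is a quick back-of-the-envelope exercise (an illustrative calculation, not from the paper), using 1 eV ≈ 1.602 × 10⁻¹⁹ joules:

```latex
% Rough conversion of the quoted energy range into joules (illustrative)
E_{\gamma} \approx 10^{9}\,\mathrm{eV} \times 1.602\times10^{-19}\,\mathrm{J/eV} \approx 1.6\times10^{-10}\,\mathrm{J}
E_{\mathrm{UHECR}} \approx 10^{20}\,\mathrm{eV} \times 1.602\times10^{-19}\,\mathrm{J/eV} \approx 16\,\mathrm{J}
```

In other words, the most energetic cosmic rays each carry a macroscopic amount of energy, comparable to that of a gently tossed ball, concentrated in a single subatomic particle.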

“We can explain the data of these three messengers with one single picture,” Murase says.

First, a black hole accelerates charged particles to extreme energies in a powerful jet (SN: 9/16/17, p. 16). These jets “are one of the most promising candidate sources of ultrahigh energy cosmic rays,” Murase says. The most energetic cosmic rays escape the jet and immediately plow through a sea of magnetized gas within the galaxy cluster.

Some rays escape the gas as well and zip towards Earth. But less energetic rays are trapped in the cluster for up to a billion years. There, they interact with the gas and create high-energy neutrinos that then escape the cluster.
Meanwhile, the cosmic rays that escaped travel through intergalactic space and interact with photons to produce the glow of gamma rays.

Murase and astrophysicist Ke Fang of the University of Maryland in College Park found that computer simulations of this scenario lined up with observations of how many cosmic rays, neutrinos and gamma rays reached Earth.

“It’s a nice piece of unification of many ideas,” says physicist Francis Halzen of the IceCube Neutrino Observatory in Antarctica, where the highest energy neutrinos have been observed.

There are other possible sources for the particles — for one, IceCube has already traced an especially high-energy neutrino to a single active black hole that may not be in a cluster (SN Online: 4/7/16). The observatory could eventually trace neutrinos back to galaxy clusters. “That’s the ultimate test,” Halzen says. “This could be tomorrow, could be God knows when.”

Here’s the key ingredient that lets a centipede’s bite take down prey

Knocking out an animal 15 times your size — no problem. A newly identified toxin in the venom of a tropical centipede helps the arthropod to overpower giant prey in about 30 seconds.

Insight into how this venom overwhelms lab mice could lead to an antidote for people who suffer excruciatingly painful, reportedly even fatal, centipede bites, an international research team reports the week of January 22 in Proceedings of the National Academy of Sciences.

In Hawaii, centipede bites account for about 400 emergency room visits a year, according to data from 2004 to 2008. The main threat there is Scolopendra subspinipes, an agile species almost as long as a human hand.
The subspecies S. subspinipes mutilans starred in studies at the Kunming Institute of Zoology in China and collaborating labs. Researchers there found a small peptide, now named “spooky toxin,” largely responsible for venom misery.

This toxin blocks a molecular channel that normally lets potassium flow through cell membranes. A huge amount of the biochemistry of staying alive involves potassium, so clogging some of what are called KCNQ channels caused mayhem in mice: slow and gasping breath, high blood pressure, frizzling nerve dysfunctions and so on. Administering the epilepsy drug retigabine opened the potassium channels and counteracted much of the toxin’s effects, raising hopes of a treatment for these bites.

New technique could help spot snooping drones

Now there’s a way to tell if a drone is spying on someone.

Researchers have devised a method to tell what a drone is recording — without having to decrypt the video data that the device streams to the pilot’s smartphone. This technique, described January 9 at arXiv.org, could help military bases detect unwanted surveillance and civilians protect their privacy as more commercial drones take to the skies.

“People have already worked on detecting [the presence of] drones, but no one had solved the problem of, ‘Is the drone actually recording something in my direction?’” says Ahmad Javaid, a cybersecurity researcher at the University of Toledo in Ohio, who was not involved in the work.
Ben Nassi, a software engineer at Ben-Gurion University of the Negev in Israel, and colleagues realized that changing the appearance of objects in a drone’s field of view influences the stream of encrypted data the drone sends to its smartphone controller. That’s because the more pixels that change from one video frame to the next, the more data bits the drone sends per second. So rapidly switching the appearance of a person or house and seeing whether those alterations correspond to higher drone-to-phone Wi-Fi traffic can reveal whether a drone is snooping.
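
To make that correlation idea concrete, here is a minimal Python sketch of the kind of check the researchers describe, assuming you have already captured per-second traffic volumes from the drone's Wi-Fi channel and know when the visual stimulus was flickering; the function name, data and threshold are illustrative placeholders, not anything from the paper:

```python
# Hypothetical sketch: does encrypted drone traffic rise when a target flickers?
# Assumes per-second byte counts sniffed from the drone's Wi-Fi channel and a
# record of when the flicker stimulus was active. Threshold is illustrative.

from statistics import mean

def traffic_spikes_with_stimulus(byte_counts, stimulus_on, ratio_threshold=1.5):
    """byte_counts: bytes/second of the drone's encrypted video stream.
    stimulus_on: booleans of the same length, True while the target flickers.
    Returns True if traffic during flicker intervals is markedly higher,
    suggesting the camera is pointed at the flickering target."""
    on = [b for b, s in zip(byte_counts, stimulus_on) if s]
    off = [b for b, s in zip(byte_counts, stimulus_on) if not s]
    if not on or not off or mean(off) == 0:
        return False
    return mean(on) / mean(off) > ratio_threshold

# Made-up example: bitrate jumps whenever the smart film flickers
counts  = [90, 95, 160, 170, 92, 88, 165, 158]             # KB per second
flicker = [False, False, True, True, False, False, True, True]
print(traffic_spikes_with_stimulus(counts, flicker))       # True -> likely being filmed
```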

Nassi’s team tested this idea by covering a house window with a smart film that could switch between transparent and nearly opaque, and aiming a drone with a video camera at the window from 40 meters away. Every two seconds, the researchers either flickered the smart film back and forth or left it transparent. They pointed a radio frequency scanner at the drone to intercept its outgoing Wi-Fi signals and found that its traffic spiked whenever the smart film flickered.

For people without such radio equipment, it’s also possible to intercept Wi-Fi signals with a laptop or computer with a wireless card, says Simon Birnbach, a computer scientist at the University of Oxford not involved in the work.

In another test, a drone recorded someone wearing a strand of LED lights from about 20 meters’ distance. At five-second intervals, the person either flipped the LED lights on and off, or left them off. The drone camera’s data stream peaked whenever the LED lights flickered.

This strategy to discern a drone camera’s target is “a very cool idea,” says Thomas Ristenpart, a computer scientist at Cornell University not involved in the work. But the researchers need to test whether the method works in a wider range of settings and find ways to alter a drone’s view without cumbersome equipment, he says. “I don’t think anyone is going to want to wear a [light-up] shirt on the off chance a drone may fly by.”

Javaid agrees that this prototype system must be made more user-friendly to achieve widespread use. For home security, he imagines a small device stuck to a window that flashes a light and intercepts a drone’s Wi-Fi signals whenever it detects one nearby. The device could alert the homeowner if a drone is found scoping out the house.

Still, identifying a nosy drone may not always be enough to know who’s flying it. “It’s sort of the equivalent of knowing that an unmarked van pulled up and waited outside of your house,” says Drew Davidson, a computer scientist at Tala Security, Inc. in Dallas, who was not involved in the study. “Better to know than not, but not exactly enough for the police to find a suspect.”

Stars with too much lithium may have stolen it

Something is giving small, pristine stars extra lithium. A dozen newly discovered stars contain more of the element than astronomers can explain.

Some of the newfound stars are earlier in their life cycles than stars previously found with too much lithium, researchers report in the Jan. 10 Astrophysical Journal Letters. Finding young lithium-rich stars could help explain where the extra material comes from without having to tinker with well-accepted stellar evolution rules.

The first stars in the Milky Way formed from the hydrogen, helium and small amounts of lithium that were produced in the Big Bang, so most of this ancient cohort have low lithium levels at the surface (SN: 11/14/15, p. 12). As the stars age, they usually lose even more.
Mysteriously, some aging stars have unusually high amounts of lithium. About a dozen red giant stars — the end-of-life stage for a sunlike star — have been spotted over the last few decades with extra lithium at their surfaces. It’s not enough lithium to explain a different cosmic conundrum, in which the universe overall seems to have a lot less lithium than it should (SN: 10/18/14, p. 15). But it’s enough to confuse astronomers. Red giants usually dredge up material that is light on lithium from their cores, making their surfaces look even more depleted in the element.

Finding lithium-enriched red giants “is not expected from standard models of low-mass star evolution, which is usually regarded as a relatively well-established field in astrophysics,” says astronomer Wako Aoki of the National Astronomical Observatory of Japan in Tokyo. A red giant with lots of lithium must either have held an enormous amount of the element earlier in its life, or its existence implies that some fundamental rule of stellar evolution needs a tweak.

Aoki and his colleagues used the Subaru Telescope in Hawaii to find the 12 new lithium-rich stars, all about 0.8 times the mass of the sun. Five of the stars seem to be relatively early in their life cycles — a little older than regular sunlike stars but a little younger than red giants.
That suggests lithium-rich stars somehow picked up the extra lithium early in their lives, though it’s not clear from where. The stars could have stolen material from companion stars, or eaten unfortunate planets (SN: 5/19/01, p. 310). But there are reasons to think they did neither — for one thing, they don’t have an excess of any other elements.

“This is a mystery,” Aoki says.

Further complicating the picture is the possibility that the five youngish stars could be red giants after all, thanks to uncertainties in the measurements of their sizes, says astronomer Evan Kirby of Caltech. Future surveys should check stars that are definitely in the same stage of life as the sun.

Still, the new results are “tantalizing,” Kirby says. “It’s a puzzle that’s been around for almost 40 years now, and so far there’s no explanation that has satisfying observational support.”

Your phone is like a spy in your pocket

Consider everything your smartphone has done for you today. Counted your steps? Deposited a check? Transcribed notes? Navigated you somewhere new?

Smartphones make for such versatile pocket assistants because they’re equipped with a suite of sensors, including some we may never think — or even know — about, sensing, for example, light, humidity, pressure and temperature.

Because smartphones have become essential companions, those sensors probably stayed close by throughout your day: the car cup holder, your desk, the dinner table and nightstand. If you’re like the vast majority of American smartphone users, the phone’s screen may have been black, but the device was probably on the whole time.

“Sensors are finding their ways into every corner of our lives,” says Maryam Mehrnezhad, a computer scientist at Newcastle University in England. That’s a good thing when phones are using their observational dexterity to do our bidding. But the plethora of highly personal information that smartphones are privy to also makes them powerful potential spies.
Online app store Google Play has already discovered apps abusing sensor access. Google recently booted 20 apps from Android phones and its app store because the apps could — without the user’s knowledge — record with the microphone, monitor a phone’s location, take photos, and then extract the data. Stolen photos and sound bites pose obvious privacy invasions. But even seemingly innocuous sensor data can potentially broadcast sensitive information. A smartphone’s movement may reveal what users are typing or disclose their whereabouts. Even barometer readings that subtly shift with increased altitude could give away which floor of a building you’re standing on, suggests Ahmed Al-Haiqi, a security researcher at the National Energy University in Kajang, Malaysia.

These sneaky intrusions may not be happening in real life yet, but concerned researchers in academia and industry are working to head off eventual invasions. Some scientists have designed invasive apps and tested them on volunteers to shine a light on what smartphones can reveal about their owners. Other researchers are building new smartphone security systems to help protect users from myriad real and hypothetical privacy invasions, from stolen PIN codes to stalking.

Message revealed
Motion detectors within smartphones, like the accelerometer and the rotation-sensing gyroscope, could be prime tools for surreptitious data collection. They’re not permission protected — the phone’s user doesn’t have to give a newly installed app permission to access those sensors. So motion detectors are fair game for any app downloaded onto a device, and “lots of vastly different aspects of the environment are imprinted on those signals,” says Mani Srivastava, an engineer at UCLA.

For instance, touching different regions of a screen makes the phone tilt and shift just a tiny bit, but in ways that the phone’s motion sensors pick up, Mehrnezhad and colleagues demonstrated in a study reported online in April 2017 in the International Journal of Information Security. These sensors’ data may “look like nonsense” to the human eye, says Al-Haiqi, but sophisticated computer programs can discern patterns in the mess and match segments of motion data to taps on various areas of the screen.

For the most part, these computer programs are machine-learning algorithms, Al-Haiqi says. Researchers train them to recognize keystrokes by feeding the programs a bunch of motion sensor data labeled with the key tap that produces particular movement. A pair of researchers built TouchLogger, an app that collects orientation sensor data and uses the data to deduce taps on smartphones’ number keyboards. In a test on HTC phones, reported in 2011 in San Francisco at the USENIX Workshop on Hot Topics in Security, TouchLogger discerned more than 70 percent of key taps correctly.
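
As a loose illustration of that train-on-labeled-taps recipe (not TouchLogger’s actual code), the Python sketch below summarizes each window of motion readings into a few statistics and fits an off-the-shelf classifier from scikit-learn; the features, synthetic data and accuracy are stand-ins for what a real attack would tune:

```python
# Illustrative sketch: map windows of motion-sensor readings to key taps.
# The feature choices and fake data below are placeholders, not TouchLogger's.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(window):
    """window: (n_samples, 3) array of motion-sensor axes for one key tap.
    Collapse the window into simple per-axis summary statistics."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0)])

# Pretend training set: 20 labeled motion windows per digit key 0-9
rng = np.random.default_rng(0)
windows = [rng.normal(loc=key, scale=0.3, size=(50, 3))
           for key in range(10) for _ in range(20)]
labels = [key for key in range(10) for _ in range(20)]

X = np.array([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

# A snooping app would feed in unlabeled windows and read off inferred digits
unknown = rng.normal(loc=7, scale=0.3, size=(50, 3))
print(clf.predict([window_features(unknown)]))  # very likely [7]
```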

Since then, a spate of similar studies has come out, with scientists writing code to infer keystrokes on number and letter keyboards on different kinds of phones. In 2016 in Pervasive and Mobile Computing, Al-Haiqi and colleagues reviewed these studies and concluded that only a snoop’s imagination limits the ways motion data could be translated into key taps. Those keystrokes could divulge everything from the password entered on a banking app to the contents of an e-mail or text message.

A more recent application used a whole fleet of smartphone sensors — including the gyroscope, accelerometer, light sensor and magnetism-measuring magnetometer — to guess PINs. The app analyzed a phone’s movement and how, during typing, the user’s finger blocked the light sensor. When tested on a pool of 50 PINs, the app could discern keystrokes with 99.5 percent accuracy, the researchers reported on the Cryptology ePrint Archive in December.

Other researchers have paired motion data with mic recordings, which can pick up the soft sound of a fingertip tapping a screen. One group designed a malicious app that could masquerade as a simple note-taking tool. When the user tapped on the app’s keyboard, the app covertly recorded both the key input and the simultaneous microphone and gyroscope readings to learn the sound and feel of each keystroke.

The app could even listen in the background when the user entered sensitive info on other apps. When tested on Samsung and HTC phones, the app, presented in the Proceedings of the 2014 ACM Conference on Security and Privacy in Wireless and Mobile Networks, inferred the keystrokes of 100 four-digit PINs with 94 percent accuracy.

Al-Haiqi points out, however, that success rates are mostly from tests of keystroke-deciphering techniques in controlled settings — assuming that users hold their phones a certain way or sit down while typing. How these info-extracting programs fare in a wider range of circumstances remains to be seen. But the answer to whether motion and other sensors would open the door for new privacy invasions is “an obvious yes,” he says.

Tagalong
Motion sensors can also help map a person’s travels, like a subway or bus ride. A trip produces an undercurrent of motion data that’s discernible from shorter-lived, jerkier movements like a phone being pulled from a pocket. Researchers designed an app, described in 2017 in IEEE Transactions on Information Forensics and Security, to extract the data signatures of various subway routes from accelerometer readings.

In experiments with Samsung smartphones on the subway in Nanjing, China, this tracking app picked out which segments of the subway system a user was riding with at least 59, 81 and 88 percent accuracy — improving as the stretches expanded from three to five to seven stations long. Someone who can trace a user’s subway movements might figure out where the traveler lives and works, what shops or bars the person frequents, a daily schedule, or even — if the app is tracking multiple people — who the user meets at various places.
Accelerometer data can also plot driving routes, as described at the 2012 IEEE International Conference on Communication Systems and Networks in Bangalore, India. Other sensors can be used to track people in more confined spaces: One team synced a smartphone mic and portable speaker to create an on-the-fly sonar system to map movements throughout a house. The team reported the work in the September 2017 Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

“Fortunately there is not anything like [these sensor spying techniques] in real life that we’ve seen yet,” says Selcuk Uluagac, an electrical and computer engineer at Florida International University in Miami. “But this doesn’t mean there isn’t a clear danger out there that we should be protecting ourselves against.”

That’s because the kinds of algorithms that researchers have employed to comb sensor data are getting more advanced and user-friendly all the time, Mehrnezhad says. It’s not just people with Ph.D.s who can design the kinds of privacy invasions that researchers are trying to raise awareness about. Even app developers who don’t understand the inner workings of machine-learning algorithms can easily get this kind of code online to build sensor-sniffing programs.

What’s more, smartphone sensors don’t just provide snooping opportunities for individual cybercrooks who peddle info-stealing software. Legitimate apps often harvest info, such as search engine and app download history, to sell to advertising companies and other third parties. Those third parties could use the information to learn about aspects of a user’s life that the person doesn’t necessarily want to share.

Take a health insurance company. “You may not like them to know if you are a lazy person or you are an active person,” Mehrnezhad says. “Through these motion sensors, which are reporting the amount of activity you’re doing every day, they could easily identify what type of user you are.”

Sensor safeguards
Since it’s only getting easier for an untrusted third party to make private inferences from sensor data, researchers are devising ways to give people more control over what information apps can siphon off their devices. Some safeguards could appear as standalone apps, whereas others are tools that could be built into future operating system updates.

At the August 2017 USENIX Security Symposium in Vancouver, Uluagac and colleagues proposed a system called 6thSense, which monitors a phone’s sensor activity and alerts its owner to unusual behavior. The user trains this system to recognize the phone’s normal sensor behavior during everyday tasks like calling, Web browsing and driving. Then, 6thSense continually checks the phone’s sensor activity against these learned behaviors.

If someday the program spots something unusual — like the motion sensors reaping data when a user is just sitting and texting — 6thSense alerts the user. Then the user can check if a recently downloaded app is responsible for this suspicious activity and delete the app from the phone.

Uluagac’s team recently tested a prototype of the system: Fifty users trained Samsung smartphones with 6thSense to recognize their typical sensor activity. When the researchers fed the 6thSense system examples of benign data from daily activities mixed in with segments of malicious sensor operations, 6thSense picked out the problematic bits with over 96 percent accuracy.
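
The published system relies on trained machine-learning models rather than anything this simple, but the toy Python sketch below conveys the gist: learn which sensor combinations occur during benign use, then flag combinations never seen in training. All names and examples are invented:

```python
# Toy sketch of a 6thSense-style monitor: flag sensor activity that never
# occurred while the owner trained the phone on everyday tasks.

def learn_normal_profiles(training_snapshots):
    """training_snapshots: frozensets of sensors seen active together
    during benign use (calling, browsing, driving, and so on)."""
    return set(training_snapshots)

def is_suspicious(active_sensors, normal_profiles):
    """True if this combination of active sensors was never seen in training."""
    return frozenset(active_sensors) not in normal_profiles

normal = learn_normal_profiles([
    frozenset({"accelerometer"}),                      # phone resting on a desk
    frozenset({"accelerometer", "gyroscope"}),         # typing a text
    frozenset({"accelerometer", "gyroscope", "mic"}),  # a phone call
])

# Motion sensors plus mic and camera lighting up while the user just texts:
print(is_suspicious({"accelerometer", "gyroscope", "mic", "camera"}, normal))  # True
```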
For people who want more active control over their data, Supriyo Chakraborty, a privacy and security researcher at IBM in Yorktown Heights, N.Y., and colleagues devised DEEProtect, a system that blunts apps’ abilities to draw conclusions about certain user activity from sensor data. People could use DEEProtect, described in a paper posted online at arXiv.org in February 2017, to specify preferences about what apps should be allowed to do with sensor data. For example, someone may want an app to transcribe speech but not identify the speaker.

DEEProtect intercepts whatever raw sensor data an app tries to access and strips that data down to only the features needed to make user-approved inferences. For speech-to-text translation, the phone typically needs sound frequencies and the probabilities of particular words following each other in a sentence.

But sound frequencies could also help a spying app deduce a speaker’s identity. So DEEProtect distorts the dataset before releasing it to the app, leaving information on word orders alone, since that has little or no bearing on speaker identity. Users can control how much DEEProtect changes the data; more distortion begets more privacy but also degrades app functions.
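
A hypothetical Python sketch of that intercept, strip and distort pipeline appears below; the allowed features, noise model and privacy knob are placeholders meant only to show the shape of the idea, not the actual system:

```python
# Hypothetical sketch: release only user-approved features of raw sensor data,
# perturbed by a user-chosen amount of noise (more noise = more privacy).

import random

def release_features(raw_samples, allowed_features, distortion=0.1):
    """raw_samples: raw readings the app asked for.
    allowed_features: name -> function mapping, the only quantities released.
    distortion: 0 keeps full fidelity; larger values add more noise."""
    features = {name: fn(raw_samples) for name, fn in allowed_features.items()}
    return {name: value + random.gauss(0, distortion * abs(value) + 1e-9)
            for name, value in features.items()}

raw_audio = [0.01, 0.4, -0.3, 0.2, -0.1]  # stand-in for microphone samples
allowed = {
    "mean_level": lambda xs: sum(abs(x) for x in xs) / len(xs),
    "peak_level": lambda xs: max(abs(x) for x in xs),
}
print(release_features(raw_audio, allowed, distortion=0.2))
```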

In another approach, Giuseppe Petracca, a computer scientist and engineer at Penn State, and colleagues are trying to protect users from accidentally granting sensor access to deceitful apps, with a security system called AWare.

Apps have to get user permission upon first installation or first use to access certain sensors like the mic and camera. But people can be cavalier about granting those blanket authorizations, Uluagac says. “People blindly give permission to say, ‘Hey, you can use the camera, you can use the microphone.’ But they don’t really know how the apps are using these sensors.”

Instead of asking permission when a new app is installed, AWare would request user permission for an app to access a certain sensor the first time a user provided a certain input, like pressing a camera button. On top of that, the AWare system memorizes the state of the phone when the user grants that initial permission — the exact appearance of the screen, sensors requested and other information. That way, AWare can tell users if the app later attempts to trick them into granting unintended permissions.

For instance, Petracca and colleagues imagine a crafty data-stealing app that asks for camera access when the user first pushes a camera button, but then also tries to access the mic when the user later pushes that same button. The AWare system, also presented at the 2017 USENIX Security Symposium, would realize the mic access wasn’t part of the initial deal, and would ask the user again if he or she would like to grant this additional permission.
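
The toy Python sketch below illustrates that remember-the-context idea with invented app and widget names; the real system tracks far richer user-interface state than a simple dictionary:

```python
# Toy sketch of context-bound permissions: remember which sensors a widget was
# approved for, and re-prompt if the same widget later asks for more.

granted = {}  # (app, widget) -> set of sensors approved in that context

def request_sensors(app, widget, sensors, ask_user):
    key = (app, widget)
    if key not in granted:
        if ask_user(f"{app}: allow {sorted(sensors)} when pressing '{widget}'?"):
            granted[key] = set(sensors)
            return True
        return False
    extra = set(sensors) - granted[key]
    if extra:  # the app is asking for more than the remembered deal
        if ask_user(f"{app}: ALSO allow {sorted(extra)} for '{widget}'?"):
            granted[key] |= extra
            return True
        return False
    return True

always_yes = lambda prompt: (print(prompt), True)[1]
request_sensors("FilterCam", "camera_button", {"camera"}, always_yes)         # initial grant
request_sensors("FilterCam", "camera_button", {"camera", "mic"}, always_yes)  # triggers a re-prompt
```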

Petracca and colleagues found that people using Nexus smartphones equipped with AWare avoided unwanted authorizations about 93 percent of the time, compared with 9 percent among people using smartphones with typical first-use or install-time permission policies.

The price of privacy
The Android security team at Google is also trying to mitigate the privacy risks posed by app sensor data collection. Android security engineer Rene Mayrhofer and colleagues are keeping tabs on the latest security studies coming out of academia, Mayrhofer says.

But just because someone has built and successfully tested a prototype of a new smartphone security system doesn’t mean it will show up in future operating system updates. Android hasn’t incorporated proposed sensor safeguards because the security team is still looking for a protocol that strikes the right balance between restricting access for nefarious apps and not stunting the functions of trustworthy programs, Mayrhofer explains.

“The whole [app] ecosystem is so big, and there are so many different apps out there that have a totally legitimate purpose,” he adds. Any kind of new security system that curbs apps’ sensor access presents “a real risk of breaking” legitimate apps.

Tech companies may also be reluctant to adopt additional security measures because these extra protections can come at the cost of user friendliness, like AWare’s additional permissions pop-ups. There’s an inherent trade-off between security and convenience, UCLA’s Srivastava says. “You’re never going to have this magical sensor shield [that] gives you this perfect balance of privacy and utility.”

But as sensors get more pervasive and powerful, and algorithms for analyzing the data become more astute, even smartphone vendors may eventually concede that the current sensor protections aren’t cutting it. “It’s like cat and mouse,” Al-Haiqi says. “Attacks will improve, solutions will improve. Attacks will improve, solutions will improve.”

The game will continue, Chakraborty agrees. “I don’t think we’ll get to a place where we can declare a winner and go home.”

New device can transmit underwater sound to air

Don’t expect to play a game of Marco Polo by shouting from beneath the pool’s surface. No one will hear you because, normally, only about 0.1 percent of sound is transmitted from water to the air. But a new type of device might one day help.

Researchers have designed a new metamaterial — a type of material that behaves in ways conventional materials can’t — that increases sound transmission to 30 percent. The metamaterial could have applications for more than poolside play. A future version might be used to detect noisy marine life or listen in on sonar use, say applied physicist Oliver Wright of Hokkaido University in Sapporo, Japan, and a team at Yonsei University in Seoul, South Korea, who describe the metamaterial in a paper accepted to Physical Review Letters.
Currently, underwater sounds are detected with hydrophones, which must themselves be submerged. But what if you wanted to listen in from the surface?

Enter the new device. It’s a small cylinder with a weighted rubber membrane stretched across a metal frame that floats atop the water surface. When underwater sound waves hit the device, its frame and membrane vibrate at finely tuned frequencies to help sound transmit into the air.

“A ‘hard’ surface like a table or water reflects almost 100 percent of sound,” says Wright. “We want to try to mitigate that by introducing an intermediary structure.”
Both water and air resist the flow of sound, a property known as acoustic impedance. Because of its density, water’s acoustic impedance is 3,600 times that of air. The greater the mismatch, the more sound is reflected at a boundary.
Adding a layer of material one-fourth the thickness of an incoming wave’s wavelength can reduce the amount of reflection. This is the principle at work behind anti-reflective coatings applied to the lenses of cameras and glasses. While optical light has a wavelength of a few hundred nanometers, requiring a coating only about a hundred nanometers thick, audible sound waves can be meters long.
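
As a rough sanity check (a back-of-the-envelope calculation, not taken from the paper), the standard normal-incidence transmission formula with the impedance ratio quoted above reproduces the roughly 0.1 percent figure mentioned at the top of this story:

```latex
% Power transmission across an impedance mismatch at normal incidence
T = \frac{4 Z_1 Z_2}{(Z_1 + Z_2)^2},\qquad
\frac{Z_\text{water}}{Z_\text{air}} \approx 3600
\;\Rightarrow\;
T \approx \frac{4 \times 3600}{(3601)^2} \approx 1.1\times10^{-3} \approx 0.1\%
```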

Even though it’s only one-hundredth the thickness of the sound’s wavelength, instead of the conventional one-fourth, the metamaterial still transmits sound.

“It’s a tour de force of experimental demonstration,” says Oleg Godin, a physicist at the Naval Postgraduate School in Monterey, Calif., who was not involved with the research. “But I’m less impressed by the suggestions and implications about its uses. It’s wishful thinking.”

One major problem that the researchers would have to overcome is the device’s inability to transmit sound that hits the surface at an angle. In the lab, the device is tested in a tube — effectively a one-direction environment. But on the vast surface of a lake or ocean, the device would be limited to transmitting sounds from the small area directly below it. Additionally, the metamaterial is limited to transmitting a narrow band of frequencies. Noise outside that range reflects off the water’s surface as usual.

Still, the scientists are optimistic about the next steps, and even propose that a sheet of these devices could work in concert.

Somewhere in the brain is a storage device for memories

People tend to think of memories as deeply personal, ephemeral possessions — snippets of emotions, words, colors and smells stitched into our unique neural tapestries as life goes on. But a strange series of experiments conducted decades ago offered a different, more tangible perspective. The mind-bending results have gained unexpected support from recent studies.

In 1959, James Vernon McConnell, a psychologist at the University of Michigan in Ann Arbor, painstakingly trained small flatworms called planarians to associate a shock with a light. The worms remembered this lesson, later contracting their bodies in response to the light.
One weird and wonderful thing about planarians is that they can regenerate their bodies — including their brains. When the trained flatworms were cut in half, they regrew either a head or a tail, depending on which piece had been lost. Not surprisingly, worms that kept their heads and regrew tails retained the memory of the shock, McConnell found. Astonishingly, so did the worms that grew replacement heads and brains. Somehow, these fully operational, complex arrangements of brand-spanking-new nerve cells had acquired the memory of the painful shock, McConnell reported.

In subsequent experiments, McConnell went even further, attempting to transfer memory from one worm to another. He tried grafting the head of a trained worm onto the tail of an untrained worm, but he couldn’t get the head to stick. He injected trained planarian slurry into untrained worms, but the recipients often exploded. Finally, he ground up bits of the trained planarians and fed them to untrained worms. Sure enough, after their meal, the untrained worms seemed to have traces of the memory — the cannibals recoiled at the light.

The implications were bizarre, and potentially profound: Lurking in that pungent planarian puree must be a substance that allowed animals to literally eat one another’s memories.

These outlandish experiments aimed to answer a question that had been needling scientists for decades: What is the physical basis of memory? Somehow, memories get etched into cells, forming a physical trace that researchers call an “engram.” But the nature of these stable, specific imprints is a mystery.
Today, McConnell’s memory transfer episode has largely faded from scientific conversation. But developmental biologist Michael Levin of Tufts University in Medford, Mass., and a handful of other researchers wonder if McConnell was onto something. They have begun revisiting those historical experiments in the ongoing hunt for the engram.

Applying powerful tools to the engram search, scientists are already challenging some widely held ideas about how memories are stored in the brain. New insights haven’t yet revealed the identity of the physical basis of memory, though. Scientists are chasing a wide range of possibilities. Some ideas are backed by strong evidence; others are still just hunches. In pursuit of the engram, some researchers have even searched for clues in memories that persist in brains that go through massive reorganization.
Synapse skeptics
One of today’s most entrenched explanations puts engrams squarely within the synapses, connections where chemical and electrical messages move between nerve cells, or neurons. These contact points are strengthened when bulges called synaptic boutons grow at the ends of message-sending axons and when hairlike protrusions called spines decorate message-receiving dendrites.

In the 1970s, researchers described evidence that as an animal learned something, these neural connections bulked up, forming more contact points of synaptic boutons and dendritic spines. With stronger connection points, cells could fire in tandem when the memory needed to be recalled. Stronger connections mean stronger memory, as the theory goes.

This process of bulking up, called long-term potentiation, or LTP, was thought to offer an excellent explanation for the physical basis of memory, says David Glanzman, a neuroscientist at UCLA who has studied learning and memory since 1980. “Until a few years ago, I implicitly accepted this model for memory,” he says. “I don’t anymore.”

Glanzman has good reason to be skeptical. In his lab, he studies the sea slug Aplysia, an organism that he first encountered as a postdoctoral student in the Columbia University lab of Nobel Prize–winning neuroscientist Eric Kandel. When shocked in the tail, these slugs protectively withdraw their siphon and gill. After multiple shocks, the animals become sensitized and withdraw faster. (The distinction between learning and remembering is hazy, so researchers often treat the two as similar actions.)

After neurons in the sea slugs had formed the memory of the shock, the researchers saw stronger synapses between the nerve cells that sense the shock and the ones that command the siphon and gill to move. When the memory was made, the synapse sprouted more message-sending bumps, Glanzman’s team reported in eLife in 2014. And when the memory was weakened with a drug that prevents new proteins from being made, some of these bumps disappeared. “That made perfect sense,” Glanzman says. “We expected the synaptic growth to revert and go back to the original, nonlearned state. And it did.”

But the answer to the next logical question was a curve ball. If the memory was stored in bulked-up synapses, as researchers thought, then the new contact points that appear when a memory is formed should be the same ones that vanish when the memory is subsequently lost, Glanzman reasoned. That’s not what happened — not even close. “We found that it was totally random,” Glanzman says. “Completely random.”
Research from Susumu Tonegawa, a Nobel Prize–winning neuroscientist at MIT, turned up a result in mice that was just as startling. His project relies on sophisticated tools, including optogenetics. With this technique, the scientists activated specific memory-storing neurons with light. The researchers call those memory-storing cells “engram cells.”

In a parallel to the sea slug experiment, Tonegawa’s team conditioned mice to fear a particular cage and genetically marked the cells that somehow store that fear memory. After the mice learned the association, the researchers saw more dendritic spines in the neurons that store the memory — evidence of stronger synapses.

When the researchers caused amnesia with a drug, that newly formed synaptic muscle went away. “We wiped out the LTP, completely wiped it out,” says neuroscientist Tomás Ryan, who conducted the study in Tonegawa’s lab and reported the results with Tonegawa and colleagues in 2015 in Science.

And yet the memory wasn’t lost. With laser light, the researchers could still activate the engram cells — and the memory they somehow still held. That means that the memory was stored in something that isn’t related to the strength of the synapses.

The surprising results suggest that researchers may have been sidetracked, focusing too hard on synaptic strength as a memory storage system, says Ryan, now at Trinity College Dublin. “That approach has produced about 12,000 or so papers on the topic, but it hasn’t been very successful in explaining how memory works.”

Silent memories
The finding that memory storage and synaptic strength aren’t always tied together raises an important distinction that may help untangle the research. The cellular machines that are required for calling up memories are not necessarily the same machines that store memories. The search for what stores memories may have been muddled with results on how cells naturally call up memories.

Ryan, Tonegawa and Glanzman all think that LTP, with its bulked-up synapses, is important for retrieving memories, but not the thing that actually stores them. It’s quite possible to have stored memories that aren’t readily accessible, what Glanzman calls “occult memories” and Tonegawa refers to as “silent engrams.” Both think the concept applies more broadly than in just the sea slugs and mice they study.

Glanzman explains the situation by considering a talented violin player. “If you cut off my hands, I’m not able to play the violin,” he says. “But it doesn’t mean that I don’t know how to play the violin.” The analogy is overly simple, he says, “but that’s how I think of synapses. They enable the memory to be expressed, but they are not where the memory is.”

In a paper published last October in Proceedings of the National Academy of Sciences, Tonegawa and colleagues created silent engrams of a cage in which mice received a shock. The mice didn’t seem to remember the room paired with the shock, suggesting that the memory was silent. But the memory could be called up after a genetic tweak beefed up synaptic connections by boosting the number of synaptic spines specifically among the neurons that stored the memory.

Those results add weight to the idea that synaptic strength is crucial for memory recall, but not storage, and they also hint that, somehow, the brain stores many inaccessible memory traces. Tonegawa suspects that these silent engrams are quite common.

Finding and reactivating silent engrams “tells us quite a bit about how memory works,” Tonegawa says. “Memory is not always active for you. You learn it, you store it,” but depending on the context, it might slip quietly into the brain and remain silent, he says. Consider an old memory from high school that suddenly pops up. “Something triggered you to recall — something very specific — and that probably involves the conversion of a silent engram to an active engram,” Tonegawa says.

But engrams, silent or active, must still be holding memory information somehow. Tonegawa thinks that this information is stored not in synapses’ strength, but in synapses’ very existence. When a specific memory is formed, new connections are quickly forged, creating anatomical bridges between constellations of cells, he suspects. “That defines the content of memory,” he says. “That is the substrate.”

These newly formed synapses can then be beefed up, leading to the memory burbling up as an active engram, or pared down and weakened, leading to a silent engram. Tonegawa says this idea requires less energy than the LTP model, which holds that memory storage requires constantly revved up synapses full of numerous contact points. Synapse existence, he argues, can hold memory in a latent, low-maintenance state.
“The brain doesn’t even have to recognize that it’s a memory,” says Ryan, who shares this view. Stored in the anatomy of arrays of neurons, memory is “in the shape of the brain itself,” he says.
A temporary vessel
Tonegawa is confident that the very existence of physical links between neurons stores memories. But other researchers have their own notions.

Back in the 1950s, McConnell suspected that RNA, cellular material that can help carry out genetic instructions but can also carry information itself, might somehow store memories.

This unorthodox idea, that RNA is involved in memory storage, has at least one modern-day supporter in Glanzman, who plans to present preliminary data at a meeting in April that suggest injections of RNA can transfer memory between sea slugs.

Glanzman thinks that RNA is a temporary storage vessel for memories, though. The real engram, he suggests, is the folding pattern of DNA in cells’ nuclei. Changes to how tightly DNA is packed can govern how genes are deployed. Those changes, part of what’s known as the epigenetic code, can be made — and even transferred — by roving RNA molecules, Glanzman argues. He is quick to point out that his idea, memory transfer by RNA, is radical. “I don’t think you could find another card-carrying Ph.D. neuroscientist who believes that.”

Other researchers, including neurobiologist David Sweatt of Vanderbilt University in Nashville, also suspect that long-lasting epigenetic changes to DNA hold memories, an idea Sweatt has been pursuing for 20 years. Because epigenetic changes can be stable, “they possess the unique attribute necessary to contribute to the engram,” he says.

And still more engram ideas abound. Some results suggest that a protein called PKM-zeta, which helps keep synapses strong, preserves memories. Other evidence suggests a role for structures called perineuronal nets, rigid sheaths that wrap around neurons. Holes in these nets allow synapses to peek through, solidifying memories, the reasoning goes (SN: 11/14/15, p. 8). A different line of research focuses on proteins that incite others to misfold and aggregate around synapses, strengthening memories. Levin, at Tufts, has his own take. He thinks that bioelectrical signals, detected by voltage-sensing proteins on the outside of cells, can store memories, though he has no evidence yet.

Beyond the brain
Levin’s work on planarians, reminiscent of McConnell’s cannibal research, may even prod memory researchers to think beyond the brain. Planarians can remember the texture of their terrain, even using a new brain, Levin and Tal Shomrat, now at the Ruppin Academic Center in Mikhmoret, Israel, reported in 2013 in the Journal of Experimental Biology. The fact that memory somehow survived decapitation hints that signals outside of the brain may store memories, even if temporarily.

Memory clues may also come from other animals that undergo extreme brain modification over their lifetimes. As caterpillars transition to moths, their brains change dramatically. But a moth that had learned as a caterpillar to avoid a certain odor paired with a shock holds onto that information, despite having a radically different brain, researchers have found.

Similar results come from mammals, such as the Arctic ground squirrel, which massively prunes its synaptic connections to save energy during torpor in the winter. Within hours of the squirrel waking up, the pruned synaptic connections grow back. Remarkably, some old memories seem to survive the experience. The squirrels have been shown to remember familiar squirrels, as well as how to perform motor feats such as jumping between boxes and crawling through tubes. Even human babies retain taste and sound memories from their time in the womb, despite a very changed brain.

These extreme cases of memory persistence raise lots of basic questions about the nature of the engram, including whether memories must always be stored in the brain. But it’s important that the engram search isn’t restricted to the most advanced forms of humanlike memory, Levin says. “Memory starts, evolutionarily, very early on,” he says. “Single-celled organisms, bacteria, slime molds, fish, these things all have memory.… They have the ability to alter their future behavior based on what’s happened before.”

The diversity of ideas — and of experimental approaches — highlights just how unsettled the engram question remains. After decades of work, the field is still young. Even if the physical identity of the engram is eventually discovered and universally agreed on, a bigger question still looms, Levin says.

“The whole point of memory is that you should be able to look at the engram and immediately know what it means,” he says. Somehow, the physical message of a memory, in whatever form it ultimately takes, must be translated into the experience of a memory by the brain. But no one has a clue how this occurs.

At a 2016 workshop, a small group of researchers gathered to discuss engram ideas that move beyond synapse strength. “All the rebels came together,” Glanzman says. The two-day debate didn’t settle anything, but it was “very valuable,” says Ryan, who also attended. He coauthored a summary of the discussion that appeared in the May 2017 Annals of the New York Academy of Sciences. “Because the mind is part of the natural world, there is no reason to believe that it will be any less tangible and ultimately comprehensible than other components,” Ryan and coauthors optimistically wrote.

For now, the field hasn’t been able to explain memories in tangible terms. But research is moving forward, in part because of its deep implications. The hunt for memories gets at the very nature of identity, Levin says. “What does it mean to be a coherent individual that has a coherent bundle of memories?” The elusive identity of the engram may prove key to answering that question.