Astronomers have found 12 more moons around Jupiter, and one is really weird. While 11 orbit in the same direction as their nearest neighbors, one doesn’t, potentially putting it on a fatal collision course.
“It’s driving down the highway on the wrong side of the road,” says planetary scientist Scott Sheppard of the Carnegie Institution for Science in Washington, D.C.
Sheppard and colleagues found the moons while looking for something else entirely: a putative planet that could exist beyond the orbit of Neptune, known colloquially as Planet Nine (SN: 7/23/16, p. 7). During a 2017 survey of the most distant objects in the solar system with the Victor Blanco 4-meter telescope in Chile, the team noticed that Jupiter happened to be visible in the same area of sky it was searching during one of its observing runs. “Might as well kill two birds with one stone,” Sheppard thought. The researchers found a dozen objects moving around the sun at the same rate as Jupiter. Follow-up observations confirmed the moons’ existence and orbits: two inner moons that orbit in the same direction that Jupiter spins, nine outer moons that orbit the planet in the opposite direction and one oddball traveler. The researchers announced two of the moons in 2017 and the remaining 10 on July 16.
The motions of all but the oddball are normal for Jovian moons, which now number a whopping 79. Scientists think that’s because the inner moons formed from a disk of gas and dust that orbited the giant planet in the solar system’s early days, similar to how the planets formed around the sun (SN: 5/12/18, p. 28). The outer moons were probably free-floating space rocks captured when they came too close, and their opposite orbit was set by the direction that they approached Jupiter from.
But one moon broke the mold. This rock, which the team calls Valetudo for the Roman goddess of health and hygiene, is tiny, only about a kilometer across. It orbits in the same direction as Jupiter’s spin, but alongside the farther-out retrograde moons. As a result, Valetudo is probably doomed to collide with one or more of the other moons someday. The researchers are still calculating when, but they expect it to occur sometime between 100 million and a billion years from now. Valetudo may be the last remnant of a bigger object that has already withstood several collisions, or of a family of moons that has since been smashed to smithereens. “It’s probably the largest surviving member, if not the only one,” Sheppard says.
Such nonconformist satellites are not rare, notes planetary scientist David Jewitt of UCLA, who was not involved in the new work. “But they are very interesting, because we know that they have been captured by their host planets, but we don’t know how, or from where,” he says. Figuring out what oddballs like Valetudo are made of could help nail those details down.
There’s a new clique among quantum particles in a semiconductor.
Electrons and positively charged holes in the material’s atomic lattice band together to create a tight-knit posse dubbed a collexon, researchers report July 26 in Communications Physics. This new class of quasiparticle — a quantum clan that acts like a single subatomic particle — could help researchers better understand semiconductors, which are essential to most modern electronics.
The collexon is similar to a quasiparticle known as an exciton, a pairing of an electron and a hole (SN: 5/17/14, p. 5). While these pairs go it alone in excitons, electron-hole duos in collexons join forces with the surrounding sea of electrons, Christian Nenstiel, a physicist at the Technical University of Berlin, and colleagues report. The researchers made this discovery when they inserted germanium atoms into a gallium nitride semiconductor, and zapped the material with a laser to see how it emits light. In similar experiments, emissions from excitons faded as the number of impurities, such as the germanium atoms, increased. But this time, at high concentrations of the introduced atoms, light shone at wavelengths different from those seen with excitons. The team deduced that large numbers of wandering electrons, introduced by the germanium, helped stabilize excitons to form the new type of quasiparticle.
It’s too early to predict applications, says study coauthor Gordon Callsen of the École Polytechnique Fédérale de Lausanne in Switzerland. The discovery instead suggests that researchers underestimate interactions among ensembles of particles in semiconductors. “Lots of interesting physics is still waiting for us,” he says.
Scrolling through a news feed often feels like playing Two Truths and a Lie.
Some falsehoods are easy to spot. Like reports that First Lady Melania Trump wanted an exorcist to cleanse the White House of Obama-era demons, or that an Ohio school principal was arrested for defecating in front of a student assembly. In other cases, fiction blends a little too well with fact. Was CNN really raided by the Federal Communications Commission? Did cops actually uncover a meth lab inside an Alabama Walmart? No and no. But anyone scrolling through a slew of stories could easily be fooled.
We live in a golden age of misinformation. On Twitter, falsehoods spread further and faster than the truth (SN: 3/31/18, p. 14). In the run-up to the 2016 U.S. presidential election, the most popular bogus articles got more Facebook shares, reactions and comments than the top real news, according to a BuzzFeed News analysis.
Before the internet, “you could not have a person sitting in an attic and generating conspiracy theories at a mass scale,” says Luca de Alfaro, a computer scientist at the University of California, Santa Cruz. But with today’s social media, peddling lies is all too easy — whether those lies come from outfits like Disinfomedia, a company that has owned several false news websites, or a scrum of teenagers in Macedonia who raked in the cash by writing popular fake news during the 2016 election. Most internet users probably aren’t intentionally broadcasting bunk. Information overload and the average Web surfer’s limited attention span aren’t exactly conducive to fact-checking vigilance. Confirmation bias feeds in as well. “When you’re dealing with unfiltered information, it’s likely that people will choose something that conforms to their own thinking, even if that information is false,” says Fabiana Zollo, a computer scientist at Ca’ Foscari University of Venice in Italy who studies how information circulates on social networks.
Intentional or not, sharing misinformation can have serious consequences. Fake news doesn’t just threaten the integrity of elections and erode public trust in real news. It threatens lives. False rumors that spread on WhatsApp, a smartphone messaging system, for instance, incited lynchings in India this year that left more than a dozen people dead.
To help sort fake news from truth, programmers are building automated systems that judge the veracity of online stories. A computer program might consider certain characteristics of an article or the reception an article gets on social media. Computers that recognize certain warning signs could alert human fact-checkers, who would do the final verification.
Automatic lie-finding tools are “still in their infancy,” says computer scientist Giovanni Luca Ciampaglia of Indiana University Bloomington. Researchers are exploring which factors most reliably peg fake news. Unfortunately, they have no agreed-upon set of true and false stories to use for testing their tactics. Some programmers rely on established media outlets or state press agencies to determine which stories are true or not, while others draw from lists of reported fake news on social media. So research in this area is something of a free-for-all.
But teams around the world are forging ahead because the internet is a fire hose of information, and asking human fact-checkers to keep up is like aiming that hose at a Brita filter. “It’s sort of mind-numbing,” says Alex Kasprak, a science writer at Snopes, the oldest and largest online fact-checking site, “just the volume of really shoddy stuff that’s out there.”

Substance and style

When it comes to inspecting news content directly, there are two major ways to tell if a story fits the bill for fraudulence: what the author is saying and how the author is saying it.
Ciampaglia and colleagues automated the tedious task of checking individual statements with a program that assesses how closely related a statement’s subject and object are. To do this, the program uses a vast network of nouns built from facts found in the infobox on the right side of every Wikipedia page — although similar networks have been built from other reservoirs of knowledge, like research databases. In the Ciampaglia group’s noun network, two nouns are connected if one noun appeared in the infobox of the other. The fewer degrees of separation between a statement’s subject and object in this network, and the more specific the intermediate words connecting subject and object, the more likely the computer program is to label a statement as true.
Take the false assertion “Barack Obama is a Muslim.” There are seven degrees of separation between “Obama” and “Islam” in the noun network, including very general nouns, such as “Canada,” that connect to many other words. Given this long, meandering route, the automated fact-checker, described in 2015 in PLOS ONE, deemed Obama unlikely to be Muslim.
But estimating the veracity of statements based on this kind of subject-object separation has limits. For instance, the system deemed it likely that former President George W. Bush is married to Laura Bush. Great. It also decided George W. Bush is probably married to Barbara Bush, his mother. Less great. Ciampaglia and colleagues have been working to give their program a more nuanced view of the relationships between nouns in the network.
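As a rough illustration of that semantic-proximity idea, here is a minimal sketch in Python: it builds a toy noun network and scores a statement by how cheaply its subject and object can be linked, with each hop costing more when it lands on a highly connected, generic noun. The example graph, the scoring formula and the truth_score helper are assumptions made for illustration, not the researchers’ actual code or data.

```python
import math
import networkx as nx

# Toy stand-in for the Wikipedia-infobox noun network described above.
# Nodes and edges are purely illustrative.
facts = [
    ("Barack Obama", "United States"),
    ("United States", "Canada"),
    ("Canada", "Islam"),            # a long route through generic nouns
    ("George W. Bush", "Laura Bush"),
]
G = nx.Graph(facts)

def truth_score(graph, subject, obj):
    """Crude semantic-proximity score: short paths through specific
    (low-degree) nouns score high; long paths through generic hubs score low."""
    if subject not in graph or obj not in graph:
        return 0.0
    # Each hop costs more when it reaches a high-degree ("generic") noun.
    cost = lambda u, v, d: math.log(1 + graph.degree(v))
    try:
        path_cost = nx.shortest_path_length(graph, subject, obj, weight=cost)
    except nx.NetworkXNoPath:
        return 0.0
    return 1.0 / (1.0 + path_cost)

print(truth_score(G, "George W. Bush", "Laura Bush"))  # direct link: higher score
print(truth_score(G, "Barack Obama", "Islam"))         # meandering route: lower score
```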
Verifying every statement in an article isn’t the only way to see if a story passes the smell test. Writing style may be another giveaway. Benjamin Horne and Sibel Adali, computer scientists at Rensselaer Polytechnic Institute in Troy, N.Y., analyzed 75 true articles from media outlets deemed most trustworthy by Business Insider, as well as 75 false stories from sites on a blacklist of misleading websites. Compared with real news, false articles tended to be shorter and more repetitive with more adverbs. Fake stories also had fewer quotes, technical words and nouns.
Based on these results, the researchers created a computer program that used the four strongest distinguishing factors of fake news — number of nouns, number of quotes, redundancy and word count — to judge article veracity. The program, presented at last year’s International Conference on Web and Social Media in Montreal, correctly sorted fake news from true 71 percent of the time (a program that sorted fake news from true at random would show about 50 percent accuracy). Horne and Adali are looking for additional features to boost accuracy.
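A minimal sketch of that style-based approach might look like the following: crude, hand-rolled proxies for the four cues feed an off-the-shelf classifier. The feature heuristics, the two toy articles and the choice of logistic regression are assumptions for illustration only; the published detector relied on proper linguistic processing and far more data.

```python
import re
from sklearn.linear_model import LogisticRegression

def style_features(text):
    """Rough stand-ins for the four cues named above (word count, quotes,
    redundancy, nouns); real systems would use proper NLP tooling."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    n_words = len(words)
    n_quotes = text.count('"') // 2                          # quoted passages
    redundancy = 1 - len(set(words)) / max(n_words, 1)       # repetitiveness
    n_noun_like = len(re.findall(r"\b[A-Z][a-z]+", text))    # crude noun proxy
    return [n_words, n_quotes, redundancy, n_noun_like]

# Tiny labeled corpus (1 = fake, 0 = real), purely for demonstration.
articles = [
    ('The senator said, "We will vote next week," citing committee data.', 0),
    ("You won't believe this! Totally unbelievable! Just unbelievable!", 1),
]
X = [style_features(text) for text, _ in articles]
y = [label for _, label in articles]

clf = LogisticRegression().fit(X, y)
print(clf.predict([style_features("Officials released the report on Tuesday.")]))
```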
Verónica Pérez-Rosas, a computer scientist at the University of Michigan in Ann Arbor, and colleagues compared 240 genuine and 240 made-up articles. Like Horne and Adali, Pérez-Rosas’ team found more adverbs in fake news articles than in real ones. The fake news in this analysis, reported at arXiv.org on August 23, 2017, also tended to use more positive language and express more certainty.

Computers don’t necessarily need humans to tell them which aspects of fake articles give these stories away. Computer scientist and engineer Vagelis Papalexakis of the University of California, Riverside and colleagues built a fake news detector that started by sorting a cache of articles into groups based on how similar the stories were. The researchers didn’t provide explicit instructions on how to assess similarity. Once the program bunched articles according to likeness, the researchers labeled 5 percent of all the articles as factual or false. From this information, the algorithm, described April 24 at arXiv.org, predicted labels for the rest of the unmarked articles. Papalexakis’ team tested this system on almost 32,000 real and 32,000 fake articles shared on Twitter. Fed that little kernel of truth, the program correctly predicted labels for about 69 percent of the other stories.
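The following sketch captures the spirit of that setup: group articles by textual similarity and let a handful of labels spread to the rest. It uses scikit-learn’s label spreading as a stand-in for the team’s own grouping method, and the toy corpus, features and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import LabelSpreading

# Toy corpus; only two articles carry labels (1 = fake, 0 = real, -1 = unlabeled),
# mimicking the idea of labeling a small fraction and inferring the rest.
docs = [
    "Officials confirmed the budget vote is scheduled for Tuesday.",
    "The city council released the committee's audit report today.",
    "Miracle cure doctors don't want you to know about, share now!",
    "You won't believe what this celebrity said, absolutely shocking!",
    "The audit report details spending on roads and schools.",
    "Shocking secret cure revealed, share before it gets deleted!",
]
labels = np.array([0, -1, 1, -1, -1, -1])

X = TfidfVectorizer().fit_transform(docs).toarray()   # similarity comes from shared wording
model = LabelSpreading(kernel="rbf", gamma=1.0).fit(X, labels)
print(model.transduction_)   # inferred labels for the unlabeled articles
```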
Adult supervision

Getting it right about 70 percent of the time isn’t nearly accurate enough to trust news-vetting programs on their own. But fake news detectors could offer a proceed-with-caution alert when a user opens a suspicious story in a Web browser, similar to the alert that appears when you’re about to visit a site with no security certificate.
In a similar kind of first step, social media platforms could use misinformation watchdogs to prowl news feeds for questionable stories to then send to human fact-checkers. Today, Facebook considers feedback from users — like those who post disbelieving comments or report that an article is false — when choosing which stories to fact-check. The company then sends these stories to the professional skeptics at FactCheck.org, PolitiFact or Snopes for verification. But Facebook is open to using other signals to find hoaxes more efficiently, says Facebook spokesperson Lauren Svensson.
No matter how good computers get at finding fake news, these systems shouldn’t totally replace human fact-checkers, Horne says. The final call on whether a story is false may require a more nuanced understanding than a computer can provide.
“There’s a huge gray scale” of misinformation, says Julio Amador Diaz Lopez, a computer scientist and economist at Imperial College London. That spectrum — which includes truth taken out of context, propaganda and statements that are virtually impossible to verify, such as religious convictions — may be tough for computers to navigate.
Snopes science writer Kasprak imagines that the future of fact-checking will be like computer-assisted audio transcription. First, the automated system hammers out a rough draft of the transcription. But a human still has to review that text for overlooked details like spelling and punctuation errors, or words that the program just got wrong. Similarly, computers could compile lists of suspect articles for people to check, Kasprak says, emphasizing that humans should still get the final say on what’s labeled as true.
Eyes on the audience

Even as algorithms get more astute at flagging bogus articles, there’s no guarantee that fake news creators won’t step up their game to elude detection. If computer programs are designed to be skeptical of stories that are overly positive or express lots of certainty, then con authors could refine their writing styles accordingly.
“Fake news, like a virus, can evolve and update itself,” says Daqing Li, a network scientist at Beihang University in Beijing who has studied fake news on Twitter. Fortunately, online news stories can be judged on more than the content of their narratives. And other telltale signs of false news might be much harder to manipulate — namely, the kinds of audience engagement these stories attract on social media.

Juan Cao, a computer scientist at the Institute of Computing Technology at the Chinese Academy of Sciences in Beijing, found that on China’s version of Twitter, Sina Weibo, the specific tweets about a certain piece of news are good indicators for whether a particular story is true. Cao’s team built a system that could round up the tweets discussing a particular news event, then sort those posts into two groups: those that expressed support for the story and those that opposed it. The system considered several factors to gauge the credibility of those posts. If, for example, the story centered on a local event that a user was geographically close to, the user’s input was seen as more credible than the input of a user farther away. If a user had been dormant for a long time and started posting about a single story, that abnormal behavior counted against the user’s credibility. By weighing the ethos of the supporting and the skeptical tweets, the program decided whether a particular story was likely to be fake.
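A toy version of that credibility weighting might look like the sketch below, which scores each post with the two cues described above and then tallies weighted support against weighted opposition. The specific weights, field names and threshold are illustrative assumptions, not Cao’s actual model.

```python
def post_credibility(post):
    """Toy credibility score built from the cues described above."""
    score = 1.0
    if post["near_event"]:          # user geographically close to the reported event
        score += 0.5
    if post["dormant_then_burst"]:  # long-dormant account suddenly posting about one story
        score -= 0.5
    return max(score, 0.0)

def story_verdict(posts):
    """Weigh credibility-scored supporting posts against opposing ones."""
    support = sum(post_credibility(p) for p in posts if p["stance"] == "support")
    oppose = sum(post_credibility(p) for p in posts if p["stance"] == "oppose")
    return "likely real" if support >= oppose else "likely fake"

posts = [
    {"stance": "support", "near_event": False, "dormant_then_burst": True},
    {"stance": "oppose",  "near_event": True,  "dormant_then_burst": False},
    {"stance": "oppose",  "near_event": False, "dormant_then_burst": False},
]
print(story_verdict(posts))   # skeptical, credible posts outweigh support: "likely fake"
```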
Cao’s group tested this technique on 73 real and 73 fake stories, labeled as such by organizations like China’s state-run Xinhua News Agency. The algorithm examined about 50,000 tweets about these stories on Sina Weibo, and recognized fake news correctly about 84 percent of the time. Cao’s team described the findings in 2016 in Phoenix at an Association for the Advancement of Artificial Intelligence conference. UC Santa Cruz’s de Alfaro and colleagues similarly reported in Macedonia at last year’s European Conference on Machine Learning and Principles and Practices of Knowledge Discovery in Databases that hoaxes can be distinguished from real news circulating on Facebook based on which users like these stories.
Rather than looking at who’s reacting to an article, a computer could look at how the story is getting passed around on social media. Li and colleagues studied the shapes of repost networks that branched out from news stories on social media. The researchers analyzed repost networks of about 1,700 fake and 500 true news stories on Weibo, as well as about 30 fake and 30 true news networks on Twitter. On both social media sites, Li’s team found, most people tended to repost real news straight from a single source, whereas fake news tended to spread more through people reposting from other reposters.
A typical network of real news reposts “looks much more like a star, but the fake news spreads more like a tree,” Li says. This held true even when Li’s team ignored news originally posted by well-known, official sources, like news outlets themselves. Reported March 9 at arXiv.org, these findings suggest that computers could use social media engagement as a litmus test for truthfulness, even without putting individual posts under the microscope.
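That star-versus-tree distinction can be summarized with simple cascade statistics, as in the sketch below: the maximum repost depth and the share of reposts taken directly from the source. The example cascades and the summary measures are illustrative assumptions rather than Li’s actual analysis.

```python
import networkx as nx

def cascade_shape(edges, source):
    """Summarize a repost cascade: star-like cascades have most reposts coming
    straight from the source (depth 1); tree-like ones spread through
    reposters of reposters (greater depth, fewer direct reposts)."""
    G = nx.DiGraph(edges)                        # edge (a, b): b reposted from a
    depths = nx.shortest_path_length(G, source)  # repost depth of each user
    max_depth = max(depths.values())
    direct = sum(1 for d in depths.values() if d == 1)
    share_direct = direct / max(len(depths) - 1, 1)
    return max_depth, share_direct

# A star-like cascade, as with most real news in Li's data ...
star = [("outlet", user) for user in ["u1", "u2", "u3", "u4", "u5"]]
# ... versus a chain of reposts-of-reposts, more typical of fake news.
chain = [("u0", "u1"), ("u1", "u2"), ("u2", "u3"), ("u3", "u4")]

print(cascade_shape(star, "outlet"))   # shallow, mostly direct reposts
print(cascade_shape(chain, "u0"))      # deep, few direct reposts
```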
Truth to the people

When misinformation is caught circulating on social networks, how best to deal with it is still an open question. Simply scrubbing bogus articles from news feeds is probably not the way to go. Social media platforms exerting that level of control over what visitors can see “would be like a totalitarian state,” says Murphy Choy, a data analyst at SSON Analytics in Singapore. “It’s going to become very uncomfortable for all parties involved.”
Platforms could put warning signs on misinformation. But labeling stories that have been verified as false may have an unfortunate “implied truth effect.” People might put more trust in any stories that aren’t explicitly flagged as false, whether they’ve been checked or not, according to research posted last September on the Social Science Research Network by human behavior researchers Gordon Pennycook, of the University of Regina in Canada, and David Rand at Yale University.
Rather than remove stories, Facebook shows debunked stories lower in users’ news feeds, which can cut a false article’s future views by 80 percent, company spokesperson Svensson says. Facebook also displays articles that debunk false stories whenever users encounter the related stories — though that technique may backfire. In a study of Facebook users who like and share conspiracy news, Zollo and colleague Walter Quattrociocchi found that after conspiracists interacted with debunking articles, these users actually increased their activity on Facebook conspiracy pages. The researchers reported this finding in June in Complex Spreading Phenomena in Social Systems.
There’s still a lot of work to be done in teaching computers — and people — to recognize fake news. As the old saying goes: A lie can get halfway around the world before the truth has put on its shoes. But keen-eyed computer algorithms may at least slow down fake stories with some new ankle weights.
Humans’ gift of gab probably wasn’t the evolutionary boon that scientists once thought.
There’s no evidence that FOXP2, sometimes called “the language gene,” gave humans such a big evolutionary advantage that it was quickly adopted across the species, what scientists call a selective sweep. That finding, reported online August 2 in Cell, follows years of debate about the role of FOXP2 in human evolution.
In 2002, the gene became famous when researchers thought they had found evidence that a tweak in FOXP2 spread quickly to all humans — and only humans — about 200,000 years ago. That tweak swapped in two amino acids that differ from those in other animals’ versions of the gene. FOXP2 is involved in vocal learning in songbirds, and people with mutations in the gene have speech and language problems. Many researchers initially thought that the amino acid swap was what enabled humans to speak. Speech would have given humans a leg up on competition from Neandertals and other ancient hominids. That view helped make FOXP2 a textbook example of selective sweeps. Some researchers even suggested that FOXP2 was the gene that defines humans, until it became clear that the gene did not allow humans to settle the world and replace other hominids, says archaeogeneticist Johannes Krause at the Max Planck Institute for the Science of Human History in Jena, Germany, who was not involved in the study. “It was not the one gene to rule them all.”
The FOXP2 sweep theory first ran into trouble in 2008, when researchers discovered that Neandertals also had the two amino acid tweaks (SN Online: 11/14/08). That meant the change happened at least 700,000 years ago, before humans and Neandertals became separate branches of the hominid family tree. Then in 2009, some members of the 2002 team that originally reported the sweep presented new evidence showing that the two-amino-acid change wasn’t what swept to evolutionary prominence after all.
“That was sad, but that’s how it is,” says Wolfgang Enard, an evolutionary geneticist at Ludwig-Maximilians-Universität in Munich, who was involved in both the 2002 and 2009 studies. Still, there were hints that other genetic variants in and around FOXP2 might have been involved in a sweep, so the debate continued.

Evolutionary and population geneticist Elizabeth Atkinson of Massachusetts General Hospital in Boston and colleagues decided to revisit the gene’s evolution “to see if FOXP2’s story held up using modern techniques,” she says. The researchers conducted a statistical analysis of patterns of genetic variation in FOXP2 similar to the one done in the 2002 study. But this time the team studied more people, especially more people of African descent, and used data from the entire genome. In a selective sweep, one pattern of genetic variants around a gene becomes much more common than other versions of the gene until nearly everyone has the popular version. When considering all the people in their study together, the researchers picked up the same statistical signal for a selective sweep as Enard’s group had. But when Atkinson’s team examined Africans separately from Europeans and Asians, the signs of a sweep were erased.
That result reflects what happened in human history, Krause says.
When humans migrated out of Africa, certain versions of genes were carried with the migrants and other forms were left behind in Africa. The version of FOXP2 that left with the migrants became more common as the migrant population grew. Atkinson’s team identified the statistical signal as being from population growth, rather than a selective sweep, by looking at changes elsewhere in the genome. If FOXP2 were getting swept, it would be the only gene sending the statistical signal. Instead, other parts of the genome scored similarly to FOXP2 on the statistical test.
The finding doesn’t mean that changes in FOXP2 weren’t important for language evolution, says Kirk Lohmueller, a population geneticist at UCLA. But geneticists may have to rethink some assumptions about how the evolution of species works. Selective sweeps were thought to be a major way that natural selection — the process that drives evolution — altered species. But these and other results suggest that selective sweeps were not very common in human evolution.
Many of the traits associated with being human, including speech and language, are controlled by multiple genes, so no one gene may have given a sweep-worthy boost. Or perhaps a speech and language sweep happened, but so long ago that its signal is too weak to pick up now, Lohmueller says.
Health officials have confirmed 12 cases of rat lungworm disease in the continental United States since January 2011 — including six patients who had not traveled abroad but still contracted the illness caused by a parasite endemic to tropical regions in Asia and Hawaii.
While the disease can be mild, it can become extreme and cause severe neurological problems. In most of the new cases, patients complained of headache, fever, weakness and symptoms consistent with meningitis, the U.S. Centers for Disease Control and Prevention reports in the Aug. 3 Morbidity and Mortality Weekly Report. The disease is also known as angiostrongyliasis, after the parasitic roundworm Angiostrongylus cantonensis, whose larvae hatch in the lungs of rats and then are expelled in the rodents’ excrement. At that point, the larvae can be picked up by snails and slugs, and then passed along to humans if the snails and slugs are eaten raw. On July 30, researchers added centipedes to the list of creatures that can transmit the disease to humans, after a Chinese woman and her son contracted the illness in 2012 by eating raw centipedes bought at a market (SN Online: 7/30/18).
More than half of the recent U.S. cases involved patients who had eaten raw vegetables, likely inadvertently consuming a snail or slug, and at least one case involved a toddler who ate slugs while playing. Of the six cases confirmed as originating within the country, four were reported from Texas, one from Tennessee and one from Alabama.
“We don’t know exactly the source of the infection,” CDC epidemiologist Susan Montgomery says. “Fresh produce really should be washed thoroughly and carefully.”
The CDC also confirmed 18 cases and reported three more probable cases in 2017 alone in Hawaii. Montgomery and her team only tracked cases sent to CDC labs for testing, meaning there may be more undiagnosed cases.
Dim light emanating from the purgatory between galaxies could illuminate the most shadowy constituents of the cosmos.
Dark matter, an unidentified type of particle that interacts gravitationally but otherwise shuns normal matter, lurks throughout clusters of galaxies. Because the elusive substance emits no light, it’s difficult to pin down how it is distributed, even though it makes up the majority of a cluster’s mass. But a feeble glow known as intracluster light could reveal dark matter’s whereabouts, researchers suggest July 30 at arXiv.org. The intermediary could eventually help scientists get a better handle on what dark matter is and how it behaves.

Galaxy clusters grow by swallowing up additional galaxies. As galaxies are assimilated, they can be torn apart and their stars scattered. It’s those stars that produce intracluster light. And where there’s intracluster light, there’s dark matter, the team found. “The shape of this very diffuse light traces very nicely the shape of the total mass of the cluster,” says study coauthor Mireia Montes, an astrophysicist at the University of New South Wales in Sydney. Once stripped from their galaxies, the stars are tugged by the dark matter’s gravity and end up concentrated in the same regions where the dark matter resides.
Typically, scientists use an effect called gravitational lensing to map dark matter (SN: 10/17/15, p. 24). A galaxy cluster’s mass acts like a lens, bending light from more distant objects. By measuring that bending, scientists can see how the dark matter’s mass is distributed within the cluster. However, “that is an incredibly hard measurement to make,” says astrophysicist Stacy Kim of Ohio State University, who was not involved with the research. Measuring intracluster light is easier, Kim says, but teasing out the faint light is still challenging, requiring extended observations with a powerful telescope.

Scientists sometimes use another proxy for dark matter: X-rays emitted by hot gas within a cluster. But if a galaxy cluster has recently merged with another, collisions between gas clouds mean that the X-rays will be displaced from the dark matter. So a map of the matter made using X-rays might be skewed. The stars that produce intracluster light don’t have that problem, because they don’t get knocked off course in cluster mergers the way colliding gas clouds do.
In a study of six galaxy clusters, each observed with NASA’s Hubble Space Telescope, the researchers found that the distribution of intracluster light matched up well with the dark matter mass distribution as determined by gravitational lensing. The X-ray distribution didn’t match, because the six clusters had each been roiled by a recent smashup with another cluster. The team hopes to study more clusters to see if the match between dark matter and intracluster light holds up.
By measuring intracluster light, scientists could “perhaps learn something about the nature of dark matter,” says astrophysicist James Bullock of the University of California, Irvine, who was not involved with the research.
If the distribution of dark matter in galaxy clusters doesn’t agree with standard theoretical predictions, that could reveal new properties of the unidentified particles. For example, dark matter might be interacting with itself (SN: 7/7/18, p. 9). So having a new method to trace out dark matter is great, Bullock says. “This is definitely promising.”
OK, so what if a giant prehistoric shark, thought to be extinct for about 2.5 million years, is actually still lurking in the depths of the ocean? That’s the premise of the new flick The Meg, which opens August 10 and pits massive Carcharocles megalodon against a grizzled and fearless deep-sea rescue diver, played by Jason Statham, and a handful of resourceful scientists.
The protagonists discover the sharks in a deep oceanic trench about 300 kilometers off the coast of China — a trench, the film suggests, that extends down more than 11,000 meters below the ocean surface. (That depth makes it even deeper than the Mariana Trench’s Challenger Deep, the actual deepest known point in the ocean). Hydrothermal vents down in the trench supposedly keep those dark waters warm enough to support an ecosystem teeming with life. And — spoiler alert! — of course, the scientists’ investigation inadvertently helps megalodons escape from the depths. The giant living fossils head to the surface, where they terrorize shark fishermen and beachgoers a la Jaws.
But could a population of megalodons actually have survived down there? To explore what is and isn’t possible and what we still don’t know about sharks, Science News went to the movies with paleobiologist Meghan Balk of the Smithsonian’s National Museum of Natural History in Washington, D.C., who studies the ancient predators.
Did megalodons ever actually get as big as they are in the movie?

Extremely unlikely. The megalodon sharks of The Meg reach sizes of about 20 to 25 meters long, the film says — massive, although just a tad smaller than the longest known blue whales. But estimates based on the size of fossil teeth suggest that even the largest known C. megalodon was much smaller, at up to 18 meters — “and that was the absolute largest,” Balk says. On average, C. megalodon tended to be around 10 meters long, she says, which still made them much bigger than the average great white shark, at around 5 to 6 meters long.
Would a megalodon otherwise look like the film version?

Yes and no. The movie’s sharks aren’t entirely inaccurate representations, Balk says. These megalodons correctly have six gills — between five and seven is accurate for sharks in general, she says. And the shape of the dorsal fin is, appropriately, modeled after the great white shark, the closest modern relative to the ancient sharks. Also, a male meg in the film even has “claspers,” appendages under the abdomen used to hold a female during mating. “When I looked at it, I was like, oh, they did a pretty good job. They didn’t just create a random shark,” Balk says.

On the other hand, it’s actually a bit odd that the movie’s megalodons wouldn’t have evolved some significant anatomical differences from their prehistoric brethren, Balk says. “Like the eye getting bigger” to see better or becoming blind after a few million or so years living in the darkness of the deep sea, she says. Or you might even expect dwarfism, in which populations restricted by geographic isolation, such as being stuck within a trench, shrink in size.
Would such huge sharks have had enough to eat down there?

Extremely unlikely. In general, “there’s just not enough energy in the deep sea” to sustain giant sharks, Balk says. Life does bloom at hydrothermal vents, although the deepest known hydrothermal vents are only about 5,000 meters deep. But even if there were vents in the deepest trenches, it’s not clear there would be enough big species living down there to sustain not just one, but a whole population of massive sharks. In the film, the vent field is populated with many smaller species known to cluster around hydrothermal vents, including shrimps, snails and tube worms. Viewers also see one giant squid, but there would have had to be a whole lot more food of that size. C. megalodon — like modern great whites — ate many different things, from orcas to squid. And the humongous megalodon sharks in the movie “would have eaten a lot of squid,” Balk says, laughing.

Could sharks live at such depths?

Unlikely. How deep sharks can live in the ocean is actually still a big unknown (SN Online: 5/7/18). “Quantifying the depth that sharks go to is a big endeavor right now,” Balk says. Few sharks are known to inhabit the abyssal regions of the ocean below about 4,000 meters — let alone the depths of oceanic trenches lying below 6,000 meters. Aside from the scarcity of food, temperature is another limitation to deep-sea living.
Sharks that do inhabit deeper parts of the ocean, such as goblin sharks and the Greenland shark (SN: 9/17/16, p. 13), tend to have low metabolic rates. That means they move much more slowly than the energetic predators of the movie, Balk says. C. megalodon, although it lived around the globe, tended to prefer warmer, shallower waters and used coastal regions as nursing grounds.

So, could megalodons have survived to modern times without humans knowing about it?

Extremely unlikely. Sharks shed a lot of teeth throughout their lives, and those teeth are the main fossil evidence of the life and times of prehistoric sharks (SN Online: 8/2/18). Fossilized C. megalodon teeth found in sediments around the world suggest that the creatures lived between about 14 million and 2.6 million years ago — or perhaps up until 1.5 million years ago at the latest, Balk says. It’s not clear why they went extinct. But there are a handful of hypotheses: competition for food with other creatures like orcas; ocean circulation changes about 3 million years ago when the Isthmus of Panama formed (SN: 9/17/16, p. 12); nearshore nursery sites vanished; or possibly a loss of prey sources stemming from a marine mammal extinction about 2.6 million years ago.
Bottom line: The sheer abundance of shed teeth — as many as 20,000 per shark in its lifetime — is one of the strongest arguments against megalodon surviving into modern times, Balk says. “That’s one of the reasons why we know megalodon’s definitely extinct. We would have found a tooth.”
Whales may have made their mark on the seafloor in a part of the Pacific Ocean designated for future deep-sea mining.
Thousands of grooves found carved into the seabed could be the first evidence that large marine mammals visit this little-explored region, researchers report August 22 in Royal Society Open Science. If deep-diving whales are indeed using the region for foraging or other activities, scientists say, authorities must take that into account when planning how to manage future mining activities. The Clarion-Clipperton Fracture Zone, or CCZ, is a vast plain on the deep seafloor that spans about 4.5 million square kilometers between Hawaii and Mexico. The region is littered with trillions of small but potentially valuable rocky nodules containing manganese, copper, cobalt and rare earth elements.
Little is known of the seafloor ecosystems in this region that might be disturbed by mining of the nodules. So several research cruises have visited the area since 2013 to conduct baseline assessments of what creatures might live on or near the seafloor. A 2015 cruise led by Daniel Jones of the National Oceanography Centre Southampton in England is the first to find evidence that suggests that live whales may have dived down to visit the seafloor in the region. Using an autonomous underwater vehicle to scan the seafloor at depths from 3,999 meters to 4,258 meters, Jones’ team found 3,539 grooves in all. These depressions tended to be arranged into sequences of as many as 21 grooves, spaced six to 13 meters apart. It’s difficult to determine exactly when the marks were made, because sediment settles very slowly through the deep water to fill in seafloor depressions. The oldest marks were made within the last 28,000 years, the team estimates. But some newer tracks appear to overlap older tracks.
No known geologic mechanism could produce the grooves, report Jones and his National Oceanography Centre colleagues, deep-sea ecologist Leigh Marsh and marine geoscientist Veerle Huvenne. But living creatures might: Some scientists, including biologist Les Watling of the University of Hawaii at Manoa and marine ecologist Peter Auster of the University of Connecticut at Avery Point, previously suggested that certain deep-diving whales, known as beaked whales, can make such markings as they use their beaks to forage for food hiding in the seafloor.
The new research is intriguing, Watling says, but he adds that the biggest question mark is whether a beaked whale could really dive so deep. “When we published our paper, we were extending the probable depth of diving of the whale by several hundred meters,” he says. “These authors are doubling the depth that we talked about.” But, he adds, the new paper also points out that some anatomical studies suggest that a Cuvier’s beaked whale (Ziphius cavirostris), at least, may be able to survive a 5,000-meter dive.
Auster adds that the researchers were careful to consider other possibilities for what might have made the markings and to systematically eliminate those options, leaving only the whales. And that’s definitely a matter that prospective miners will have to pay attention to, he says. Before mining proceeds, he says, future seafloor studies in the region should include efforts to detect whales, using passive acoustic monitoring, for instance.

“This is a huge finding,” says Diva Amon, a deep-sea biologist at the Natural History Museum in London. She has previously cataloged a wealth of seafloor life in the CCZ, including new genera of jellyfish, starfish and sponges. That abundance may be attributable to the variety of sediment types in the region, she adds: Soft seafloor sediment and hard rocky nodules offer numerous places for life to get a foothold.
But whales can be a game changer, because large, charismatic marine mammals can garner public attention in a way that smaller seafloor-dwellers don’t, she says. Although the new study can’t pinpoint when the grooves were made, she says, “this is why more work needs to be done.” Even if the observed grooves were made by whales thousands of years ago, the whales’ behavior may not have changed significantly in the ensuing years, given the stability of the deep-sea environment.
“I would expect that if they were [making the depressions] a couple of thousand years ago, they’re probably still doing it now,” she says.
To date, the International Seabed Authority, the organization that oversees both mining licenses in international waters and environmental regulation of those regions, has issued 16 exploration contracts within the CCZ. Contractors working in the area must record marine mammal sightings within surface waters, as well as sightings of migratory birds, Amon says.
But, she adds, the fact that these whales may be diving about a thousand meters deeper than was previously known, and using seafloor that could be irreparably altered by mining, “has the potential to change the way we manage the CCZ.”
Scientists are one step closer to a long-sought way to store carbon dioxide in rocks.
A new technique speeds up the formation of a mineral called magnesite that, in nature, captures and stores large amounts of the greenhouse gas CO2. And the process can be done at room temperature in the lab, researchers reported August 14 at the Goldschmidt geochemistry conference, held in Boston. If the mineral can be produced in large quantities, the method could one day help fight climate change. “A lot of carbon on Earth is already stored within carbonate minerals, such as limestone,” says environmental geoscientist Ian Power of Trent University in Peterborough, Canada, who presented the research. “Earth knows how to store carbon naturally and does this over geologic time. But we’re emitting so much CO2 now that Earth can’t keep up.”
Researchers have been seeking ways to boost the planet’s capacity for CO2 storage (SN: 6/5/10, p. 16). One possible technique: Sequester the CO2 gas by converting it to carbonate minerals. Magnesite, or magnesium carbonate, is a stable mineral that can hold a lot of CO2 naturally: A metric ton of magnesite can contain about half a metric ton of the greenhouse gas.
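That roughly half-a-ton figure is a straightforward consequence of the molar masses involved (about 44 grams per mole for CO2 and about 84 grams per mole for magnesite, MgCO3), as a back-of-envelope check shows:

\[
\frac{M_{\mathrm{CO_2}}}{M_{\mathrm{MgCO_3}}} = \frac{44.01\ \text{g/mol}}{84.31\ \text{g/mol}} \approx 0.52
\]

So each metric ton of magnesite locks away roughly 520 kilograms of CO2, consistent with the estimate above.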
But magnesite isn’t quick to make — at least, not at Earth’s surface. Previous researchers have considered pumping CO2 deep into Earth’s interior, where high temperatures and pressures can speed up the gas’s reaction with a magnesium-bearing upper mantle rock called olivine. But many barriers remain to making this idea commercial, including finding the right locations to insert the CO2 to produce large amounts of magnesite and the costs of transportation and storage for the gas.
Another option is to try to make magnesite in the laboratory — but at room temperature, that can take a very long time. One place where magnesite forms naturally at Earth’s surface is in arid basins called playas in northern British Columbia. From previous work at such sites, Power and his colleagues determined that groundwater circulating through former mantle rocks such as olivine in the region becomes enriched in magnesium and carbonate ions. The ions eventually react to form magnesite, which settles out of the water. In British Columbia, the process began as far back as 11,000 years ago, Power says. “We knew it was slow, but no one had ever measured the rate.”

Under very high temperatures, scientists can quickly create magnesite in the lab, using olivine as a feedstock. But that process uses a lot of energy, Power says, and could be very costly.
The problem with making magnesite quickly, Power’s team found, is that water gets in the way. To make the rock in the lab, Power and colleagues put magnesium ions — atoms that have a charge due to a gain or loss of electrons — into water. But the water molecules themselves tend to surround the magnesium ions. That “shell” of water molecules hinders the magnesium’s ability to bond with carbonate ions to form magnesite. “It’s difficult to strip away those water molecules,” Power says. “That’s one of the reasons why magnesite forms very slowly.”
To get around this problem, Power and his colleagues used thousands of tiny polystyrene microspheres, each about 20 micrometers in diameter, as catalysts to speed up the reaction. The microspheres were coated with carboxyl, molecules with a negative charge that can pull the water molecules away from the magnesium, freeing it up to bond with the carbonate ions. Thanks to these microspheres, Power says, the researchers managed to make magnesite in just about 72 days. Theoretically, he adds, the microspheres would also be reusable, as the spheres weren’t used up by the experiments.
That result doesn’t mean the technique is ready for prime time, Power says. So far, the scientists have made only a very small amount of magnesite in the lab — about a microgram or so. “We’re very far away from upscaling,” or making the technology commercially viable.
“What we’ve shown is that it’s possible to form this at room temperatures,” Power says. Now, having demonstrated the proof of concept, the team can explore next steps. “We want to better understand some of the fundamental science” involved in magnesite formation, he adds.
“The result really surprised me,” says Patricia Dove, a geochemist at Virginia Tech in Blacksburg. Many questions remain about how cost-effective and energy-efficient the process might be, she says, but it’s “certainly very intriguing.”
Talk about blended families. A 13-year-old girl who died about 50,000 years ago was the child of a Neandertal and a Denisovan.
Researchers already knew that the two extinct human cousins interbred (SN Online: 3/14/16). But the girl, known as Denisova 11 from a bone fragment previously found in Siberia’s Denisova Cave, is the only first-generation hybrid ever found.
Genetic analyses revealed that the girl inherited 38.6 percent of her DNA and her mitochondrial DNA from a Neandertal, meaning that her mother was Neandertal. Her dad was Denisovan, and contributed 42.3 percent of the girl’s DNA, Viviane Slon of the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, and colleagues report online August 22 in Nature. The girl’s father had Neandertal ancestry, too, but way back in his lineage, about 300 to 600 generations before his birth.
Although the girl’s remains were found in Siberia, her Neandertal DNA more closely matches a western European Neandertal from Vindija Cave in Croatia — thousands of kilometers to the west — than an older Neandertal from the same cave as the girl. That finding may mean that eastern Neandertals spread into western Europe sometime after 90,000 years ago, or that western Neandertals beat them to the punch, invading eastward into Siberia before 90,000 years ago and partially replacing the Neandertals living there. Researchers need to test more DNA from western European Neandertals to determine which scenario is correct.