It’s time to retire the five-second rule

For some dropped foods, the five-second rule is about five seconds too long. Wet foods, such as watermelon, slurp up floor germs almost immediately, scientists report online September 2 in Applied and Environmental Microbiology.

Robyn Miranda and Donald Schaffner of Rutgers University in New Brunswick, N.J., tested gummy candy, watermelon and buttered and unbuttered bread by dropping morsels onto various surfaces coated with Enterobacter aerogenes bacteria. Food was left on each surface — stainless steel, ceramic tile, wood and carpet — for time periods ranging from less than a second to five minutes. Afterward, the researchers measured the amount of E. aerogenes on the food. These harmless bacteria share attachment characteristics with stomach-turning Salmonella.

As expected, longer contact times generally meant more bacteria on the food. But the transfer depended on other factors, too. Carpet, for instance, was less likely to transfer germs than the other surfaces. Gummy candies, particularly those on carpet, stayed relatively clean. But juicy watermelon picked up lots of bacteria from all surfaces in less than a second. These complexities, the authors write, mean that the five-second rule is probably a rule worth dropping.

After Big Bang, shock waves rocked newborn universe

Shock waves may have jolted the infant cosmos. Clumpiness in the density of the early universe piled up into traveling waves of abrupt density spikes, or shocks, like those that create a sonic boom, scientists say.

Although a subtle effect, the shock waves could help scientists explain how matter came to dominate antimatter in the universe. They also could reveal the origins of the magnetic fields that pervade the cosmos. One day, traces of these shocks, in the form of gravitational waves, may even be detectable.

Scientists believe that the early universe was lumpy — with some parts denser than others. These density ripples, known as perturbations, serve as the seeds of stars and galaxies.

Now, scientists have added a new wrinkle to this picture. As the ripples rapidly evolved, they became steeper, like waves swelling near the shore, until eventually creating shocks analogous to a breaking wave. As a shock passes through a region of the universe, the density changes abruptly, before settling back down to a more typical, slowly varying density. “Under the simplest and most conservative assumptions about the nature of the universe coming out of the Big Bang, these shocks would inevitably form,” says cosmologist Neil Turok of the Perimeter Institute for Theoretical Physics in Waterloo, Canada.

In a paper published September 21 in Physical Review Letters, Turok and Ue-Li Pen of the Canadian Institute for Theoretical Astrophysics in Toronto performed calculations and simulations that indicate shocks would form less than one ten-thousandth of a second after the Big Bang.

“It’s interesting that nobody’s actually noticed that before,” says cosmologist Kevork Abazajian of the University of California, Irvine. “It’s an important effect if it actually happened.”

These shocks, Turok and Pen found, could produce magnetic fields, potentially pointing to an answer to a cosmological puzzle. Magnetic fields permeate the Milky Way and other parts of the cosmos, but scientists don’t know whether they sprang up just after the birth of the universe or much later, after galaxies had formed. Shock waves could explain how fields might have formed early on. When two shocks collide, they create a swirling motion, sending electrically charged particles spiraling in a way that could generate magnetic fields.
Shocks could also play a role in explaining why the universe is made predominantly of matter. The Big Bang should have yielded equal amounts of matter and antimatter; how the cosmic scales were tipped in matter’s favor is still unexplained. Certain theorized processes could favor the production of matter, but it’s thought they could happen only if temperatures in the universe are uneven. Shocks would create abrupt temperature jumps that would allow such processes to occur.

Scientists may be able to verify these calculations by detecting the gravitational waves that would have been produced when shocks collided. Unfortunately, the gravitational ripples produced would likely be too small to detect with current technologies. But under certain theories, in which large density fluctuations create regions so dense that they would collapse into black holes, the gravitational waves from shocks would be detectable in the near future. “If there was anything peculiar in the early universe, you would actually be able to detect this with upcoming technology,” says Abazajian. “I think that is remarkable.”

Out-of-sync body clock causes more woes than sleepiness

When the body’s internal sense of time doesn’t match up with outside cues, people can suffer, and not just from a lack of sleep.

Such ailments are similar in a way to motion sickness — the queasiness caused when body sensations of movement don’t match the external world. So scientists propose calling time-related troubles, which can afflict time-zone hoppers and people who work at night, “circadian-time sickness.” This malady can be described, these scientists say, with a certain type of math.
The idea, to be published in Trends in Neurosciences, is “intriguing and thought-provoking,” says neuroscientist Samer Hattar of Johns Hopkins University. “They really came up with an interesting idea of how to explain the mismatch.”

Neuroscientist Raymond van Ee of Radboud University in the Netherlands and colleagues knew that many studies had turned up ill effects from an out-of-whack circadian clock. Depression, metabolic syndromes and memory troubles have been found alongside altered daily rhythms. But despite these results, scientists don’t have a good understanding of how body clocks work, van Ee says.

Van Ee and colleagues offer a new perspective by using a type of math called Bayesian inference to describe the circadian trouble. Bayesian inference can be used to describe how the brain makes and refines predictions about the world. This guesswork relies on the combination of previous knowledge and incoming sensory information (SN: 5/28/16, p. 18). In the case of circadian-time sickness, these two cues don’t match up, the researchers propose.
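The flavor of Bayesian cue combination the researchers invoke can be illustrated with a standard toy calculation (a hypothetical sketch for illustration, not the team’s actual model): treat the internal clock’s guess at the time of day and the light-driven guess as two Gaussian estimates, and the fused estimate is their precision-weighted average.

```python
def fuse_estimates(mu_internal, var_internal, mu_light, var_light):
    """Precision-weighted fusion of two Gaussian time-of-day estimates.

    Standard Bayesian cue combination: each cue is weighted by its
    precision (inverse variance), and the fused estimate is more
    certain (lower variance) than either cue alone.
    """
    w_internal = 1.0 / var_internal
    w_light = 1.0 / var_light
    mu = (w_internal * mu_internal + w_light * mu_light) / (w_internal + w_light)
    var = 1.0 / (w_internal + w_light)
    return mu, var

# Internal clock says 22:00; light exposure suggests 16:00 (as after a
# long flight). Equally reliable cues land halfway between.
print(fuse_estimates(22.0, 1.0, 16.0, 1.0))  # (19.0, 0.5)
```

In this toy picture, circadian-time sickness corresponds to the case where the two cues disagree sharply, so the fused estimate sits far from what either timekeeper is reporting.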

Some pacemaking nerve cells respond directly to light, allowing them to track the outside environment. Other pacemakers don’t respond to light but rely on internal signals instead. Working together, these two groups of nerve cells, without any supervision from a master clock, can set the body’s rhythms. But when the two timekeepers arrive at different conclusions, the conflict muddies the time readout in the body, leading to a confused state that could cause poor health outcomes, van Ee and colleagues argue.

This description of circadian-time sickness is notable for something it leaves out — sleep. While it’s true that shifted sleep cycles can cause trouble, a misalignment between internal and external signals may cause problems even when sleep is unaffected, the researchers suggest. That runs counter to the simple and appealing idea that out-of-sync rhythms cause sleep deprivation, which in turn affects the body and brain. That idea “was totally linear and beautiful,” Hattar says. “But once you start looking very carefully at the data in the field, you find inconsistencies that people ignored.”
It’s difficult to disentangle sleep from circadian misalignments, says neuroscientist Ilia Karatsoreos of Washington State University in Pullman. Still, research by him and others has turned up detrimental effects from misaligned circadian rhythms — even when sleep was normal. This new paper helps highlight why “it is important to be able to study and understand the contribution of each,” he says.

The concept of circadian-time sickness is an idea that awaits testing, Karatsoreos cautions. Yet it’s a “useful way for us to talk about this general problem, if only for the fact that it’s a way of thinking that I’ve really never seen before.”

Climate change shifts how long ants hang on to coveted real estate

Heating small patches of forest shows how climate warming might change the winner-loser dynamics as species struggle for control of prize territories. And such shifts in control could have wide-ranging effects on ecosystems.

The species are cavity-nesting ants in eastern North America. Normally, communities of these ant species go through frequent turnovers in control of nest sites. But as researchers heated enclosures to mimic increasingly severe climate warming, the control started shifting toward a few persistent winners. Several heat-loving species tended to stay in nests unusually long, instead of being replaced in faster ant upheavals, says Sarah Diamond of Case Western Reserve University in Cleveland.
That’s worrisome not only for the new perpetual losers among ants but for the ecosystem as a whole, she and her colleagues argue October 26 in Science Advances. Ants have an outsized effect on ecosystems. They churn up soil, shape the flow of nutrients and disperse seeds to new homes. Ant species that can’t compete in a warmer climate may blink out of the community array, with consequences for other species they affect.

Teasing out the indirect effects of climate change has been difficult. “We’ve all sort of thrown up our hands and said probably these interactions are quite important, but they’re really hard to measure so we’re just going to ignore that for now,” Diamond says.
Experiments have begun tackling those interactions, and the ant enclosures were among the most ambitious. At each of two experimental sites — in North Carolina and Massachusetts — researchers set up 15 roomy plots to mimic various warming scenarios, from 1.5 degrees Celsius above the surrounding air temperature to an extra 5.5 degrees C. To install outdoor heating, “we had backhoes in there digging trenches,” Diamond says. Giant propane tanks fueled boilers that forced warmer air into the enclosures to heat the soil. Computers monitored soil temperature and fine-tuned air flow.
At least 60 species of local ants came and went naturally, some of them nesting in boxes the researchers placed in the enclosures. For five years, the researchers regularly monitored which common species were living in the boxes.
Warmth gave an edge to a few heat-tolerant species such as Temnothorax longispinosus in the forest in Massachusetts. This tiny ant can build colonies inside an acorn and is a known target for attacks by slavemaker ants that invade nests instead of establishing their own. With increased warming, however, it and a few other heat-loving ants tended to hold their nests longer.

Those longer stints destabilize the ant community’s usual brisk turnover of nests, which typically gives more species a chance at decent shelter and better odds of surviving in the community. What’s more, the analysis showed that the more a plot was heated, the more time the ants would need after a disturbance to return to their usual equilibrium.

“A key strength of this study is their regular sampling,” says Jason Tylianakis, who holds joint appointments at the University of Canterbury in New Zealand and Imperial College London. Those data gave the scientists an unusually detailed picture of subtle community effects, he says.

The authors have “documented a new consequence of temperature change on communities,” says marine ecologist Sarah Gilman of the Claremont Colleges in California. Other studies have talked about climate change pushing communities to dramatically new, but ultimately stable states. But the ant experiment shows that climate change may be undermining the stability of communities that, at least for the moment, still look fairly normal.

Riding roller coasters might help dislodge kidney stones

Passing a kidney stone is not exactly rocket science, but it could get a boost from Space Mountain.

It seems that shaking, twisting and diving from on high could help small stones dislodge themselves from the kidney’s inner maze of tubules. Or so say two researchers who rode the Big Thunder Mountain Railroad roller coaster at Disney’s Magic Kingdom in Orlando, Fla., 20 times with a fake kidney tucked inside a backpack.

The researchers, from Michigan State University College of Osteopathic Medicine in East Lansing, planned the study after several of their patients returned from the theme park announcing they had passed a kidney stone. Finally, one patient reported passing three stones, each one after a ride on a roller coaster.
“Three consecutive rides, three stones — that was too much to ignore,” says David Wartinger, a kidney specialist who conducted the study with Marc Mitchell, his chief resident at the time.
Since neither researcher had kidney stones of his own, the pair 3-D printed a life-size plastic replica of the branching interior of a human kidney. Then they inserted three stones and human urine into the model. The stones were of a size that usually passes on its own, generally smaller in diameter than a grain of rice. After arriving at the park, Wartinger and Mitchell sought permission from guest services to do the research, fearing that two men with a backpack boarding the same ride over and over might strike workers as suspect.
“Luckily, the first person we talked to in an official capacity had just passed a kidney stone,” Wartinger says. “He told us he would help however we needed.”

Even when a stone is small, its journey through the urinary tract can be excruciating. In the United States alone, more than 1.6 million people each year experience kidney stones painful enough to send them to the emergency room. Larger stones — say, the size of a Tic Tac — can be treated with sound waves that break the stones into smaller pieces that can pass.

For the backpack kidney, the rear of the train was the place to be. About 64 percent of the stones in the model kidney cleared out after a spin in the rear car. Only about 17 percent passed after a single ride in the front car, the researchers report in the October Journal of the American Osteopathic Association.

Wartinger thinks that a coaster with more vibration and less heart-pounding speed would be better at coaxing a stone on its way.

The preliminary study doesn’t show whether real kidneys would yield their stones to Disney magic. Wartinger says a human study would be easy and inexpensive, but for now, it’s probably wise to check with a doctor before taking the plunge.

Marijuana use linked to weakened heart muscle

NEW ORLEANS — Marijuana use is associated with an almost doubled risk of developing stress cardiomyopathy, a sudden life-threatening weakening of the heart muscle, according to a new study. Cannabis fans may find the results surprising, since two-thirds believe the drug has no lasting health effects. But as more states approve recreational use, scientists say there’s a renewed urgency to learn about the drug’s effects.

An estimated 22 million Americans — including 38 percent of college students — say they regularly use marijuana. Previous research has raised cardiovascular concerns: The drug has been linked to an increased risk of heart attack immediately after use, and a 2016 study in rodents found that one minute of exposure to marijuana smoke impairs the inner lining of blood vessels for 90 minutes, longer than tobacco’s effect.

The new study, presented November 13 during the American Heart Association’s Scientific Sessions, examined the occurrence of stress cardiomyopathy, which temporarily damages the tip of the heart. Researchers from St. Luke’s University Health Network in Bethlehem, Pa., searched a nationwide hospital database and found more than 33,000 admissions for stress cardiomyopathy from 2003 to 2011. Of those, 210 were identified as marijuana users; compared with nonusers, marijuana users had about twice the odds of developing the condition, said Amitoj Singh, who led the study. Young men were at highest risk and more likely to go into cardiac arrest despite having fewer cardiovascular risk factors. Notably, the number of marijuana-linked cardiomyopathies increased every year, from 17 in 2007 to 76 in 2011. “With recent legalization, I think that’s going to go up,” Singh said.

A Pap smear can scoop up fetal cells for genome testing

Scanning a fetus’s genome just a few weeks after conception may soon be an option for expecting parents. Mom just needs to get a Pap smear first.

By scraping a woman’s cervix as early as five weeks into a pregnancy, researchers can collect enough fetal cells to test for abnormalities linked to more than 6,000 genetic disorders, researchers report November 2 in Science Translational Medicine. It’s not clear exactly how fetal cells make their way down to the cervix, says study coauthor Sascha Drewlo of Wayne State University School of Medicine in Detroit. But the cells may invade mom’s mucus-secreting glands, and then get washed into the cervical canal.

Current prenatal tests include amniocentesis and chorionic villus sampling, but they work later in pregnancy: at least 12 weeks for amnio and at least nine weeks for CVS. Amnio requires a long needle threaded through a pregnant woman’s belly and uterus; CVS often does, too. Instead, Drewlo’s team gathered fetal trophoblast cells, which give rise to the placenta, and were able to examine the genomes of 20 fetuses.

The new technique, which can work with as few as 125 fetal cells, could one day help physicians care for their tiniest patients. For some genetic conditions, such as congenital adrenal hyperplasia, early detection means mom can take some medicine to “actually treat the fetus in utero,” Drewlo says.

Gaggle of stars get official names

For centuries, stargazers have known which star was Polaris and which was Sirius, but those designations were by unofficial tradition. The International Astronomical Union, arbiter of naming things in space, has now blessed the monikers of 227 stars in our galaxy. As of November 24, names such as Polaris (the North Star) and Betelgeuse (the bright red star in Orion) are approved.

Until now, there has been no central star registry or guidelines for naming. There are many star catalogs, each one designating stars with different combinations of letters and numbers. That excess of options has left most stars with an abundance of labels (HD 8890 is one of over 40 designations for Polaris).

The tangle of titles won’t disappear, but the new IAU catalog is a stab at formalizing the more popular names. Before this, only 14 stars (included in the 227) had been formally named, as part of the IAU’s contest to name notable exoplanets and the stars that they orbit (SN: 2/6/16, p. 5). One famous star is returning to its ancient roots. The brightest member of Alpha Centauri, the pair of stars that are among the closest to our solar system, is now officially dubbed Rigil Kentaurus, an early Arabic name meaning “foot of the centaur.”

Earth’s mantle is cooling faster than expected

SAN FRANCISCO — Earth’s innards are cooling off surprisingly fast.

New volcanic crust forming on the seafloor has gotten thinner over the last 170 million years. That suggests that the underlying mantle is cooling about twice as fast as previously thought, researchers reported December 13 at the American Geophysical Union’s fall meeting.

The rapid mantle cooling offers fresh insight into how plate tectonics regulates Earth’s internal temperature, said study coauthor Harm Van Avendonk, a geophysicist at the University of Texas at Austin. “We’re seeing this kind of thin oceanic crust on the seafloor that may not have existed several hundred million years ago,” he said. “We always consider that the present is the clue to the past, but that doesn’t work here.”
The finding is fascinating, though the underlying data is sparse, said Laurent Montési, a geodynamicist at the University of Maryland in College Park. Measuring the thickness of seafloor crust requires seismic studies, and “you don’t have that everywhere; there’s nothing in the South Pacific, for example.” Still, he said, “it’s amazing that we can see the signature of the cooling of the Earth.” The finding could help explain why supercontinents such as Pangaea break apart, he added.

Earth’s mantle is made up of hot rock under high pressures. As material rises toward Earth’s surface, pressures drop and the rock starts melting. This molten material can spew onto the surface at mid-ocean ridges and construct new crust. When mantle temperatures are hotter, the melt zone is larger and the resulting crust is thicker. Near the upper boundary with the crust, mantle temperatures range from about 500° to 900° Celsius.

Comparing the thickness of oceanic crust of different ages, Van Avendonk and colleagues discovered that 170-million-year-old crust is 1.7 kilometers thicker on average than fresh crust. Chemical analyses of lava rocks suggest Earth’s mantle has cooled about 6 to 11 degrees on average every 100 million years over the last 2.5 billion years. But since the mid-Jurassic about 170 million years ago, the mantle has cooled around 15 to 20 degrees on average per 100 million years, the researchers estimate. While scientists expected the mantle to cool over time as heat left over from Earth’s formation and from radioactive decay dissipates, this degree of cooling was surprising.
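The factor-of-two speedup can be checked with quick arithmetic on the rates quoted above (a rough back-of-the-envelope sketch using the midpoints of the study’s ranges, not the team’s actual calculation):

```python
# Midpoints of the cooling-rate ranges quoted above (deg C per 100 Myr)
long_term_rate = (6 + 11) / 2   # 2.5-billion-year average: 8.5
recent_rate = (15 + 20) / 2     # since the mid-Jurassic: 17.5

# Ratio of recent to long-term cooling: roughly a factor of two,
# consistent with "twice as fast as previously thought"
ratio = recent_rate / long_term_rate
print(round(ratio, 2))  # 2.06

# Cooling accumulated over the 170 Myr since the mid-Jurassic
print(recent_rate * 170 / 100)  # 29.75, i.e. about 30 deg C
```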

Plate tectonics is causing this rapid cooldown, the researchers proposed. The sinking, shifting and formation of plates drive convection in the planet’s interior that shifts heat. This process also controls how fast different regions of the mantle cool. While mantle temperatures below the Pacific Ocean have decreased around 13 degrees per 100 million years, the mantle below the Atlantic and Indian oceans has cooled about 37 degrees per 100 million years.

The difference between the oceans comes down to their proximity to continents. The Atlantic and Indian oceans opened when the Pangaea supercontinent ripped apart. Before then, the underlying mantle was covered by continental crust, which served as an insulating blanket that kept mantle temperatures toasty, Van Avendonk proposed. As Pangaea split and the continents shifted, the mantle beneath the young oceans began cooling much faster. Over that same time period, the Pacific Ocean was largely isolated from the continents and cooled more gradually.
The insulating effect of the Pangaea supercontinent could help explain what drives Earth’s cycle of supercontinent formation and breakup, Montési said. Heat may build up underneath supercontinents, ultimately generating a massive rising plume of hot rock that splits the landmass apart. “This could explain why you get a breakup about 100 million years after you get a supercontinent assembled,” he said.

Megadiamonds point to metal in mantle

Imperfections in supersized diamonds are a bummer for gem cutters but a boon for geologists. Tiny metal shards embedded inside Earth’s biggest diamonds provide direct evidence that the planet’s rocky mantle contains metallic iron and nickel, scientists report in the Dec. 16 Science.

The presence of metal in the mantle is “something that’s been predicted in theory and experiments for a number of years, but this is the first time that we have samples that give us physical evidence,” says Evan Smith, a geologist at the Gemological Institute of America in New York City who led the study.
Understanding the composition of the mantle can help scientists figure out how it drives plate tectonics, and how elements like carbon and oxygen are cycled and stored in the Earth.

Diamonds form in the mantle when carbon-containing compounds are compressed and heated to great temperatures (SN: 4/30/16, p. 8). During this process, traces of other elements can sneak in as “inclusions.” Inclusions make the mineral less valuable as a gemstone, and chunks with many such imperfections are often rejected by jewelers. But to scientists, such stowaways are priceless: They provide some of the only physical evidence available of what’s going on deep inside Earth.
Smith and his colleagues used lasers to identify inclusions in samples from some of Earth’s largest diamonds. Some of these diamonds were originally hundreds or even thousands of carats. The Cullinan Diamond, for instance, was as big as two stacked decks of cards and weighed 621 grams, or more than a pound.
By analyzing the inclusions in 53 diamond samples, Smith’s team found that the megadiamonds formed much deeper than ordinary diamonds — hundreds of kilometers down, in a different part of the mantle.

“Not only are [these diamonds] very large and very valuable, but they appear to be very special in terms of their origin,” says Graham Pearson, a geochemist at the University of Alberta in Canada who wasn’t part of the study. Other minerals brought up from such a depth don’t preserve chemical clues to their environment in the same way.

In the deep diamonds, Smith found inclusions made of a mixture of solidified iron, nickel, carbon and sulfur, surrounded by a thin skin of methane and hydrogen.

Aside from the molten metal core, most iron inside Earth is thought to be incorporated into other minerals, not as a stand-alone metal, says Pearson.

But the metallic inclusions suggest that the part of the mantle where the large diamonds formed contains iron and nickel as metals. And the fact that the metals hadn’t reacted with other elements to form compounds suggests that the diamonds’ birthplace probably contains very little oxygen.

Smith thinks the metallic mixture was probably molten at the time the diamonds were forming. Instead of being sprinkled throughout like grated parmesan on spaghetti, ribbons of iron and nickel might have rippled through the mantle like gooey cheddar-jack through mac-and-cheese.

“You need very special conditions to get a metallic melt,” says Catherine McCammon, a geologist at the University of Bayreuth in Germany. So the idea that diamonds carry evidence of these conditions in the mantle is “pretty mind-blowing.”