Like a quantum version of a whirling top, protons have angular momentum, known as spin. But the source of the subatomic particles’ spin has confounded physicists. Now scientists have confirmed that some of that spin comes from a frothing sea of particles known as quarks and their antimatter partners, antiquarks, found inside the proton.
Surprisingly, a less common type of antiquark contributes more to a proton’s spin than a more plentiful variety, scientists with the STAR experiment report March 14 in Physical Review D. Quarks come in an assortment of types, the most common of which are called up quarks and down quarks. Protons are made up of three main quarks: two up quarks and one down quark. But protons also have a “sea,” or an entourage of transient quarks and antiquarks of different types, including up, down and other varieties (SN: 4/29/17, p. 22).
Previous measurements suggested that the spins of the quarks within this sea contribute to a proton’s overall spin. The new result — made by slamming protons together at a particle accelerator called the Relativistic Heavy Ion Collider, or RHIC — clinches that idea, says physicist Elke-Caroline Aschenauer of Brookhaven National Lab in Upton, N.Y., where the RHIC is located.
A proton’s sea contains more down antiquarks than up antiquarks. But, counterintuitively, more of the proton’s spin comes from up than down antiquarks, the researchers found. In fact, the down antiquarks actually spin in the opposite direction, slightly subtracting from the proton’s total spin.
“Spin has surprises. Everybody thought it’s simple … and it turns out it’s much more complicated,” Aschenauer says.

Editor’s note: This story was updated April 3, 2019, to correct the subheadline to say that up antiquarks (not up quarks) add more angular momentum than do down antiquarks (not down quarks).
A tweaked laboratory protocol has revealed signs of thousands of newborn nerve cells in the brains of adults, including an octogenarian.
These immature neurons, described online March 25 in Nature Medicine, mark the latest data points in the decades-old debate over whether people’s brains churn out new nerve cells into adulthood. The process, called neurogenesis, happens in the brains of some animals, but scientists have been divided over whether adult human brains are capable of such renewal (SN Online: 12/20/18).

Researchers viewed slices of postmortem brains of 13 formerly healthy people aged 43 to 87 under a microscope, and saw thousands of what appeared to be newborn nerve cells. These cells were in a part of the hippocampus called the dentate gyrus, a suspected hot spot for new neurons. Brain samples from 45 people with Alzheimer’s disease, however, had fewer of these cells — a finding that suggests that neurogenesis might also be related to the neurodegenerative disease.
Most of the brain samples used in the study were processed within 10 hours of a donor’s death, and spent no more than 24 hours soaking in a chemical that preserves the tissue. Those factors may help explain why the new neurons were spotted, the researchers write. Some earlier experiments that didn’t find evidence of neurogenesis used samples that were processed later after a donor’s death, and that had sat for longer in the fixing chemical.
DNA is the glamour molecule of the genetics world. Its instructions are credited with defining appearance, personality and health. And the proteins that result from DNA’s directives get credit for doing most of the work in our cells. RNA, if mentioned at all, is considered a mere messenger, a go-between — easy to ignore. Until now.
RNAs, composed of strings of genetic letters called nucleotides, are best known for ferrying instructions from the genes in our DNA to ribosomes, the machines in cells that build proteins. But in the last decade or so, researchers have realized just how much more RNAs can do — how much they control, even. In particular, scientists are finding RNAs that influence health and disease yet have nothing to do with being messengers.
The sheer number and variety of noncoding RNAs, those that don’t ferry protein-building instructions, give some clues to their importance. So far, researchers have cataloged more than 25,000 genes with instructions for noncoding RNAs in the human genome, or genetic instruction book (SN: 10/13/18, p. 5). That’s more than the estimated 21,000 or so genes that code for proteins.
Those protein-coding genes make up less than 2 percent of the DNA in the human genome. Most of the rest of the genome is copied into noncoding RNAs, and the vast majority of those haven’t been characterized yet, says Pier Paolo Pandolfi of Boston’s Beth Israel Deaconess Medical Center. “We can’t keep studying just two volumes of the book of life. We really need to study them all.”

Scientists no longer see the RNAs that aren’t envoys between DNA and ribosomes as worthless junk. “I believe there are hundreds, if not thousands, of noncoding RNAs that have a function,” says Harvard University molecular biologist Jeannie Lee. She and other scientists are beginning to learn what these formerly ignored molecules do. It turns out that they are involved in every step of gene activity, from turning genes on and off to tweaking final protein products. Those revelations were unthinkable 20 years ago.
Back in the 1990s, Lee says, scientists thought only proteins could turn genes on and off. Finding that RNAs were in charge “was a very odd concept.”
Here are five examples among the many noncoding RNAs that are now recognized as movers and shakers in the human body, for good and ill.

Sometimes anticancer drugs stop working for reasons researchers don’t entirely understand. Take the chemotherapy drug cytarabine. It’s often the first drug doctors prescribe to patients with a blood cancer called acute myeloid leukemia. But cytarabine eventually stops working for about 30 to 50 percent of AML patients, and their cancer comes back.
Researchers have looked for defects in proteins that may be the reason cytarabine and other drugs fail, but there still isn’t a complete understanding of the problem, Pandolfi says. He and colleagues now have evidence that drug resistance may stem from problems in some of the largest and most bountiful of the newly discovered classes of RNAs, known as long noncoding RNAs. Researchers have already cataloged more than 18,000 of these “lncRNAs” (pronounced “link RNAs”). Pandolfi and colleagues investigated how some lncRNAs may work against cancer patients who are counting on chemotherapy to fight their disease. “We found hundreds of new players that can regulate response to therapy,” he says.
When the researchers boosted production of several lncRNAs in leukemia cells, the cells became resistant to cytarabine, Pandolfi and colleagues reported in April 2018 in Cell. They also found that patients with AML who had higher than normal levels of two lncRNAs experienced a cancer recurrence sooner than people who had lower levels of those lncRNAs.
Researchers are just beginning to understand how these lncRNAs influence cancer and other diseases, but Pandolfi is hopeful that someday he and other researchers will devise ways to control the bad actors and boost the helpful ones.
MicroRNAs
Sparking a tumor’s spread
MicroRNAs are barely more than 20 RNA units, or bases, long, but they play an outsized role in heart disease, arthritis and many other ailments. These pipsqueaks can also lead to nerve pain and itchiness, researchers reported last year in Science Translational Medicine and in Neuron (SN Online: 8/13/18).
Hundreds of clinical studies are testing people’s blood and tissues to determine if microRNAs can be used to help doctors better diagnose or understand conditions ranging from asthma and Alzheimer’s disease to schizophrenia and traumatic brain injury. Some researchers are beginning to develop microRNAs as drugs and seeking ways to inhibit rogue microRNAs.
So far, the little molecules’ most firmly established roles are as promoters of and protectors against cancer (SN: 8/28/10, p. 18). Pancreatic cancer, for example, is a deadly foe. Only 8.5 percent of people are still alive five years after being diagnosed with this disease, according to U.S. National Cancer Institute statistics.
Cancer biologist Brian Lewis of the University of Massachusetts Medical School in Worcester and colleagues have learned that some microRNAs spur this lethal cancer’s initial attack and help the tumor spread from the pancreas to other organs.
MicroRNAs are mirror images of portions of the messenger RNAs that shuttle protein-making instructions from DNA to the ribosomes, where proteins are built. The microRNAs pair up with their larger messenger RNA mates and slate the bigger molecules for destruction, or at least prevent their instructions from being translated into proteins. One microRNA might have hundreds of mates, or targets, through which it influences many different body functions.
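The pairing described here is ordinary base complementarity (A with U, G with C in RNA). Here is a toy Python sketch of the matching idea, using invented sequences; real microRNA targeting tolerates mismatches and wobble pairs and is far more subtle than an exact string match:

```python
# Watson-Crick pairing rules for RNA bases.
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna):
    # A microRNA binds its target antiparallel, so the matching
    # site in the messenger RNA is the reverse complement
    # of the microRNA's sequence.
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def find_target_site(microrna, mrna):
    # Return the index of a perfectly complementary site, or -1.
    return mrna.find(reverse_complement(microrna))

# Invented example sequences (not from any real gene):
mirna = "UGAGGUAG"
mrna = "AAACCUACCUCAGGG"  # contains the reverse complement of mirna
print(find_target_site(mirna, mrna))  # prints 4
```

A single microRNA can match sites in hundreds of different messenger RNAs, which is why one small molecule can ripple through so many body functions.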
Lewis studies one gang of six microRNAs, known as the miR-17~92 cluster, the first group of microRNAs found to play a role in cancer. The six normally help strike a balance between cell growth and death, but an imbalance of these little molecules can push cells toward cancer.
Tumors in pancreatic cancer patients tend to have elevated levels of the cluster. To learn what the microRNAs were doing to goad cancer into taking hold, Lewis and colleagues used a genetic trick to remove the microRNAs from the pancreas in mice that were genetically engineered to develop pancreatic tumors. Early in their lives, mice with and without the microRNA cluster had about the same number of precancerous cells. But by the time the animals were 9 months old, a clear difference emerged. In mice with the miR-17~92 microRNAs, nearly 60 percent of the pancreas was tending toward cancer, compared with less than 20 percent in mice lacking the cluster. The finding, reported in 2017 in Oncotarget, suggests that the microRNAs aid the cancer’s start.
The researchers developed bits of RNA that block some of the cluster members from spurring on the tumor. Using human pancreatic cancer cells grown in lab dishes, Lewis and colleagues found that taking out two of the six cluster members, miR-19a and miR-19b, stopped cancer cells from forming structures called invadopodia. As their name suggests, invadopodia allow tumors to break through blood vessel walls and other barriers to spread through the body.
Transfer RNA fragments
The virus helpers
For some young children and older adults, an infection with respiratory syncytial virus, or RSV, feels like a simple cold. But each year in the United States, more than 57,000 children younger than age 5 and about 177,000 people older than 65 are hospitalized because of the virus, the U.S. Centers for Disease Control and Prevention estimates. The infection kills hundreds of babies and about 14,000 adults over 65 annually.
Slightly higher than normal levels of some microRNAs had been linked to severe RSV infections. But molecular virologist Xiaoyong Bao of the University of Texas Medical Branch in Galveston wasn’t convinced that modestly increasing amounts of a few microRNAs could really mean the difference between a child getting a slight cold and dying from the respiratory virus.
She consulted her Texas Medical Branch colleague, cancer researcher Yong Sun Lee, for advice on studying microRNAs. Lee said Bao would need to deeply examine, or sequence, RNA in cells infected with the virus. That was an expensive proposition in 2012 when Bao started the project. “But I squeezed from my [lab’s] dry bank account,” she says, to pay for the experiment.

The investment paid off. Cells infected with RSV had more of one particular RNA than did uninfected cells. Surprisingly, it was a piece of a transfer RNA. Transfer RNAs, or tRNAs, are the assembly line workers of protein building. tRNAs read instructions in a messenger RNA and deliver the amino acids the ribosome needs to make a protein. Scientists knew that working tRNAs are essential employees. Fragments, when they were found, were considered leftover bits of decommissioned tRNAs.

But the fragments that Bao and colleagues discovered aren’t just worn out bits of tRNAs. Each fragment, about 30 bases long, is precisely cut from a tRNA when RSV infects cells. The fragments aid the virus’s infection in more than one way. For instance, two fragments help the virus make copies of itself in cells, Bao and colleagues reported in 2017 in the Journal of General Virology.
tRNA fragments may also boost the body’s susceptibility to a virus. Last year, Bao’s group described in Scientific Reports how exposure to some heavy metals, via air or water pollution, can produce tRNA fragments that trigger inflammation, which may make people more susceptible to respiratory infections such as RSV.
SINE RNAs
Sacrificing infected cells
Another type of RNA may help protect against infection by certain viruses, including herpesvirus. Virologist Britt Glaunsinger has long marveled at the way viruses manipulate host cells by controlling RNAs in the cell. She became intrigued by transposons, mobile stretches of DNA that can jump from one location to another in the genome. Transposons make up nearly half of all the DNA in the human genome (SN: 5/27/17, p. 22). “We tend to think of [transposons] as parasites and things our own cells are constantly trying to shut down,” says Glaunsinger, a Howard Hughes Medical Institute investigator at the University of California, Berkeley. That’s because some are relics of ancient viruses. “While they may have initially been bad, some of them may actually be useful to us,” she says.
One class of transposons, called SINEs for short interspersed nuclear elements, are peppered throughout the genome. People have more than a million of one type of SINE known as Alu elements. Mice have similar SINEs, called B2s.
When active, SINE transposons make RNA copies of themselves. These SINE RNAs don’t carry instructions for building proteins and alone don’t enable the transposons to jump around the genome. So researchers puzzled over their role. Glaunsinger and colleagues discovered that some SINE RNAs may protect against viral infections. Normally, cells keep a tight lock on transposons, preventing them from making any RNA. But in Glaunsinger’s experiments, cells infected with herpesvirus “were producing tons of these noncoding RNAs in response to infection,” she says. “That sort of captured our interest.”
Details of the process are still being worked out, but Glaunsinger and others have discovered that SINE RNA production triggers a cascade of events that eventually kills infected human and mouse cells. Once the RNA production gets going, Glaunsinger says, “the cell is destined to die.” Inflammation appears to be an important step in the cell-killing chain reaction. It’s all for the greater good: Killing the infected cell may protect the rest of the organism from the infection’s spread.
But there’s a wrinkle: In mice, at least, one type of herpesvirus benefits from the flood of B2 RNAs in the cells it infects. The virus hijacks part of the inflammation chain reaction to boost its own production, Glaunsinger and colleagues reported in 2015 in PLOS Pathogens. “This is an example of the back-and-forth battle that’s always going on between virus and host,” she says. “Now the ball is back in the host’s court.”
piRNAs
Shielding the brain from jumping genes
Autopsies of people who died with Alzheimer’s disease show a buildup of a protein called tau in the brain. That tau accumulation is tied to loss of some guardian RNAs, according to work by Bess Frost, a neurobiologist at UT Health San Antonio.
Frost studies fruit flies genetically engineered to make a disease-causing version of human tau in their nerve cells. Flies with the disorderly tau get a progressive nerve disease that causes movement problems and kills nerves. The insects live shorter lives than normal.
Part of the reason the flies, as well as people with tau tangles, have problems is because some RNAs known to guard the genome fall down on the job, Frost and colleagues discovered. These piwi-interacting RNAs, or piRNAs (pronounced “pie RNAs”), help keep transposons from jumping around. When transposons jump, they may land in or near a gene and mess with its activity. Usually cells prevent jumping by stopping transposons from making messenger RNA, which carries instructions to make proteins that eventually enable the transposon to hop from place to place. If a transposon gets past the cell’s defenses and produces its messenger RNA, piRNAs will step up to pair with the messenger and cause its destruction.
When disease-causing tau builds up in flies (and maybe in people), a class of transposon with a lengthy name — class I long terminal repeat retrotransposons — makes much more RNA than usual. And when flies have the disease-causing version of tau, they also have lower than normal levels of piRNAs, Frost and colleagues reported in August 2018 in Nature Neuroscience. “Both arms of control are messed up,” Frost says. Brains of people who died with Alzheimer’s disease or supranuclear palsy, another tau-related disease, also show signs that transposons were making extra RNA, suggesting that when tau goes bad, it can beat piRNA’s defenses.
In search of a work-around, Frost’s team found that genetically boosting piRNA production in flies or giving a drug that stops transposon hops reduced nerve cell death in the insects. The researchers are preparing to test the drug in mice prone to a rodent version of Alzheimer’s disease. The team is also examining human brain tissue to see if the increase in transposon RNAs actually leads to transposon jumping in Alzheimer’s patients. If transposons don’t hop more than usual, the finding may suggest that transposon RNAs themselves can cause mischief — no jumping necessary.
This story appears in the April 13, 2019 issue of Science News with the headline, “The Secret Powers of RNA: Overlooked molecules play a big role in human health.”
Life after shingles

In “With its burning grip, shingles can do lasting damage” (SN: 3/2/19, p. 22), Aimee Cunningham described the experience of Nora Fox, a woman whose bout with shingles nearly 15 years ago left her with a painful condition called postherpetic neuralgia. Fox hadn’t found any reliable treatments, Cunningham reported.
Fox praised Science News for our portrayal of shingles-related pain. “The cover is excellent and looks just like I felt,” she wrote.
As the story went to press, Fox had a surgery during which doctors placed electrodes under the skin near sites of pain. A device lets Fox control when stimulation is delivered to those areas. But the treatment, called peripheral nerve stimulation, may not work for all patients with postherpetic neuralgia. There are reports in scientific journals of individual patients experiencing relief from their neuropathic pain after the procedure, Cunningham says.

Fox’s husband, Denver C. Fox, sent Science News an update on her pain since the procedure: “There [has] been a significant change to the unbearable pain my wife has endured EVERY afternoon and evening for 14 years, despite trying every possible treatment the MDs knew of.” Shortly after the procedure, “her pain is greatly and markedly diminished.”

Stone Age throwback

Tests with replicas of a 300,000-year-old wooden spear suggest that Neandertals could have hunted from a distance, Bruce Bower reported in “Why modern javelin throwers hurled Neandertal spears at hay bales” (SN: 3/2/19, p. 14).
Reader Brenda Gray suggested that Neandertals’ spears could have been used for fighting instead of hunting.
The ancient spear found in Germany, on which the spear replicas were based, came from sediment that also contained stone tools and thousands of animal bones displaying marks made by stone tools, Bower says. “Such evidence indicates that the spears were used as hunting weapons. Neandertals could have used wooden spears in different ways, but there is no evidence that I know of for Neandertals using spears in warfare,” he says.
Young and restless

Earth’s inner core began hardening sometime after 565 million years ago, Carolyn Gramling reported in “Earth’s core may have hardened just in time to save its magnetic field” (SN: 3/2/19, p. 13). The core may have solidified just in time to strengthen the planet’s magnetic field, saving it from collapse.
Reader John Bunch thought that the timing of the inner core’s solidification “lines up nicely” with the Cambrian explosion, when life rapidly diversified about 542 million years ago. “It leads me to wonder if there may be some cause and effect or some other relationship between the two that’s going on here.”
The magnetic field’s extremely low intensity, recorded in rocks from around 565 million years ago, actually roughly lines up with the Avalon explosion, an earlier proliferation of new life forms called the Ediacaran biota, between about 575 million and 542 million years ago, Gramling says. It’s an intriguing coincidence that researchers have noted.
Earth’s magnetic field helps protect the planet from radiation. So a weak magnetic field might somehow be linked with the Avalon explosion. One idea is that increased radiation reaching Earth’s surface hundreds of millions of years ago might have increased organisms’ mutation rates, Gramling says. But there just isn’t any evidence to support a causal link at the moment.
Multiplying 2 x 2 is easy. But multiplying two numbers with more than a billion digits each — that takes some serious computation.
The multiplication technique taught in grade school may be simple, but for really big numbers, it’s too slow to be useful. Now, two mathematicians say that they’ve found the fastest way yet to multiply extremely large figures.
The duo claims to have reached an ultimate speed limit for multiplication, one first suggested nearly 50 years ago. That feat, described online March 18 at the document archive HAL, has not yet passed the gauntlet of peer review. But if the technique holds up to scrutiny, it could prove to be the fastest possible way of multiplying whole numbers, or integers.

If you ask an average person what mathematicians do, “they say, ‘Oh, they sit in their office multiplying big numbers together,’” jokes study coauthor David Harvey of the University of New South Wales in Sydney. “For me, it’s actually true.”
When making calculations with exorbitantly large numbers, the most important measure of speed is how quickly the number of operations needed — and hence the time required to do the calculation — grows as you multiply longer and longer strings of digits.
That growth is expressed in terms of n, defined as the number of digits in the numbers being multiplied. For the new technique, the number of operations required is proportional to n times the logarithm of n, expressed as O(n log n) in mathematical lingo. That means that, if you double the number of digits, the number of operations required will increase a bit faster, more than doubling the time the calculation takes. But, unlike simpler methods of multiplication, the time needed doesn’t quadruple, or otherwise rapidly blow up, as the number of digits creeps up, report Harvey and Joris van der Hoeven of the French national research agency CNRS and École Polytechnique in Palaiseau. That slower growth rate makes products of bigger numbers more manageable to calculate.
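To see concretely why the growth rate matters, here is a minimal Python sketch comparing idealized operation counts. These functions are illustrative only, not the actual algorithms: real implementations have large constant factors that determine where one method overtakes another.

```python
import math

def ops_schoolbook(n):
    # Grade-school multiplication pairs every digit of one number
    # with every digit of the other, so the cost grows as n squared.
    return n * n

def ops_nlogn(n):
    # The asymptotic cost of the new technique: n times log n
    # (constant factors ignored).
    return n * math.log2(n)

# Doubling the digit count quadruples the schoolbook cost
# but only slightly more than doubles the n log n cost.
for n in (1_000, 2_000):
    print(n, ops_schoolbook(n), round(ops_nlogn(n)))

print(ops_schoolbook(2_000) / ops_schoolbook(1_000))  # 4.0
print(ops_nlogn(2_000) / ops_nlogn(1_000))            # about 2.2
```

The gap between "quadruples" and "slightly more than doubles" is exactly what makes products of enormous numbers more manageable under the faster method.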
The previously predicted max speed for multiplication was O(n log n), meaning the new result meets that expected limit. Although it’s possible an even speedier technique might one day be found, most mathematicians think this is as fast as multiplication can get.
“I was very much astonished that it had been done,” says theoretical computer scientist Martin Fürer of Penn State. He discovered another multiplication speedup in 2007, but gave up on making further improvements. “It seemed quite hopeless to me.”
The new technique comes with a caveat: It won’t be faster than competing methods unless you’re multiplying outrageously huge numbers. But it’s unclear exactly how big those numbers have to be for the technique to win out — or if it’s even possible to multiply such big numbers in the real world.
In the new study, the researchers considered only numbers with more than roughly 10^214857091104455251940635045059417341952 digits when written in binary, in which numbers are encoded with a sequence of 0s and 1s. But the scientists didn’t actually perform any of these massive multiplications, because that’s vastly more digits than the number of atoms in the universe. That means there’s no way to do calculations like that on a computer, because there aren’t enough atoms to even represent such huge numbers, much less multiply them together. Instead, the mathematicians came up with a technique that they could prove theoretically would be speedier than other methods, at least for these large quantities.
There’s still a possibility that the method could be shown to work for smaller, but still large, numbers. That could possibly lead to practical uses, Fürer says. Multiplication of these colossal numbers is useful for certain detailed calculations, such as finding new prime numbers with millions of digits (SN Online: 1/5/18) or calculating pi to extreme precision (SN Online: 12/10/02).
Even if the method is not widely useful, making headway on a problem as fundamental as multiplication is still a mighty achievement. “Multiplying numbers is something people have been working on for a while,” says mathematical physicist John Baez of the University of California, Riverside. “It’s a big deal, just because of that.”
“Pikobodies,” bioengineered immune system proteins that are part plant and part animal, could help flora better fend off diseases, researchers report in the March 3 Science. The protein hybrids exploit animals’ uniquely flexible immune systems, loaning plants the ability to fight off emerging pathogens.
Flora typically rely on physical barriers to keep disease-causing microbes at bay. If something unusual makes it inside the plants, internal sensors sound the alarm and infected cells die. But as pathogens evolve ways to dodge these defenses, plants can’t adapt in real time. Animals’ adaptive immune systems can, making a wealth of antibodies in a matter of weeks when exposed to a pathogen.

In a proof-of-concept study, scientists genetically modified one plant’s internal sensor to sport animal antibodies. The approach harnesses the adaptive immune system’s power to make almost unlimited adjustments to target invaders and lends it to plants, says plant immunologist Xinnian Dong, a Howard Hughes Medical Institute investigator at Duke University who was not involved in the work.
Crops especially could benefit from having more adaptable immune systems, since many farms grow fields full of just one type of plant, says Dong. In nature, diversity can help protect vulnerable plants from disease-spreading pathogens and pests. A farm is more like a buffet.
Researchers have had success fine-tuning plant genes to be disease-resistant, but finding the right genes and editing them can take more than a decade, says plant pathologist Sophien Kamoun of the Sainsbury Laboratory in Norwich, England. He and colleagues wanted to know if plant protection could get an additional boost from animal-inspired solutions.
To create the pikobodies, the team fused small antibodies from llamas and alpacas with a protein called Pik-1 that’s found on the cells of Nicotiana benthamiana, a close relative of tobacco plants. Pik-1 typically detects a protein that helps a deadly blast fungus infect plants (SN: 7/10/17). For this test, the animal antibodies had been engineered to target fluorescent proteins.
Plants with the pikobodies killed cells exposed to the fluorescent proteins, resulting in dead patches on leaves, the team found. Of 11 tested versions, four were not toxic to the leaves and triggered cell death only when the pikobodies attached to the specific protein that they had been designed to bind.
What’s more, pikobodies can be combined to give plants more than one way to attack a foreign invader. That tactic could be useful for hitting pathogens from multiple angles, including those nimble enough to dodge some immune responses.
Theoretically, it’s possible to make pikobodies “against virtually any pathogen we study,” Kamoun says. But not all pikobody combos worked together in tests. “It’s a bit hit or miss,” he says. “We need some more basic knowledge to improve the bioengineering.”
Whales are known for belting out sounds in the deep. But they may also whisper.
Southern right whale moms steer their calves to shallow waters, where newborns are less likely to be picked off by an orca. There, crashing waves mask the occasional quiet calls that the pairs make. That may help the whales stick together without broadcasting their location to predators, researchers report July 11 in the Journal of Experimental Biology.
While most whale calls are meant to be long-range, “this shows us that whales have a sort of intimate communication as well,” says Mia Nielsen, a behavioral biologist at Aarhus University in Denmark. “It’s only meant for the whale right next to you.”
Nielsen and colleagues tagged nine momma whales with audio recorders and sensors to measure motion and water pressure, and also recorded ambient noise in the nearshore environment. When the whales were submerged, below the noisy waves, the scientists could pick up the hushed calls, soft enough to fade into the background noise roughly 200 meters away. An orca, or killer whale, “would have to get quite close in the big ocean to be able to detect them,” says biologist Peter Tyack at the University of St. Andrews in Scotland. Tyack was not involved with the study, but collaborates with one of the coauthors on other projects.
The whispers were associated with times when the whales were moving, rather than when mothers were stationary and possibly suckling their calves. Using hushed tones could make it harder for the pair to reunite if separated. But the observed whales tended to stay close to one another, about one body length apart, the team found.
Eavesdropping biologists have generally focused on the loud noises animals make, Tyack says. “There may be a repertoire among the calls of lots of animals that are specifically designed only to be audible to a partner who’s close by,” he says.
A week after two large earthquakes rattled southern California, scientists are scrambling to understand the sequence of events that led to the temblors and what it might tell us about future quakes.
A magnitude 6.4 quake struck July 4 near Ridgecrest — about 194 kilometers northeast of Los Angeles — followed by a magnitude 7.1 quake in the same region on July 5. Both quakes occurred not along the famous San Andreas Fault but in a region of crisscrossing faults in the state’s high desert area, known as the Eastern California Shear Zone.
The San Andreas Fault system, which stretches nearly 1,300 kilometers, generally takes center stage when it comes to California’s earthquake activity. That’s where, as the Pacific tectonic plate and the North American tectonic plate slowly grind past each other, sections of ground can lock together for a time, slowly building up strain until they suddenly release, producing powerful quakes.
For the last few tens of millions of years, the San Andreas has been the primary origin of massive earthquakes in the region. Based on historical precedent, the fault is now overdue for a massive earthquake, and many people fear it’s only a matter of time before the “Big One” strikes. But as the July 4 and July 5 quakes — and their many aftershocks — show, the San Andreas Fault system isn’t the only source of concern. The state is riddled with faults, says geophysicist Susan Hough of the U.S. Geological Survey in Pasadena, Calif. That’s because almost all of California is part of the general boundary between the plates. The Eastern California Shear Zone alone has been the source of several large quakes in the last few decades, including the magnitude 7.1 Hector Mine quake in 1999 and the magnitude 7.3 Landers quake in 1992 (SN Online: 8/29/18).
Here are three questions scientists are trying to answer in the wake of the most recent quakes.
Which faults ruptured, and how? The quakes appear to have occurred along previously unmapped faults within a part of the Eastern California Shear Zone known as the Little Lake Fault Zone, a broad zone of cracks that is difficult to map, Hough says. "It's not like the San Andreas, where you can go out and put your hand on a single fault," she says. And, she adds, the zone also lies within a U.S. Navy base that isn't generally accessible to geologists for mapping.
But preliminary data do offer some clues. The data suggest that the first rupture may actually have been a twofer: Instead of one fault rupturing, two connected faults, called conjugate faults, may have ruptured nearly simultaneously, producing the initial magnitude 6.4 quake.
It’s possible that the first quake didn’t fully release the strain on that fault, but the second, larger quake did. “My guess is that they will turn out to be complementary,” Hough says.
The jury is still out, though, says Wendy Bohon, a geologist at the Incorporated Research Institutions for Seismology in Washington, D.C. “What parts of the fault broke, and whether a part of the fault broke twice … I’m waiting to see what the scientific consensus is on that.” And whether a simultaneous rupture of a conjugate fault is surprising, or may actually be common, isn’t yet clear, she says. “In nature, we see a lot of conjugate fault pairs. I don’t think they normally rupture at the same time — or maybe they do, and we haven’t had enough data to see that.”
Is the center of tectonic action moving away from the San Andreas Fault? GPS data have revealed exactly how the ground is shifting in California as the giant tectonic plates slide past one another. The San Andreas Fault system bears the brunt of the strain, about 70 percent, those data show. But the Eastern California Shear Zone bears the other 30 percent. And the large quakes witnessed in that region over the last few decades raise a tantalizing possibility, Hough says: We may be witnessing the birth pangs of a new boundary.
“The plate boundary system has been evolving for a long time already,” Hough says. For the last 30 million years or so, the San Andreas Fault system has been the primary locus of action. But just north of Santa Barbara lies the “big bend,” a kink that separates the northern from the southern portion of the fault system. Where the fault bends, the Pacific and North American plates aren’t sliding sideways past one another but colliding.
“The plates are trying to move, but the San Andreas is actually not well aligned with that motion,” she says. But the Eastern California Shear Zone is. And, Hough says, there’s some speculation that it’s a new plate boundary in the making. “But it would happen over millions of years,” she adds. “It’s not going to be in anyone’s lifetime.”
Will these quakes trigger the Big One on the San Andreas? Such large quakes inevitably raise these fears. Historically, the San Andreas Fault system has produced a massive quake about every 150 years. But "for whatever reason, it has been pretty quiet in the San Andreas since 1906," when an estimated magnitude 7.9 quake along the northern portion of the fault devastated San Francisco, Hough says. And the southern portion of the San Andreas is even more overdue for a massive quake; its last major event was the estimated magnitude 7.9 Fort Tejon quake in 1857, she says.
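The "overdue" claim is simple arithmetic on the dates in this story. A back-of-the-envelope check, using the roughly 150-year average interval Hough cites (the quake dates and interval come from the story; nothing below is a seismic model):

```python
# Rough "overdue" arithmetic using the recurrence figure cited in the story.
AVG_INTERVAL = 150  # years, historical average between major San Andreas quakes

quake_year = 2019  # year of the Ridgecrest quakes

for segment, last_major in [("northern San Andreas", 1906),
                            ("southern San Andreas", 1857)]:
    elapsed = quake_year - last_major
    print(f"{segment}: {elapsed} years since its last major quake "
          f"({elapsed - AVG_INTERVAL:+d} years vs. the ~{AVG_INTERVAL}-year average)")
```

By this crude tally, the southern segment is running about 12 years past the average interval, while the northern segment still has decades of slack, though real recurrence estimates carry large uncertainties.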
The recent quakes aren’t likely to change that situation. Subsurface shifting from a large earthquake can affect strain on nearby faults. But it’s unlikely that the quakes either relieved stress or will ultimately trigger another earthquake along the San Andreas Fault system, essentially because they were too far away, Hough says. “The disruption [from one earthquake] of other faults decreases really quickly with distance,” she says (SN Online: 3/28/11).
Some preliminary data do suggest that the magnitude 7.1 earthquake produced some slippage, also known as creep, along at least one shallow fault in the southern part of the San Andreas system. But such slow, shallow slips don’t produce earthquakes, Hough says.
However, the quakes could have more significantly perturbed much closer faults, such as the Garlock Fault, which runs roughly west to east along the northern edge of the Mojave Desert. That’s not unprecedented: The 1992 Landers quake may have triggered a magnitude 5.7 quake two weeks later along the Garlock Fault.
“Generations of graduate students are going to be studying these events — the geometry of the faults, how the ground moved,” even how the visible evidence of the rupture, scarring the land surface, erodes over time and obscures its traces, Bohon says.
At the moment, scientists are eagerly trading ideas on social media sites. “It’s the equivalent of listening in on scientists shouting down the hallway: ‘Here’s my data — what do you have?’ ” she says. Those preliminary ideas and explanations will almost certainly evolve as more information comes in, she adds. “It’s early days yet.”
No one has ever probed a particle more stringently than this.
In a new experiment, scientists measured a magnetic property of the electron more carefully than ever before, making the most precise measurement yet of any property of an elementary particle. Known as the electron magnetic moment, it's a measure of the strength of the magnetic field carried by the particle.
That property is predicted by the standard model of particle physics, the theory that describes particles and forces on a subatomic level. In fact, it’s the most precise prediction made by that theory. By comparing the new ultraprecise measurement and the prediction, scientists gave the theory one of its strictest tests yet. The new measurement agrees with the standard model’s prediction to about 1 part in a trillion, or 0.1 billionths of a percent, physicists report in the February 17 Physical Review Letters.
When a theory makes a prediction at high precision, it’s like a physicist’s Bat Signal, calling out for researchers to test it. “It’s irresistible to some of us,” says physicist Gerald Gabrielse of Northwestern University in Evanston, Ill.
To measure the magnetic moment, Gabrielse and colleagues studied a single electron for months on end, trapping it in a magnetic field and observing how it responded when tweaked with microwaves. The team determined the electron magnetic moment to 0.13 parts per trillion, or 0.000000000013 percent.
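The unit conversions behind these precision figures are easy to verify: a value in parts per trillion is the number times 10⁻¹², and multiplying a fraction by 100 gives a percentage. A quick check of the two numbers quoted above:

```python
# Convert quoted fractional precisions (parts per trillion) to percentages.
def ppt_to_percent(parts_per_trillion):
    """Parts per trillion -> percent: fraction (ppt * 1e-12) times 100."""
    return parts_per_trillion * 1e-12 * 100

# ~1 part in a trillion agreement between theory and experiment:
print(ppt_to_percent(1))     # 1e-10 percent, i.e. 0.1 billionths of a percent
# 0.13 parts-per-trillion measurement uncertainty:
print(ppt_to_percent(0.13))  # 1.3e-11 percent, i.e. 0.000000000013 percent
```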
A measurement that exacting is a complicated task. “It’s so challenging that nobody except the Gabrielse team dares to do it,” says physicist Holger Müller of the University of California, Berkeley. The new result is more than twice as precise as the previous measurement, which stood for over 14 years, and which was also made by Gabrielse’s team. Now the researchers have finally outdone themselves. “When I saw the [paper] I said, ‘Wow, they did it,’” says Stefano Laporta, a theoretical physicist affiliated with University of Padua in Italy, who works on calculating the electron magnetic moment according to the standard model.
The new test of the standard model would be even more impressive if it weren't for a conundrum in another painstaking measurement. Two recent experiments, one led by physicist Saïda Guellati-Khélifa of Kastler Brossel Laboratory in Paris and the other by Müller, disagree on the value of a number called the fine-structure constant, which characterizes the strength of electromagnetic interactions (SN: 4/12/18). That number is an input to the standard model's prediction of the electron magnetic moment. So the disagreement limits the new test's precision. If that discrepancy were sorted out, the test would become 10 times as precise as it is now.

The stalwart standard model has stood up to a barrage of experimental tests for decades. But scientists don't think it's the be-all and end-all. That's in part because it doesn't explain observations such as the existence of dark matter, an invisible substance that exerts gravitational influence on the cosmos. And it doesn't say why the universe contains more matter than antimatter (SN: 9/22/22). So physicists keep looking for cases where the standard model breaks down.
One of the most tantalizing hints of a failure of the standard model is the magnetic moment not of the electron, but of the muon, a heavy relative of the electron. In 2021, a measurement of this property hinted at a possible mismatch with standard model predictions (SN: 4/7/21).
“Some people believe that this discrepancy could be the signature of new physics beyond the standard model,” says Guellati-Khélifa, who wrote a commentary on the new electron magnetic moment paper in Physics magazine. If so, any new physics affecting the muon could also affect the electron. So future measurements of the electron magnetic moment might also deviate from the prediction, finally revealing the standard model’s flaws.
The James Webb Space Telescope’s first peek at the distant universe unveiled galaxies that appear too big to exist.
Six galaxies that formed in the universe’s first 700 million years seem to be up to 100 times more massive than standard cosmological theories predict, astronomer Ivo Labbé and colleagues report February 22 in Nature. “Adding up the stars in those galaxies, it would exceed the total amount of mass available in the universe at that time,” says Labbé, of the Swinburne University of Technology in Melbourne, Australia. “So you know that something is afoot.” The telescope, also called JWST, released its first view of the early cosmos in July 2022 (SN: 7/11/22). Within days, Labbé and his colleagues had spotted about a dozen objects that looked particularly bright and red, a sign that they could be massive and far away.
“They stand out immediately, you see them as soon as you look at these images,” says astrophysicist Erica Nelson of the University of Colorado Boulder.
Measuring the amount of light each object emits in various wavelengths can give astronomers an idea of how far away each galaxy is, and how many stars it must have to emit all that light. Six of the objects that Nelson, Labbé and colleagues identified look like their light comes from no later than about 700 million years after the Big Bang. Those galaxies appear to hold up to 10 billion times the mass of our sun in stars. One of them might contain the mass of 100 billion suns.
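One handle astronomers use for such distance estimates is redshift: a spectral feature with a known rest-frame wavelength, such as the 121.6-nanometer Lyman-alpha line, arrives stretched to longer wavelengths by cosmic expansion, and the stretch factor gives the redshift. A toy illustration of that core relation (the observed wavelength below is invented for the example; real photometric-redshift estimates fit brightness across many wavelength bands at once):

```python
# Toy redshift estimate from a single stretched spectral feature.
# Real analyses fit many wavelength bands; this shows only the core idea.
LYMAN_ALPHA_REST_NM = 121.6  # rest-frame wavelength of the Lyman-alpha line

def redshift(observed_nm, rest_nm=LYMAN_ALPHA_REST_NM):
    """Redshift z from how far a feature has stretched: 1 + z = observed / rest."""
    return observed_nm / rest_nm - 1

# Hypothetical feature observed at ~1,030 nm (near-infrared):
z = redshift(1030.0)
print(f"z = {z:.1f}")  # a redshift around 7.5, roughly the universe's first 700 million years
```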
“You shouldn’t have had time to make things that have as many stars as the Milky Way that fast,” Nelson says. Our galaxy contains about 60 billion suns’ worth of stars — and it’s had more than 13 billion years to grow them. “It’s just crazy that these things seem to exist.”
In the standard theories of cosmology, matter in the universe clumped together slowly, with small structures gradually merging to form larger ones. “If there are all these massive galaxies at early times, that’s just not happening,” Nelson says.
One possible explanation is that there’s another, unknown way to form galaxies, Labbé says. “It seems like there’s a channel that’s a fast track, and the fast track creates monsters.”
But it could also be that some of these galaxies host supermassive black holes in their cores, says astronomer Emma Curtis-Lake of the University of Hertfordshire in England, who was not part of the new study. What looks like starlight could instead be light from the gas and dust those black holes are devouring. JWST has already seen a candidate for an active supermassive black hole even earlier in the universe’s history than these galaxies are, she says, so it’s not impossible. Finding a lot of supermassive black holes at such an early era would also be challenging to explain (SN: 3/16/18). But it wouldn’t require rewriting the standard model of cosmology the way extra-massive galaxies would.
“The formation and growth of black holes at these early times is really not well understood,” she says. “There’s not a tension with cosmology there, just new physics to be understood of how they can form and grow, and we just never had the data before.”
To know for sure what these distant objects are, Curtis-Lake says, astronomers need to confirm the galaxies’ distances and masses using spectra, more precise measurements of the galaxies’ light across many wavelengths (SN: 12/16/22).
JWST has taken spectra for a few of these galaxies already, and more should be coming, Labbé says. “With luck, a year from now, we’ll know a lot more.”