Financial health takes a hit among people who smoke a lot of marijuana from adolescence into young adulthood, even if they don’t get hooked on the drug, researchers say.
The more years that individuals smoke pot four or more days a week, the more likely they are to experience serious money problems, say social epidemiologist Magdalena Cerdá of the University of California, Davis, and her colleagues. Cash woes include defaulting on credit card payments, struggling to pay for food and rent, and going on welfare. In a representative sample of 947 New Zealanders studied from birth to age 38, adult economic and social problems — which also include a fall from middle-class status, stealing money at work and domestic violence — occurred about equally among regular marijuana and alcohol users, the scientists report March 22 in Clinical Psychological Science. Of 29 persistent pot smokers who grew up in middle-class families, 15 experienced downward social mobility, versus only 23 of 160 middle-class peers who never used marijuana.
Participants who consistently qualified as dependent on marijuana after age 18 encountered the worst money troubles over time, even exceeding those of alcoholic peers.
These findings don’t prove that regular pot smoking caused Kiwis’ financial difficulties, the investigators caution. But the association between marijuana and money troubles remained after accounting for childhood poverty, IQ, teenage delinquency and depression, impulsiveness, self-reported motivation to succeed in life, pot-related criminal convictions and abuse of alcohol and other drugs on top of frequent marijuana use.
Thousands of national parks have been established around the world to preserve wildlife. But towns, farms, ranches and roads have grown up around many of these parks, creating islands of wilderness in a sea of humanity. If the creatures inside are to thrive, they need ways to travel between the islands, contends “Wild Ways,” a new documentary from the TV series NOVA.
Isolation can be especially troublesome for large predators, such as lions, that live alone or in small groups. In some areas of Africa, lions can move between populations to avoid inbreeding. But some lions, such as the few in Tanzania’s Ngorongoro Crater, are cut off from other groups. In such populations, cubs are born smaller, die younger and are more susceptible to disease. And drought or overhunting could easily wipe them out, the show notes. To connect these smaller populations, conservationists are now building wildlife corridors between parks. One of the most ambitious projects is the Yellowstone to Yukon Conservation Initiative, which aims to connect grizzly bear populations from the Canadian Arctic to those in the western United States. Other large wildlife corridors are being planned in Central America, eastern Australia and the Himalayas. But there are often roadblocks. It can be difficult to persuade people to spend money on wildlife, and it can be even harder when those animals kill livestock or humans.
“It is important that we provide incentives for local communities, in particular, who should now look at wildlife as some form of economic asset to themselves,” says Simon Munthali of the Kavango-Zambezi Transfrontier Conservation Area, which is attempting to connect parks in five countries across southern Africa. With the right incentives, people will be more accepting of wildlife moving across land and may even benefit from it, he says in the documentary. Botswana, for instance, has developed a large ecotourism industry that provides jobs and money for local people, motivating animal protection.
The documentary is a bit too optimistic about the removal of hurdles that stand in the path of wildlife corridors, especially in the American West, where there is ongoing debate about how to manage public lands. And then there is the question of whether these corridors can be created fast enough to save the world’s dwindling animal populations. But, as Michael Soulé, one of the founders of the field of conservation biology, says: “It’s our last chance to protect the diversity of life on Earth.” “Wild Ways” makes a convincing case that we should be willing to try.
Parasitic worms may hold the secret to soothing inflamed bowels.
In studies of mice and people, parasitic worms shifted the balance of bacteria in the intestines and calmed inflammation, researchers report online April 14 in Science. Learning how worms manipulate microbes and the immune system may help scientists devise ways to do the same without infecting people with parasites.
Previous research has indicated that worm infections can influence people’s fertility (SN Online: 11/19/15), as well as their susceptibility to other parasite infections (SN: 10/5/13, p. 17) and to allergies (SN: 1/29/11, p. 26). Inflammatory bowel diseases also are less common in parts of the world where many people are infected with parasitic worms. P’ng Loke, a parasite immunologist at New York University School of Medicine, and colleagues explored how worms might protect against Crohn’s disease. The team studied mice with mutations in the Nod2 gene. Mutations in the human version of the gene are associated with Crohn’s in some people.
The mutant mice develop damage in their small intestines similar to that seen in some Crohn’s patients. Cells in the mice’s intestines don’t make much mucus, and more Bacteroides vulgatus bacteria grow in their intestines than in the guts of normal mice. Loke and colleagues previously discovered that having too much of that type of bacteria leads to inflammation that can damage the intestines. In the new study, the researchers infected the mice with either a whipworm (Trichuris muris) or a corkscrew-shaped worm (Heligmosomoides polygyrus). Worm-infected mice made more mucus than uninfected mutant mice did. The parasitized mice also had less B. vulgatus and more bacteria from the Clostridiales family. Clostridiales bacteria may help protect against inflammation. “Although we already knew that worms could alter the intestinal flora, they show that these types of changes can be very beneficial,” says Joel Weinstock, an immune parasitologist at Tufts University Medical Center in Boston.
Both the increased mucus and the shift in bacteria populations are due to what’s called the type 2 immune response, the researchers found. Worm infections trigger immune cells called T helper cells to release chemicals called interleukin-4 and interleukin-13. Those chemicals stimulate mucus production. The mucus then feeds the Clostridiales bacteria, allowing them to outcompete the Bacteroidales bacteria. It’s still unclear how the mucus encourages growth of one type of bacteria over another, Loke says.
Blocking interleukin-13 prevented the mucus production boost and the shift in bacteria mix, indicating that the worms work through the immune system. But giving interleukin-4 and interleukin-13 to uninfected mice could alter the mucus and bacterial balance without worms’ help, the researchers discovered.
Loke and colleagues also wanted to know if worms affect people’s gut microbes. So the researchers took fecal samples from people in Malaysia who were infected with parasitic worms.
After taking a deworming drug, the people had less Clostridiales and more Bacteroidales bacteria than before. That shift in bacteria was associated with a drop in the number of Trichuris trichiura whipworm eggs in the people’s feces, indicating that getting rid of worms may have negative consequences for some people.
Having data from humans is important because sometimes results in mice don’t hold up in people, says Aaron Blackwell, a human biologist at the University of California, Santa Barbara. “It’s nice to show that it’s consistent in humans.”
Worms probably do other things to limit inflammation as well, Weinstock says. If scientists can figure out what those things are, “studying these worms and how they do it may very well lead to the development of new drugs.”
Before anybody even had a computer, Claude Shannon figured out how to make computers worth having.
As an electrical engineering graduate student at MIT, Shannon played around with a “differential analyzer,” a crude forerunner to computers. But for his master’s thesis, he was more concerned with relays and switches in electrical circuits, the sorts of things found in telephone exchange networks. In 1937 he produced, in the words of mathematician Solomon Golomb, “one of the greatest master’s theses ever,” establishing the connection between symbolic logic and the math for describing such circuitry. Shannon’s math worked not just for telephone exchanges or other electrical devices, but for any circuits, including the electronic circuitry that in subsequent decades would make digital computers so powerful.
It’s a fitting time to celebrate Shannon’s achievements, on the occasion of the centennial of his birth (April 30) in Petoskey, Michigan, in 1916. Given the pervasive importance of computing in society today, it wouldn’t be crazy to call the time since then “Shannon’s Century.”
“It is no exaggeration,” wrote Golomb, “to refer to Claude Shannon as the ‘father of the information age,’ and his intellectual achievement as one of the greatest of the twentieth century.”
Shannon is best known for creating an entirely new scientific field — information theory — in a pair of papers published in 1948. His foundation for that work, though, was built a decade earlier, in his thesis. There he devised equations that represented the behavior of electrical circuitry. How a circuit behaves depends on the interactions of relays and switches that can connect (or not) one terminal to another. Shannon sought a “calculus” for mathematically representing a circuit’s connections, allowing scientists to design circuits effectively for various tasks. (He provided examples of the circuit math for an electronic combination lock and some other devices.)
“Any circuit is represented by a set of equations, the terms of the equations corresponding to the various relays and switches in the circuit,” Shannon wrote. His calculus for manipulating those equations, he showed, “is exactly analogous to the calculus of propositions used in the symbolic study of logic.”
As an undergraduate math (and electrical engineering) major at the University of Michigan, Shannon had learned of 19th century mathematician George Boole’s work on representing logical statements by algebraic symbols. Boole devised a way to calculate logical conclusions about propositions using binary numbers; 1 represented a true proposition and 0 a false proposition. Shannon perceived an analogy between Boole’s logical propositions and the flow of current in electrical circuits. If the circuit plays the role of the proposition, then a false proposition (0) corresponds to a closed circuit; a true proposition (1) corresponds to an open circuit. More elaborate math showed how different circuit designs would correspond to addition or multiplication and other features, the basis of the “logic gates” designed into modern computer chips.
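That correspondence is easy to play with in code. Below is a minimal sketch in Python (my own illustration, not Shannon’s notation), using the convention described above: 0 stands for a closed, conducting circuit and 1 for an open one. In this view, switches wired in series behave like addition and switches wired in parallel behave like multiplication.

```python
# Toy version of the switch calculus: 0 = closed (conducts), 1 = open (blocks).

def series(x, y):
    """Two switches in series block the circuit if either one is open."""
    return 1 if (x or y) else 0   # acts like x + y, with 1 + 1 = 1

def parallel(x, y):
    """Two switches in parallel block the circuit only if both are open."""
    return x * y                  # acts like ordinary multiplication

# Example circuit: switch a in series with a parallel pair b, c.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, "->", series(a, parallel(b, c)))
```

The printout is a small truth table: current flows only when a is closed and at least one of b or c is closed, which is exactly the kind of statement Shannon’s equations let a designer read off before building anything.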
For his Ph.D. dissertation, Shannon analyzed the mathematics of genetics in populations, but that work wasn’t published. In 1941 he began working at Bell Labs; during World War II, he wrote an important (at the time secret) paper on cryptography, which required deeper consideration of how to quantify information. After the war he developed those ideas more fully, focusing on using his 1s and 0s, or bits, to show how much information could be sent through a communications channel and how to communicate it most efficiently and accurately.
In 1948, his two papers on those issues appeared in the Bell System Technical Journal. They soon were published, with an introductory chapter by Warren Weaver, in a book titled The Mathematical Theory of Communication. Today that book is regarded as the founding document of information theory.
For Shannon, communication was not about the message, or its meaning, but about how much information could be communicated in a message (through a given channel). At its most basic, communication is simply the reproduction of a message at some point remote from its point of origin. Such a message might have a “meaning,” but such meaning “is irrelevant to the engineering problem” of transferring the message from one point to another, Shannon asserted. “The significant aspect is that the actual message is one selected from a set of possible messages.” Information, Shannon decided, is a measure of how much a communication reduces the ignorance about which of those possible messages has been transmitted.
In a very simple communication system, if the only possible messages are “yes” and “no,” then each message (1 for yes, 0 for no) reduces your ignorance by half. By Shannon’s math, that corresponds to one bit of information. (He didn’t coin the term “bit” — short for binary digit — but his work established its meaning.) Now consider a more complicated situation — an unabridged English dictionary, which contains roughly half a million words. One bit would correspond to a yes-or-no answer about whether the word is in the first half of the dictionary. That reduces ignorance, but not very much. Each additional bit would reduce the number of possible words by half. Specifying a single word from the dictionary (eliminating all the ignorance) would take about 19 bits. (This fact is useful for playing the game of 20 Questions — just keep asking about the secret word’s location in the dictionary.)
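The dictionary arithmetic is easy to verify. Here is a short Python check (the half-million-word figure comes from the passage; the helper function is just for illustration): the number of yes-or-no questions needed to single out one item among N equally likely possibilities is the base-2 logarithm of N, rounded up.

```python
import math

def bits_needed(n_possibilities):
    """Yes/no questions needed to pin down one of n equally likely options."""
    return math.ceil(math.log2(n_possibilities))

print(bits_needed(2))        # a plain yes-or-no message: 1 bit
print(bits_needed(500_000))  # one word in an unabridged dictionary: 19 bits
```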
Shannon investigated much more complicated situations and devised theorems for calculating information quantity and how to communicate it efficiently in the presence of noise. His math remains central to almost all of modern digital technology. As electrical engineer Andrew Viterbi wrote in a Shannon eulogy, Shannon’s 1948 papers “established all the key parameters and limits for optimal compression and transmission of digital information.”
Beyond its practical uses, Shannon’s work later proved to have profound scientific significance. His math quantifying information in bits borrowed the equations expressing the second law of thermodynamics, in which the concept of entropy describes the probability of a system’s state. Probability applied to the ways in which a system’s parts could be arranged, it seemed, mirrored the probabilities involved in reducing ignorance about a possible message. Shannon, well aware of this connection, called his measure entropy as well. Eventually questions arose about whether Shannon’s entropy and thermodynamic entropy shared more than a name.
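The measure itself has a compact form: for a source whose possible messages occur with probabilities p1, p2, and so on, the entropy is the sum of -p log2 p over those probabilities, measured in bits. A short Python illustration (the function is mine, but the formula is the standard one):

```python
import math

def shannon_entropy(probabilities):
    """H = -sum(p * log2(p)) over the possible messages, in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy([0.5, 0.5]))   # a fair yes/no source: 1.0 bit per message
print(shannon_entropy([0.9, 0.1]))   # a lopsided source: about 0.47 bits
```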
Shannon apparently wasn’t sure. He told one writer in 1979 that he thought the connection between his entropy and thermodynamics would “hold up in the long run” but hadn’t been sufficiently explored. But nowadays a deep conceptual link shows up not only between Shannon’s information theory and thermodynamics, but in fields as diverse as quantum mechanics, molecular biology and the physics of black holes. Shannon’s understanding of information plays a central role, for instance, in explaining how the notorious Maxwell’s demon can’t violate thermodynamics’ second law. Much of that work is based on Landauer’s principle, the requirement that energy is expended when information is erased. In developing that principle, Rolf Landauer (an IBM physicist) was himself influenced both by Shannon’s work and the work of Sadi Carnot in discerning the second law in the early 19th century.
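Landauer’s principle also comes with a number attached: erasing one bit must dissipate at least kT ln 2 of energy, where k is Boltzmann’s constant and T is the absolute temperature. A back-of-the-envelope check in Python, assuming room temperature for the sake of illustration:

```python
import math

k = 1.380649e-23            # Boltzmann constant, in joules per kelvin
T = 300                     # roughly room temperature, in kelvin
print(k * T * math.log(2))  # about 2.9e-21 joules per erased bit
```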
Something Shannon and Carnot had in common, Landauer once emphasized to me, was that both discovered mathematical restrictions on physical systems that were independent of the details of the system. In other words, Carnot’s limit on the efficiency of steam engines applied to any sort of engine, no matter what it was made of or how it was designed. Shannon’s principles specifying the limits on information compression and transmission apply no matter what technology is used to do the compressing or sending. (Although in Shannon’s case, Landauer added, certain conditions must be met.)
“They both find limits for what you can do which are independent of future inventions,” Landauer told me. That is, they have grasped something profound about reality that is not limited to a specific place or time or thing.
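Carnot’s bound is the textbook example of such a technology-independent limit: an engine running between a hot reservoir at temperature T_hot and a cold one at T_cold can convert at most a fraction 1 - T_cold/T_hot of the heat it takes in into work, no matter how it is built. A one-line check (the temperatures are arbitrary examples):

```python
def carnot_efficiency(t_hot_kelvin, t_cold_kelvin):
    """Maximum fraction of input heat any engine can turn into work."""
    return 1 - t_cold_kelvin / t_hot_kelvin

print(carnot_efficiency(500, 300))  # 0.4, i.e. at most 40 percent efficiency
```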
So it seems that Shannon saw deeply not only into the mathematics of circuits, but also into the workings of nature. Information theorist Thomas Cover once wrote that Shannon belongs “in the top handful of creative minds of the century.” Some of Shannon’s original theorems, Cover noted, were not actually proved rigorously. But over time, details in the sketchy proofs have been filled in and Shannon’s intuitive insights stand confirmed. “Shannon’s intuition,” Cover concluded, “must have been anchored in a deep and natural theoretical understanding.” And it seems likely that Shannon’s intuition will provide even more insights into nature in the century ahead.
A decent office scanner has beaten X-ray blasts from multimillion-dollar synchrotron setups in revealing how air bubbles kill plant leaves during drought.
Intricate fans and meshes of plant veins carrying water are “among the most important networks in biology,” says Timothy Brodribb of the University of Tasmania in Hobart, Australia. When drought weakens the water tension in veins, air from plant tissues bubbles in, killing leaves much as bubble embolisms and clots in blood vessels can kill human tissue. As climate change and population growth increase risks of water shortage, Brodribb and other researchers are delving into the details of what makes some plants more resistant than others to drought. The high energy of X-rays destroys delicate leaf tissue. So, based on a chat with microfluidics specialist Philippe Marmottant of the French National Center for Scientific Research, Brodribb tried repeatedly scanning a leaf with a light source below it to reveal darkening lines as air bubbles shot through the veins. A microscope or scanner proved perfect. Tracked this way, an invasion of killer bubbles “looks like a lightning storm,” he says.
He was surprised to see that bigger veins, despite their robust looks, fail before tiny ones (blue indicates earliest failures; red, the latest), as seen in an oak leaf (lower right) and Pteris fern (top). And networks in ferns with simpler branching patterns, as in the Adiantum ferns at bottom left, crash quickly.
This system of visualizing plant plumbing gave better resolution than expensive and elaborate X-ray techniques had, Brodribb, Marmottant and Diane Bienaimé report online April 11 in the Proceedings of the National Academy of Sciences.
Something catapulted a pair of stars from the outer rim of our galaxy, but astronomers aren’t sure what. A binary star known as PB 3877 is rocketing away at about 2 million kilometers per hour — possibly fast enough to escape the galaxy’s gravitational pull — and all the usual explanations for such speedy stars fall short. Astrophysicist Péter Németh of the University of Erlangen-Nuremberg in Germany and colleagues report the discovery in the April 10 Astrophysical Journal Letters.
Many galactic escapees get kicked out after a close brush with the supermassive black hole in the Milky Way’s center. But PB 3877, first noticed in 2011 and currently about 18,000 light-years away in the constellation Coma Berenices, has been nowhere near that behemoth. A supernova could be responsible; it has happened before (SN Online: 3/5/15). But PB 3877 is two stars traveling together. A supernova would have torn the two apart. Németh and colleagues propose that the duo may be left over from a smashup between the Milky Way and a smaller galaxy. If that’s the case, then there might be others like PB 3877 lurking in the galactic outskirts.
On April 19, 1966, Roberta Gibb became the first woman to (unofficially) finish the Boston Marathon. Women were officially allowed to enter the race in 1971, and Boston crowned its first official female winner in 1972, the year that also saw the passage of Title IX, the amendment that prohibits discrimination based on sex in education programs and activities receiving federal funding. This year, 13,751 women crossed the Boston Marathon finish line, making the finisher list 45 percent female. In the last 50 years, other sports have also welcomed women, from weightlifting to rugby to wrestling. And of course, women exercise noncompetitively, lifting weights, holding yoga poses and putting in hours on the track and in the gym.
Women are making up for a historical bias against them in sports. Not surprisingly, there’s also historically been a bias in sports science. “If you went all the way back to the 1950s, a lot of exercise physiology studies about metabolism talk about the 150-pound man,” says Bruce Gladden, an exercise physiologist at Auburn University in Alabama and the editor in chief of the journal Medicine and Science in Sports and Exercise. “That was the average medical student.” It was a matter of convenience, studying the people nearest at hand, he explains.
Over time, athletes (and convenient student populations) have become more diverse, but diversity in studies of those athletes has continued to lag behind. When Joe Costello, an exercise physiologist at the University of Portsmouth in England, began studying the effects of extreme cold exposure on training recovery in athletes, he found that women were under-represented in the field compared to men. He wondered, he says, “is that the case across the board in sports science?”
Digging through three influential journals in the field — Medicine and Science in Sports and Exercise, the British Journal of Sports Medicine and the American Journal of Sports Medicine — Costello and his colleagues analyzed 1,382 articles published from 2011 to 2013, which added up to more than six million participants. The percentage of female participants per article was around 36 percent, and women represented 39 percent of the total participants, the scientists reported in April 2014 in the European Journal of Sport Science.
“In my opinion, it’s not enough,” he says. The numbers are relatively close to the gender breakdowns in competitive sport, he notes, but participation in noncompetitive exercise and casual running is a lot closer to a 50:50 split, and the studies don’t reflect that. Despite the gap, Costello’s study did show that women are represented in exercise science studies in general. But I wondered if the trend was improving — and if the type of study mattered. Are scientists studying women in, say, studies of metabolism, but neglecting them in studies of injury? I looked at published studies in two top exercise physiology journals and found that women remain under-studied, especially when it comes to studies of performance. Reasons for this under-representation abound, from menstrual cycles to funding to simple logistics. But with recent requirements for gender parity from funding agencies, reasons are no longer excuses. When it comes to the race to fitness, women are well out of the starting blocks, but the science still has some catching up to do.

Let’s look at the data

I followed Costello’s lead and looked at studies published in Medicine and Science in Sports and Exercise and the American Journal of Sports Medicine, this time covering the first five months of 2015. (The former journal had articles available for free through May 2015; the latter granted me access. The third journal in the previous study, the British Journal of Sports Medicine, would only grant me access on a case-by-case basis.) I excluded single case studies, animal studies, cell studies, studies involving cadavers and studies that dealt with coaches’ or doctors’ evaluations. I also excluded studies where the gender breakdown of participants wasn’t given (11 studies that included people didn’t mention the gender of the participants), and studies where there would be no reason to include women (such as those involving prostate cancer recovery).
That left me with 188 studies that included 254,813 participants. Of the 188 studies, 138, or 73 percent, involved at least some women. But overall, women made up only 42 percent of participants. And while 27 percent of the studies included men alone, only 4 percent were studies of women alone.
These results were similar to those Costello and his group showed in 2014. But I also wondered what, exactly, those women were being studied for. I took the 188 studies and divided them into six categories:
studies of metabolism, obesity, sedentary behavior, weight loss and diabetes; studies of nonmetabolic diseases; basic physiology studies; social studies, including uses of pedometers and group exercise; sports injury studies; and performance studies.

In studies of metabolism, obesity, weight loss and diabetes (23 total studies), women were included in 87 percent of studies and represented 45 percent of participants, getting relatively close to gender parity. For nonmetabolic diseases (18 studies), 85 percent of studies included women, and they represented 44 percent of participants. Across the 188 studies, the percentage of studies involving women ranged from 36 percent in performance research to 100 percent in social studies.
In basic physiology studies (11 total studies), including studies of knee and muscle function and studies of people in microgravity, women were included in 45 percent of studies, and represented 42 percent of all participants.
Women were represented in 100 percent of social studies (seven papers) and made up 60 percent of the participants. These included studies of topics such as self-cognition, how well people adhere to wearing activity trackers, and the influence of meet-up groups on exercise. “Women are more likely to take part in [or] be recruited to group training programs than men,” notes Charlotte Jelleyman, an exercise physiologist at the University of Leicester in England.
The most striking differences came when studying performance and sports injury. There were 102 studies of sports injury and recovery, from concussions and elbow and shoulder repair in baseball players to studies of injury in surfers. Women were present in 80 percent of these studies, but made up 40 percent of participants.
I was especially interested in the large number of studies (38 total) on knee and ACL repair. Women were present in 94 percent of these studies but made up only about 42 percent of participants. “That’s a case where you would think there would be more emphasis,” Gladden notes. “ACL injuries are much more prevalent in female athletes.” Out of more than 250,000 participants in the 188 studies analyzed, the majority were men, particularly in analyses of sports performance and injury.
But the biggest difference came in sports performance — training to get better, recover faster and perform stronger. Of 30 studies, 39 percent involved women, and women made up almost 40 percent of participants. But this result was heavily skewed by a single study of more than 90,000 participants, which examined sex differences in pacing during marathons. When this study was removed, the total number of participants in all performance studies dropped to 4,001. And the percentage of female participants dropped with it — to 3 percent. Scientists may be trying to get at the secrets of the best athletes, but to do so, they are mostly looking in men.
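The skew is just weighted-average arithmetic, but a short sketch makes it concrete. The totals below follow the passage (one marathon-pacing study of roughly 90,000 runners, and 4,001 participants in the remaining performance studies, about 3 percent of them women); the number of women in the big study is my assumption, chosen only so the pooled share lands near the reported 40 percent.

```python
# Illustrative numbers only; see the caveats in the paragraph above.
big_study_total = 90_000                 # the single marathon pacing study (approximate)
big_study_women = 36_000                 # assumed, to give a pooled share near 40 percent
other_total = 4_001                      # the other performance studies combined
other_women = round(0.03 * other_total)  # about 3 percent women, per the text

pooled = (big_study_women + other_women) / (big_study_total + other_total)
without_big = other_women / other_total
print(f"pooled share of women:    {pooled:.0%}")       # roughly 38%
print(f"excluding the big study:  {without_big:.0%}")  # 3%
```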
Time, money and menstrual cycles

There are many reasons why women might be under-represented in exercise science. One is the same reason that haunts many sex disparities in biological research — the menstrual cycle.
With monthly hormone cycles, “[we] have to test [women] at certain phases,” even if you’re studying something seemingly unrelated, such as knee pain, explained Mark Tarnopolsky, a neurometabolic specialist at McMaster University, who has extensively studied sex differences in exercise. “One has to choose which phase — follicular or luteal phase — so I think when physiologists are limited in their funds, it’s easier to get guys to come in at any time.”
For some types of studies, scientists note that no previous studies have found sex differences. So scientists just study men — with no menstrual cycle to worry about — and apply the results to women. But “it’s not good enough,” says Jelleyman. “Just to say that because it works in men and previous studies have found no sex differences we assume it will work on women too – you have to show it.” Many scientists worry that cycling hormones means variable data points, so it’s easier to study men and state that the results probably apply to women, too. But that’s a cop out, says Marie Murphy, an exercise scientist at Ulster University in Northern Ireland. “If you revisit [women] in the same phase, they should be no more variable than a man,” she notes. “You return to them 28 days later and that’s easy enough. It’s not a difficult thing to do. But I think if you’re looking for an excuse you’ll find one.”
Using that excuse can mean missing important differences. Before Gibb’s Boston run in 1966, many people — including scientists — viewed distance running and extreme exercise as somehow unhealthy for women, Tarnopolsky explains. After his lab studied differences in metabolism in men and women during endurance exercise, his group found, “Women were at least as good, if not better able to withstand the rigors of the exercise.”
But menstrual cycles aside, studies are expensive, particularly studies involving people. In many cases, simplifying the study population is the only way to complete the work on time and within budget. As a member of the coaching teams associated with elite athletes, Louise Burke, a sports nutritionist at the Australian Institute of Sport, says she takes her research chances where she can find them. For a recent study of male race walkers, “when we decided to do the study I did think we’d have female race walkers,” she says. But she found that the pool of potential female participants was small. “We didn’t have a lot in Canberra,” she recalls. “Of the ones that were of the right caliber, we had people being injured, a couple who were doing a race that wouldn’t make them available.”
And when logistics shoot down one sex in a study, it will be the women who lose out. “Conference organizers are careful and include symposia on sex differences,” says John Hawley, an exercise physiologist at Australian Catholic University. But when it comes to actually doing studies, there can be challenges. Many of Hawley’s studies are invasive, involving biopsies that leave scars. And many women aren’t willing to get scarred for science. “If I go out to a triathlon and say to the females, ‘we’d like to do invasive work,’ they’re like ‘ooh, no biopsies,’” Hawley says. “It’s a legitimate practical issue.”
Finally, there are also cultural reasons that women end up underrepresented. Female athletes don’t get the same TV time as male athletes, and the players don’t get paid as much, even though, as in soccer, the women’s national team is more highly ranked than the men’s. This disparity might also result in [gender] disparity in performance studies, Gladden suggests. “Science unfortunately isn’t immune to those same problems.”
Leveling the playing field

Calls for equality in exercise research continue. In a recent article in The Sport and Exercise Scientist, Murphy looked at the March issue of the Journal of Sports Sciences, and found that the 13 papers in the issue included 852 participants, but only 103 women, a dismal participation rate of only 12 percent.
While Murphy notes that other fields of study may have similar findings, exercise science needs to do better. “It’s quite simple,” she says. “If we want to apply the findings to men and women, we need to test our hypotheses and do our measures in research involving men and women.”
The lack of parity for female research participants “should be alarming,” Hawley says. He notes that while scientists bear some responsibility, “the funding bodies and editors of journals should be asking more serious questions.” Scientists who peer-review each other’s work should also ask hard questions, he says. “Peer review is failing as well…. The typical responses [are] ‘unfortunately the budget does not permit females’ (a complete white lie of course), and time and practicalities. It’s not an excuse.”
As is true in many areas of science, as more women join the ranks of scientists studying exercise, they are more likely to include women in their studies. But Murphy notes that it won’t solve the problem. “I don’t think scientists think of it unless they have a particular interest in the area,” she says. “There are really good women researchers [in exercise science], but they study men, and the men study men! We’re not doing ourselves any favors.”
The broader impact of this gender imbalance is that training, fitness and diet recommendations for performance and recovery are based on science that may have only been done in men, and then downsized to fit women. Sometimes it may make no difference. But what if it could? In the end, the road to stronger, better, faster and healthier is one with studies that include everyone. “It is important to show that the general principles of exercise effectiveness are applicable to all populations whether it be males or females, older or younger, ethnically different or diseased populations,” says Jelleyman. “Sometimes it emerges that there are differences, other times less so. But it is still important to know this so that recommendations can be based on relevant evidence.”
Dogs were domesticated at least twice, a new study suggests.
Genetic analyses of a 4,800-year-old Irish dog and 59 other ancient dogs suggest that canines and humans became pals in both Europe and East Asia long before the advent of farming, researchers report June 3 in Science. Later, dogs from East Asia accompanied their human companions to Europe, where their genetic legacy trumped that of dogs already living there, the team also concludes.
That muddled genetic legacy may help explain why previous studies have indicated that dogs were domesticated from wolves only once, although evidence hasn’t been clear about whether this took place in East Asia, Central Asia or Europe. The idea that dogs came from East Asia or Central Asia is mostly based on analysis of DNA from modern dogs, while claims for European origins have been staked on studies of prehistoric pups’ genetics. “This paper combines both types of data” to give a more complete picture of canine evolution, says Mietje Germonpré, a paleontologist at the Royal Belgian Institute of Natural Sciences in Brussels, who was not part of the study.
Understanding this domestication process may illuminate humans’ distant past — dogs were probably the first domesticated animal and may have paved the way for taming other animals and plants.
In the study, evolutionary geneticist Laurent Frantz of the University of Oxford and colleagues compiled the complete set of genes, or genome, of an ancient dog found in a tomb near Newgrange, Ireland. Researchers drilled into the hard-as-stone petrous portion of the dog’s temporal bone, which contains the inner ear, to get well-protected DNA, Frantz says. The researchers don’t know much about what the midsize dog looked like; it doesn’t bear any genetic markers of particular modern dog breeds, Frantz says. “He wasn’t black. He wasn’t spotted. He wasn’t white.” Instead, the Newgrange dog was probably a mongrel with fur similar to a wolf’s.
But the ancient mutt has something special in his genes — a stretch of enigmatic DNA, says Germonpré. “This Irish dog has a component that can’t be found in recent dogs or recent wolves.” That distinct DNA could represent the genetic ancestry of indigenous European prehistoric dogs, she says. Or it could be a trace of an extinct ancient wolf that may have given rise to dogs (SN: 7/13/13, p. 14). Unraveling the prehistoric mutt’s DNA may help researchers understand dogs’ history. Already, comparisons of the ancient Irish dog’s DNA with that of modern dogs reveal that East Asian dogs are genetically different from European and Middle Eastern dogs, the researchers have found. Other researchers may have missed the distinction between the two groups because they were working with subsets of the data that Frantz and colleagues amassed. Frantz’s team generated DNA data from the Newgrange dog and other ancient dogs, but also used data from previous studies of modern dogs, including the complete genomes of 80 dogs and less-complete sampling of DNA from 605 dogs, a collection of 48 breeds and village dogs of no particular breed.
The distinct genetic profiles of today’s Eastern and Western dogs suggest that two separate branches of the canine family tree once existed. The Newgrange dog’s DNA is more like that of the Western dogs. Since the Irish dog is 4,800 years old, the Eastern and Western dogs must have formed distinct groups before then, probably between about 6,400 and 14,000 years ago. The finding suggests that dogs may have been domesticated from local wolves in two separate locations during the Stone Age.
The ancient dog’s DNA may also help pinpoint when domestication happened. Using the Newgrange dog as a calibrator and the modern dogs to determine how much dogs have changed genetically in the past 4,800 years, Frantz and colleagues determined that dogs’ mutation rate is slower than researchers have previously calculated. Then, using the slower mutation rate to calculate when dogs became distinct from wolves, the researchers found that separate branches of the canine family tree formed between 20,000 and 60,000 years ago. Many previous calculations put the split between about 13,000 and about 30,000 years ago, but the new dates are consistent with figures from a study of an ancient wolf’s DNA (SN: 6/13/15, p. 10).

Frantz and colleagues emphasize that their estimate doesn’t necessarily pinpoint the time of domestication. It could indicate that different populations of wolves were evolving into new species at that time. One of those could later have evolved into the ancestor of dogs.

Although the new study indicates there were two origin points for dogs, humans’ canine companions have since mixed and mingled. By comparing mitochondrial DNA, the genetic material inside energy-generating organelles, from 59 ancient European dogs and 167 modern dogs, the researchers determined that East Asian dogs at least partially genetically replaced European dogs in the distant past. Mitochondria are inherited from the mother. Ancient European dogs’ mitochondrial DNA varieties, or haplogroups, differed from those of modern dogs, the researchers found. Of the ancient dogs, 63 percent carried haplogroup C and 20 percent carried haplogroup D. But in present-day dogs, 64 percent carry haplogroup A and 22 percent carry haplogroup B. That shift and other evidence indicate that dogs from the East moved west with humans, and Eastern dogs passed more of their genetic heritage to descendants than Western dogs did.
Archaeological evidence backs up the dual origin story. Dogs as old as 12,500 years have been found in East Asia. In Europe, dogs date back to 15,000 years ago. But there is a dearth of dog remains older than 8,000 years in Central Eurasia. That lack possibly rules out this in-between region as a domestication site, despite some genetic evidence from village dogs that says otherwise (SN: 11/28/15, p. 8). “The argument in this paper, pointing out a pattern in the archaeological data of an absence of early dog remains in the period [before] 10,000 years ago, should be taken very seriously,” says Pontus Skoglund, an evolutionary geneticist at Harvard University.
He’s not yet won over by the double-domestication hypothesis, though. The researchers admit they can’t yet rule out that dogs were domesticated once, then transported to different places where isolation, random chance and other factors caused them to drift apart genetically.
More ancient DNA may help clarify the still-hazy picture of dog domestication. Says Skoglund: “It’s going to be an exciting time going forward.”
Fire was one of our ancient ancestors’ first forays into technology. Controlled burns enabled early hominids to ward off cold, cook and better preserve game. New evidence places fire-making in Europe as early as 800,000 years ago, much earlier than previously thought and closer to scientists’ best estimate for hominids’ first use of fire, about 1 million years ago in Africa.
It’s unclear how early Homo species came to master fire, but it was perhaps an attempt at problem solving — capturing a natural phenomenon and harnessing it for use. That tradition has persisted in human cultures. It thrives today among scientists, especially those engaged in problem solving related to society’s most pressing issues. Take drug addiction, a vexing problem that has grown in urgency in the last decade as more and more people have become dependent on opioids — not only street drugs like heroin but also prescription pain meds like OxyContin and fentanyl. Opioids can be extremely difficult to give up because of their strong addictive pull. So scientists are trying to develop vaccines that would block the effects of heroin and other drugs of abuse, as Susan Gaidos reports. Eliciting a strong immune response, researchers theorize, could stop the drug from reaching the brain, preventing the high that fuels addiction. Success with such biotechnology, now being tested only in lab animals, would offer hope to many battling to stay off drugs.
Another modern scourge is terrorism, and anthropologists like Scott Atran have been exploring the psychological and cultural factors that drive some individuals to extreme acts of violence. There is no technology to prevent people from committing such acts — at least not yet. Basic explorations must always precede any practical use of new knowledge: Hominids could not use fire until they understood its nature and limits — which things burn, which do not; water and sand douse flame, oil and fat fuel it. Mapping terrorism’s contours is just a beginning on a long journey toward developing tactics for undercutting its power.
So it is with many other reports in this issue about basic explorations that may well precede the birth of new technologies. A few favorites:
A report on insights into how the microbial denizens of the gut influence weight gain and obesity. Scientists have now revealed a molecule made by microbes that sends a signal to the brain, influencing fat storage and appetite.
An intriguing study of mice with genetic mutations similar to those found in some people with autism. The findings suggest a role in the disorder for nerve cells involved with touch, as well as a new way to think about autism that may one day identify a target for novel therapies and interventions.
News of a second detection of gravitational waves from LIGO. It’s less dramatic and showy than the first black hole merger detection, announced in February. But it is nonetheless a further sign that a new era, one in which astronomers probe the heavens by watching for violent if subtle wakes in the fabric of spacetime, is upon us.
Feeling good may help the body fight germs, experiments on mice suggest. When activated, nerve cells that help signal reward also boost the mice’s immune systems, scientists report July 4 in Nature Medicine. The study links positive feelings to a supercharged immune system, results that may partially explain the placebo effect.
Scientists artificially dialed up the activity of nerve cells in the ventral tegmental area — a part of the brain thought to help dole out rewarding feelings. This activation had a big effect on the mice’s immune systems, Tamar Ben-Shaanan of Technion-Israel Institute of Technology in Haifa and colleagues found.
A day after the nerve cells in the ventral tegmental area were activated, mice were infected with E. coli bacteria. Later tests revealed that mice with artificially activated nerve cells had less E. coli in their bodies than mice without the nerve cell activation. Certain immune cells seemed to be ramped up, too. Monocytes and macrophages were more powerful E. coli killers after the nerve cell activation.
If a similar effect is found in people, the results may offer a biological explanation for how positive thinking can influence health.