This past year it was easy for any story about science or nature to get, quite literally, eclipsed. When the world wasn’t fixated on the total solar eclipse that traversed the continental U.S., scientists were making headlines by marching through major U.S. cities.
But there were many other important, if less publicized, events, ideas, trends, problems and discoveries in 2017. Take, for one, an announcement in July that strongly suggested ravens were capable of cause-and-effect reasoning, planning ahead and skilled bartering.
Psychologists commenting on the paper noted a profound implication — that evolution produced intelligence independently at least twice. Intelligent behavior in apes may have stemmed from the same root as our own, but birds are perched on a different branch of the evolutionary tree, separated from ours by 300 million years.
The experiments were small — featuring just five birds — but the findings were striking. Ravens consistently turned down a small piece of food in order to get a tool that would allow them to pry a bigger piece of food from a box. They also turned down a small food treat in favor of a bottle cap they’d been trained to redeem for a bigger treat. They made those choices even when they had to wait more than 15 minutes to cash in — an act of patience that eludes most 4-year-old humans and a few of us over 4 as well.
This all follows a trend in bird research showing that ravens and their relatives can outperform apes in a number of puzzles that seem to require the kind of cause-and-effect reasoning once thought unique to humanity.
The Washington Post called the findings an “indignity,” perhaps on the assumption that finding intelligence in other creatures diminishes humanity. But given that we’re still waiting for a visit from intelligent space aliens, people should be delighted to recognize that other intelligent life forms have lived among us all along.
Another demonstration of human-like behavior comes courtesy of a more distant relative. The humble jellyfish apparently sleeps at night. It’s not clear which is the bigger surprise: that an animal without a brain sleeps, or that in the daytime, it’s capable of being awake.
These are the first brainless animals known to show sleep-wake cycles, though not the first invertebrates known to do so. Three biologists won a Nobel Prize this year for exploring fruit-fly sleep and showing that humans and flies share many of the same genes that control cycles of sleep and wakefulness.
To probe the sleep of jellyfish, a team of graduate students at Caltech used a tank of creatures of the genus Cassiopea. The main daytime activity of these animals is lying near the bottom of the tank and undulating their bell-shaped bodies to waft in nutrients and waft away waste.
The Caltech team used motion sensors to show that at night their jellyfish undulated at a more languid pace. When the researchers roused the resting creatures with food, or by moving them, they were much slower to respond during the night than during the day. To top it off, the team sleep-deprived the animals by squirting them periodically with jets of water, and found that this made them sluggish the next day. But following a good night’s sleep the next night, the jellyfish were back to normal.
Some news from the human realm was equally promising. While scientists and interested citizens marched for science last spring, statisticians were more quietly laboring for the cause of science by helping scientists produce less bunk.
Not all scientists are equally prone to producing dubious results. The main culprits are in social science and medical research. Both fields came under scrutiny when reviews of published studies showed that fewer than half were readily reproducible. The source of the problem doesn’t seem to be that scientists are making up data but that too many are making big mistakes in the way they use statistical calculations to draw conclusions from their data.
The American Statistical Association has been on a mission to help scientists in these fields find a better way. In 2016, the ASA issued a set of guidelines for scientists on how to avoid the most common abuses. Then, statisticians and interested scientists held a meeting in October in Bethesda, Maryland, to start hashing out new systems for doing things right.
The week after the meeting, the depth of the problem came through in a New York Times Magazine story headlined “When the Revolution Came for Amy Cuddy.” The protagonist, a young Harvard professor, was portrayed as a victim. Her claim — which led to a bestselling book and one of the most popular TED talks in history — was that the less powerful people of the world could get a leg up through a sort of body language she’d dubbed “power posing.”
Several independent researchers tried and failed to replicate her alleged scientific proof of the power of posing. Then, a group of statistics-savvy psychologists found flaws in her math and reasoning. As the subtitle of the Times story proclaimed, “She played by the rules and won big . . . then, suddenly, the rules changed.”
And yet, the old “rules” were never rules at all, but common statistical errors, or cheats, that had become accepted in problematic fields, much as drivers sometimes know they can get away with driving 40 mph in certain 25-mph zones. When physicist Richard Feynman started nosing around in psychology labs back in the 1960s, he found some researchers were making errors (or cheating) in a way that inflated a quantity known as statistical significance, which is poorly understood and yet a primary criterion for publication in psychology journals. Psychologist Gerd Gigerenzer identified statistical trouble in a 2004 paper, “Mindless Statistics.” In 2011, University of Pennsylvania psychologist Uri Simonsohn and colleagues published a paper demonstrating that widely used statistical cheats made standards so loose that they could find statistical support for just about any absurd claim.
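The kind of cheat at issue can be demonstrated with a short simulation. The sketch below (in Python; the sample sizes and significance cutoff are illustrative choices, not taken from any of the papers mentioned) compares an honest experiment with a fixed sample size against one that keeps adding subjects and re-testing until the result looks significant. Both groups are drawn from the same distribution, so any “significant” result is a false positive.

```python
import random
import statistics

random.seed(42)

def t_stat(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = (pooled * (1 / na + 1 / nb)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

CRIT = 2.0  # rough |t| cutoff for "p < .05" with ~40 subjects total

def honest_test(n=20):
    """Fixed sample size, one look at the data."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return abs(t_stat(a, b)) > CRIT

def peeking_test(start=5, max_n=20):
    """The cheat: re-test after each added subject, stop when 'significant'."""
    a = [random.gauss(0, 1) for _ in range(start)]
    b = [random.gauss(0, 1) for _ in range(start)]
    while True:
        if abs(t_stat(a, b)) > CRIT:
            return True          # declare a (false) discovery and stop
        if len(a) >= max_n:
            return False
        a.append(random.gauss(0, 1))
        b.append(random.gauss(0, 1))

trials = 2000
honest = sum(honest_test() for _ in range(trials)) / trials
peek = sum(peeking_test() for _ in range(trials)) / trials
print(f"false-positive rate, honest design:  {honest:.1%}")
print(f"false-positive rate, with peeking:   {peek:.1%}")
```

With no real effect present, the honest design flags roughly 5 percent of experiments by construction, while the peeking version flags noticeably more — which is how loose standards can make almost any claim look publishable.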
The Times story makes a good case that Amy Cuddy was a victim of inexcusable bullying, mostly in anonymous comments following allegedly respectable academic blog posts. But abuse of statistical methods has its own victims — patients who may not be getting the best treatments possible and taxpayers whose money can end up funding flashy but flawed science while honest, quality science gets starved.
These are challenging times for scientists. The March for Science happened because, after the 2016 election, scientists got worried they were not valued. Trump failed to choose a science adviser and ignored the opinions of climate scientists when he exited the Paris climate treaty. He’s cut areas of science funding and disbanded important advisory groups, including one panel aimed at improving the poor-quality forensic science often used in the criminal justice system. If scientists are going to fight back, they need to shore up their weakest areas. Getting scientists to improve their use of statistics won’t be easy. It will take some serious planning and patience. But if ravens can do it, surely researchers can as well.