U.P. 1: In Defense of Bad Health Journalism
Reporters are supposed to be skeptical of their sources. So why do mainstream medicine and health reporters have such unconditional faith in the authorities?
“Trying to determine what’s going on in the world by reading the newspapers is like trying to tell the time by watching the second hand of a clock.” Ben Hecht, A Child of the Century, 1954
Let’s take stock of the general metabolic state of the nation these days, based on what we learned in the past month.
Three in every four Americans are now considered overweight or obese (77 percent, to be precise), according to a study published in The Lancet and as reported in The New York Times.
One in five children in America is obese, and health authorities are now more or less universally acknowledging that there is a child health crisis in this country.
Donald Trump has been elected president and nominated Robert F. Kennedy Jr. to be head of the Department of Health and Human Services -- “the country’s top health official,” as the Wall Street Journal writes.
And Kennedy has vowed to Make America Healthy Again, which would be a wonderful accomplishment. He has accepted the challenge of “combating chronic disease in the U.S.” and he’s going to do it, so the WSJ says, “by ridding the country’s food supply of harmful chemicals and ingredients.”
Hence, the central topic of discussion, as I write this, is… Froot Loops.
And not just Froot Loops in general, but the artificial dyes that give them their fruity colors. It’s these food dyes that have landed the cereal “at the Center of U.S. Food Politics.” The WSJ again:
“They get brighter colors in Froot Loops, but it’s literally poisoning our kids,” Kennedy said in a Fox News interview in September. Kellogg said the colors it uses in its cereals have been deemed safe by scientific bodies around the world.
Froot Loops may be “literally poisoning our kids,” and that’s a hell of a story. But are the food dyes really worse than the sugar in Froot Loops, constituting, as it does, a hair under one third of all the calories? That seems an obvious question, and maybe the first, in this context, that I would want our nation’s top health official to address.1
But the food dyes are news, and so that’s what we’re reading about. The sugar content is not. (The food dyes also represent a winnable challenge. We can imagine RFK Jr. making progress here. The sugar content may not be, as I’ll also discuss parenthetically.)
This brings us back to the Ben Hecht2 epigraph above, the second-hand-of-a-clock metaphor for the educational value of the daily or even weekly news. It’s a clever line, which is why I used it as an epigraph, but it’s clever because it’s a self-evident truth.
What we read, watch or listen to that constitutes “the news” is, by definition, what just happened, all too often absent any context. We read about what’s new, literally, not what it means or the historical perspective. The shorter the reporter’s deadline, the newer the news, the more the resulting article will resemble the single tick of that second hand.3
This is why, for instance, the New York Times can run the article I linked to above about that newly published “sweeping” report of overweight and obesity in America–the 77 percent prevalence number–and then quote a co-author of the report saying “I would consider it an epidemic,” when the precise term “obesity epidemic” itself has already been used over 550 times in the paper.4
This is why we’re reading about Froot Loops and toxic food dyes, rather than Froot Loops and sugar. If you care about the larger picture—that obesity epidemic and the ever-rising rates of obesity and diabetes in kids, which in turn could also be driving up cancer rates and causing neurological problems—you’re not getting it in these stories. Sugar is an old story. (Froot Loops, too, may be an old story by the time I post this. If not yesterday’s news, then last week’s.)
It’s also why we’re reading about ultraprocessed foods (UPFs) and the damage they purportedly do to our health, rather than food-like substances, the catchy term that Michael Pollan memorably used in his 2008 best-seller In Defense of Food, or junk food, which, of course, includes fast food, and is how earlier generations described these foods, apparently not feeling the need to give these concepts the veneer of science by getting multisyllabic. The media can discuss the UPF problem as though it’s news, because the term itself is new(ish), even as the idea itself is not.
I’m guessing this is also a reason why RFK Jr. is talking about food dyes. He’s certainly identified sugar in other contexts as a likely cause of the obesity and diabetes epidemics. But sugar is not just old news; it’s an old and unappealing story. Go after food dyes (and ultraprocessed foods), and you come across as protecting Americans from the evils of the food industry. You’re a courageous reformer.
Target sugar itself in the crap (another single-syllable UPF synonym) Americans are letting their kids eat for breakfast, or perhaps trying to convince Americans that they shouldn’t let their kids eat that ultraprocessed crap (a compromise) at all, and you’ve joined up with a long history of nags and scolds, members of the dreaded nanny state or food police, and waded into a swamp of public relations issues that’s likely to kill your credibility (or bore your constituency) before you even get started.
(Then there’s that winnability issue: Kellogg is likely to accede to political pressure to remove the particular food dyes that are purportedly toxic, because it has alternatives, and it can still sell its quota of Froot Loops, with or without those particular dyes. Take the sugar out, or cut the sugar content significantly, and you’re selling a different cereal, one far less likely to appeal to the tastes of children. Use artificial sweeteners instead of the sugar and someone will assuredly say the sweeteners are toxic.)
The SM&H beat: science, medicine and health reporting
The second-hand-of-a-clock issue is a problem with the reporting of all current events, but the reporting of science, medicine, and health (let’s call that the SM&H beat) has unique issues that exacerbate the situation considerably. The SM&H beat is fundamentally unlike any other journalistic discipline in three ways, none of which offer reason for optimism:
Unlike reporters on other beats, SM&H reporters do not make their living covering events -- a baseball game, a concert, a crime, a battle, an election, a plane crash, a new film, the rise (or fall) of the stock market, etc. They’re typically reporting on the interpretation of an event, either observed or created in a clinical trial or meta-analyzed from a host of observations and/or trials, and they’re merely passing along or translating for the lay public what the authors of these papers are saying they’ve done, not what the reporters themselves have actually witnessed. Hence, per the New York Times: “Three-Quarters of U.S. Adults are Now Overweight or Obese; A sweeping new paper reveals… [my italics]” If they’re not reporting on the latest articles, they’re reporting on what’s in the news for other reasons, but once again, unwitnessed.
SM&H reporters rely on authority figures and experts—if nothing else, the authors of the latest paper—to tell them what was done and why it was important, because they have no choice. That’s their defense. The reporters don’t believe that they’re capable of understanding the nuances of what they’re reporting, and they’re almost assuredly right. If nothing else, SM&H reporters tend to report each week on a very different aspect of their beats: vaccines this week, for instance, obesity drugs next week, antibiotic-resistant bacteria the week after that.
While reporters on other beats are expected to know or learn their subjects well enough to trust their own judgement, SM&H reporters are not. These subjects are complicated. There’s a reason why work in these fields requires an advanced post-graduate degree. Hence, the best that SM&H reporters believe they can do is to translate for the lay public what the experts or authorities tell them, not to question it. The reporters would like to think that peer reviewers and the journal editors have already done the necessary questioning. (This is what the British medical statistician Stuart Pocock called “a naive faith in the clinical gospels” in his 1983 textbook Clinical Trials: A Practical Approach.) The reporters themselves? No.
Unlike reporters on other beats, those on the SM&H beat are reporting on news that is all too likely to be misinterpreted or simply wrong.
Reporters covering the latest baseball game can be confident they got the score correct, who homered and who didn’t, and even whose error or ill-conceived curve ball gave away the game in the 9th inning. They don’t have to ask anyone, because they can see it for themselves. A reporter covering a political rally, a riot, a concert, can describe what happened without asking authorities for their interpretation. Yes, the further they get from being an eyewitness to the event, the more they have to rely on others for their information. Crime reporters, for instance, need to rely on police sources and available eyewitnesses to get the facts, but not to interpret the significance of the crime to our general state of knowledge.
SM&H beat reporters (per point #1), though, are not reporting on events, but about the interpretation of events in that latest paper or report. Technically, they’re not reporting what was actually done, because they never witnessed that. They’re reporting what the authors of the paper think or say they’ve done, and what their chosen authorities or expert sources in turn say about its significance.
But those are interpretations of what was done, and those interpretations, and so the sources and chosen experts, are very often wrong.
That’s the implication of the concept of the reproducibility or replication crisis in science. And that implication should not be open to debate.
The problem is nothing new, only the use of the word crisis to describe it (which made it seem like news a few years back). That some large proportion of all newly published results are either wrong or meaningless–that they fail to replicate or offer nothing at all new or meaningful to the relevant field of research—is an unavoidable aspect of the job of doing science.
Sociologists of science have long assumed that the latest scientific publications are either mostly wrong or meaningless—chaff, in effect, not wheat. From their informed perspective, the process of science is figuring out which is which, separating out the wheat and disposing of the chaff.
This is what is meant by the idea that science progresses: it might start with a radical claim—i.e., one that is newsworthy—and then more and more researchers get involved, better, more meticulous work is done and, (probably far) more often than not, the original radical claim turns out to have been overinterpreted or simply wrong.
The better scientists accept this as a reality of their world. Consider Richard Feynman’s famous Cargo Cult Science commencement address at Caltech, which dates to 1974. (This is the one in which he famously defined the first principle of science as “you must not fool yourself--and you are the easiest person to fool.”)
Feynman implied back then, half a century ago, that the entire field of psychology was pathological; that the psychologists could not be trusted to do reliable research and so learn anything meaningful about their subject of study. If their work was reproducible, he implied, it was only because the psychologists trying to do the replications were as clueless about what it took to do good science as the psychologists whose work they were trying to replicate in the first place.
Despite the harshness of the critique, Feynman’s talk is justly considered a classic in the field.
Think about it, and the problem it presents to the media reporting of science, health and medicine again becomes manifestly obvious. Researchers in any scientific discipline who are doing meaningful work—trying to learn something both new and nontrivial—are working by definition at the edge of the known universe. They are trying to do something, to observe or create something, that no one has ever seen or created before.
Almost by definition, they will be working with technologies, techniques and methodologies that may or may not be capable of doing the jobs for which they’re intended. The only way to know if the researchers really get it right is if their work can be independently reproduced by others (at which point it is no longer news) and if this independent reproduction itself is reproducible (definitely not news).
Trusting the science requires deferring to the textbooks, and even then…
Back in 1991, in his book Reliable Knowledge, the physicist-turned-philosopher-of-science John Ziman aptly captured this problem by describing the front lines of science as “the place where controversy, conjecture, contradiction, and confusion are rife.” And it’s the front lines of science, of course, where the SM&H beat reporters do their jobs.
To find science that can be trusted to be most likely right, Ziman said, we have to defer to the textbooks. This is the stuff that is no longer on the front lines. The front, in effect, has moved on. Ziman quantified this thinking with an estimate that may or may not be hyperbolic: The physics in undergraduate textbooks, he said, is 90 percent true; the physics-related claims in the research journals are 90 percent false. The chaff-to-wheat-sifting job of the scientific process is establishing which 10 percent in the journals (the news) is both reproducible and important—i.e., right, and meaningfully so—and then making sure that that becomes the content of the next generation of textbooks (not news).
While we can hope that Ziman’s estimates were exaggerations for rhetorical effect, it’s also likely that the batting average in physics is higher than it is in medicine or health, public or personal. Physics is a subject in which experimental claims can be checked by experiment. If the latest claims are interesting enough to be newsworthy, they will be checked. Physicists who publish irreproducible results of any interest to their peers are in imminent danger of being embarrassed, and so (we can hope) will do everything in their power to avoid that.
In medicine and health-related subjects, replicating an experiment or a clinical trial can all too often be impossible or simply impractical. The experiments themselves are messier, dependent on a host of variables that are difficult to interpret and may differ from lab to lab. Even the statistical analyses are open to very legitimate debate. Under those circumstances, the researchers who publish an irreproducible result can always blame the folks who tried and failed to replicate it for not doing it right. Often nobody cares enough to do the work of replication, even when the media considers it newsworthy.
One of my favorite comments to this effect came from a highly respected New England Journal of Medicine editor who said to me, back in the oughts, at a symposium on health journalism, “we have trouble getting good papers to publish in the New England Journal once a week. Imagine the shit the other journals are publishing.”
The front lines in medicine and health cover an enormous number of subjects; they’re expansive. There’s always something new to do that might generate funding and even a career. But no one gets funding or a promotion by establishing that someone else got the right (or wrong) answer, vitally important to the process of science as that endeavor happens to be. This is one likely reason why medical school instructors have been known famously to tell their students that half of what they’re being taught is wrong, but nobody knows, of course, which half.
Is the solution to trust your sources?
Getting around these SM&H beat problems requires that the reporters rely on their sources—the people who did the work—to reliably communicate what was done and why it is significant. A necessary job of all reporters is to cultivate sources who can be trusted, but those choices will depend as much on the sensibilities of the reporter as on those of the source.
I assume that SM&H reporters care whether what they’re reporting is fundamentally correct or meaningful, but it’s not their job.
Their job is to report what’s new, and to do so without having the expertise themselves to make the necessary judgements about the likelihood that what they’re reporting on is actually correct. So they trust the people who did the work to have gotten it right and to communicate its significance correctly, and they trust the folks whom they think of as the experts to further provide credibility. They have few to no other options. (This trusting-the-experts concept then got translated during the Covid years into trusting the science, something we’re all told to do these days, even though that’s a fundamental misunderstanding of the nature of science itself. If I get into that problem now, however, this post will run to book length. It’s already too long.)
This conflict between what’s news and what’s right is the reality of science reporting in the best of all worlds, which are those scientific disciplines that have little to no influence on our immediate lives: physics, for instance, cosmology, anthropology, or the latest research on mitochondrial DNA.5
Reporting on medicine and personal or public health, though, is very much not the best of all worlds. Covid, of course, exposed this reality in dramatic fashion. In these fields, lives are on the line. Real harm will be done by misinterpreting the evidence, and very obvious controversies exist. Indeed, if “controversy, conjecture, contradiction, and confusion” were not manifesting themselves in these situations, we’d have to wonder why not. Mistakes are always being made; it’s the nature of science. Without the kind of definitive experiments that can be done in the harder sciences–physics, chemistry, biology, etc.—it’s always possible that the majority opinion, and so the authorities, are wrong.
In such a situation, with lives on the line and harm to be avoided, the reporters have to trust someone, not just to adjudicate the evidence in these controversies, but to establish whether a controversy is even legitimate—i.e., based on meaningful evidence—and not the stuff of quackery.
The safest bet in these situations, the choice least likely to do harm, is to trust the majority opinion, the conventional wisdom, and the authorities and institutions that promote it. They are more likely to be right, perhaps far more likely, than anyone else. Trusting anyone else to have done this job of assessing the evidence correctly would be to increase the risk of harming their readers. Again, this was clearly a force in the Covid reporting. In the midst of an ongoing crisis, the reporters accepted that their job was to communicate widely what the authorities thought and said. And the reporters chose to defend the authorities against criticism, even if it meant belittling the critics, because to do this job, they had to convince themselves that the authorities were right.
While independent researchers—bloggers and social media influencers, renegade MDs, and courageous academics—could interpret the evidence themselves and conclude publicly that the authorities got it wrong, the reporters for the mainstream media could not. Not only were lives on the line, but so was the credibility of the established institutions for which they worked. A reporter for The New York Times or the Wall Street Journal represents that institution. Do they want to bet its credibility on their contrarian judgement? What if they are the ones who are fooling themselves? Trusting the authorities is always the safest bet.
In this, SM&H reporters are, again, very different from reporters on any other beat. Political reporters, crime reporters, even sports reporters would never imagine that they should report what their sources tell them as though it’s true, let alone unassailable. In science, medicine and health, reporters can’t imagine doing anything but that. If a controversy exists in a subject of any importance, the majority opinion rules; and if the majority opinion (i.e., what most scientists believe, as SM&H reporters like to say in their articles) is that there is no controversy, that’s what the reporters will faithfully assume and communicate to their readers. The smart money is that they’re right to do so.
In an ideal world, they would examine the evidence themselves, but, again… not their job. Even if they had the necessary training to do that job, it’s not only an unaffordable luxury for a reporter on a deadline, it can also be a ticket out of the mainstream media, that august institution that the reporter represents.
Think of it from the reporters’ perspective. After convincing the editors that they should take whatever time is necessary to investigate the evidence-base, two scenarios are likely to play out. In the first, they side with the authorities. Their editors might consider that wasting valuable time because they’ve now come to a conclusion that could have been accepted weeks or months earlier.
But what happens when they conclude the authorities got it wrong?
Whether you consider this the best- or worst-case scenario depends on whether you value job security over the reporter’s obligation to get the story right. Now the reporters assess the evidence-base themselves and conclude that the conventional thinking, the majority position, is incorrect, and that’s what they want to report. Now they’re concluding that the authorities themselves cannot be trusted, and they become, in effect, the journalistic equivalent of whistleblowers.
Before such an article can run, the reporters have to defend that position to their editors, who have to defend it to their bosses. Both reporters and editors have to continue to defend that unconventional position each time a new article is published supporting what most scientists believe. (And that will always happen because the majority opinion is what most scientists believe, and so how most scientists will continue to interpret—rightly or wrongly—their research.)
Now the reporters can no longer trust their authoritative sources, because they’ve already concluded that they’re unworthy of trust. They may become skeptical of all authoritative sources for the same reason, an occupational hazard of the investigation business. Business as usual for reporters on the SM&H beat is to support the significance of the latest news by quoting a source at Harvard, say, or Stanford, because these institutional names themselves bestow credibility (more or less). But this is no longer an option once the reporters realize that Harvard or Stanford professors also engage in bad science. The reporters have seen the man behind the curtain or taken the red pill, depending on your choice of cinematic metaphors. There’s no unseeing.6
Meanwhile, the usual sources of authority may no longer trust the reporters, in any case. Who do the reporters quote in their next stories, on this subject or any other? Who do they turn to for reliable judgments in the fields they’re covering, sources who satisfy the two necessary criteria: 1) the reporters themselves consider them credible, and 2) their editors and their readers would consider them credible?
From this perspective, examining the evidence-base and siding with the minority on a controversial issue (with the fringe epidemiologists, as they were disparagingly called in Covid, or the fad diet doctors, as they’re disparaged in our world) can be a no-win proposition for the journalist’s career. Now, like any whistleblower, the reporter has to fight daily to maintain credibility while all the institutional forces work to undermine it. I don’t recommend it as a career move, even as it benefits the readers and society at large by revealing, ideally, the truth, which should, of course, be the ultimate obligation of all journalism.
Ultraprocessed foods and seed oils, controversies of a different color
For those who like case studies, let’s see how these issues played out last month in two recent articles published in The New York Times just three days apart. Both were by Alice Callahan, a reporter who does have a Ph.D. in nutritional biology, the subject she’s tasked to cover. The Times calls her “a reporter with expertise in the uncertainty of nutrition,” which should give Callahan the confidence to critically assess the evidence herself. But she passes on the opportunity in her work. Rather, she writes from the perspective of what the authorities believe (perhaps because she has the doctorate, rather than despite it).
Both articles are about controversial subjects—ultraprocessed foods and seed oils. In both she defends the majority opinion, even as she has to waffle with the rules of evidence to do so. (My biases: I am skeptical that characterizing foods as ultraprocessed or not tells us anything meaningful about whether they cause common chronic diseases and, if so, why.7 I find the evidence that seed oils cause common chronic diseases to be uncompelling. I’m more confident that I’m right about the former than about the latter.)
The first article was published on November 6. Headline: “Why the Next Dietary Guidelines Might Not Tackle Ultraprocessed Foods.” And the subhead, which captured the gist of the article: “there’s not enough evidence to recommend avoiding them, a scientific advisory committee says. Some experts disagree.”
In this article, Callahan accepts the legitimacy of a controversy because the authorities themselves do. Here’s the lead:
Hardly a day passes without a new study, and an ensuing round of headlines, sounding the alarm on ultraprocessed foods.
This wide-ranging category — including sodas, processed meats and many breakfast cereals, snack foods, frozen meals and flavored yogurts — has been linked to a range of health issues such as obesity, Type 2 diabetes, heart disease, gut conditions and depression.
So it may come as a surprise that when a committee of 20 of the country’s leading nutrition scientists met in late October to preview their recommendations for the next edition of the Dietary Guidelines for Americans, they said that there was not enough evidence to steer people away from the foods.
Why not?
No clinical trials have been done testing this hypothesis that UPFs cause chronic disease, or at least none that are even remotely capable of establishing that they do, let alone how they might do it—i.e., the mechanism. Assuming they are harmful, is it the processing itself that is the problem, or the chemical additives, or the sugar, refined carbs, and maybe even seed oils, or all (or none) of the above?
Essentially all the evidence linking UPFs to chronic disease is observational; UPF consumption associates with chronic disease prevalence. Callahan allowed that “most of the studies” were observational (conceding to the reality of the problem, albeit underrepresenting it), and then acknowledged that this means “they couldn’t prove cause and effect in the way that clinical trials can.”
The catch is that Callahan and the experts she respects do think UPFs are harmful; they do believe the observational studies are revealing a causal relationship between UPF consumption and disease, even if they have no real understanding of why. They just don’t think the evidence is sufficient yet to do anything about it. Hence, we have a controversy, but one that they imply will assuredly be resolved in favor of their beliefs.
In this scenario, Callahan respectfully cites the “experts” who disagree with the committee’s decision to refrain from recommending against consumption of these foods. And she ends with a thoughtful commentary on why the controversy exists, giving credence to both sides:
It’s challenging for health experts to make recommendations on ultraprocessed foods when the science is still emerging, said Maya Vadiveloo, an associate professor of nutrition at the University of Rhode Island.
The ultraprocessed food category is broad, and if you recommend avoiding all of them, you might be cutting out some foods that are actually beneficial, she added. Breakfast cereals and yogurts, for instance, have been associated with lower risks of cardiovascular disease and Type 2 diabetes.
We need more research to tease out these nuances, several experts agreed. But some said that it’s not too soon to start incorporating some of this guidance into federal recommendations.
Article number two was published three days later, on November 9. Headline: “Are Seed Oils Actually Bad for You?”
The subhead could have mirrored the subhead on the UPF story— “there’s not enough evidence to recommend avoiding them… Some experts disagree.” But it did not. Instead, it was “Robert F. Kennedy Jr. and others claim they’re harming our health, but the evidence suggests otherwise.” And here’s the lead:
To their many vocal detractors, they’re referred to as “the hateful eight.” Canola oil, corn oil, sunflower oil and other refined oils made from the seeds of certain plants have become lightning rods for wellness influencers — and some politicians.
Robert F. Kennedy Jr., who has been tapped to lead the Department of Health and Human Services by President-elect Donald Trump, says Americans are being “unknowingly poisoned” by them. Online forums, blogs and influencers say they’re “toxic,” “slowly killing you” and driving up rates of diabetes, obesity and other chronic diseases.
The claim that seed oils are ruining our health is especially rankling to nutrition scientists, who see them as a big step forward from butter and lard.
In this article, the majority opinion is that seed oils are benign and there is no legitimate controversy. Anyone who argues otherwise is simply wrong. (RFK Jr.’s involvement makes this an easy accusation, whether he is wrong or not on any particular issue.) This is why Callahan quotes no experts to defend the seed oil toxicity position. Only the conventional thinking is represented:
“Decades of research have shown that consuming seed oils is associated [my italics] with better health,” said Christopher Gardner, a professor of medicine at Stanford University. To suggest otherwise, he added, “just undermines the science.”
In this case, the “science” being undermined is also almost exclusively observational and so correlations between diet and health. (Worth noting is that Callahan’s source, Christopher Gardner, has significant cognitive and financial conflicts, as Nina Teicholz has reported in Unsettled Science, that would never go unmentioned if he was challenging the majority opinion himself, rather than defending it.) Hence, at the very least this evidence, too, is ambiguous and more research is necessary, but that’s not the message being conveyed.
Indeed, a major piece of evidence against both ultraprocessed foods and seed oils is the association/correlation between how prevalent they have become in our diets and the rising tide of obesity and diabetes in our population. But on the basis of this association, Callahan and her chosen experts are willing to believe with implicit confidence that it’s the UPFs that are the problem, not the seed oils that are in them, because that’s been the conventional wisdom.
A final claim is that we’re eating more of these oils than in the past, and that is also increasing certain chronic health conditions. One study, for example, found that levels of linoleic acid — the main omega-6 in seed oils — in U.S. adults have more than doubled during the last 50 years.
But correlation does not equal causation. We’re eating more of these oils because they’re used in ultraprocessed and fast foods, which make up a larger share of our diets today than in past decades, Dr. Gardner said. Those foods aren’t good for us, he said, but there’s no evidence to suggest that seed oils are what makes them unhealthy.
“That’s just bizarre to blame them and not the foods that they’re in,” Dr. Gardner said.
What’s the bottom line?
If you want to reduce your consumption of seed oils, do so by eating fewer ultraprocessed foods, Dr. Gardner said. That would likely be a health win.
In both cases, Callahan could have critically assessed the state of the evidence, interviewed sources on each side of the controversy, cited authorities on both sides—those with the kind of titles from respected institutions of higher learning that bestow credibility—and given the existence of legitimate controversy the credibility it deserves.8 Instead, she did her job. She biased her articles to faithfully report the consensus of opinion, what the authorities in the field believe.
She probably did no immediate harm. Gardner’s advice (despite his cognitive and financial conflicts of interest) is benign, but the science is lost along the way. And having dismissed the legitimacy of the seed oil controversy, Callahan will have cognitive issues of her own to confront should further nutrition research (of the kind everyone agrees is necessary) demonstrate that she was wrong to do so.
An AI Addendum
Some fields of medicine and health are so new that no conventional wisdom or majority opinion has yet developed. The influential authorities don’t know what to think about them, so the reporters have no consensus to represent in their reporting.
In these cases, the reporting may still be biased, but it’s harder to predict which way it will go. The use of artificial intelligence in medical diagnosis is one such subject. Is it a good thing or a bad thing? AI is likely to revolutionize medicine, but the jury is still very much out on whether that will be for good or ill.
Hence, my favorite conflicting articles of the month.
Both The New York Times and the Washington Post reported on a new study that tested the ability of an AI program to diagnose illness compared to doctors. The results:
1) Doctors who used AI did only very slightly better than doctors who didn’t.
2) AI alone trumped both.
The Post reported the first finding in the headline, discussed it at length in the article, and mentioned the second only in passing. The Times did the opposite, reporting the second in the headline, discussing it at length in the article, and mentioning the first only in passing. Hence, the conflicting headlines:
What’s interesting to me (but not to the Post reporter, apparently) is that if AI alone is better than physicians using AI, it suggests that it’s the physicians’ biases and experience that hold them back. In short, when the human MDs disagreed with AI, the humans trusted their own judgement.
The study suggests that they should not have. I look forward to seeing if this is a reproducible result, and what we learn if it is.
Let’s do a very much simplified thought experiment: take 10,000 kids and have them eat Fruit Loops every morning for a decade. Half, chosen at random, eat Fruit Loops made without the artificial dyes. Half eat Fruit Loops with the dyes, but without the sugar. Which half do you think will be healthier?
And why very much simplified? Because we’d have to replace the sugar calories with some other source, and that choice, and the interpretation based on that choice, would get complicated.
Ben Hecht was a renowned mid-century playwright (The Front Page), novelist and screenwriter (The Front Page and His Girl Friday). He started his career, though, as a newspaper reporter in Chicago, giving him the personal experience to know of what he speaks.
The issues I'm discussing here are not relevant to editorials and opinion pieces, in which the bias of the authors is made clear by designating these pieces as opinions (although the bias of the editors will determine which writers they allow or enlist to opine), nor to long-form journalism, in the magazine sections, for instance, where the writers are likely to spend many months on a single article.
The obesity epidemic terminology first appears in this 1988 article as one potential explanation for the geographical disparity in the prevalence of different cancers. This was a decade before the 1998 CDC article that first established the obesity epidemic as, well, news. In other words, it seemed obvious already in 1988 but took another decade to be quantified and, so, newsworthy.
Reading that story reveals another interesting parallel with the news today: On the subject of whether it’s safe to have mammograms, the author, physician-turned-anthropologist Melvin Konner, says, “The fear of radiation is almost unfounded. If 1 million women at age 40 received mammograms, 10 excess cancers could be caused by the test; but almost 800 cancers would be caught. One might as well refuse vaccinations for one's children on the theory that some deaths are caused by the shots.” Hmmm…? Imagine using that as an analogy in today’s America.
When astrophysicists and cosmologists, for instance, argue over one of my favorite scientific controversies, whether the speed at which galaxies rotate is best explained by missing mass, and so dark matter, or by a theory known as Modified Newtonian Dynamics (MOND), that controversy also lives on the front lines. It, too, depends on the interpretation of the latest experimental evidence: what’s right and what’s wrong, and so who’s right and who’s wrong. But nobody’s life or health hangs in the balance. If the science reporters cover it, they may not even care which side is right. It’s not their job to care. Those who do care, however, are likely to bias the reporting because of the nature of their concern. Some reporters, for instance, favor a good underdog story and so will side with the minority on MOND and write the story from that perspective. Some take comfort in believing that the conventional thinking/majority opinion is right: dark matter exists. They have learned, rightly or wrongly, to trust in the expertise of the majority, and they’ll write up the latest publication (or choose to ignore it) from that perspective.
Nor is there any way to assess how common bad science might be. The reporter’s experience and perspective have been shifted (usually) by a single data point, but it will bias the reporter’s work ever after.
In this, I side with Steven Novella’s assessment at Science-Based Medicine.
This is a very good piece with wide application, and anyone who's inclined to think for him or herself will appreciate it. It does an excellent job of laying out the various conflicts and challenges news reporters in certain fields (in this case, health) face. An obvious next step might be to integrate what "the bro science" (as we're calling it these days) suggests, i.e. what all those guys at the gym who look fit and seem to be healthy have figured out for themselves by experimenting on themselves in ways that academic medicine will never be able to get away with. For example, thirty years ago, as Snackwells cookies and Olestra were being invented and pushed on us, a bodybuilder relative casually told me the government food pyramid was ludicrous and backwards and that every bodybuilder knew it. Stick to leafy greens, proteins, fats, and have some complex carbs, but not too many. He was obviously right. "The science" has taken decades to catch up and look where we are now. Mr. Taubes is surely right about sugar, so I'm eager to see what else he's right about.
Thanks, G. I've read Kahneman's book. I recognize that my understanding of AI is as a user and nothing more, but what I was wondering is the comparator: not that humans won't make mistakes based on cognitive biases of the kind Kahneman discusses, but whether AI brings with it a whole new world of issues that might mislead it.