Signs of Bad Science
What Do BPA, Cell Phones and Cancer, and the Vaccines-Autism Link Have in Common?
One critic of this body of research on BPA characterised it to me as, “a land of one-sided comparisons, confirmation bias, post-hoc reasoning, ‘any finding at any dose’ effects untethered from matters of precursor lesions, internal dosimetry, and mechanism.”
Geoffrey Kabat, “The BPA Panic and the Distortion of Science,” Quillette
“Kabat never saw an environmental chemical that he thought was a problem,” [Linda] Birnbaum wrote in an email to Undark…. And when a range of studies, from a range of places, using a range of methods, consistently show a certain result, she said, it presents a strong signal.
Michael Schulson, “The Cultural and Political Moment for Toxins Research,” Undark
There, in a nutshell, is the essence of a scientific controversy. One expert’s collection of “one‑sided comparisons” and “confirmation bias” is another expert’s “strong signal.” What you believe—and who you believe—will depend on your own preconceptions, experience and biases.
Since I’ve written so critically about nutrition, obesity and chronic disease science, I’m often asked for my opinion on issues about which I know little. And yet I do have an opinion (unsurprisingly). I’m going to take this post‑Thanksgiving edition to explain why I invariably lean the way I do.
Should we fear the plastic in our water bottles?
The epigraphs above are from two articles addressing concerns that Bisphenol A (BPA) is a toxic chemical. BPA is used to make hard plastics and it “seems to be everywhere,” as The New York Times has said, most notably in water bottles. Some studies suggest it’s toxic. Obvious questions: should we avoid it if we can, and should we regulate against it?
Geoffrey Kabat, author of the first epigraph, thinks the concern is overblown. Kabat is an epidemiologist and author of several books on science and public health from a critical perspective, including Hyping Health Risks and Getting Risk Right. His article in Quillette on what he calls the “BPA panic” is a response to Michael Schulson’s article in Undark, from which the second epigraph comes. Schulson is a science journalist who has written thoughtfully about scientific controversies and how they’re covered in the media.
The two epigraphs address the same body of research from competing perspectives. Schulson says as much in his article. Some of the disagreements, he writes, “are less over the substance of the science, and more over how much evidence needs to accumulate before officials should issue a warning or roll out new regulations.”
But that raises the question of whether the officials you trust think the evidence will accumulate and, when it does, whether it will be any more reliable. Kabat is dismissive. He has seen this before—a health risk hyped—and thinks he understands the pathology at work. Schulson notes (as does Wikipedia) that Kabat has had industry conflicts, including ties to the tobacco industry.
The primary protagonist in Schulson’s Undark article is Linda Birnbaum, who ran a research program on potentially toxic chemicals at the Environmental Protection Agency and then, from 2009 to 2019, was director of the National Institute of Environmental Health Sciences. While Schulson discusses the complexity of the issue, he seems to side with Birnbaum—more accepting of the message that BPA is likely enough to be toxic (“a strong signal”) that we should treat it as such.
I have never done my own research on chemical toxins and so I’m ill‑equipped to make a judgment. And yet my knee‑jerk response is to side with Kabat. I do not believe that a “range of studies, from a range of places, using a range of methods, consistently showing a certain result” necessarily constitutes a strong signal. I would assume the opposite.
Indeed, this is my knee‑jerk opinion on many of today’s public‑health scares: cell phones and cancer, toxic chemicals in the environment, and, yes, the idea that vaccines cause autism. The evidence strikes me as uncompelling—because of my context and experience, my “pre‑existing convictions,” to borrow a turn of phrase from one of Kabat’s earlier articles.
A brief memoir of a knee‑jerk skeptic
One legitimate criticism of my work is that I see bad science everywhere. That’s not surprising: I may have spent more of my life studying bad science than anyone alive. I think I’ve learned to recognize it when I see it—as Kabat does—and I do think it’s far more common than we’d like.
My first two books were about contained episodes of bad science—what Nobel Laureate chemist Irving Langmuir called “pathological science,” the “science of things that aren’t so.” My first was Nobel Dreams, published in 1987. I was a young science writer who went off to CERN, the European physics lab outside Geneva, to watch physicists nail down what had been advertised as the greatest discovery in forty years: physics beyond the standard model.
Those physicists thought they had a strong signal, although they knew they needed more data. They needed the evidence to accumulate. Instead of writing about a discovery, though, I spent seven months watching them realize that they had botched the initial analysis. What the physicists had thought was a strong signal was anything but. The evidence didn’t accumulate for their discovery but against it.
Rather than celebrating and documenting the greatest breakthrough in physics in decades, I ended up writing Nobel Dreams as an exposé on the sociology and personalities of high energy physics, a case study in pathological science and in what I came to think of as the extraordinary difficulty of getting the right answer in any scientific endeavor.
When the book came out, and the Nobel Laureate leader of the experiment, Carlo Rubbia, was quoted calling me an “asshole” in the New York Post, I assumed my career in science journalism had ended. It hadn’t. Instead, researchers I interviewed afterwards often passed along stories similar to the one I told in Nobel Dreams. A common denominator was the hyper‑ambitious researcher who would overinterpret evidence—whether to achieve career ambitions (from publish‑or‑perish to Nobel Prizes) or to save lives, so he or she thought, in unimaginable numbers.
My obsession with how hard it is to do good science, and how easy it is to get the wrong answer, found copious opportunities for further study: electromagnetic fields and cancer, prion diseases, observational epidemiology, and eventually nutrition, obesity and chronic‑disease research. As I learned, it’s human nature to overinterpret evidence; the insanely difficult part of science is not doing so.
I also learned that all these controversies had very similar characteristics. The science was controversial because the available research tools, whether experimental or observational, were incapable of providing unambiguous evidence. The researchers were working at the limits of their capabilities, like 19th‑century astronomers trying to discern canals on Mars with inadequate telescopes.
This is why the “frontline of research,” as the physicist-turned-philosopher-of-science John Ziman put it in his 1978 book Reliable Knowledge, is “where controversy, conjecture, contradiction and confusion are rife.” It’s where the researchers would like to think they reliably understand the evidence they’re accumulating, but their tools and their techniques don’t allow them to do so. It’s where strong signals come and go like mirages in the (Martian?) desert.
My second book, Bad Science (1993), was about cold fusion—the great scientific fiasco of its era. It emerged in a frenzy in the early spring of 1989. It began with a University of Utah press conference, announcing the creation of nuclear fusion in a beaker of heavy water—infinite, clean energy!—a remarkable claim. Within weeks, researchers from around the world were announcing that they, too, had created cold nuclear fusion. And then, one‑by‑one, these replicating researchers learned how they had screwed up.
The more rigorous the experiment, the smaller the signal, to the point of non-existence. What mattered wasn’t accumulating more evidence but accumulating reliable evidence.
Once good scientists got involved and insisted on rigorous experiments and meticulous controls, and because cold fusion experiments were inexpensive and relatively quick to do, and to do right, the initial claims were easily and quickly refuted. Cold fusion emerged and then mostly died (in a Princess Bride type of way) over a single spring.
Faith as an emergent property of ambiguous evidence
Because I was fascinated by how something so likely to be wrong could gain such traction in science and the media, I interviewed over 300 sources for my book, virtually everyone who had played any meaningful role in the episode.1 I used the same obsessive approach when I later shifted into nutrition and chronic‑disease research.
One reason for this obsessive reporting was a lesson I had learned doing the research for Nobel Dreams. When I arrived in Geneva to live at the laboratory, I came to realize that a clear hierarchy had evolved in the experimental collaboration.
At the bottom of the hierarchy were the physicists and technicians who had actually built the experimental apparatus. They had had little confidence that it could do what Rubbia, the leader of the collaboration (the Nobel Laureate who called me an asshole), wanted to believe it could do. They were all too aware of the shortcomings of the apparatus, because they had built it. They thought the evidence that had brought me to CERN was being promoted publicly with far too much confidence.
Because they were critical of the optimistic interpretation of the data, Rubbia was unappreciative of their attitude. He thought they were overly negative, unwilling to take the kind of daring leap of faith on which great science (so Rubbia thought) was made. They learned to keep their thoughts to themselves in group meetings, or they left the experiment entirely.
The physicists who drifted to the top of the hierarchy were those who believed as Rubbia did. They tended to be young, having joined the collaboration after the equipment had been built. They had joined the experiment because they wanted to be in on just this kind of discovery. They were (mostly) biased from the get-go. And not having built the experiment themselves, they had an idealized concept of what the apparatus could do, quite a different perspective from that of the physicists and technicians who had actually built it. These young physicists tended to tell Rubbia what he wanted to believe because they believed it, too.
As it turned out, the physicists and technicians who had built the experiment were the ones who were right. They were the ones whose opinions had to be valued, not just by a journalist covering the story, but by Rubbia and the other leaders of the collaboration.
My understanding of this kind of hierarchy was reinforced when I interviewed Leon Lederman, another Nobel Laureate and, at the time, the director of Fermilab, the leading physics laboratory in the U.S. Lederman told me that he liked to “walk around the laboratory at night and talk to the graduate students because they hadn’t learned to lie yet.” (Whether Lederman meant that line in all seriousness is open to interpretation. He was famous for his sense of humor.)
Hence, the 300 sources I interviewed for the cold fusion book included the graduate students and post-docs who had actually done the work in the various laboratories. These young researchers, still idealistic, typically had a different take on what their experiments had found from their group leaders and the professors.
My suspicion is that these kinds of hierarchies emerge naturally in episodes of controversial science. If the evidence is clear, no faith is necessary and everyone is on the same page.
On the front-lines, though, where the evidence is de facto ambiguous, faith that something new and meaningful will be discovered becomes the currency necessary to make progress. That’s what’s communicated to get funding, if nothing else, to keep doing the research. The critics are dismissed (the messengers shot) or their criticisms (the message) ignored because the negativity, by this thinking, is considered uniquely unhelpful.
Pascal’s Wager and the precautionary principle
Let me get back to the point Michael Schulson made, that controversies over potentially toxic chemicals are “less over the substance of the science, and more over how much evidence needs to accumulate before officials should issue a warning or roll out new regulations.”
This is true of all these scientific controversies, whether in public health or science in general. The difference is that the threshold—the evidentiary standard—for issuing a warning or rolling out regulations in public health is equivalent to the decision to go public or claim a discovery in these other disciplines.
A critical point is that the more important the science, the more likely researchers are to lower their standards for what constitutes a “strong enough” signal. It should be the opposite: the more important the science, and the greater the implications for public health, the more researchers should want to ensure they’re right. But that’s not how it works.
Here Pascal’s Wager and the precautionary principle come into play. As I wrote in Bad Science, Blaise Pascal—the renowned 17th-century mathematician and philosopher—was the patron saint of every researcher, administrator and politician who took the leap of faith to assume that cold fusion was real and they had to act on it.
Pascal renounced a life of science for one of faith, which many of the proponents of cold fusion seem to have done, and he wrote down the terms of the wager that, for him, made this choice inevitable. Pascal argued that to bet on the existence of God and to be wrong is to lose little or nothing. To wager correctly that there is a God is to be rewarded with an “infinity of infinitely happy life.” “Let us assess the two cases,” he wrote: “if you win you win everything, if you lose you lose nothing. Do not hesitate then; wager that he does exist.”
Throughout the cold fusion episode, the proponents of cold fusion would subscribe to the logic of Pascal’s wager. To bet that cold fusion existed and to win was to be rewarded with a payoff that, while not literally infinite, certainly seemed like it at times. To bet wrongly cost relatively nothing: a few million dollars, a few months of work, or a reputation would always seem inconsequential in comparison to the potential reward.
One year later, for instance, Chase Peterson [then president of the University of Utah, where the “discovery” had been made] insisted that he had never believed that cold fusion necessarily was real, but that what was important was that it could have been real. Here was Pascal’s wager. Peterson said, “You get burned if cold fusion doesn’t work, but you sure get burned if you don’t do anything about it and it does work. So you’ve just got to be smart.”
The precautionary principle in public health is the flip side of this thinking. Now you’re confronted with ambiguous evidence suggesting some aspect of the environment or of medical practice might be harmful. If you’re right about the harm and you don’t act, people will die unnecessarily. And so you invoke the precautionary principle and act as though you are right, because if you are, you’ve prevented enormous harm.
When I was reporting Good Calories, Bad Calories, my first book on nutrition science, this was the logic invoked by the nutritionists and policy makers who set the nation on a low-fat diet in the 1960s and 1970s. Americans were dying by the hundreds of thousands every year from heart disease, they said. That was undeniable. If these nutritionists were right that high-fat diets were killing people, then they, the policy makers, had an obligation to tell people, to change how Americans eat, to stop the carnage. Like Pascal’s argument for believing in God, enormous good could be done if the nutritionists were right; enormous harm would be prevented. And neither the nutritionists nor the policy makers seemed to worry that they might be wrong. But even if they were, they couldn’t imagine that the consequences would be significant.
What you see (as Daniel Kahneman observed) is all there is
One problem with these public‑health scares is that the logic of the precautionary principle can always feel compelling. With industrial chemicals or pesticides, the alleged harms are obvious—the child or adult who gets ill after exposure. Not so the risks of regulating or restricting their use.
It’s far more complicated with vaccines because you have to balance benefits and harms. If you acknowledge the benefits, seemingly enormous, the logic of Pascal’s wager and the precautionary principle will be flipped. If you can convince yourself, though, that the COVID vaccine, for instance, provided no meaningful benefits, it is far easier to argue (and to believe) that the harms are unacceptable.
These controversies also raise another difficulty, which is the precedent set by lowering evidentiary standards. Acting on weak evidence can feel justified in the moment, but it risks degrading the norms of scientific rigor. If ambiguous data are treated as sufficient to declare harm—or safety—the entire scientific enterprise absorbs the cost.
Good scientists and philosophers of science emphasize humility for precisely this reason. Because the likelihood of being wrong is always high, ignoring ambiguity corrodes the method that is supposed to protect us from our own biases. A contribution to the edifice of scientific knowledge can no longer be trusted if it’s built on unreliable evidence, and neither can the research that follows from it (as I’ve discussed in previous posts).
This is a harm that can neither be seen nor quantified, but it may be enormous nonetheless. Perhaps not immediately, but eventually. It’s much like the legal profession’s concern over precedents: a rule or law meant to address one case shapes all future cases.
When the “four horsemen” of the New Atheism Movement—Richard Dawkins, Daniel Dennett, Christopher Hitchens, and Sam Harris—reject Pascal’s wager, they are proposing (among other arguments) that the consequences of religious belief, what an individual may never experience but nonetheless exist, outweigh the benefits.
So what about toxic chemicals, vaccines and… cell phones?
Why did I side with Kabat on the BPA issue, despite acknowledging my ignorance of the field? Because, as I’ve said, I’ve seen this pattern before. I know how a non-existent phenomenon can generate exactly the mishmash of ambiguous evidence that Kabat’s critic described and that Birnbaum finds convincing. It’s now what I expect from hyped health scares. The real ones manifest a very different pattern—or so I think.
In 2000, I wrote an essay for Technology Review on this issue, and I’m going to reprint it here. Back then it was about the idea that electromagnetic fields from cell phones cause cancer—an anxiety that’s still with us (as I predicted it would be in the essay). Robert F. Kennedy, Jr., unsurprisingly, thinks we should take it seriously.
The essay is a case study in how these elements of faith and human nature come together to turn a non-existent phenomenon into the appearance of a strong signal—a reason to invoke the precautionary principle. I could have written the essay about the notion that vaccines cause autism, but it was too new at the time. I wasn’t paying attention.
When readers ask today what I think about the vaccine–autism issue, though, I send them this Tech Review essay and suggest they replace the words “cell phones” with “vaccines” and “cancer” with “autism.” That sums up my opinion. It doesn’t mean vaccines are not a cause of autism; it explains why I find the evidence that they are so uncompelling.
The cell-phone scare
(Originally published in Technology Review: November/December 2000)
When fear is the opponent, science doesn’t stand a chance.
There is a good-news-bad-news rhythm to the introduction of any pervasive new technology. With cellular telephones, for instance, the good news came with the explosive growth of the industry itself, which by November 1992 had recorded its 10 millionth customer. Three months later came the bad news: David Reynard, bereaved husband, appeared on “Larry King Live” with the remarkable accusation that cell-phone use had caused the brain tumor that killed his wife. Reynard, not surprisingly, was suing the cell-phone companies he held responsible. With that single anecdotal incident, Reynard set in motion a health scare that continues to play in the press and our societal subconscious to this day. If history is any indication, it will continue indefinitely. I can make this prediction free of concern about whether cell-phone use is truly carcinogenic. If it’s not, in fact, our anxiety—and the amount of press that fuels this anxiety—is likely to last considerably longer. Such is the nature of fear and the nature of science, and the inability of the latter to dispel the former.
The noteworthy aspect of fear is that its shelf life is considerably longer when the object of fear is a threshold phenomenon—invisible, at the limits of detection, if not simply a figment of the imagination. This preternatural quality is crucial because both science and the human intellect have evolved to handle the material world with relative ease. After all, when automobiles kill tens of thousands of Americans each year, the mechanism of our demise is relatively obvious, as it is with guns. Anxiety is not the issue. Caution is. If science manages to unambiguously identify the agent of an illness, as happened with the AIDS virus, the shadow of impending doom is dispelled by the light of knowledge, and the medical research establishment marches off to deal with it. The rest of us, or most of us, alter our behavior accordingly and the prophylactic industry flourishes. But we don’t, by and large, panic.
If no immediate cause of death or illness can be identified, or if no mechanism links the alleged agent of our woe directly to the illness or death—as was the case, for instance, with electromagnetism from power lines, silicone seeping from breast implants or, at least so far, genetically manipulated agricultural products—then fear sets in like ice on a pond, and an entirely different set of societal forces go to work.
This is where science fails us. The primary problem is that science is incapable of proving a negative. Over the years, researchers have looked at the effects of electromagnetic radiation at cell-phone-like frequencies on cells (the biological kind) in petri dishes and on lab animals and even humans, without coming up with any particularly believable evidence that cell phones themselves would be harmful. But here’s the catch: No matter how ingenious and copious the experiments, they could no more prove that cell phones do not cause cancer than they could prove the nonexistence of God. “It is scientific only to say what is more likely and what less likely,” as Richard Feynman put it, “and not to be proving all the time the possible and impossible.” When it comes to what is more or less likely, however, everyone has a different opinion on how to weigh the odds. That the scientific community and the lay public do so by different standards of evidence is made obvious by the common belief in phenomena—from UFOs, ESP and ghosts to the continuing incarnation of Elvis—that are not considered likely by most working scientists.
This proving-a-negative problem comes with an important corollary: Experimental science is also inherently incapable of achieving perfection. The experiment does not exist, nor will it ever, that can unambiguously throw up zeros across the board simply because the phenomenon it has set out to study is nonexistent. Rather, if done honestly, it will result in a range of values around zero, and the midpoint of this range is even likely to be above zero—a positive result, in the lingo—because it will reflect a host of subconscious factors that will push researchers to be slightly optimistic rather than rigorously detached. For those who want to believe that the phenomenon is real, the existence of these positive results, however close to zero, will constitute all the evidence they need.
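The statistical point can be sketched in a few lines of code. This is a toy simulation of my own, not drawn from any real study; the small `bias` parameter is an invented stand-in for the subconscious optimism described above:

```python
import random

def run_null_experiments(n_experiments=1000, n_samples=100, bias=0.02, seed=42):
    """Simulate measuring a truly nonexistent effect (true value = 0).

    Each 'experiment' averages noisy samples. A tiny upward bias stands
    in for the subconscious factors that nudge researchers toward
    slightly optimistic rather than rigorously detached results.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(n_experiments):
        samples = [rng.gauss(0.0, 1.0) + bias for _ in range(n_samples)]
        results.append(sum(samples) / n_samples)
    return results

results = run_null_experiments()
positive = sum(1 for r in results if r > 0)
print(f"{positive} of {len(results)} null experiments came out 'positive'")
```

Even with the true effect at exactly zero, more than half of these simulated experiments land above zero. For anyone who wants to believe, each of those is "evidence."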
This is simply a fact of human nature, one that Francis Bacon, the Abner Doubleday of experimental science, pointed out 400 years ago when he created the scientific method as a tool to overcome our inherently delusional thinking. “The human understanding still has this peculiar and perpetual fault of being more moved and excited by affirmatives than by negatives,” Bacon wrote, “whereas rightly and properly it ought to give equal weight to both; rather, in fact, in every truly constituted axiom, a negative instance has the greater weight.” Those of us who believe in ESP, for instance, do so because we have anecdotal evidence that it does exist, despite the decades of scientific experiments that suggest it does not.
As a result, little or nothing is needed in a scientific vein to initiate health scares, and even less to perpetuate them indefinitely. Indeed, they become inevitable and play themselves out with a certain relentless predictability. Their procession from meaningless beginnings to full-blown national anxiety could be dictated by a flowchart or programmed with simple software.
Imagine that researchers from Laboratory A decide to study the possibility that cell phones cause cancer, assuming for the sake of argument that the hypothesis is false. If the researchers do their study well and find insufficient evidence of carcinogenicity, the story ends. Until, that is, Laboratory B gets involved. These researchers are likely to be slightly less detached than their predecessors. After all, they would not have chosen this line of research had they not believed, in effect, that the work of Laboratory A left open a window of doubt. If these researchers now perform their experiments less rigorously, or interpret their data less rigorously, then they are likely to publish an article suggesting that cell phones may cause cancer. Or if not Lab B, then Lab C, or D, ad virtually infinitum. This report will be picked up by the press because even the hint of a suggestion that some aspect of everyday life may cause illness or death constitutes news. This is not just the way of the press, it’s human nature, as Francis Bacon made clear. (A recent example is the coverage of a study on exposure to household levels of radon gas that was published in the June issue of the American Journal of Epidemiology. Over the years, a dozen studies have failed to show that household levels of radon increase cancer risk. When the 13th was published claiming the opposite, the newspaper headlines read: “University of Iowa Study Says Radon Greater Risk Than Thought Before.” The Iowa scientists, of course, said they simply used better techniques than their predecessors. They may be right, but the odds are against them.)
When the press publishes such reports, the relevant federal agencies have no choice but to get involved. If they do not, consumer-protection groups will accuse them of taking a cavalier approach to public health. The same goes for the relevant industry. To do nothing is to invite public reproach. Now both the agencies and the industry will respond publicly that little or no scientific evidence exists to support the claims, but they will also admit that the threat can’t be ruled out. Either or both will allocate money to do proper scientific studies. If these studies unambiguously identify a mechanism by which cell phones cause brain tumors, we have a real public health threat on our hands, and the authorities are mobilized. End of fear. We stop using our cell phones, or we stop using them in a way that could be dangerous.
If these studies come up negative, however, the scientists will report that their data suggest it is unlikely cell phones cause cancer, perhaps highly unlikely. They will also admit, if they are rigorously scientific, that they cannot rule out an effect. This may satisfy the public, the consumer protection groups and even the press, although the smart money would bet against it. As one consumer advocate put it in a recent news article on the cell-phone scare: “People just want to know whether phones are safe or not, yes or no.” Neither—the scientifically appropriate response—eases anyone’s anxiety.
One other factor will come into play at this point. Experts refer to it as an epidemic of selection. We inevitably search for explanations when tragedy hits. The unfortunate individuals who have had brain tumors or have seen their loved ones succumb, as did David Reynard, will look for explanations and may seize on those that they read about in the newspapers—cell phones, for instance. The news reports on the possibility that cell phones cause cancer are likely to suggest to thousands of victims and their families that cell phones were the cause of their illness. (A number of liability lawyers will come to the same conclusion.) They will start advocacy groups and lobby Congress, and when industry-funded studies come up empty, they will suggest that the industry scientists involved had no motivation to find the truth. When cooler heads suggest that such cover-ups are unlikely, that even cell-phone company employees use cell phones and that these scientists are probably no more likely than you or I to allow innocent people to die needlessly for the sake of a modest paycheck, they will point to the cigarette industry as proof that this has happened before and so it may be happening again. As congressmen realize that votes are on the line, they will push the relevant government agencies to do more studies. But no amount of studies will resolve the residual ambiguity, will allow the scientists to say cell phones are definitively safe. The leftover uncertainty perpetuates itself indefinitely.
Eventually the anxiety-of-the-decade will fade, to be replaced in our minds and our newspapers by a more up-to-date apprehension. It would be nice to think that eventually we’ll outgrow the cycle, but I have to defer here to my late mother, who was a lay expert on anxiety. The time to really worry, she used to say, is when things seem so good you have nothing to worry about.
When Bad Science came out, Horace Freeland Judson, who had written The Eighth Day of Creation (which I discussed in a previous post) and was then a professor at Johns Hopkins University, told me that I had written the “best-researched book on the stupidest scientific subject ever.”