Brownstone Journal
395 Episodes
By Peter C. Gøtzsche at Brownstone.org.
As a young doctor, I joked about a general warning that can still be seen in Danish package inserts for drugs: "Caution is advised during pregnancy." What does that mean? If you take a pill, it is too late to be cautious, and if you don't take it, you don't need to be cautious because you will be totally safe. My joke was that caution meant placing the pill between the legs instead of swallowing it, which would also make it more difficult to become pregnant.
The authorities passed the buck. If your child is malformed, they can say that they did warn you.
Official statements that antidepressants are safe to take during pregnancy should be distrusted. No drug is safe. If drugs were safe, they would not be the leading cause of death, ahead of cardiovascular diseases and cancer. In this article, I shall explain why it is wrong to recommend or take antidepressants during pregnancy.
The Role of Serotonin in the Body
SSRI stands for Selective Serotonin Reuptake Inhibitor, which is a misnomer. These drugs are not selective at all. They have multiple effects throughout the body and are not directed against any chemical abnormality. People do not become depressed because they have too little serotonin in the body but mainly because they live depressing lives.
Serotonin plays a very important role in many processes in the body, and it does so even in many primitive organisms. It is usually a very bad idea to change the blood level of a chemical that has proved so useful during evolution.
Foetal development is a delicate process that can easily go wrong, which is why we tell pregnant women to avoid alcohol. A priori, we would expect any substance that affects serotonin levels to be harmful because serotonin is essential for foetal development. This is basic biology, but we live in a world dominated by financial interests, which is why many pregnant women take antidepressants during pregnancy.
How a Drug Company Fooled the Drug Regulators
The first SSRI approved for use in children was fluoxetine from Eli Lilly. It should never have been approved. When psychiatrist David Healy and I reviewed the confidential internal study reports for the two trials that led to approval of fluoxetine for children with depression, we found that fluoxetine is unsafe and ineffective. In the first trial, the investigators had omitted two suicide attempts on fluoxetine in their published paper, and many of the 48 children on the drug experienced restlessness and had nightmares, which increase the risk of suicide and violence.
In the other trial, one child was severely harmed for every 10 children treated with fluoxetine. Fluoxetine increased the QTc interval on the ECG (P = 0.02), which increases the risk of sudden death, increased serum cholesterol, and was an effective growth inhibitor, reducing the increases in height and weight over just 19 weeks by 1.0 cm and 1.1 kg, respectively (P = 0.008 for both).
The public does not have access to animal experiments with drugs because the drug companies know it would be bad for business if people saw the data. When I got access to Merck's animal studies for their HPV vaccine Gardasil in a US lawsuit where I was an expert witness, I saw that the data supported what the patients had reported: Gardasil can cause serious neurological harms and the vaccine adjuvant is also harmful. However, drug regulators all over the world have declared that both the adjuvant and Gardasil are safe.
The European Medicines Agency (EMA) had serious concerns about approving fluoxetine for use in children, which is clear in an 86-page document about animal studies from August 2005 that is nowhere to be found on the Internet: "Prozac Paediatric Indication. Arbitration Procedure No: EMEA/H/A-6(12)/671. Lilly Response to Questions from EMEA in Document EMEA/CHMP/175191/05". I have uploaded this document in the public interest. It illustrates the extent to which drug companies are willing to bend the truth for an econo...
By Jeffrey A. Tucker at Brownstone.org.
The epithet "anti-vaxxer" is common in our time for anyone who resists mandates or resents the enormous legal privileges, protections, patents, and subsidies the industry receives today. It also pertains to those who attempt to bring attention to vaccine injury and death, a sensitive and even suppressed subject for an industry that relies on a utilitarian measure to demonstrate its social value.
The label does not always, or even often, make sense. The dominant theme of the movement now – and this has always been true – is to reject intervention and instead regard this industry as any other in a free marketplace (hamburgers, bottled water, washing machines, etc.), neither subsidized, nor mandated, nor protected from liability for imposed harms. If that goal were achieved, the "anti-vaxx" movement would shrink dramatically.
The trouble is that no matter how deep we look into the history of vaccination in Western countries, and the US in particular, we find that vaccination has never been treated as a normal market good to accept or reject based on consumer preference.
Indeed, if this pharmaceutical product were as obviously glorious as advertised, it should be able to elicit sufficient economic demand to sustain itself profitably and competitively like any other product. It's simple: let this industry be subjected to the cold winds of a ruthless free market and see what happens.
From the outset, however, the vaccine industry has enjoyed some form of privilege under the law. I've detailed some of this history here.
This naturally gives rise to suspicions that something isn't quite right. Perhaps these products are neither safe nor effective, else why would the population need such heavy-handed nudging? Injury from shots further fuels the fervor to at least make them voluntary and stop the subsidies and liability protections. What's more, mandates have historically not led to higher vaccination rates but only more population resistance and lower rates.
An excellent example is the Leicester Anti-Vaccination League of 1870s and 1880s England. This was one of the more effective anti-vaccine-mandate movements in Western history. It rose up in response to the Vaccination Act of 1867, passed by Parliament in compliance with intense industry lobbying and the familiar graft (nothing has changed).
This Act made vaccination mandatory for all children up to the age of 14. It paid vaccinators between 1 and 3 shillings per successful vaccination (same as now). It required birth registrars to issue a notice of vaccination within seven days of a child's birth registration (same). Non-compliance led to criminal conviction and a fine of up to 20 shillings (millions were professionally displaced only recently with the Covid shot). The Act imposed repeated penalties until the child was vaccinated (same: doctors lost licenses). Failure to pay could result in imprisonment (some went to jail this time). It also banned variolation (the older method of exposure triggering an immune response) with imprisonment up to one month.
A question I keep asking myself about this period: if vaccination is so obviously superior to variolation, why were such hoopla and subsidies necessary for one to replace the other, all the way up to criminal penalties for using the older method? I do not have the answer, except to say this is another way in which this industry defies market dynamics, in which innovations organically replace inferior technology.
In short, the Vaccination Act of 1867 was an egregious law, passed in the face of growing population resistance that had developed in the half-century since the famed Edward Jenner first brought attention to the new method as a replacement for variolation. While the effectiveness of the cross-immunity from cowpox to smallpox was never in question, injury from vaccination (via cuts in the arm, material sniffed through the nose, and only later injection) had been a theme since the 1790s.
The Leicester An...
By Ann Bauer at Brownstone.org.
I have loved many addicts in my life.
I have been exasperated, impoverished, and terrified by them. But also amused, warmed, enraptured, elevated…That's the thing about addicts. They contain multitudes, all drama and extremes. They're charismatic until they're repugnant, joyful until they're suicidal. Everything is in vivid, dangerous color. It's part of the ride and the reason they exert such a pull on cautious, ascetic people like me.
Some of my addicts are gone. My closest friend and "Damn Good Food" co-author, Mitch Omer, died at 61. Others have found God and turned their lives around (they're now exciting and dramatic people of faith). I love people who are addicted to alcohol, drugs, gambling, and food. Many surf between the four.
Recently, another category of people formed: the ones injecting themselves with GLP-1s, mostly to lose weight but also to control other impulses. It's clearly great for the handful whose life and health were being destroyed by obesity. But for the others? I'm dubious.
Ozempic and its cousins (Mounjaro, Wegovy, Zepbound, et al.) modify the pleasure centers of the brain, making everything people crave—food, sex, smoking, alcohol, shopping, gambling, cocaine—less appealing. They don't address the underlying problems of addiction, such as depression or dishonesty. They just eliminate the part of the person that enjoys and revels, the colorful, joyous side.
It's a version of the drug in Robert Louis Stevenson's Strange Case of Dr. Jekyll and Mr. Hyde, that the doctor ginned up to divide himself, creating a respectable man bound by reserve and a separate murderous, pleasure-seeking monster.
From Dr. Jekyll's own account:
Hence it came about that I concealed my pleasures; and that when I reached years of reflection, and began to look round me and take stock of my progress and position in the world, I stood already committed to a profound duplicity of life. Many a man would have even blazoned such irregularities as I was guilty of; but from the high views that I had set before me, I regarded and hid them with an almost morbid sense of shame. It was thus rather the exacting nature of my aspirations than any particular degradation in my faults, that made me what I was and, with even a deeper trench than in the majority of men, severed in me those provinces of good and ill which divide and compound man's dual nature. In this case, I was driven to reflect deeply and inveterately on that hard law of life, which lies at the root of religion and is one of the most plentiful springs of distress.
Though so profound a double-dealer, I was in no sense a hypocrite; both sides of me were in dead earnest; I was no more myself when I laid aside restraint and plunged in shame, than when I laboured, in the eye of day, at the furtherance of knowledge or the relief of sorrow and suffering. And it chanced that the direction of my scientific studies, which led wholly toward the mystic and the transcendental, re-acted and shed a strong light on this consciousness of the perennial war among my members. With every day, and from both sides of my intelligence, the moral and the intellectual, I thus drew steadily nearer to that truth, by whose partial discovery I have been doomed to such a dreadful shipwreck: that man is not truly one, but truly two.
Of course, the doctor's desire to split off his hedonistic self will have devastating consequences. The lesson of Jekyll and Hyde is that decoupling morality from desire is unnatural. It disrupts the natural order. My question for RLS, were he still with us to answer: Do GLP-1s pose similarly catastrophic risks?
I think they may. One reason is my Uncle Joe.
Joe was a quiet, careful religious man. He and his wife, Darla, had desperately wanted children but it just never happened. They raised boxer dogs that they treated like babies. Joe worked as a photographer in North Minneapolis in this little tufted studio from the 1930s that smelled like ros...
By Joseph Varon at Brownstone.org.
Contemporary medicine is not failing for lack of knowledge. It is failing under the weight of its own complexity. The present era is defined by unprecedented access to data, advanced technologies, an ever-expanding network of subspecialties, and a dense architecture of protocols and performance metrics. Nearly every aspect of patient care can now be measured, quantified, and standardized. Interventions that were unimaginable only decades ago are now routine. Yet despite these advances, a fundamental element has been eroded. This erosion is philosophical.
Medicine has accumulated extraordinary capability, but it has lost clarity of purpose. Increasingly, it functions as a system optimized for processes rather than a profession oriented toward patients. The distinction is subtle but consequential. Without a clear understanding of its purpose, medicine risks becoming an efficient mechanism that delivers care without understanding the individual it serves.
In the 12th century, Maimonides (Rabbi Moses ben Maimon [1135–1204], known as the Rambam), one of history's most influential physician-philosophers and a court physician in Egypt, practiced medicine in an era devoid of modern diagnostics, randomized trials, or institutional oversight. Trained within the intellectual traditions of Andalusian and Islamic medicine, and deeply influenced by Greek philosophy, he integrated empirical observation with rigorous reasoning and ethical responsibility. Although he lacked contemporary tools, he possessed something far more important: clarity. In Regimen of Health, he asserted that the physician's foremost responsibility is to preserve health rather than simply treat disease¹. This principle stands in sharp contrast to the modern system, which frequently prioritizes intervention over prevention.
The Physician As Intellectual Practitioner Rather Than Technician
Maimonides regarded medicine as an intellectual discipline rooted in observation, reasoning, and adaptation. His clinical writings consistently emphasize individualized care guided by physician judgment, rather than strict adherence to generalized rules². In his model, the physician was not merely a technician following predefined steps, but a thinker adept at navigating uncertainty.
Modern medicine increasingly emphasizes compliance. Clinical guidelines and protocols, though valuable, have expanded to the extent that they often define practice rather than merely inform it. Evidence-based medicine, initially conceived as the integration of clinical expertise with the best available evidence, is now frequently implemented as strict guideline adherence³.
When adherence is used as the primary metric of quality, deviation is perceived as risk. However, no patient precisely matches the populations studied in clinical trials. Maimonides recognized this implicitly, treating individuals rather than statistical abstractions. This distinction is not merely philosophical; it has practical consequences at the bedside. A physician trained to follow protocols may deliver technically correct care, yet fail to recognize when a patient falls outside expected patterns.
In contrast, a physician trained to think can identify nuance, adapt in real time, and challenge assumptions when necessary. Maimonides' model required intellectual engagement with every patient encounter. Modern systems, in their effort to standardize care, risk reducing that engagement. The result is not necessarily incorrect medicine, but it is often incomplete medicine.
Prevention As the Core Principle of Medical Care
Maimonides positioned prevention as the central tenet of medicine. His recommendations regarding diet, exercise, sleep, and emotional balance reflect a systematic understanding of health maintenance as the physician's principal responsibility¹. In his framework, disease frequently resulted from an imbalance.
Modern medicine recognizes the significance of prevention but, structural...
By David Stockman at Brownstone.org.
It was pretty obvious even before February 28th that the US economy was grinding to a halt, even as inflation was already working up a head of steam. But then came war.
We are going to get a globe-shaking economic conflagration erupting from the void that was the Persian Gulf commodity fountain. That includes between 20% and 50% of all the basic commodities that drive global GDP, including crude oil, LPGs, LNG, ammonia, urea, sulfur, helium, and sundry more.
Accordingly, consider the global share of crucial industrial commodities that now stands in harm's way. This includes both those directly transiting the Strait of Hormuz and the share of supply from the wider Middle Eastern region that is also exposed to the current Iranian war disruptions but is delivered by pipeline, rail, or alternative waterways such as the Red Sea/Suez Canal route.
This ballooning dislocation of daily global commodity flows will have a double whammy effect: It will both cause production and output to fall immediately in response to soaring input costs or limited availability—even as it encourages the central banks to "help" by printing more inflationary money.
This all adds up to a bout of classic stagflation, but it is not going to be merely the mildly painful type that unfolded during the 1970s. After all, despite a 120% rise in the price level during the decade, it wasn't a total wipeout when measured from the vantage point of real median family income.
As it happened, the 1970s stagflation came on the heels of what had been an actual Golden Age by the standards of history between 1954 and 1969. During that period, real median family incomes rose from $39,700 to $66,870 or by a robust 3.53% per annum.
Of course, that uphill march of Main Street prosperity slowed sharply during the inflationary 1970s, but the blue line in the chart below did at least keep drifting higher. So between 1969 and 1980, real median family incomes grew by a not very impressive 0.61% per annum, but the direction of travel was still higher.
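The growth rates above follow from the income endpoints the article gives. A minimal sanity-check sketch (the figures are the article's own, not independently sourced; the text does not state the 1980 income level, so the last value is merely what a 0.61% annual rate would imply):

```python
# Sanity check on the compound annual growth rates quoted above.
# All dollar figures come from the article itself.

def cagr(start, end, years):
    """Compound annual growth rate between two income levels."""
    return (end / start) ** (1 / years) - 1

# 1954-1969 "Golden Age": $39,700 -> $66,870 over 15 years
golden_age = cagr(39_700, 66_870, 1969 - 1954)
print(f"1954-1969: {golden_age:.2%} per year")  # roughly 3.5%, as the text says

# 1969-1980: the article gives only the rate (0.61% per annum),
# so we compute the 1980 income level that rate would imply.
implied_1980 = 66_870 * (1 + 0.0061) ** (1980 - 1969)
print(f"implied 1980 real median family income: ${implied_1980:,.0f}")
```

The small gap between the computed ~3.5% and the article's 3.53% is just rounding in the quoted endpoints.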
But here's the thing. The US economy of the 1970s was able to cope with the pressures of high inflation, oil, and other commodity shocks and the stop-and-go disruptions of a Federal Reserve that had been newly released from the disciplinary effects of the Bretton Woods gold standard. In large part that was because the aggregate level of debt on the US economy was relatively modest.
Total public and private debt in 1970 stood at $1.5 trillion, representing just 147% of GDP, as shown in the graph below. Moreover, that figure was close to the long-run national leverage ratio (total debt divided by national income) through thick and thin, going all the way back to 1870.
Moreover, even after the large government deficits of the 1970s and a surge of inflation-driven private borrowing during the decade, total US debt stood at $4.6 trillion by 1980. That was just 162% of GDP.
In a word, the US economy during this decade of stagflation was battered by unprecedented peacetime inflation, but it was not yet smothered by crushing debt. As shown by the graph, the soaring national leverage ratio did not really leap skyward until after the mid-1980s, when Alan Greenspan took the helm at the Fed and launched the US (and the world) into a four-decade spree of money-printing and what amounts to Keynesian central banking.
As a consequence, total public and private debt is in a wholly different zip code today. Debt outstanding now totals nearly $108 trillion and weighs in at 343% of national income (GDP). That is to say, as we head into the next stagflationary era, the US economy will be carrying two turns more debt relative to income than it did in 1970.
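A back-of-the-envelope check on the leverage arithmetic above (all inputs are the article's own numbers; the GDP figure is simply implied by them, not taken from official data):

```python
# Back-of-the-envelope check on the debt/GDP figures cited in the text.
debt_1970, ratio_1970 = 1.5, 1.47      # $ trillions, debt-to-GDP in 1970
debt_today, ratio_today = 108.0, 3.43  # $ trillions, debt-to-GDP today

gdp_today = debt_today / ratio_today    # implied GDP, about $31.5T
extra_turns = ratio_today - ratio_1970  # ~1.96, i.e. "two turns" of leverage

# Counterfactual: had the 1970s average leverage ratio of 153% held,
# total debt today would be only about $48 trillion.
counterfactual_debt = 1.53 * gdp_today

print(f"implied GDP today: ${gdp_today:.1f}T")
print(f"extra turns of leverage vs. 1970: {extra_turns:.2f}")
print(f"debt at 1970s-average leverage: ${counterfactual_debt:.0f}T")
```

A "turn" of leverage here just means 100 percentage points of debt relative to income, which is why 343% versus 147% works out to roughly two extra turns.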
That does make a difference. The national leverage ratio during the 1970s averaged about 153% of GDP, meaning that had it been maintained since then total debt outstanding would now be $48 trillion. As it is, however, the actual leverage ratio currentl...
By David Bell at Brownstone.org.
The Covid-19 pandemic exposed deep failures in global health governance. That much is now widely acknowledged, even by institutions that initially resisted self-examination. For example, the recent Lancet Commission on Covid-19 substituted advocacy for analysis, evaded institutional accountability, and ultimately clarified little about why global pandemic governance failed.
What remains unsettled—and largely undiscussed in public—is what those failures imply for the future of international health cooperation, and especially for the role of the World Health Organization.
The International Health Reform Project (IHRP) was convened to confront that question directly. The IHRP is an independent international group, though its work is closely linked to Brownstone through the participation of three of its Fellows who wrote this article, two of whom served as co-chairs.
Its work is unusually detailed, wide-ranging, and blunt. It does not argue that the pandemic was inevitable, nor that failure was merely the product of bad luck or limited information. Instead, it documents how institutional incentives, governance structures, and political pressures shaped decisions in ways that repeatedly undermined transparency, proportionality, and scientific rigor.
The Panel's findings matter well beyond debates about the past. They arrive at a moment when the United States has withdrawn from the WHO, when the Organization is seeking expanded authority through amended International Health Regulations and a new pandemic agreement, and when governments around the world are quietly reassessing whether the current model of global health governance is fit for purpose.
The question now is not simply whether the WHO failed, but what should follow from that failure—especially for the United States and its allies.
I. What the IHRP Found: Failure Was Structural, Not Accidental
The IHRP report reaches a clear conclusion: the problems revealed during Covid-19 were not isolated mistakes, but the predictable outcome of institutional design choices made over decades.
Several findings are central.
First, the WHO failed in its core pandemic function. The Organization was created to detect, assess, and coordinate responses to transnational infectious disease threats. Yet during the early stages of Covid-19 it was slow to challenge incomplete or misleading information, reluctant to escalate warnings in the face of political pressure, and inconsistent in its guidance once the emergency was declared. These failures had real consequences, shaping national responses during the narrow window when early action mattered most.
Second, politicization was not an aberration but a recurring constraint. The Panel documents how deference to powerful member states, especially where transparency was most critical, distorted risk communication and delayed independent investigation. This was not simply a failure of leadership, but a consequence of governance rules that place political consensus above timely error correction.
Third, the Organization entered the pandemic already institutionally overstretched. Over time, the WHO's mandate expanded far beyond communicable disease control into a wide array of social, behavioral, and environmental domains, often with limited connection to pandemic preparedness. The result was an organization attempting to function simultaneously as a technical agency, a development actor, a norm-setting body, and a political convenor—without the clarity or discipline required for crisis response.
Fourth, post-pandemic reforms did not address these underlying weaknesses. Instead of a rigorous institutional autopsy, the response to failure was to seek expanded authority: broader emergency powers, new compliance expectations for states, and additional permanent structures. The Panel is explicit that expanding scope without correcting governance failures risks entrenching the very dynamics that contributed to poor perfor...
By David Bell at Brownstone.org.
Hard times. Emerging from an apparently engineered pandemic, now embroiled in another war for ephemeral reasons, and facing a resultant economic crisis that is exacerbating unmanageable debt, we find ethnic cleansing and inter-ethnic hatred increasingly back in vogue.
It's easy to imagine a nefarious program is being orchestrated by a nasty and entrenched elite, aiming to plunder and enslave the rest of us. Such an idea is clearly not baseless, but nonetheless completely misleading in the solutions it suggests. 'If only we could jail them, or have a Nuremberg Two, things would be better…'
However, Nuremberg One did not stop ethnic cleansing, targeting of religious groups, wars and mass death based on straight-out lies, or mass medical coercion for power and money. A couple of obvious reasons stand out for this.
Firstly, high-level societal corruption is so deep and pervasive that it simply cannot be rooted out by force or law; the judges, armies, and arms manufacturers are likely part of this behemoth already and have no interest in self-harm, while politicians are simply paid by them.
Secondly, if those deepest in this cesspit of child sacrifice and share market-dictated slaughter were taken out of the picture, some of us would simply replace them. We know this because none of what we are seeing now is new. Ask any late Roman, Chinese peasant, or victim of the Inquisition. We need to be honest with ourselves regarding human behavior if we are going to change direction.
There was, arguably, a period after World War Two when the West had a bit of a reset and the direction did seem better. But Eisenhower's warning was ignored, and so were the obvious risks of growing inequality as software entrepreneurs and financial houses accumulated riches greater than those of whole nations. Faced with a choice between recognizing the obvious and believing the public relations those fortunes funded, society found the propaganda more popular. We all, as a society, opted for a future rooted more in feudal inequality than in egalitarianism. We regressed, because regressing is always easier than standing tall.
So, here we are, back again, deep in the mire. To address it, we should first recognize the enormity of what is going on. We have allowed a corporate-authoritarian behemoth to arise, a monster of our own dereliction. We removed the brakes on greed and human stupidity, giving a free hand to a few to accumulate enormous wealth and power and, most importantly, to dispense with empathy. We empowered people shallow enough to believe in their own superiority, even omnipotence, by ignoring the wisdom of thousands of years of humanity.
We are all capable of becoming similarly corrupted, if we receive an opportunity and elect to succumb to it. There is nothing special about leaders of the big financial houses, the Trilateral Commission, the World Economic Forum, the redactions of the Epstein files, nor the scions of old wealthy families that helped stoke, and profited from, former wars. They are all expressions of what the rest of us can become, given the resources and a willingness to empty ourselves of a more meaningful but harder existence.
Therefore, we should not blame a 'they' or a 'them.' It is our own tolerance of the worst of human nature that gets us into trouble. Obsessing with specific people – railing against 'elites' – will at best result in their replacement.
Alternatively, we can start thinking through the codes of conduct that are necessary in any society, and in ourselves, to stop people going that way. Stop enabling the worst of human greed and self-delusion that drives sponsored politicians to advocate for war, unknown insiders to trade shares on human lives, and oligarchs to dream of corralling whole populations into their digital prison and plying them with pharmaceuticals. We need to recognize the system we all built, within which they operate.
Human nature is driven by greed. We know greed is bad, yet it is not unrelated to protecting and benefiting on...
By Brownstone Institute at Brownstone.org.
President Biden celebrated the confirmation of Justice Ketanji Brown Jackson by telling reporters on the White House South Lawn, "America is a nation that can be defined in a single word….Asufutimaehaefutbuhwuhsh." That proved to be a fitting foreshadowing for a tenure that has been defined by directionless verbosity, unintelligible standards, and the determined advancement of partisan dogma.
On Tuesday, Justice Jackson issued the lone dissent in an opinion overturning Colorado's ban on "conversion therapy." The state law was broad enough to apply to any discussions acknowledging biological realities with gender-confused patients or contradicting the precept that the LGBTQIA+ socialization process is a cure without trade-offs.
The near-unanimous court ruled that the First Amendment barred this "egregious form of content discrimination," which banned therapists from voicing "perspectives the State disfavors when speaking with consenting clients."
Justice Jackson, however, described the defense of free speech as "puzzling," which is unsurprising given her comprehension issues on the bench. In dissent, she embraced the State's power to quash any professional speech that deviates from "current beliefs about the safety and efficacy of various medical treatments." As Justice Neil Gorsuch acknowledges in the opinion for the Court, that principle would allow the government to apply those malleable standards to "teaching or protesting," but Jackson welcomes that threat.
Justice Jackson does not sidestep the issue; she embraces the muzzling in the name of "scientific consensus," which she never considers could be incorrect. As support, she cites the American Psychological Association and the medical bureaucracy's treatment of "conversion" as an "unattainable goal." (Jackson notably omits that the former president of the American Psychological Association argued that therapy to change sexual orientation is legitimate for those who consented).
According to Jackson, the suppression of liberty is justified because "scientific evidence supports the conclusion that the anticipated harms from conversion therapy" should be avoided. Notwithstanding the widespread dissent on the issue, these were the same groups that embraced lockdowns, vaccine mandates, masking, and rioting in the Covid response. Their purported "consensus" was concocted through vast censorship efforts and smear campaigns.
Businesses shuttered, schools closed, and churches were banned as the facade of expertise became a bludgeon for ideological tyranny. The ostensible "consensus" maintained protections for riots, liquor stores, and abortion services, later culminating in the reshaping of our election process. And Justice Jackson wouldn't have it any other way.
Her long-standing antipathy to free speech is ironic given how liberally she exercises it. She speaks 50 percent more than any of her colleagues and more than Justices Amy Coney Barrett, John Roberts, and Clarence Thomas combined.
That allows her to make sweeping claims (such as comparing banning transgender mutilation surgeries to prohibitions on interracial marriage), and it provides a corpus of material to understand her opposition to the First Amendment.
In oral arguments for Murthy v. Missouri, which considered an injunction prohibiting the Biden administration from colluding to censor its critics, Jackson stated that her "biggest concern" was that the plaintiffs' efforts may result in the "First Amendment hamstringing the Government," apparently unaware that this is its very purpose.
More recently, in a hearing on Trump v. Slaughter, Jackson spoke longingly for bureaucratic supremacy, arguing that "experts" like "doctors and the economists and the Ph.D.s" should be immune from presidential control. That was in line with her tenure as a District Court Judge, during which she overturned four executive orders that sought to rein in the power of the nearly three million federal e...
By David Bell at Brownstone dot org.
The polarised debate on the World Health Organization (WHO) has been based more on mud-slinging and all-or-nothing dogma than scientific evidence and empirical data. However, with trust plummeting in public health and the WHO's funding rapidly falling as it scrambles for more to fund what it claims are ever-increasing threats, change is needed.
The International Health Reform Project (IHRP) formed with the intent of returning this debate to a rational framework. It did not begin as an anti-institutional campaign but as a professional reckoning. Its origins lie in a shared unease among physicians, public health practitioners, economists, and former senior international officials who watched the Covid-19 response unfold with growing alarm. Their concern was not with public health itself, but with the direction it appeared to be taking. The two of us, long engaged in global health policy and governance respectively, are co-chairs of a diverse group of ten experts who have spent the past 18 months thinking through this problem from evidence rather than orthodoxy and soundbites. The project delivered its first reports in March.
For decades, the post-war health architecture led by the WHO rested on principles such as proportionality, transparency, subsidiarity, and the primacy of human welfare. Covid exposed strains in that architecture. Emergency powers expanded, dissent narrowed, and policy debate became increasingly constrained. Measures once shunned for their inevitable harms and ethical concerns—lockdowns, prolonged school closures, border restrictions, universal mask and vaccine mandates—became normalised across very different societies with little regard for age-specific risk or local context. Balancing costs and benefits of interventions—the basis of public health policy development—became anathema in professional discourse.
Several IHRP members with long experience in low- and middle-income countries were particularly sensitive to the harmful consequences of the Covid public health response. Disruptions to agriculture and food distribution increased hunger and malnutrition. Routine immunisation programmes were set back. Extended school closures affected tens of millions of children, locking in intergenerational poverty and exposing millions of children to added risks of child labour, child marriage, and trafficking. Poverty reduction efforts suffered reversals, and the resulting economic losses and national debt will stymie future healthcare programmes.
Those raising such concerns were often dismissed as reckless or ideological. Yet, the questions were rooted in core public health principles: What are the costs as well as the benefits of intervention? What trade-offs are justified? Who decides, on what evidence, and with what accountability? Why were these basic principles of public health abandoned?
During this period, Brownstone Institute emerged as a forum for open debate, building on discussions associated with the Great Barrington Declaration, which called for focused protection of the vulnerable rather than broad society-wide shutdowns. At the same time, the UK-based initiative Action on World Health was exploring the need for a systematic review of the performance of the WHO and the wider international health architecture. Conversations among participants in these efforts helped shape the idea of an independent expert panel to examine global health governance more broadly.
From the outset, IHRP sought to offer constructive reform rather than reactive protest. Its founders were clinicians, economists, and former multilateral officials committed to public health and international cooperation. Their aim was and remains to ensure that future health crises are addressed effectively and with proportionality, transparency, and respect for human dignity.
In this sense, IHRP arose not from hostility to public health, but from fidelity to its core principles.
Rebuilding International Health Governance ...
By Lucia Sinatra at Brownstone dot org.
You've probably seen the headlines: University College London (UCL) settled a massive lawsuit for £21 million with college students who got an inferior education due to Covid-19 pandemic closures. That's roughly $26 million in US dollars, with each of the 6,000+ students represented receiving about £3,270 (around $4,100). Meanwhile, in the US, Penn State—which had our largest settlement to date at $17 million—paid out just $236 per student. So why are British students receiving roughly 17 times more money than American students when learning disruptions were far more severe and longer-lasting in the US?
The answer lies in fundamental differences in how the UK and US law treat students. Put simply: UK students got Zoom learning and were compensated for overpaying. US students got Zoom learning with no legal pathway to get their partial refund.
The British students have a secret weapon that US students just don't have: the Consumer Rights Act 2015. It explicitly treats students as consumers and universities as businesses providing a service. Under this law, if you pay for a premium service but receive a basic service, you're entitled to a price reduction—period. The law says services must be performed with "reasonable care and skill," and if they are not, consumers are eligible to get their money back for the difference in value.
Importantly, the Consumer Rights Act overrides vague clauses of the "We can't be held responsible if something extraordinary happens" variety that institutions invoke to escape responsibility. This is precisely what happened in US cases: the universities used "reservation of rights" language buried in student handbooks, along with government lockdown orders, as valid defenses. In the UK, by contrast, consumer protection law says: nice try, but students are consumers, and you still owe them a refund.
The UK students made valid legal claims, the UK courts agreed, and the rest is precedent.
In the US, over 300 lawsuits were filed against 70+ US colleges and universities. Students alleged breach of contract and unjust enrichment—basically arguing they were promised in-person education, didn't get it, and deserved a partial refund.
Only it is not that simple in the US.
While the US has consumer protection laws at both the federal level (the FTC Act) and the state level (UDAP laws in every state), they don't specifically apply to education the way the UK's Consumer Rights Act does.
Some college students did try including consumer protection claims in their lawsuits—particularly in California, which has strong consumer protection statutes. USC's lawsuit, for example, included violations of California's Business & Professions Code. But these claims were always secondary to the breach of contract arguments. Why? Because successful student claims under US consumer protection laws simply don't exist.
US lawsuits did not and will likely never result in UK-level settlements because judges refuse to assess educational quality, and they recognize "It is not our fault" defenses. US courts are extremely reluctant to assess the quality of education to determine if students got what they paid for academically. In other words, they don't want to be in the business of deciding whether your online chemistry class was as good as your in-person class. US courts also give enormous weight to "It is not our fault" defenses. Universities argued, in effect: the pandemic was extraordinary, and since the government advised us to close, you can't hold us responsible for converting to an online learning model.
So where do US college students stand? Many of the early cases got dismissed outright with courts ruling that the students had no case. Others are still dragging on many years after they were filed, and some have settled.
As of today, about 30+ universities have settled—mostly to avoid the cost of continuing litigation but lest you think that these settlements caused a dent in the budgets of US colleges and univer...
By Wendy McElroy at Brownstone dot org.
On February 5, 2026, in the Canadian Parliament, Conservative MP Garnett Genuis tabled Bill C-260, which prohibits civil servants or others with authority from recommending assisted suicide to anyone who has not asked about it.
Genuis cited "examples such as Canadian Armed Forces veteran David Baltzer…who was offered MAiD by Veterans Affairs Canada, as well as Nicholas Bergeron, a 46-year-old man from Quebec who was not interested in a medically facilitated death, but was 'repeatedly' pushed towards the option by a social worker."
I can verify this government policy personally since a family member was encouraged without prompting to attend a seminar on how and why to kill himself.
Introduced in 2016, Medical Assistance in Dying (MAiD) is a federal program that can differ slightly from province to province. The core and constant concept: at the request of an eligible individual, the government administers death either by euthanasia through a lethal injection delivered by a clinician or by assisted suicide through self-administered medication that is facilitated by a clinician. An estimated 99% of MAiD cases involve euthanasia, not assisted suicide.
For one thing, the populous province of Quebec prohibits self-administration; in other provinces, health regions and care facilities perform only euthanasia or lean strongly in this direction. Perhaps government chose the acronym MAiD because Medical Euthanasia sounds jarring.
MAiD sets the extremely dangerous precedent of granting government the authority to kill an innocent person. The standard rebuttal to this argument is that the innocent person must request the "service" of suicide.
MAiD is not a uniquely Canadian issue. State-assisted suicide has spread quickly across the Western world. Currently (February 2026), over a dozen American states have legalized it in some form. In the UK, the Terminally Ill Adults Bill is at the Committee Stage in Parliament where it reportedly has 1,227 proposed amendments.
Some regions in Australia are also drawing up programs. The list of nations offering State-assisted suicide or euthanasia scrolls on and on, including Switzerland, the Netherlands, Belgium, Spain, Portugal, Luxembourg, Austria, New Zealand…The same concerns and debates surrounding MAiD bear directly on these other programs, especially as MAiD is often referenced as a model or as a cautionary tale.
I view MAiD as a cautionary tale.
Medical personnel may have religious or other ethical objections to administering MAiD. Perhaps they view euthanasia as a violation of the Hippocratic Oath, which states, "First, Do No Harm." For many, these four words form the backbone of medical ethics. Canada does not force doctors or nurse practitioners to administer MAiD, but the Canadian Association of MAiD Assessors and Providers (CAMAP) explains that "holding a conscientious objection to MAiD does not negate these obligations.
Rather, it activates alternative duties to discuss the objection with the patient and to refer or transfer the care of the patient to a non-objecting clinician or other effective information-providing and access-facilitating resource." This forces practitioners to participate in the MAiD system to which they may strenuously object. Equally, some taxpayers may consider MAiD to be a form of murder that is covered by tax-funded health care. They may be as repulsed by having to pay for MAiD as many pro-life advocates are by having to finance abortions.
All assisted-suicide nations will confront certain practical questions; for example, all programs need to answer "what constitutes consent, and how is it documented?"
A sketch of how these general practical problems surfaced in Canada gives insight.
The original 2016 legislation (Bill C-14) provided safeguards to ensure applicants were eligible for MAiD. An amendment in 2021 (Bill C-7) established a two-track system of qualifications: Track 1 and Track 2. What is now called T...
By Alan Cassels at Brownstone dot org.
Are doctors so crushed by busywork that they don't get much time to actually help people? If you read no further, that's the crux of my argument.
Friends ask if I have a family doctor. I admit that Dr. C. has been my doctor for two decades. He inherited me after my old school doc retired. While they think I'm lucky, frankly he's a peripheral person in my life.
Over 20 years I've seen him maybe once a year in very short consultations (usually for an X-ray requisition to see if I broke something after falling off my bike). In each visit I have noticed a certain tendency: he wants to give me a lot more than I'm into. He wonders about my cholesterol or my blood sugars, a colon test, a prostate check, or a flu shot. I'm polite. Each time I say, I'll look into those things and get back to him.
I never do. Why? Because I've already looked into those things and there's basically nothing of interest there. I'm a healthy, fit, 60ish guy who has spent 30 years studying the value of medical technologies, pharmaceuticals, and screening tests, and the preventative prizes he's offering are theoretically fine, but in my view of things they are little more than meddling ways to turn healthy people into patients. Sure, call me a skeptic, but this sort of busywork is unlikely to contribute to the length and quality of my life. I've read most of the big studies of the major classes of pharmaceuticals and parsed the evidence on medical screening, enough to have written books on this stuff. I'm okay with refusing medicine I don't need.
Like most doctors, however, he's just being proactive, seeking out signs of disease before it might hurt me. I get that. But it has me thinking: where does he find time to help people who are actually sick?
Here's the blunt truth for health policymakers and others overzealous about prevention: if our doctors are overly occupied delivering low-value prevention in healthy people, they're not going to be there for the genuinely sick. That's not callousness. It's basic resource allocation informed by evidence about benefits, harms, and opportunity costs.
Large trials and systematic reviews have repeatedly shown that most screening tests and preventive prescriptions yield marginal benefits for otherwise healthy individuals, while often introducing real harms. Screening that seems sensible on paper can lead to false positives, cascades of further testing, overdiagnosis, anxiety, and procedures that don't improve — and sometimes worsen — the quality or length of our lives. Every drug comes with harm of some kind. Taking your chances with those harms if you are seriously in need, sure. But what if you're already otherwise healthy?
Drugs prescribed for healthy people frequently have tiny benefits. Lower your cholesterol? Sure, if you think a 2% reduction in the risk of a heart attack for swallowing a daily pill for 10 years (and the possible increased risk of muscle weakening that comes with it) is worth it. An osteoporosis drug that produces a 1% reduction in the risk of a hip fracture? Then there is the problem of overdrugging older people, a particularly common form of cruelty toward the elderly that results in a high rate of hospitalizations and deaths. Millions of otherwise healthy people get labeled "at risk," exposed to drug adverse effects, and end up wasting our doctors' time (and our health care dollars) that could be devoted to acute problems.
Like most doctors, Dr. C defaults to "prevention" because it's neat, feels proactive, and aligns with performance metrics and billing incentives working in a system that rewards doing more rather than doing what's most necessary. But is his time being stolen away from more urgent cases: the frail patient with multiple things going wrong at the same time, the person with new, unexplained symptoms, or the caregiver needing complex coordination for Mom who is failing fast? For those moments when we need experienced clinical judgment...
By Renaud Beauchard at Brownstone dot org.
Some books explain events, and others explain the world in which events become possible. Jacob Siegel's The Information State: Politics in the Age of Total Control (Henry Holt, March 2026) belongs firmly to the second category. A former US Army infantry and intelligence officer who served in both Iraq and Afghanistan, Siegel is not a theorist who stumbled upon power. He watched it operate, up close, against living populations.
That experience planted the seed for his landmark 2023 essay in Tablet magazine, "A Guide to Understanding the Hoax of the Century," which was immediately recognized by some of the sharpest minds of our moment — N.S. Lyons, Matthew Crawford, Matt Taibbi, Walter Kirn, among others — as something rare: a genuinely illuminating text. The book that has grown from it is not merely an expansion. It is the definitive account of how liberal democracy, understood as government by consent, was quietly displaced by what Siegel calls the information state.
What is the information state? It is a regime that governs not through legislatures or courts or votes, but through the invisible digital architecture that now mediates nearly every dimension of public life. Siegel's definition traces an evolution: "a state organized on the principle that it exists to protect the sovereign rights of individuals" is replaced by "a digital leviathan that wields power through opaque algorithms and the manipulation of digital swarms."
The Foucauldian resonance is deliberate and precise. This is governmentality in the strict sense, a rationality of rule that targets conduct rather than territory, that operates through security mechanisms and the management of populations rather than through the old instruments of force and law, blurring the distinction between the two. Its goal, Siegel insists, was never simply to censor, never merely to oppress. It was to rule. The kind of brazen censorship we observed during the Biden era and that is so tempting to our warring rulers again is not a bug; it is a feature of the new normal.
What gives Siegel's thesis its particular force is the paradox at its center. The great ills the information state claims to remedy — disinformation above all — are self-referential products of the surveillance-and-attention-based internet upon which the state now depends for its very operation. The machine generates the pathology it then offers to cure. As Siegel puts it with characteristic precision, the politicians loudest in condemning platforms like Facebook or Twitter do not take the obvious step of seeking to make them less powerful.
Their aim is not to reform or rebuild the repressive infrastructure of the internet, only to make it serve their own interests. Anyone who has read Jacques Ellul will recognize the pattern immediately. In an endless vicious circle, "Technique" keeps expanding to solve the problems created by its own prior expansion. What had appeared in the 1990s as the emancipatory promise of limitless digital communication had quietly become, by 2016, the medium through which a new class of rulers managed the informational environment of their subjects.
The book's historical architecture is ambitious, and it is here that Siegel distinguishes himself most sharply from mere polemicists without ever sounding conspiratorial. He traces the genealogy of the information state across five acts, beginning far earlier than most observers imagine. The technocratic seed was planted by Francis Bacon's Promethean dream of extending human dominion over nature, a vision that married scientific empiricism to political will, and that dismissed classical contemplation as, in Bacon's own phrase, "the boyhood of knowledge."
From Bacon, the thread runs to Jean-Baptiste Colbert, Louis XIV's most trusted minister and weapon against the Nobility of the sword, who married humanist dreams of universal libraries to the accounting practices of Europe's merchant houses and pioneered, in ...
By International Health Reform Project at Brownstone dot org.
[This report of the International Health Reform Project is more than a year in preparation. The full policy report and technical reports are embedded below this foreword and executive summary. The policy report is also available from Amazon in physical and digital forms. The IHRP is sponsored by Brownstone Institute, which had no involvement in forming contents and conclusions.]
International cooperation on health is a widely accepted global good. Capacity building and development assistance reduce historic health inequalities and, as a result, strengthen economies. Management of cross-border infectious disease threats is best done through joint surveillance, data sharing, and response.
Collaboration on norms and standards provides efficiencies and facilitates trade in health products. However, the interaction between disease, the environment, and human populations is complex, and threats are heterogeneous in their effects and gravity. Collaboration must therefore take such variability into account, with decision-making ultimately based around those affected.
Experience has demonstrated that international health cooperation can, when poorly governed, undermine trust, distort priorities, and produce significant unintended harm. Recent trends of centralized decision-making, emergency exceptionalism, and donor-driven agendas, exemplified during the Covid-19 response, displaced proportionality, local context, and established public-health ethics. These failures revealed structural weaknesses rather than temporary lapses.
At the same time, cooperation in public health also requires an understanding of the sovereignty and equality of individuals, and of the states that represent them – an understanding that underpins the United Nations itself. Thus, any institution tasked with managing health cooperation must be based on this understanding and be fully subject to the states it is intended to serve.
It should surprise no one that, after nearly 80 years of existence in a greatly changed world, the World Health Organization (WHO) is perceived by many to have drifted from its original model. Fundamental shifts in its funding base, and now the exit of its largest state funder, present both an opportunity and an urgency to reassess the optimal way in which states should work together to serve the health needs of their populations, applying the fundamental principles on which public health should be based to a greatly changed and evolving world.
WHO and the state of international health cooperation
The WHO constitution, signed in 1946 by 51 states then comprising the United Nations, had little input from most current African and Asian states. Its governing body, the World Health Assembly, gradually expanded as states broke from colonialism or foreign mandates to achieve sovereignty.
Defining health in its constitution as "a state of complete physical, mental, and social well-being and not merely the absence of disease or infirmity," the WHO took on a broad mandate including support for these less-resourced states, coordinating cross-border outbreak management, disease elimination, and the setting of international normative standards. It was hoped that the improvements in health and longevity that economic development had brought to wealthier countries could be accelerated in the lower income countries, reducing the inequalities resulting from colonialism and neglect.
The WHO's 150 country offices have formed a framework to strengthen local capacity and health systems. The organization is well known for successes such as smallpox eradication and early focus on the major drivers of well-being and longevity such as improved sanitation, nutrition, and access to basic healthcare. Major programmes in tuberculosis, malaria, vaccination, and child health have set standards for disease management and reduced overall disease burdens. A global decline in infectious disease mortality, con...
By Joseph Varon at Brownstone dot org.
The sounds in my mother's room during her final days stood in stark contrast to those that have defined most of my professional experience. There were no ventilator alarms piercing the air every few minutes, no overhead announcements echoing down hospital corridors, no infusion pumps demanding attention in the middle of the night. There were no teams rushing through doors, pushing carts full of medications, no physicians frantically adjusting machines that were temporarily holding physiology together, no organized chaos that defines the modern intensive care unit. Instead, there was quiet.
For decades in intensive care units, where noise signifies action and action equates to survival, quietness has felt unsettling. Intensive care medicine depends on urgency, real-time monitoring, and rapid decision-making to prevent death. I have lived my professional life in that environment. But in that room, I was not the physician. I was a son. And now, as I write this, I am a son whose mother has died.
My mother did not die in an intensive care unit. She was not surrounded by machines, alarms, or artificial light. She died at home, in a room imbued with the quiet weight of memory. Decades of life were embedded in those walls, which had witnessed birthdays, conversations, laughter, arguments, and the countless ordinary moments that, in retrospect, constitute the true foundation of a life. A peripherally inserted central catheter (PICC) line rested in her arm, serving not as a symbol of escalation but as an instrument of compassion. Medications were given to relieve discomfort rather than to reverse disease. Nurses entered the room with calm, deliberate purpose rather than urgency. Their voices were soft, their movements measured. Their objective was not to save her life, but to honor it. There was no battle being fought. There was acceptance. And in that acceptance, there was dignity.
Around her, the people who loved her most gathered. Children. Grandchildren. Family members who had traveled from different places, not in panic, but in recognition that this moment, this final chapter, was one that mattered deeply.
Sometimes we spoke. Sometimes we sat in silence. Sometimes we simply held her hand.
There is a form of communication in those moments that medicine cannot teach or measure. It is neither physiological nor quantifiable, yet it is real.
Meanwhile, my phone would not stop. Dozens of calls. Hundreds of text messages. Colleagues from across the country. Students from years past. Friends, patients, acquaintances. All reaching out with genuine compassion. And almost every message carried the same underlying sentiment: "We are praying she improves." "We hope she pulls through." "Let us know what else can be done." I understood the intention behind every one of those messages. They were kind. They were sincere. They were deeply human. But they were also revealing.
Because what they reflected, collectively and unconsciously, was something we rarely acknowledge openly: We have become a culture that no longer knows how to accept death.
Over the past century, medicine has achieved extraordinary success. We have extended life expectancy, eradicated diseases, developed technologies that can temporarily replace failing organs, and established systems capable of sustaining biological function long after the body can no longer do so independently.
Ventilators can breathe for failing lungs. Dialysis machines can replace kidney function. Vasopressors can maintain blood pressure when the cardiovascular system collapses. Extracorporeal support can oxygenate blood outside the body. Artificial nutrition can sustain metabolism indefinitely.
These are remarkable achievements. However, these advancements have also fostered a dangerous illusion: the belief that death is optional, and that with sufficient intervention, escalation, and technological force, the inevitable can be indefinitely postponed. It cannot.
Every...
By Sonia Elijah at Brownstone dot org.
When the World Health Organization declared Covid-19 a pandemic on March 11, 2020, what I call 3/11, it unleashed not just a health response but a coordinated global reset. What began as "two weeks to flatten the curve" metastasized into the most sweeping peacetime curtailment of civil liberties in modern history: lockdowns, mandates, behavioral manipulation, censorship, and the rise of digital authoritarianism.
This excerpt from Chapter 16 of my new book 3/11 Viral Takeover lays bare how the Covid response became the pretext for normalizing government-directed censorship, throttling legitimate scientific debate, and entrenching state power over public discourse.
After 3/11, governments did not merely request moderation; they demanded it, backed by explicit threats of regulatory consequences. In the United States, the declassified "Twitter files" released between 2022 and 2023, along with the landmark Missouri v. Biden lawsuit, later known as Murthy v. Missouri, revealed a sustained multi-agency campaign of coercion that went far beyond polite suggestions.
Shortly after Elon Musk's acquisition of Twitter on October 27, 2022, he released internal documents, known as the "Twitter files," to journalists Matt Taibbi, Bari Weiss, Lee Fang, Michael Shellenberger, David Zweig, Alex Berenson, and Paul D. Thacker, exposing how federal agencies routinely flagged content for removal or suppression.
The files showed White House officials, including former Press Secretary Jen Psaki, directly pressuring Twitter to censor true posts about vaccine side effects, natural immunity, and lockdown harms. Government agencies, including the DHS, CDC, FBI, and the Cybersecurity and Infrastructure Security Agency (CISA), used specialized reporting mechanisms, such as Facebook's "Content Request System," to flag social media content for potential throttling, suppression, or removal, often under the umbrella of countering "mis-, dis-, and malinformation."
The files demonstrated that platforms complied not out of independent policy, but out of fear of antitrust action, Section 230 reform, or other regulatory retaliation.
The most consequential legal challenge was Missouri v. Biden, mentioned earlier. It was filed in May 2022 by the Attorneys General of Missouri and Louisiana, alleging that the Biden administration violated the First Amendment by coercing social media companies to suppress protected speech. The case expanded to include private plaintiffs, including Dr. Martin Kulldorff, Dr. Jay Bhattacharya, Dr. Aaron Kheriaty, and Jill Hines, co-authors and advocates of the Great Barrington Declaration.
Key revelations from discovery showed White House officials repeatedly berating Facebook and Twitter executives for not doing enough to censor vaccine-related content. Emails revealed direct pressure: "We are gravely concerned that your service is one of the top drivers of vaccine hesitancy – period."
In July 2021, President Biden publicly accused platforms like Facebook of "killing people" by allowing vaccine misinformation to spread. White House Communications Director Kate Bedingfield followed up by saying platforms "should be held accountable" and that the administration was "reviewing" Section 230 protections, which shield platforms from liability for user content.
This dynamic was laid bare even more starkly during Congressional testimony on the Twitter files and government weaponization. In the March 9, 2023 hearing of the House Judiciary Committee on the Weaponization of the Federal Government, journalists Matt Taibbi and Michael Shellenberger testified on the Twitter files revelations. Taibbi described the government's role as creating a "censorship-industrial complex," while Shellenberger detailed how federal agencies pressured platforms to suppress Covid-related dissent, including accurate information on vaccine side effects and origins. Shellenberger stated: "The Twitter Files show the government was dir...
By Brownstone Institute at Brownstone dot org.
On Tuesday, attorneys announced a "Consent Decree," which will put an end to the years-long litigation in Murthy v. Missouri (previously called Missouri v. Biden), which focused on government-induced social media censorship. While its proponents herald the settlement agreement as a victory for free speech, the details suggest that Leviathan has not lost this civilizational struggle. Its concessions are decorative, and the text implicitly suggests that the practices will largely continue.
The "victory" for free speech in this case is that the remaining defendants – the CDC, CISA, and the Surgeon General – agree not to "threaten Social-Media Companies with some form of punishment…unless they remove, delete, suppress, or reduce content" that contains "protected free speech." That is akin to a civilian signing an agreement not to steal his neighbor's car; it "prohibits" something that is already illegal under black-letter First Amendment law.
Free speech advocates, however, cannot even celebrate that as a "victory." The agreement not to bludgeon social media companies into imposing state censorship only lasts "for a period of 10 years," per the terms of the agreement. After that, the agreement implies, CISA can return to its practice of "switchboarding," in which it dictated which posts should be banned from social media.
Further, the "restriction" only applies to three government agencies; the settlement does not apply to similar assaults from any other government group (including DHS, the CIA, the FBI, or the White House).
Moreover, the only people who can enforce the terms are the five remaining plaintiffs, as the agreement is "enforceable only by the Parties." If government warhawks coerce platforms to ban critics of the Iran War, this "Decree" will have no effect.
The supposed triumphs lack substance. The government agencies agree that "modern technology does not alter the Government's obligation to abide by the strictures of the First Amendment" and that "misinformation" labels do not render speech constitutionally unprotected. Excellent. But that is nothing more than a repetition of well-established law.
Unfortunately, this was the predictable endpoint of the litigation following the Supreme Court's dereliction of duty in June 2024, when it concocted procedural excuses to evade the controversy of the indisputable evidence of the Biden White House's censorship apparatus. The history of the Action reveals that the Supreme Court forfeited a generational opportunity to protect American free speech.
July 2023: The District Court Unravels the Censorship Hegemon
On July 4, 2023, District Court Judge Terry Doughty granted a preliminary injunction barring large swaths of the US government from colluding with social media companies to censor "content containing protected free speech." He described the allegations, if true, as "arguably [] the most massive attack against free speech in United States' history."
The order included a 155-page memorandum recounting the Biden administration's wide-ranging assaults on free expression. Provided it survives future digital purges, historians will one day look to it as a guide to the authoritarian madness that overtook the republic under the guise of "public health." The vast conspiracy spanned nearly every federal entity, including the White House, the Department of Justice, the Centers for Disease Control and Prevention, and the Intelligence Community.
That was the high-water mark of this case's victory for freedom.
The Regime Fights Back
The regime would not let an injunction usurp its power. Censorship had been integral to its governing strategy since 2020's crackdown on Covid dissidents and the later election campaign, as Joe Biden anointed Antony Blinken Secretary of State in return for him arranging for the CIA to thwart the Hunter Biden laptop scandal. Once in office, the Biden administration had unprecedented censorship aspirations, inclu...
By Brownstone Institute at Brownstone dot org.
You recall how the Covid lockdowns began. It was a soft and slow drumbeat that began in late January 2020, with growing amounts of panic and a faster tempo, increasing for several weeks. The US President and the UK Prime Minister resisted extreme reactions. Most governments did and so did most public health authorities.
The drum pounding became earsplitting in late February. Faced with an incredible barrage, finally Boris Johnson and Donald Trump gave in. They got out in front of the problem and lowered the boom: stay home, essential/unessential, no flights, no parties, stop your consumerist ways. Just sit alone and be sad. Both came to regret this choice but, by then, others were in charge.
The experts and institutions were everywhere, seizing the moment. The CCP, WHO, CDC, Imperial College London, Fauci, Birx, CNN/NYT/MSNBC, and on it went, everyone telling us the same thing daily. Those who asked questions were shouted down, shamed, throttled, cancelled, deleted. It felt like we were surrounded on all sides by lies and liars, marionettes and mushbrains, sycophants and spooks.
Six years later, and nearly to the day, this new attempted lockdown seems to be going the same way, concerning not infectious disease but energy use. Isn't it remarkable how the officially recommended methods of managing these completely different realms have so much in common? Both come down to restricting your liberty, rationing your consumption, redirecting your attention, and shouting down critics.
The Iran War kicked off the price spike, but it was uncanny how quickly a machinery was put in place to instruct everyone on what to do. The panic about how to respond is intensifying. The crisis is without precedent, they say. We have to try new approaches, dramatic ones.
Suddenly, this institution called the International Energy Agency holds new prominence in world media. Founded in 1974, it is an NGO associated with the OECD. It has no hard power, only soft power, like the World Health Organization, with which the IEA shares a similarly authoritative branding.
There is a new Fauci too. The head of the IEA is the highly decorated and universally praised Dr. Fatih Birol. Though he has never worked in industry, any more than Fauci had seen patients in decades, Dr. Birol is said to be the world's top expert, and he works closely with China on its supposed "energy transition." Indeed, sporting an honorary doctorate from Imperial College London, he has been a member of the Chinese Academy of Engineering since 2013.
Concerning the release of new energy reserves, Birol is dismissive: "supply-side measures alone cannot fully offset the scale of the disruption."
Remarkable isn't it? New script, same play, new actors for the same roles, overlapping protocols, nearly identical tempo of acceleration and dynamic of acoustics in the media. Around the world, countries are imposing price caps, consumption rationing, indoor temperature controls, and shorter work weeks as a prelude to full-on stay home orders. They haven't come to the US yet but they are spreading in Europe and the UK, as people panic about prices.
Clearly, they say, we need to flatten the curve once again. Temporarily. Just until we get the problem under control. We just need to buy time. After all, we've never dealt with anything like this. Clearly the long-term solution, they say, is a full switch to "renewables" but that cannot happen all at once.
Inspired by the manner in which governments were able to control communication and people during the Covid crisis, the IEA advises the following:
1. Work from home where possible. We'll be back to languishing at home and consuming entertainment through laptops. IEA comments: "Displaces oil use from commuting, particularly where jobs are suitable for remote work."
2. Reduce highway speed limits by at least 10 km/h (6-7 miles per hour), which is really nothing more than a method of creating annoyance. The IEA says "lo...
By Peter C. Gøtzsche at Brownstone dot org.
On 16 March, federal judge Brian Murphy blocked the US government from making sweeping changes to the US childhood immunisation schedule, "in a blow to Health Secretary Robert F Kennedy Jr's agenda," as the BBC expressed it.
The American Academy of Pediatrics and other large medical groups had sued, saying Kennedy's changes violated federal law.
The BBC calls them respected medical groups, which they are not, as illustrated by the hepatitis B vaccine controversy. On 5 December 2025, the Advisory Committee on Immunization Practices (ACIP) at the Centers for Disease Control and Prevention (CDC) ended the recommendation that all newborns in the United States receive a hepatitis B shot at birth. Thereafter, the birth dose was recommended only if the mother had tested positive for the virus or if her infection status was unknown.
The change was very rational, and, as in Western Europe, where only Portugal recommends a universal birth dose, it would seem difficult to argue against it. But the media did and failed us badly. Two days after the vote, I downloaded news stories from 14 major media outlets, and they were all very negative.
The media gave organisations undue prominence without ever considering whether they were impartial. They urged people to look to "independent recommendations," e.g. from the American Medical Association and the American Academy of Pediatrics, for "science-based advice."
I argued that it was advice based on money. The Academy would continue to support the birth dose of the vaccine but all the journalists forgot to say that it receives many millions of dollars from vaccine manufacturers and other drug companies.
Judge Murphy also suspended Kennedy's appointments to ACIP. The BBC argued that many of the panel members were vaccine sceptics and noted that "Kennedy was a longtime antivaccine activist before joining President Donald Trump's administration."
This is so typical of irresponsible journalists. They never investigate whether Kennedy's reforms are prudent and evidence-based but use ad hominem arguments to kill them. It is so low, and it does not further rational healthcare; it impedes it. I have described in detail how the coverage of Kennedy's vaccine reforms in the BMJ, a major medical journal, amounts to character assassination. It is just mind-blowing that a medical journal would do this in a consecutive sample of 33 articles.
It is also false that the members of Kennedy's new vaccine panel at the CDC are vaccine sceptics. I know several of them personally, and they are highly qualified researchers who do not have the financial conflicts of interest of the old panel, which I found to be corrupt: it rubber-stamped any proposal that came forward, no matter how idiotic it was.
A spokesperson from the Department of Health and Human Services said the agency "looks forward to this judge's decision being overturned just like his other attempts to keep the Trump administration from governing."
The medical groups that brought the lawsuit lauded the decision, including the American Medical Association, the largest US professional organisation for doctors, which called it "an important step toward protecting the health of Americans, particularly children."
"Follow the money" is the best advice I can give to anyone with an interest in healthcare, and in the US virtually everything has to do with money. The American Medical Association is heavily corrupted by industry money.
Why on earth could it be a problem that Kennedy reduced the huge number of recommended vaccines in the US so that the vaccine schedule became similar to the one we have in my country, Denmark, and in many other European countries? As I have demonstrated, the reduced US childhood vaccination schedule was systematically denigrated in the media, although it was a rational and evidence-based decision.
It is possible that there are some technicalities, "procedural requirements," that need to be addressed. Judge Murphy pointed these o...
By Jim Haslam at Brownstone dot org.
Disclaimer: If Covid-19 were linked to animal vaccine research, it would be an unintended consequence, though the institutional response was anything but accidental.
The new revelation that America's top coronavirus scientist, Dr. Ralph Baric of the University of North Carolina (UNC), worked with the intelligence agencies in the lead-up to the Covid-19 pandemic significantly raises the likelihood that Baric is the creator of SARS-CoV-2, the virus behind the pandemic.
Yet the evidence for and against this hypothesis remains incomplete because the US government is engaged in an ongoing coverup of key information. Regardless of the government's willingness to be forthcoming, Baric himself could shed copious light on a matter of major public and scientific importance by making available his lab materials from the period leading up to the pandemic.
There is strong evidence backing the following key points:
1. Baric's lab had the technical ability (reverse genetics systems, chimeric spike protein, infectious clone production) to build viruses similar to SARS-CoV-2.
2. The 2018 DEFUSE proposal to the Defense Advanced Research Projects Agency (DARPA), led by Baric, explicitly outlined laboratory manipulations capable of producing a SARS-CoV-2–like virus.
3. Although DARPA declined to fund DEFUSE, most team members later received similar funding through other grants from the National Institutes of Health (NIH).
4. US intelligence agencies (including CIA and ODNI) consulted Baric and other experts from 2015 onwards and even ran pandemic war games (e.g., Event 201, Crimson Contagion) just before the pandemic. The CIA now assesses, albeit with low confidence, that a lab-related incident in China is more likely than a purely natural origin.
5. This new assessment is consistent with the "lab-leak" hypothesis that Baric created the virus and "provided" it to the Wuhan Institute of Virology (WIV) for experiments on "wild-caught" Chinese bats.
6. Early in the pandemic, Baric omitted mention of the furin cleavage site in his intelligence briefing. He later testified that he had seen it and that the idea of inserting such a site "was clearly mine."
7. SARS-CoV-2 remains the only known SARS-like virus (sarbecovirus) with such a furin cleavage site (FCS), which significantly enhances infectivity and transmissibility.
One of us (Haslam) has set forth the most detailed and likely hypothesis regarding the origin of the pandemic, in the book COVID-19: Mystery Solved: It leaked from a Wuhan lab but it's not Chinese junk (2024). No information has come to light that challenges or refutes the following sequence of events, as hypothesized in the book:
Baric's lab in North Carolina created a chimeric SARS-like virus (SARS-CoV-2 or its immediate progenitor called HKU3-Smix) using DEFUSE-style methods.
The proposed novel virus (HKU3-Smix) differed from SARS-CoV-1 by 25%; SARS-CoV-2's spike differed by 24.7%. Baric later testified, "We were within the range."
Baric used Egyptian fruit bats as a surrogate at Rocky Mountain Laboratories in Montana (a high-containment NIH facility that conducts DARPA research). His biotechnology was designed to be portable in a small tube and usable under BSL-2 conditions.
The constructed virus was then sent to WIV for further experiments, likely at a Chinese bat colony (Rhinolophus sinicus) near the BSL-4 facility.
The virus infected a lab worker, probably asymptomatically, and spread (initially undetected) in Wuhan from the WIV, triggering the pandemic.
Egyptian fruit bats (Rousettus aegyptiacus) have emerged as a non-natural reservoir host for SARS-CoV-2, and were referenced in DARPA DEFUSE.
Over the past year, we have debated this lab-leak hypothesis with the WHO Scientific Advisory Group for the Origins of Novel Pathogens (SAGO). That debate became public with their recent Nature paper. We reminded SAGO that they have not identified a progenitor virus with 99% genome similarity, nor have th...