One year ago President Donald Trump promised to “love and cherish” Medicaid. Alas, his affection for the public insurance program was short-lived. The so-called One Big Beautiful Bill Act (OBBBA) he signed into law on July 4—the most sweeping health care legislation since Barack Obama’s 2010 Affordable Care Act (ACA)—slashes about $1 trillion in federal health care spending, mostly from Medicaid, over the coming decade. The spending reductions fund tax cuts for the wealthy and a surge in spending on immigration enforcement.
As an ICU physician at a safety-net hospital system that largely serves publicly insured, low-income communities north of Boston, I may see some of the effects of the law’s cuts firsthand. Around half of our system’s gross patient revenue comes from Medicaid, which covers more than 70 million Americans. What these cuts will mean for hospitals like mine is uncertain: some providers could face severe financial strain or even closure if Medicaid revenues plunge. But for Medicaid enrollees themselves, there is little doubt that the cuts will translate into inadequate care, increased debt, doctor visits avoided, prescriptions left unfilled, and chronic diseases left unmanaged, resulting in health crises and even, in some cases, preventable deaths. I might see the latter parts of this sequence in the ICU, where an uninsured patient could, for instance, arrive short of breath, with a sputtering heart, swollen brain, or failing kidneys, because of years of untreated high blood pressure.
The Congressional Budget Office estimates that the OBBBA’s welter of cuts will increase the number of Americans without health coverage by some 10 million. For instance, a provision that makes enrollment in Medicaid more cumbersome will leave approximately 400,000 more people without coverage by 2034. Another that forces participants to undergo a reassessment of their eligibility every six months rather than annually will separate roughly 700,000 from coverage. And one that restricts states’ ability to raise funds for Medicaid by taxing health care providers will leave 1.1 million more uninsured. But the law’s most punitive component—one rooted in the Elizabethan Poor Law tradition of separating the “deserving” from the “undeserving” poor—is the provision imposing work requirements that will compel nearly 20 million beneficiaries to regularly document time spent working (or a qualifying exemption). Precedents in Arkansas and Georgia show that such requirements mostly dump people from coverage by forcing them to clear onerous administrative hurdles—with significant cost to taxpayers, given the need for expensive new bureaucracies. All in all, a shockingly large number of Americans will find themselves affected: in a study published in the Annals of Internal Medicine last year, my colleagues David Himmelstein, Steffie Woolhandler, and I projected that, together, the Medicaid cuts included in a late version of the bill could lead to nearly two million patients losing their personal physician, 380,000 not getting mammograms, 1.2 million accruing additional medical debt, and 16,000 dying preventable deaths.
These cuts cannot be reversed fast enough. But whenever the political conditions shift, turning back the clock won’t suffice. Medicaid’s origins lie in a state-centric welfare model ideologically predicated on the same distinction between the “deserving” and the “undeserving” that underpins the Trump administration’s new work requirements. Since its inception, this model has constrained the program’s benefits and made it more vulnerable to attack from the political right. We need to restore Medicaid, and to improve it—but the best way to advance its mission in the long term would be to replace it with something better.
*
Medicaid was established by the same Civil Rights-era legislation that created Medicare, the federal health insurance program that covers nearly all the nation’s elderly. This legislation’s roots go back to the end of World War II. The reformers who helped craft the New Deal were drawn to the thought of creating a national health system on the social insurance model, which emerged in Europe in the late nineteenth century to alleviate the social and political pressure created by a rapidly expanding industrial working class. For reasons of political expediency, national health insurance had been left out of the landmark Social Security Act of 1935, but during the last years of the war some progressives returned to their unfinished business. In 1943 they introduced the Wagner-Murray-Dingell (WMD) Bill. Taking its cues from the famous “Beveridge Plan” published in Great Britain a year earlier, which proposed a flat tax to fund a vastly expanded social safety net, the bill envisioned an ambitious expansion of the US welfare state, including a national health insurance system.
The first version of WMD was drafted in part by the indefatigable social security expert Wilbur Cohen, who would go on to become a leading architect of Medicare and Medicaid over his decades-long career in government. But WMD never passed, and the social insurance model for health care it advanced—which would have provided coverage to most of the nation without means testing—fell victim to a redbaiting campaign led by the American Medical Association (AMA).1 As the Indiana Representative Forest Harness, who sat on a Congressional subcommittee launched by Republicans to investigate the Truman administration’s campaign, ominously intoned: “Communism, Fascism, Nazism, Socialism—all are alike in that they enforce a system of State Medicine.”
When WMD failed, Truman’s Federal Security Administrator, Oscar Ewing, tasked Cohen with drafting less ambitious courses of action. “One of the options I mentioned,” Cohen recalled in an essay from the mid-1970s, “was a plan for national health insurance solely limited to the aged”—the much narrower plan that became Medicare.2 In 1960 John F. Kennedy became the first Democratic presidential candidate to embrace the idea, although during his presidency the fiscally conservative Wilbur Mills, a Democrat from Arkansas who chaired the Ways and Means Committee, kept the draft legislation from coming to a vote. After a 1964 electoral sweep gave Democrats commanding control of Congress, Mills found himself outnumbered. According to the sociologist Jill Quadagno, not a single Medicare-supporting incumbent lost their race.3 A year later the legislation establishing Medicare and Medicaid was signed into law by President Lyndon B. Johnson as the capstone on his “war on poverty.”
The standard story about the legislation overwhelmingly focuses on Medicare, which was the source of effectively all controversy and debate at the time. In contrast, scholars have generally had relatively little to say about Medicaid, describing its origins as “obscure” and the program on the whole as an “afterthought” and a “sleeper provision.” But Medicaid does have an origin story of its own. While Medicare was a fully federal program with roots in the social insurance model, Medicaid emerged from the tradition of federally subsidized but state-controlled welfare programs, which in turn descend from centuries-old systems of localized poor relief. “The Medicaid program was based upon state implementation,” Cohen noted, “because welfare had been a state and local prerogative in the United States for more than 300 years, deriving its basic form from the Elizabethan Poor Law of 1601 and its subsequent variations.”
The Elizabethan Poor Law, as the scholar Melinda Cooper notes in her study Family Values (2017), dramatically reorganized poor relief in England, shifting responsibility from church authorities to a more systematic, tax-funded program run by local governments. It was at once “the first serious attempt to organize public relief on a national scale” and a system of coercion. As the legal scholar Nicole Huberfeld writes, the “deserving” poor (such as young children or widows) would receive aid, but “able-bodied” (or “undeserving”) applicants would be deemed shirkers and effectively imprisoned in workhouses. The American colonies adopted this categorization, which extended, Huberfeld explains, into “early federal efforts to improve the health of the poor” in the 1920s and 1930s. New Deal programs drew on both the social insurance and the Poor Law traditions. For instance, whereas the Social Security Act of 1935 provided nationalized old-age pensions (“social security”) at a federal and relatively universal level, its provision of cash welfare was administered by states, which retained significant control over eligibility and benefits.
Medicaid and its precedents built on the welfare model. In 1950, when Congress amended the Social Security Act, Cohen managed to insert a provision that, for the first time, allowed states to use federal dollars to pay medical providers for caring for some welfare recipients. This “vendor payment” system—the seed of Medicaid—was later expanded by a law called the Kerr-Mills Act. Named for its two sponsoring Southern Democrats, Wilbur Mills and Oklahoma Senator Robert Kerr, the bill passed with Cohen’s assistance in 1960. In some respects Cohen, Kerr, and Mills made for odd bedfellows: after all, the bill was a conservative counter to the ongoing progressive effort for more ambitious health care reform, which Cohen had long supported. But Kerr-Mills proved viable precisely because it drew support among conservatives by promising state control. As the political scientist Matthew Gritter argues in a history of the legislation, the bill was “a modest measure that allowed southern segregationist governments to have the power of implementation in the South.” The vendor payments it permitted on behalf of the low-income elderly were so limited that even Ronald Reagan—then the governor of California—and the AMA supported the program. On the eve of Medicare and Medicaid’s passage, Kerr-Mills covered a mere 2 percent of the country’s elderly population. This was Medicaid’s direct template.4
Because Medicaid owed so much to the welfare model, it was seen as a bulwark against national health insurance; this perception may even have been its genesis. As Cohen later recalled, in the lead-up to the 1965 legislation, Mills asked him how he would deflect worries that Medicare would be the “entering wedge” for a national universal health care system. “I suggested that if he included some plan to cover the key groups of poor people, he would have a possible answer to this criticism,” Cohen wrote in an essay from 1985. In his study The Politics of Medicare (1970), Theodore R. Marmor spelled out the implication more directly: “in the eyes of Wilbur Mills, [Medicaid] was yet another means of ‘building a fence’ around Medicare, by undercutting future demands to expand the social security insurance program to cover all income groups.” And so Mills tacked Medicaid onto Medicare at seemingly the last minute with little controversy.
Yet it would be a mistake to conclude from this that Medicaid was nothing more than a stopgap measure against the universalization of Medicare. Cohen’s suggestion to the conservative Mills could also be interpreted as savvy political salesmanship that helped advance both programs; Cohen had been advancing similar plans for years and saw himself as a supporter of universal national health insurance. But it does help explain why Medicaid engendered so little controversy at the time, and Medicare so much. Medicaid, unlike Medicare, was a program that states—particularly Southern states—could control. Rightly or wrongly, it was seen as less of a threat to the established, segregated medical order, which treated some people as second-class citizens.5
Early on, Medicaid covered mostly poor individuals who were “categorically” eligible based on participation in cash welfare programs. This included the aged, the blind, the disabled, and single mothers with young children—people who in the Elizabethan Poor Law tradition might be seen as “deserving.” But over time eligibility for Medicaid gradually became less connected to welfare, a process that unfolded unevenly across states. While Medicare, despite the hopes of some of its architects, never became a universal program, Medicaid grew far beyond their expectations, in large part to fill in the gaps. Remarkably, at the peak of enrollment in 2023, Medicaid came to cover 94 million Americans, almost 20 million more than Medicare. The biggest single shift occurred with the passage of the ACA, which expanded Medicaid to include all low-income adults: means testing remained, but all poor individuals could now enroll on the basis of their income regardless of their “category” of deservingness or state of residence, effectively federalizing eligibility criteria (at least until the Supreme Court made the expansion optional for states).6 As a result, a progressively larger share of disadvantaged Americans gained coverage; for the last few years the percentage uninsured has hovered near the (still substantial) low of 8 percent.
Medicaid expansion helped millions of Americans. Study after study has found that it increased access to—and use of—needed care and medications, reduced medical debt and financial strain for families, improved mental health, and saved thousands upon thousands of lives. Trump’s cuts will now undo many of these gains. But even without these cuts, central features of Medicaid—including means testing and state control—have built inequity and political weakness into its foundations.
*
“If you have seen one Medicaid program,” one wonkish expression of uncertain origin goes, “you have seen one Medicaid program.” The idea is that unlike traditional Medicare—which offers largely universal benefits, access to almost every doctor and hospital, and consistent eligibility standards from coast to coast—Medicaid varies dramatically from state to state.7 Some might see this as an advantage, allowing states to experiment with different benefits and make the program more responsive to local needs. But state control also opened the door to gross inequities, as Cohen himself recognized. “I strongly believe in the need to improve the Medicaid system,” he noted in an interview from the early 1980s, “because it’s very inequitable and doesn’t cover all the needy people in every state under the same conditions.”
Even today states differ greatly both in the share of their populations they cover and in the frugality of benefits they provide. My state of Massachusetts, for instance, has a relatively generous Medicaid program. In 2023 we spent more than $9,000 per enrollee, whereas Alabama, South Carolina, Georgia, Florida, and Nevada each spent less than $5,000. To reduce costs, states can restrict eligibility or benefits to harmful—even cruel—ends. Some states, for instance, impose arbitrary maximums on the number of prescription drugs some enrollees can fill in a month, regardless of their condition: certain adults in Texas are limited to only three. Many studies have documented the injurious effects of these restrictions on people with mental illness, HIV, and disabilities. Some states, meanwhile, impose petty out-of-pocket payments on extremely cash-poor Medicaid recipients. During Trump’s first term, his administration’s Centers for Medicare and Medicaid Services encouraged states to request waivers allowing them to enact work requirements that would weed out the supposedly undeserving from the Medicaid rolls. Arkansas’ work requirement caused 18,000 people to lose coverage within just seven months of implementation. Now, due to the One Big Beautiful Bill Act, that policy is the law of the land.
Medicaid has also long reimbursed providers at lower rates than other insurers, contributing to so-called informal segregation in health care provision.8 Such segregation takes place both across institutions and within them. My medical school, for instance, was affiliated with two hospitals a few minutes’ walk from each other along the East River. One was public (where Medicaid patients were much more likely to be seen) and one private (where the privately insured would more typically receive care). Meanwhile, as a recent study documented, many major academic medical centers maintain physically separate faculty practices and resident clinics, with the latter serving much larger shares of patients with Medicaid—findings that comport with my experiences during my internal medicine residency at a medical center in northern Manhattan. Such segregation by payer invariably contributes to segregation by class and race, since low-income and minority patients disproportionately rely on Medicaid.
In some cases the problem is not only where patients are seen but also whether they can be seen. One 2017 survey found that only 46 percent of dermatologists accept new Medicaid patients, for instance, compared to 98 percent who accept private insurance. (Primary care disparities are smaller but still considerable, at 76 percent and 97 percent, respectively.) But even those statistics understate the barriers to access. Some doctors who are formally contracted to see Medicaid patients in fact see few or even none: in a recent four-state study, some 16 percent of physicians (and 36 percent of psychiatrists) listed by Medicaid-managed care plans as “in network” didn’t see a single Medicaid patient in the course of an entire year. And some barriers are nearly impossible to detect unless you experience them as a patient—or as someone pretending to be one. In a “secret shopper” study published in 2014, for instance, study personnel who called physicians’ practices posing as prospective patients succeeded in getting an appointment 85 percent of the time if they claimed to have private insurance, but only 58 percent of the time if they said they had Medicaid, even though these practices accepted Medicaid on paper.
Worst of all, because Medicaid is a means-tested program, beneficiaries frequently “churn” in and out of coverage as their incomes rise or fall even slightly, or when they fail to complete sometimes onerous reenrollment paperwork. Nationwide, about one in five Medicaid beneficiaries will have a lapse in coverage over a two-year period. A study in Michigan found that three quarters of Medicaid beneficiaries suffered a gap in coverage over a decade. Churn is not a mere inconvenience. The life-saving benefits of modern medicine depend largely on thoughtful, continuous care from a dedicated primary care physician—a relationship that is all but impossible with on-again, off-again coverage. None of this is to say that things can’t get worse: under the One Big Beautiful Bill Act, these shortcomings will only be exacerbated.
*
Medicaid’s flaws, to be clear, are only some of the problems hardwired into US health care. For instance, despite Medicaid’s enormous growth, more than 25 million Americans, most of them low-income, are uninsured today, even before the One Big Beautiful Bill Act takes effect. Even those with insurance, meanwhile, face ever-higher out-of-pocket costs. In a recent study in JAMA Internal Medicine, colleagues and I analyzed a sample of American adults over a four-year period: about a quarter of them, we found, contended at least once in those forty-eight months with burdensome health costs, or went without necessary care altogether because they couldn’t afford it. If we could have tracked these individuals even longer, the share facing burdens would no doubt have continued to rise. Meanwhile, the escalating corporate takeover of the delivery of care—of our hospitals, our hospices, our nursing homes, and our clinics—siphons resources away from patient care and into the pockets of shareholders and executives.
To address these failings, as my colleagues and I have argued in these pages and elsewhere, we need nothing less than a return to the New Deal-era vision of universal health care that Cohen once helped draft—its current incarnation is called Medicare for All. Universal care would not only mitigate the inequities built into our current system, including for Medicaid participants, but also prove more resilient against political attack, for the simple reason that universal programs create buy-in across the socioeconomic spectrum. It was no accident that Republicans put Medicaid—not Medicare—in their crosshairs after Trump’s election: a politician who attacks Medicare will face the wrath of older Americans across the class divide, in red states and blue;9 politicians who cut Medicaid are “only” picking the pockets of the poor. This political reality also helps explain why universal health care systems have endured in other nations, even when parties that once opposed such systems come to power.
Medicaid remains the foundation of our health care safety net. It has been a literal lifeline for patients I have cared for throughout my career, covering many if not most of the critically ill people I treat today. My hospital would not stand without it. A health care program for the post-Trump era would need to start by rectifying the plunder of this vital resource. But it should also lay out a vision that offers something more. Even as a committed incrementalist, Wilbur Cohen never lost sight of the benefits of universal systems. In 1972 he faced off against the economist Milton Friedman in a debate entitled “Social Security: Universal or Selective?” Cohen, naturally, argued the universal case. “A program that deals only with the poor will end up being a poor program,” he noted, and it would also lack sufficient public support. For that reason, he went on, “one must try to find a way to link the interests of all classes in these programs.” That is no less true for health care today.


















