- February 20, 2026
- A post I wrote on LinkedIn noting that many states are currently trapped with a single vendor for their integrated eligibility and enrollment systems and wondering if some sort of data interchange standard like FHIR would help improve this situation.
As many of you may know, the One Big Beautiful Bill Act (also known as HR.1) included new verification and payment provisions that have left states scrambling to develop new software and resolve enrollment errors. These include new compliance requirements for the 42 states (and DC) that have used Medicaid’s expansion provisions to provide an additional 20 million low-income Americans with Medicaid.
Starting on January 1st next year, all Medicaid beneficiaries in the expansion population will be required to demonstrate either that they have a valid exemption or that, for at least one month in every six, they have worked 80 hours (or earned the monetary equivalent) in a job or equivalent community engagement. Republican boosters have presented these changes as necessary to ensure that “29-year-old males sitting on their couches playing video games” aren’t exploiting Medicaid. However, those people are often already working, sometimes in multiple low-paying jobs, and it’s adults aged 50-64 (and mostly women) who are most likely to lose their medical coverage.
In the rhetoric of politicians, these programs perpetually need to be “reformed” through the addition of new rules and regulations. We call this means testing, because the new rules are supposed to ensure that benefits are only distributed to those whose means qualify them. Fraud is a valid concern with any public program, but too often means testing is just the means to deny services through the slow attrition of paperwork and meetings. In civic technology, we call this administrative burden. This is not an essay on that subject, but I strongly recommend you read more about it if this is your first time encountering the concept. Especially if you think the largest design challenge facing the federal government is that its sites need to be more like the Apple Store.
Unfortunately, these new rules have left many states in a difficult bind. The compressed timeframe means states have had only roughly a year and a half to implement the changes before they must go live, and the legislation allocated only a limited amount of funding for states to do so. The changes will involve ingesting new types of data from new sources, and some of the rules for these new regulations won’t be finalized until June of this year. And, of course, states must not break anything for existing users while they are implementing these new rules.
The senior architect in me knows the answer to this problem: use agile approaches to quickly build out and launch some sort of API service for verifying compliance. This service could be integrated with existing case-management systems and data sources, and could output files that could be re-imported back into those case-management systems. At first glance, it might seem better either to create a completely separate system or to make all modifications within existing systems. But a fully separate system poses challenges for beneficiaries, case workers, and auditors, who would have to maintain a parallel process for Medicaid expansion clients, while the existing systems are often overly complicated behemoths whose highly formalized update processes would take too long for our needs. A separate service hooked into existing processes seems to be the most effective way to deliver the functionality in the time available.
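To make that concrete, here’s a minimal sketch of what such a verification endpoint might look like, in Python with FastAPI. Everything here is illustrative: the field names, the request shape, and the simplified one-qualifying-month rule are my assumptions, not an actual CMS or state specification (remember, the real rules won’t be finalized until June).

```python
# Hypothetical sketch of a compliance-verification API. The schema and
# rule below are illustrative assumptions, not a CMS specification.
from datetime import date
from pydantic import BaseModel
from fastapi import FastAPI

app = FastAPI()

class WorkRecord(BaseModel):
    month: date          # first day of the month being reported
    hours_worked: float  # hours of work or equivalent community engagement

class VerificationRequest(BaseModel):
    beneficiary_id: str                # case-management system identifier
    exemption_code: str | None = None  # e.g., a disability or caregiver exemption
    work_history: list[WorkRecord]     # lookback window supplied by the caller

class VerificationResult(BaseModel):
    beneficiary_id: str
    compliant: bool
    reason: str

@app.post("/verify", response_model=VerificationResult)
def verify(req: VerificationRequest) -> VerificationResult:
    # A valid exemption satisfies the requirement outright.
    if req.exemption_code:
        return VerificationResult(beneficiary_id=req.beneficiary_id,
                                  compliant=True,
                                  reason=f"exempt ({req.exemption_code})")
    # Otherwise require at least one month meeting the 80-hour threshold,
    # a simplification of the statute's "80 hours at least one month
    # every six months" framing.
    if any(r.hours_worked >= 80 for r in req.work_history):
        return VerificationResult(beneficiary_id=req.beneficiary_id,
                                  compliant=True,
                                  reason="met 80-hour threshold")
    return VerificationResult(beneficiary_id=req.beneficiary_id,
                              compliant=False,
                              reason="no qualifying month in lookback window")
```

The point isn’t the fifteen lines of rule logic; it’s that a thin, well-documented service like this can sit beside the existing systems, take per-case or batch requests, and hand decisions back in whatever format those systems can re-import.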
This approach is heavily reliant on being able to import the data it needs and export its decisions, and that’s exactly where it runs into trouble: the data itself must be sourced from the same monolithic case-management systems we were trying to avoid touching by building a separate component. The house still wins in the end, and in half of all states, that house is Deloitte.
All of this makes me wonder if we can burn it all down with FHIR. Fast Healthcare Interoperability Resources is a standard for data formats and APIs that supports health data interchange, replacing ad hoc APIs and document-based approaches (basically, a lot of faxing). You might not have heard of it, but if you’ve used Apple Health to pull in medical results from a provider or Labcorp, you’ve used FHIR on the back end. It’s taken about 15 years to get to this point, but FHIR is now a mature standard, and it’s expected (and in some cases, legally mandated) that any medical software will support data interchange this way.
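If you’ve never seen FHIR in action, its RESTful shape is easy to show. Here’s a small Python sketch that searches for patients on the public HAPI FHIR test server (a community sandbox full of test data); the search parameters are standard FHIR, while the server and query values are just examples.

```python
# FHIR in miniature: resources live at predictable URLs, searches use
# standard parameters, and results come back as a JSON Bundle.
import requests

BASE = "https://hapi.fhir.org/baseR4"  # public HAPI test server

# Search for patients by family name; _count caps the page size.
resp = requests.get(f"{BASE}/Patient", params={"family": "Smith", "_count": 5})
resp.raise_for_status()
bundle = resp.json()  # searches return a Bundle resource

for entry in bundle.get("entry", []):
    patient = entry["resource"]
    name = (patient.get("name") or [{}])[0]
    print(patient["id"], name.get("family"))
```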
Why do I think this would be a great fit for Medicaid? Well, it mostly solves (more on that below) the question of how to connect systems, and a shared standard also levels the playing field for smaller vendors. There is a rich variety of open-source tools and libraries. Because the code is built for handling records with PII and PHI, I’m fairly confident that popular implementations handle privacy concerns well. And FHIR has a robust mechanism for working groups to define new standards and add extended metadata to existing fields, which has allowed the standard to expand its universe of Resources to include new concepts like quality metrics, care teams, and lab results.
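To illustrate that extension mechanism: any resource can carry extra fields under a namespaced URL without breaking consumers that don’t understand them. The work-requirement extension below is entirely hypothetical (nothing like it has been standardized), but it shows the shape such an addition would take.

```python
# A FHIR Patient resource carrying a made-up extension. The extension
# URL and value are hypothetical, purely to show the mechanism.
patient = {
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "extension": [
        {
            # Extensions are namespaced by URL, so independent working
            # groups can add fields without colliding with the base spec.
            "url": "https://example.org/fhir/StructureDefinition/work-requirement-status",
            "valueCode": "compliant",
        }
    ],
}
```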
More esoterically, I am fascinated by existing tools like Synthea, which lets organizations generate entire populations of synthetic patients so they can safely test their tools in development environments against data that reflects the size and scope of production. Here’s SyntheticMass, a virtual Massachusetts that you can just load into any system able to ingest FHIR records. Does any state or organization have something similar for Medicaid applications? Could an outside entity generate them in a format that states could load for testing? I doubt it.
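To give a sense of how low the barrier is, here’s a sketch of consuming Synthea output in Python. It assumes Synthea’s default FHIR export, one JSON Bundle per synthetic patient, and a hypothetical output directory; point the path at wherever your run wrote its files.

```python
# Tally the resource types across a directory of Synthea-generated
# FHIR Bundles. The path is whatever your Synthea run was configured
# to write to.
import json
from collections import Counter
from pathlib import Path

counts = Counter()
for bundle_file in Path("output/fhir").glob("*.json"):
    bundle = json.loads(bundle_file.read_text())
    for entry in bundle.get("entry", []):
        counts[entry["resource"]["resourceType"]] += 1

for resource_type, n in counts.most_common():
    print(f"{resource_type}: {n}")
```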
Of course, FHIR is not a panacea. The standard allows for partial implementations (and lets systems advertise their capabilities; more on that below), so a company can claim interoperability while imposing onerous pricing or restrictions on partners who want to use it (see some of the allegations in the lawsuits against Epic, for instance). Data interoperability also does not prevent data misuse by bad actors or fraud by participants. And FHIR is a comprehensive standard, which also makes it a pretty convoluted one to understand. There is no way it could be implemented before the HR.1 deadline, especially following a formal working-group process.
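That capabilities mechanism is worth seeing, because it’s also how you discover that an implementation is partial: every FHIR server exposes a CapabilityStatement at [base]/metadata describing which resources and interactions it actually supports. A quick sketch against the same public test server:

```python
# Fetch a server's CapabilityStatement and list what it supports.
import requests

resp = requests.get("https://hapi.fhir.org/baseR4/metadata",
                    headers={"Accept": "application/fhir+json"})
resp.raise_for_status()
capability = resp.json()

for rest in capability.get("rest", []):
    for resource in rest.get("resource", []):
        interactions = [i["code"] for i in resource.get("interaction", [])]
        print(resource["type"], interactions)
```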
But it’s better than what we have now. Could we expand FHIR to represent Medicaid applications? Better yet, would it make sense for representing the whole gamut of social safety net programs that current integrated eligibility systems handle (SNAP, WIC, heating assistance, and more, and that’s just in the US)? Should all of this live in FHIR itself, or should we instead consider a similar FHIR-shaped standard for eligibility and enrollment? If you are an expert in these things, I’d love your take! All I know is that we need some way to break up the consolidation of integrated eligibility providers, and true interoperability would help that process!
Postscript: On 1/29/2026, CMS released a list of vendors who will offer discounts on their products to states that need to implement HR.1 compliance. Scanning the list, you can see that several of them offer dedicated verification hubs, portals, or tools for things like getting data from Managed Care Organizations (MCOs). All of these will be doing some sort of data interchange. For instance, if you are a Deloitte customer and you get the Deloitte NextGen Verification Hub, it likely already has an API and particular ways in which it expects the data to be represented. The question is, will that product eventually switch to an open, documented protocol, or will many states be forced into buying products from the same vendor for both ends of the connection?