This report examines how artificial intelligence (AI) and automation are being discussed and deployed within state Medicaid programs. Using text analysis of nearly 900 publicly available documents from 45 states and insights from stakeholder interviews, the study identifies where and how AI is being used and which states are ahead of the curve, and offers recommendations for improving transparency and monitoring as AI systems continue to proliferate. This report was funded by the GE HealthCare Foundation. We are grateful to them and to all our funders, who make it possible for Urban to advance its mission.
Why This Matters
As states modernize Medicaid systems, AI and automation are increasingly embedded in eligibility, utilization management, and care coordination. However, the public record and reporting requirements for these tools are sparse and inconsistent, making it difficult to assess their implications for fairness, transparency, and efficacy for patients.
Policymakers need clearer information to ensure that efficiency gains do not come at the expense of trust in public programs.
- There is little systematic public documentation or reporting of AI usage by states and the managed care organizations (MCOs) they contract with. In most states, the researchers found, agencies publish little about the AI, other algorithms, or automation used in Medicaid program administration.
- In a smaller set of states, MCOs describe using AI for core functions like patient risk stratification and utilization management, but with limited detail. In a seven-state “deeper dive,” researchers found that MCO contracts do discuss AI use in various Medicaid functions, such as patient risk stratification and utilization management, typically framed in terms of improving efficiency. But the team found little to no information about how these tools work, their specific methodologies, how they are evaluated, or who oversees them.
- Mentions of generative AI (genAI) were nearly absent from the team’s analysis of documents as of late 2024, though its use may rise following recent legislation that makes significant changes to the Medicaid program. Medicaid stakeholders told researchers that genAI is still in an early consideration phase; the only genAI use case identified (an intelligent voice assistant) falls outside the scope of formal patient care.
Key Recommendations for State Medicaid Agencies
Based on our findings, we recommend the following policy actions for state Medicaid agencies:
- To support transparency, state Medicaid agencies can require MCOs to publish standardized information on their use of AI and automated systems, including the tools they use, their specific functions, and how members might interact with them.
- State Medicaid agencies can require evaluations of all AI and automated systems that affect patients, including a complete set of performance metrics, equity analyses of implementation effects across demographic subgroups, and ongoing monitoring over time.
- State Medicaid agencies can establish AI ethics committees and governance frameworks to guide responsible decisionmaking and deployment of AI and automated systems.
How We Did It
Researchers collected 895 publicly available Medicaid agency documents—including MCO contracts, annual and quality strategy reports, and equity plans—from 45 states between June and September 2024. Text analysis techniques identified the prevalence of AI- and automation-related terms and the specific excerpts of interest in each document. Qualitative coding was then used to categorize how states describe these technologies. The research team also conducted semistructured interviews with state officials, vendors, and advocates to contextualize findings.