The blog of the Urban Institute

How Cloud Computing Is Taking Our Tax Policy Analysis to New Heights (and Speeds)

March 26, 2019

When the Tax Cuts and Jobs Act (TCJA) was passed in 2017, it brought about the biggest changes to federal tax policy in decades and presented an equally enormous challenge for tax analysts—quickly calculating the effects of so many changes was impossible. Without this information, the public never had the chance to fully weigh the benefits and trade-offs of the new legislation.

The TCJA, for example, raised the standard deduction for single filers in 2018 from what would have been $6,500 under prior law to $12,000. But what if that number were slightly higher or lower? How would a different choice interact with the rest of the TCJA’s provisions, and how would it affect federal revenue and taxpayers’ average after-tax incomes? The combinations of alternatives are endless and presented too great a challenge for our modeling capabilities at the time.

But with our new cloud computing capabilities, we can tackle a once-impossible problem quickly and cost-effectively. Recently, the Urban Institute’s Office of Technology and Data Science helped the Urban-Brookings Tax Policy Center (TPC) migrate a version of our microsimulation model to the cloud. Cloud computing allows us to test thousands of tax policy variations in just a few hours.
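The post doesn’t describe how the cloud runs are orchestrated, but the basic fan-out pattern is simple: hand each policy variation to a separate worker and collect the results. Below is a minimal Python sketch of that pattern, assuming a hypothetical run_model wrapper around the microsimulation; it is not the actual TPC infrastructure.

```python
# Minimal sketch of fanning many policy scenarios out to parallel workers.
# run_model is a hypothetical stand-in for the real microsimulation.
from concurrent.futures import ProcessPoolExecutor


def run_model(params):
    """Run one microsimulation for a single set of policy parameters.

    In production this would invoke the tax model on a cloud worker and
    return its summary output; here it simply echoes the inputs.
    """
    # ... call the microsimulation here ...
    return {"params": params, "revenue_change": None}


def run_all(scenarios, max_workers=64):
    """Fan the scenarios out across workers and collect every result."""
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_model, scenarios))


if __name__ == "__main__":
    demo = [{"standard_deduction_single": d} for d in (11000, 12000, 13000)]
    print(run_all(demo, max_workers=2))
```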

TPC researchers used this new capability to test more than 9,000 alternative scenarios, which included variations on eight core elements of individual income tax law affected by the TCJA. They studied these plans’ effects on federal revenue and after-tax income for people in different income groups. TPC uncovered several important trends and trade-offs within the TCJA. The research team describes these findings in a new report and interactive data visualization.

In the following conversation, Robert McClelland, a senior fellow in the Tax Policy Center, and Jessica Kelly, the director of research programming in Urban’s Office of Technology and Data Science, discuss the benefits of this new technological capability.

What was your original goal for this cloud computing project?

McClelland: When we started this project in 2017, one of our goals was to be able to quickly generate alternatives to proposed tax policies. The TCJA just turned out to be a perfect case study for the project and this technology.

Why put the models in the cloud? What does that allow us to do that we couldn’t do before?

Kelly: With the cloud, you have access to essentially as much computing power as you need. Previously, we were doing runs on a single PC or an on-site server. Those are limited resources. The cloud lets you run everything faster and more cheaply than you could on even a powerful server. You only pay for resources when you use them, and as cloud technology quickly improves, it’s easy to take advantage of new upgrades.

McClelland: Doing 9,000 model runs using the old modeling framework would have been impossible. We could never do it. We could never write 9,000 separate parameter files, much less run the models and then collect all the output. The time and manpower required would have made that impracticable. So it’s not just speed. We’re doing something that was previously impossible.
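For readers curious what replaces 9,000 hand-written parameter files, the sketch below generates one file per scenario from the full set of combinations of a parameter grid. The policy elements and candidate values shown are placeholders for illustration, not the grid TPC actually used.

```python
# Illustrative sketch: enumerate every combination of a (placeholder) policy
# grid and write one parameter file per scenario instead of writing them by hand.
import itertools
import json
from pathlib import Path

# The real analysis varied eight elements of the individual income tax affected
# by the TCJA; these names and values are illustrative only.
grid = {
    "standard_deduction_single": [11000, 12000, 13000],
    "child_tax_credit": [1000, 1500, 2000],
    "top_marginal_rate": [0.37, 0.396],
    # ... the remaining policy elements would be listed here ...
}

out_dir = Path("scenarios")
out_dir.mkdir(exist_ok=True)

keys = list(grid)
for i, combo in enumerate(itertools.product(*grid.values())):
    params = dict(zip(keys, combo))
    # One JSON parameter file per scenario, generated rather than hand-written.
    (out_dir / f"scenario_{i:05d}.json").write_text(json.dumps(params, indent=2))
```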

So now you have the capability to do thousands of runs, and you used it to analyze thousands of alternatives to the TCJA. What did you learn, and what does the data interactive show?

McClelland: The interactive shows a scatterplot of all 9,216 plans we looked at and describes some of the results that we found in our research paper regarding effects of alternative plans on tax filers in different income quintiles. We learned, for example, that the child tax credit is of paramount importance to families with children in the bottom 20 percent of incomes (or the bottom quintile). Now, you might’ve guessed such an outcome, but it’s visually striking to see the results on the screen. Similarly, when we looked at families who have incomes in the top 1 percent, income tax rates are paramount.

We also show another interesting quality of the TCJA, at least in terms of the parts of the individual income tax code we changed. Plans that would benefit families in the top quintile do not benefit families in the fourth quintile, third quintile, second quintile, or first quintile—not if you want to avoid losing more revenue than is already the case with the TCJA. We found lots of plans that benefited the first and the third quintiles, or second and third, or fourth and first. But if the plan benefits the fifth, it doesn’t benefit any of those other quintiles.

Typically, a microsimulation modeling output is a series of tables. We’ve turned a tabling exercise into something closer to a big data exercise, where people can now intuitively sort among thousands of options. One part of the digital feature allows people to make their own TCJA policy choices and visualize the results.

Who are you trying to inform with this work and why?

McClelland: We’re trying to inform the public so they have a better understanding of what happened when the TCJA passed. It passed rather quickly, and while there was a lot of discussion about the different components, it wasn’t possible to discuss all the differences and how they interact with each other. Now people are going to have that ability.

Supercomputing isn’t new, but social scientists don’t use it very often. Why is that?

McClelland: I think it’s becoming more popular, but historically, it hasn’t been used because we’ve relied on programs such as Stata that aren’t designed for this capability. Also, in my experience, the whole supercomputing environment is completely different from statistical software on a PC, so it requires a whole new way of thinking and a whole new skill set. We could never have done anything like this without the Technology and Data Science and data visualization teams.

Looking ahead, how might this new capability change some of your work?

McClelland: I’ve already written blog posts about possible changes to the TCJA, such as bringing back the personal exemption, which would be extremely costly. What other changes might we make to the tax code to offset that change so we don’t increase the budget deficit? That kind of task might have been difficult to perform in the past, but with this model, I could quickly run several hundred alternative scenarios. By looking at changes to the child tax credit and the standard deduction, I was able to find offsetting changes to increasing the personal exemption that led to only a very small increase in the budget deficit.

When you have this kind of processing power, you can work with the end goal in mind. Instead of pulling a bunch of policy levers and just watching what they do, from thousands of options you can pick the sets of policies that are most likely to give you the outcome you want.
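As a rough illustration of working backward from the goal, the hypothetical snippet below filters a table of scenario results for plans that restore the personal exemption while staying close to revenue neutral. The file, column names, and threshold are assumptions for the sketch, not TPC’s actual output format.

```python
# Hypothetical goal-oriented filter over a table of simulated plans.
import pandas as pd

# One row per simulated plan; these column names are assumptions, not TPC's schema.
results = pd.read_csv("scenario_results.csv")

# Keep plans that restore the personal exemption and stay roughly revenue
# neutral relative to the TCJA baseline, then rank by revenue impact.
candidates = results[
    (results["personal_exemption"] > 0)
    & (results["revenue_change_billions"].abs() < 5)
].sort_values("revenue_change_billions")

print(candidates.head())
```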

Kelly: We are changing the way we think about how a microsimulation is consumed. We’re developing some new microsimulation models, and we start such projects knowing that we are going to run the model thousands of times, so we make design decisions early to accommodate that process and don’t have to reengineer anything later. We’re also working on a forthcoming paper that shares what we’ve learned so other people can use it as a blueprint for their own models.

