Urban Institute’s Generative AI Policy

The Urban Institute’s greatest asset is its reputation for objective, independent work that reflects the highest ethical standards. These standards include the responsible use of emerging technologies—like generative artificial intelligence (AI)—that offer considerable benefits but also pose serious risks. 

Urban views generative AI as a tool to support our work, enhance productivity, and improve efficiency. We also recognize the risks of using generative AI, including unverifiable provenance, perpetuation of biases, inaccuracy, plagiarism, harmful content, and breaches of privacy and confidentiality. When using generative AI, Urban staff must follow strict guidelines to prevent and mitigate those risks.  

We are currently testing the use of select generative AI tools with a pilot group of roughly 50 Urban staff members. Only those in the pilot group are permitted to use generative AI.  

The following rules apply to all Urban staff during this pilot phase: 

  • Staff are prohibited from using generative AI tools that have not been approved by Urban’s Office of Technology and Data Science. 
  • Staff are prohibited from using generative AI to produce images. 
  • Staff are prohibited from entering personally identifying or confidential information into a generative AI tool if the disclosure of this information is prohibited by contract, data use agreement, or Urban’s Institutional Review Board (IRB). Exceptions may be granted by those responsible for these processes, such as the IRB, Urban’s Grants and Contracts office, or Urban’s Chief Information Officer. 

Members of the pilot group may use generative AI to produce written summaries of Urban’s work, provided the source material was written by one of the following: 

  • Urban staff exclusively; 
  • Urban staff and partners, when Urban is the prime grant or contract holder; or 
  • Urban staff and partners, when Urban is the subcontractor or subgrantee and we have disclosed the use of AI to the prime grant or contract holder. 

Members of the pilot group may also use generative AI to produce written content from external source material, provided that AI-generated content does not constitute a substantial portion of the final product. 

All published written content and external communications created with the assistance of generative AI must be screened with Urban’s approved plagiarism detection software. The project’s principal investigator must also review the content for inaccuracies, bias, and harmful content that the generative AI might have introduced. 

External content written with the assistance of generative AI must include a disclaimer, with a few exceptions. At this time, authors do not need to include an acknowledgment if they use generative AI to write code or if they use it as a word-processing tool to improve a product’s grammar, clarity, or writing style. 

Generative AI is a rapidly evolving technology, so we expect to update this policy as conditions warrant.