During the next few years, generative AI and other forms of artificial intelligence will transform the public sector: they will rapidly increase the productivity of knowledge work, expand the types of services governments can offer their citizens, and present broad regulatory and auditing challenges. This course covers the opportunities inherent in this technology and the challenges associated with it.
The goal of the course is to equip students entering government and related work to adopt AI responsibly, choosing and implementing tools in effective ways. It offers hands-on practice with prompt generation and direct information on the use of generative AI in government, and then focuses on seven principles for the responsible use of AI:
1) Risk assessment and management
2) Explainable AI and open systems
3) Reclaiming data rights for people
4) Confronting and questioning the bias inherent in data
5) Accountability in the private and public sectors
6) Organizational systems and structures
7) Creative friction: the organizational culture and practices that favor better outcomes
We look at each of these issues in light of the four logics of power affecting the future of AI: business, engineering, government, and social justice. We model personal and group processes to bring these issues safely to the surface, and learn a set of standards and guardrails (a “calculus of intentional risk”) that students can apply to their own work to help assess and avoid harm.
This course is set up as a seminar, conducted through dialogue. It is also structured around comprehensive group assignments: a feasible application of generative AI, a case study of a real-world government dilemma based on news reports and other sources, and a proposal for standards or guidelines.