Wednesday, October 4, 2023
12:00 PM - 12:10 PM
Welcome
Emma Llansó, Center for Democracy and Technology
 
12:10 PM - 12:30 PM
Fireside Chat
Alan Davidson, National Telecommunications and Information Administration
Alexandra Givens, Center for Democracy and Technology
 
12:30 PM - 1:30 PM
The Nuts and Bolts (and Nuances) of Foundation Models
Pratik Joshi, Google DeepMind
Arvind Narayanan, Princeton University
Miranda Bogen
Rumman Chowdhury, Berkman Klein Center

In this session, our speakers will give an overview of “large language models” and their role in generative AI tools, including the basics of their technical design and the wide variety of uses for LLMs across industries. This panel will outline the tensions in developing, using, and regulating this technology, including challenges around scale, model transparency, and issues of bias in the training and design of these models. This session will introduce a number of the topics the rest of FOSO will explore more deeply. 

1:30 PM - 2:30 PM
Elections and Generative AI
Katie Harbath, Anchor Change
Renée DiResta, Stanford Internet Observatory
Josh Goldstein, Georgetown University Center for Security and Emerging Technology
Will Adler, Bipartisan Policy Center

With scores of elections worldwide slated for 2024, the potential role of generative AI in shaping campaigns and influencing outcomes is drawing significant scrutiny. This session will explain how generative AI tools are being used by both campaigns and malicious actors, what risks they pose for amplifying election disinformation efforts and voter suppression, and what mitigation tactics model developers, user-generated content platforms, elections administrators, and others can employ.

Thursday, October 5, 2023
12:00 PM - 12:10 PM
Welcome
Ash Kazaryan, Stand Together
 
12:00 PM - 1:00 PM
In Search of Best Practices: What Makes for Safe and Accountable Generative AI Models?
Elham Tabassi, National Institute of Standards and Technology (NIST)
Nicklas Lundblad, Google DeepMind
Dave Willner, Stanford
Emma Llansó, Center for Democracy and Technology

The introduction of generative AI tools to a broad public audience was quickly met with significant concerns—and calls for safeguards—from policymakers, human rights advocates, and the general public. Industry has begun to respond with initiatives ranging from individual company commitments to the launch of the Frontier Model Forum. But what exactly are “best practices” in this nascent field? In this session, experts will discuss the safety commitments and accountability efforts currently underway, and where there’s room for growth. What concrete interventions can be made by those creating foundation models, developing specific generative tools, and deploying those tools in different environments?

1:00 PM - 1:30 PM
Lightning Talks: The Promise and Perils of Generative AI
Sarabeth Berman, American Journalism Project
Kelley Szany, Illinois Holocaust Museum & Education Center, Skokie
Sal Khan, Khan Academy
Steve Lee, SkillUp Coalition
Matthew Gee, BrightHive

This session will spotlight experts using generative AI across a variety of industries, including art, medicine, literature, education, and political advocacy. Each speaker will give a short overview of their work in a lightning-talk format.
 

1:30 PM - 2:30 PM
Designing a Liability Framework for Generative AI
James Grimmelmann, Cornell Law School and Cornell Tech
Ellen Goodman, Rutgers Law School
Ari Cohn, TechFreedom
Samir Jain, Center for Democracy and Technology

Generative AI has many different use cases, with many different actors involved at the various stages of a tool’s creation and use. The liability framework for generative AI will play a large role in shaping how these tools are developed and deployed. This panel asks: who should be held responsible for different aspects of a generative AI tool? What do we want to incentivize, and what should we safeguard? What approaches currently exist or are being proposed?