This isn't an official website of the European Union

Brussels to the Bay: The State of International AI Governance and Safety


A recording of the livestream of this event is now available.

As AI development and deployment continue to advance at an unprecedented rate, the EU, the US, the UK, and several other countries around the world are working to ensure that powerful AI models are safe and that their benefits are shared equitably. They are committed to working together through a network of AI Safety Institutes (AISIs), which convened for the first time in San Francisco on 20-21 November to promote knowledge sharing and to work towards common approaches for the development and deployment of responsible AI. Together with civil society, academia, and industry, they will also help prepare for the Paris AI Action Summit in February 2025. This Brussels to the Bay event brought together AISI leaders and experts, who shared the main takeaways from the convening and reflected on the journey ahead.

The European Commission has also just published the first draft of the Code of Practice on General-Purpose AI models (GPAIs), developed under the EU AI Act. The EU AI Act is the first comprehensive legal framework governing AI in the democratic world. The much-anticipated Code of Practice represents a collaborative effort of nearly 1,000 representatives from AI companies, civil society, and academia, who responded to the European Commission's call to propose state-of-the-art and technically feasible solutions for implementing the EU AI Act's provisions on GPAIs, e.g. regarding transparency, systemic risk taxonomy, risk assessment and mitigation measures, as well as copyright-related aspects. Given the wide range of international stakeholders involved in its preparation, the Code will likely evolve into a global standard for GPAI oversight and governance.

On November 22, the EU Office in San Francisco hosted a Brussels to the Bay policy event on “The State of International AI Governance and Safety”.

At this Brussels to the Bay event, Director of the EU AI Office Lucilla Sioli provided a first-hand overview of the Code's first draft and the steps remaining before its finalization by April 2025. Once endorsed by the European Commission, the Code will provide a presumption of conformity for those GPAIs that adhere to its provisions by August 2025. Further technical work will follow, e.g. to facilitate compliance with requirements on detecting and labelling artificially generated or manipulated content.

How do these different regulatory and voluntary initiatives around the world come together? How can the international network of AISIs join forces to spread the benefits of the technology and enhance global prosperity and equity, while promoting safe AI development and deployment? How can AISIs promote common rather than fragmented approaches? And how can we ensure that AI governance is inclusive, with substantive involvement from civil society and academia?

KEY INFO

Date: Friday, November 22nd, 2024

Venue: EU Office in San Francisco - 1 Post Street, Suite 2300

Panel of the Brussels to the Bay event on AI Governance

Meet our speakers:

  • Lucilla Sioli, Director of the EU AI Office, European Commission
  • Senator Josh Becker, California State Senator, author of the California Artificial Intelligence Transparency Act (CAITA)
  • Stuart Russell, Distinguished Professor of Computer Science and holder of the Smith-Zadeh Chair in Engineering, University of California, Berkeley
  • Sarah Heck, Head of Policy Planning and Programs at Anthropic
  • Daniel Privitera, Founder and Executive Director of the KIRA Center (AI policy non-profit), Vice-Chair under the EU General-Purpose AI Code of Practice
  • Anne Josephine Flanagan, Vice President for Artificial Intelligence, Future of Privacy Forum [moderator]

Lucilla Sioli, Director of the EU AI Office, European Commission

Lucilla Sioli is the Director of the "EU AI Office" within Directorate-General CONNECT at the European Commission. She is responsible for the coordination of the European AI strategy, including the implementation of the AI Act and international collaboration in trustworthy AI and AI for good.
The directorate is also responsible for R&D&I activities in AI and for the implementation of the AI Innovation Package. Lucilla holds a PhD in economics from the University of Southampton (UK) and one from the Catholic University of Milan (Italy) and has been a civil servant with the European Commission since 1997.

Senator Josh Becker, California State Senator, author of the California Artificial Intelligence Transparency Act (CAITA)

Senator Josh Becker represents California’s 13th Senate District, covering most of San Mateo County and northern Santa Clara County. Senator Becker is the main author of three AI bills signed into law by Governor Newsom, namely the “Public Schools: Artificial Intelligence Working Group” act (SB-1288), the “AI Transparency Act” (SB-942), and the “Physicians Make Decisions Act” (SB-1120). He founded Full Circle, a Bay Area organization that supports non-profits in economic opportunity, education, health, and environmental sustainability. He has served on California’s Workforce Development Board and San Mateo County’s Child Care Partnership Council, and co-founded New Cycle Capital, focused on socially responsible businesses, as well as Lex Machina, a legal transparency platform. Senator Becker holds a JD/MBA from Stanford University, where he also co-founded the Stanford Board Fellows program, which prepares students to serve on non-profit boards early in their careers.

Stuart Russell, Distinguished Professor of Computer Science and holder of the Smith-Zadeh Chair in Engineering, University of California, Berkeley

Stuart Russell is a Professor of Computer Science at the University of California at Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI and the Kavli Center for Ethics, Science, and the Public. He is a recipient of the IJCAI Computers and Thought Award, the IJCAI Research Excellence Award, and the ACM Allen Newell Award. From 2012-14 he held the Chaire Blaise Pascal in Paris. In 2021 he received the OBE from Her Majesty Queen Elizabeth and gave the BBC Reith Lectures. He is an Honorary Fellow of Wadham College, Oxford, an Andrew Carnegie Fellow, an AI2050 Senior Fellow, and a Fellow of AAAI, ACM, and AAAS. His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in AI, used in over 1500 universities in 135 countries. His research covers a wide range of topics in artificial intelligence, with a current emphasis on the long-term future of artificial intelligence and its relation to humanity. He has developed a new global seismic monitoring system for the nuclear-test-ban treaty and is currently working to ban lethal autonomous weapons.

Sarah Heck, Head of Policy Planning and Programs at Anthropic

Sarah Heck is currently the Head of Policy Planning and Programs at Anthropic and a Founding Operator of Coalition Operators, an early-stage venture capital firm. Previously, she led entrepreneurship initiatives at Stripe Atlas, supporting global entrepreneurs in launching and scaling their businesses. Sarah also served as the Director for Global Engagement on the National Security Council, where she focused on global entrepreneurship policy, private sector engagement, and youth demographics.

Earlier in her career, she advised the Obama Foundation on international entrepreneurship and youth programs and held various roles at the U.S. Department of State, where she focused on public diplomacy, countering violent extremism, entrepreneurship, and the role of technology in diplomacy. Sarah holds degrees from Georgetown University and the University of Maryland. She is also a Truman National Security Project Fellow, a Council on Foreign Relations Term Member, and sits on the board of Young Professionals in Foreign Policy.

Daniel Privitera, Founder and Executive Director of the KIRA Center
Daniel Privitera is the Founder and Executive Director of the KIRA Center, an independent AI policy non-profit based in Berlin. He is the Lead Writer of the International Scientific Report on the Safety of Advanced AI, which is co-written by 75 international AI experts and supported by 30 leading AI countries, the UN, and the EU. He is a Vice Chair of the Technical Risk Mitigation working group under the EU General-Purpose AI Code of Practice.

Anne Josephine Flanagan, Vice President for Artificial Intelligence, Future of Privacy Forum (moderator)

Anne Josephine Flanagan serves as the Vice President for Artificial Intelligence at FPF, leading FPF’s new Center for AI, where she oversees projects on the data flows driving algorithmic systems and on the ethical and responsible development of AI products and services. Anne spent over a decade in the Irish government and EU institutions, where she developed technical policy positions and diplomatic strategies relating to EU legislation on telecoms, digital infrastructure, data, and approaches to AI governance. Since 2019, Anne has held senior roles in technology policy, including positions at the World Economic Forum and Meta, helping business leaders shape responsible and sustainable technology development. Anne holds a Masters in Economics and Political Science and in International Relations, as well as an MBA from Trinity College Dublin. She is a Member of the Board of Advisors of the Innovation Value Institute (IVI) and was recently named AI Leader of the Year 2024 by Women Leaders in Data and AI.

If you have any questions, please email us at [email protected].

Time: 12:00 pm - 3:00 pm