<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:podcast="https://podcastindex.org/namespace/1.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0">
<channel>
  <atom:link href="https://feeds.cohostpodcasting.com/Jnq1C9cT" rel="self" title="MP3 Audio" type="application/rss+xml"/>
  <atom:link href="https://pubsubhubbub.appspot.com/" rel="hub" xmlns="http://www.w3.org/2005/Atom" />
  <generator>https://cohostpodcasting.com</generator>
  <title><![CDATA[RegulatingAI Podcast: Innovate Responsibly]]></title>
  <description><![CDATA[Welcome to the RegulatingAI Podcast: Innovate Responsibly, hosted by AI regulation expert Sanjay Puri. A pivotal leader at the intersection of technology, policy, and entrepreneurship, Sanjay explores the intricate landscape of artificial intelligence governance.

You can expect thought-provoking conversations with global leaders as they tackle the challenge of regulating AI without stifling innovation. With diverse perspectives from industry giants, government officials and civil liberty proponents, each episode explores key questions and actionable steps for creating a balanced AI-driven world.

Don't miss this essential guide to the future of AI governance, with a fresh episode available every week!
]]></description>
  <itunes:summary><![CDATA[Welcome to the RegulatingAI Podcast: Innovate Responsibly, hosted by AI regulation expert Sanjay Puri. A pivotal leader at the intersection of technology, policy, and entrepreneurship, Sanjay explores the intricate landscape of artificial intelligence governance.

You can expect thought-provoking conversations with global leaders as they tackle the challenge of regulating AI without stifling innovation. With diverse perspectives from industry giants, government officials and civil liberty proponents, each episode explores key questions and actionable steps for creating a balanced AI-driven world.

Don't miss this essential guide to the future of AI governance, with a fresh episode available every week!
]]></itunes:summary>
  <language>en</language>
  <copyright><![CDATA[Copyright 2025]]></copyright>
<podcast:guid>099ef565-66b9-47ee-8d35-103cba64c0cb</podcast:guid>
  <pubDate>Wed, 25 Oct 2023 12:03:51 -0400</pubDate>
  <lastBuildDate>Sat, 18 Apr 2026 14:40:06 -0400</lastBuildDate>
  <image>
    <link>https://regulatingai.org/podcast/</link>
    <title><![CDATA[RegulatingAI Podcast: Innovate Responsibly]]></title>
    <url>https://files.cohostpodcasting.com/quill-file-prod/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/cover-art/original_a304b68b6b4daa447a765e082e28922b.png</url>
  </image>
  <link>https://regulatingai.org/podcast/</link>
  <itunes:type>episodic</itunes:type>
  <itunes:author><![CDATA[Sanjay Puri]]></itunes:author>
  <itunes:explicit>false</itunes:explicit>
  <itunes:image href="https://files.cohostpodcasting.com/quill-file-prod/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/cover-art/original_a304b68b6b4daa447a765e082e28922b.png"/>
  <itunes:new-feed-url>https://feeds.cohostpodcasting.com/Jnq1C9cT</itunes:new-feed-url>
  <itunes:keywords><![CDATA[AIStandard,AISafety,AIRegulation]]></itunes:keywords>
  <itunes:owner>
    <itunes:name><![CDATA[Sanjay Puri]]></itunes:name>
    <itunes:email>sanjaypuri.podcast@gmail.com</itunes:email>
  </itunes:owner>
  <itunes:category text="Technology"/>
  <itunes:category text="Business">
    <itunes:category text="Entrepreneurship"/>
  </itunes:category>
<item>
  <guid isPermaLink="false"><![CDATA[aa3771ef-3b6b-4050-9ebb-1f4d975bf1b2]]></guid>
  <title><![CDATA[Deep Dive Into Armenia's AI Ecosystem | With H.E. Narek Mkrtchyan, Ambassador of Armenia to the USA]]></title>
  <description><![CDATA[<p><em>A historian. A former labor minister. Now Armenia's Ambassador to the United States.</em></p><p><em>Ambassador Narek Mkrtchyan joins Sanjay Puri on the Regulating AI Podcast to discuss Armenia's AI strategy, the US-Armenia semiconductor partnership, and what small nations can teach the world about governing AI.</em></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/f7da2429-df26-44ab-ab4a-3a117659ff05/f726177fb1.jpg" />
  <pubDate>Sat, 18 Apr 2026 01:33:53 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="36642806" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/f7da2429-df26-44ab-ab4a-3a117659ff05/episode.mp3" />
  <itunes:title><![CDATA[Deep Dive Into Armenia's AI Ecosystem | With H.E. Narek Mkrtchyan, Ambassador of Armenia to the USA]]></itunes:title>
  <itunes:duration>38:10</itunes:duration>
  <itunes:summary><![CDATA[<p><em>A historian. A former labor minister. Now Armenia's Ambassador to the United States.</em></p><p><em>Ambassador Narek Mkrtchyan joins Sanjay Puri on the Regulating AI Podcast to discuss Armenia's AI strategy, the US-Armenia semiconductor partnership, and what small nations can teach the world about governing AI.</em></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><em>A historian. A former labor minister. Now Armenia's Ambassador to the United States.</em></p><p><em>Ambassador Narek Mkrtchyan joins Sanjay Puri on the Regulating AI Podcast to discuss Armenia's AI strategy, the US-Armenia semiconductor partnership, and what small nations can teach the world about governing AI.</em></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[A historian. A former labor minister. Now Armenia's Ambassador to the United States. Ambassador Narek Mkrtchyan joins Sanjay Puri on the Regulating AI Podcast to discuss Armenia's AI strategy, the US-Armenia semiconductor partnership, and what...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>168</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[998f3f58-787a-4c4b-bb4a-94cdc35f7b38]]></guid>
  <title><![CDATA[Who Speaks for Small Farmers in the Age of AI? | With MEP André Franqueira Rodrigues]]></title>
  <description><![CDATA[<p>AI is reshaping democracy, food systems, and global power — but are regulations keeping up?</p><p><br></p><p>In this episode, Sanjay Puri speaks with André Franqueira Rodrigues about the rise of AI-driven misinformation, the strengths and gaps in the EU AI Act, and the growing global competition between the US, China, and Europe to shape AI governance.</p><p><br></p><p>They also explore how regulation can protect farmers and fisheries, and why this technological shift may outpace anything we’ve seen before.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/3228313f-c8fc-4462-b7a5-6edd0090df41/2ac2cea867.jpg" />
  <pubDate>Mon, 13 Apr 2026 09:51:39 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="33266532" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/3228313f-c8fc-4462-b7a5-6edd0090df41/episode.mp3" />
  <itunes:title><![CDATA[Who Speaks for Small Farmers in the Age of AI? | With MEP André Franqueira Rodrigues]]></itunes:title>
  <itunes:duration>34:39</itunes:duration>
  <itunes:summary><![CDATA[<p>AI is reshaping democracy, food systems, and global power — but are regulations keeping up?</p><p><br></p><p>In this episode, Sanjay Puri speaks with André Franqueira Rodrigues about the rise of AI-driven misinformation, the strengths and gaps in the EU AI Act, and the growing global competition between the US, China, and Europe to shape AI governance.</p><p><br></p><p>They also explore how regulation can protect farmers and fisheries, and why this technological shift may outpace anything we’ve seen before.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>AI is reshaping democracy, food systems, and global power — but are regulations keeping up?</p><p><br></p><p>In this episode, Sanjay Puri speaks with André Franqueira Rodrigues about the rise of AI-driven misinformation, the strengths and gaps in the EU AI Act, and the growing global competition between the US, China, and Europe to shape AI governance.</p><p><br></p><p>They also explore how regulation can protect farmers and fisheries, and why this technological shift may outpace anything we’ve seen before.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[AI is reshaping democracy, food systems, and global power — but are regulations keeping up? In this episode, Sanjay Puri speaks with André Franqueira Rodrigues about the rise of AI-driven misinformation, the strengths and gaps in the EU AI Act, and ...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>167</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[67fa061a-1411-44bb-83bf-e045bf3c4329]]></guid>
  <title><![CDATA[The Global Majority Cannot Be Left Behind Again | With Oby Ezekwesili & Anja Manuel]]></title>
  <description><![CDATA[<p>AI is advancing fast—but who is it really working for?</p><p>In this episode, Sanjay Puri speaks with Oby Ezekwesili and Anja Manuel on the global stakes of AI—from the agency divide and Africa’s informal economy to gendered job loss and AI in national security.</p><p>A powerful conversation on who benefits, who gets left behind, and what responsible AI governance should look like.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/13f58315-1278-4c1f-a919-033bbc8d72d2/1ba54a4691.jpg" />
  <pubDate>Wed, 08 Apr 2026 13:47:45 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="49530160" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/13f58315-1278-4c1f-a919-033bbc8d72d2/episode.mp3" />
  <itunes:title><![CDATA[The Global Majority Cannot Be Left Behind Again | With Oby Ezekwesili & Anja Manuel]]></itunes:title>
  <itunes:duration>51:35</itunes:duration>
  <itunes:summary><![CDATA[<p>AI is advancing fast—but who is it really working for?</p><p>In this episode, Sanjay Puri speaks with Oby Ezekwesili and Anja Manuel on the global stakes of AI—from the agency divide and Africa’s informal economy to gendered job loss and AI in national security.</p><p>A powerful conversation on who benefits, who gets left behind, and what responsible AI governance should look like.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>AI is advancing fast—but who is it really working for?</p><p>In this episode, Sanjay Puri speaks with Oby Ezekwesili and Anja Manuel on the global stakes of AI—from the agency divide and Africa’s informal economy to gendered job loss and AI in national security.</p><p>A powerful conversation on who benefits, who gets left behind, and what responsible AI governance should look like.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[AI is advancing fast—but who is it really working for? In this episode, Sanjay Puri speaks with Oby Ezekwesili and Anja Manuel on the global stakes of AI—from the agency divide and Africa’s informal economy to gendered job loss and AI in national sec...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>166</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[8b9ddd42-6f25-4577-989f-238ac68349b1]]></guid>
  <title><![CDATA[Why AI Governance Must Start at the City Level | With Giorgia Rambelli]]></title>
  <description><![CDATA[<p>In this episode of the Regulating AI Podcast, Sanjay Puri speaks with Giorgia Rambelli, Director of Mission Innovation’s Urban Transitions Mission, on why cities are emerging as the real frontlines of AI governance.</p><p>From data sovereignty and infrastructure challenges to AI in housing, mobility, and energy, they explore how municipalities worldwide are navigating power, policy, and implementation.</p><p>A sharp look at why the future of responsible AI will be shaped not just globally—but on city streets.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/e6fe84de-1c7a-4080-be5b-2696412ab9fe/189e0b7792.jpg" />
  <pubDate>Tue, 07 Apr 2026 11:05:41 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="36258084" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/e6fe84de-1c7a-4080-be5b-2696412ab9fe/episode.mp3" />
  <itunes:title><![CDATA[Why AI Governance Must Start at the City Level | With Giorgia Rambelli]]></itunes:title>
  <itunes:duration>37:31</itunes:duration>
  <itunes:summary><![CDATA[<p>In this episode of the Regulating AI Podcast, Sanjay Puri speaks with Giorgia Rambelli, Director of Mission Innovation’s Urban Transitions Mission, on why cities are emerging as the real frontlines of AI governance.</p><p>From data sovereignty and infrastructure challenges to AI in housing, mobility, and energy, they explore how municipalities worldwide are navigating power, policy, and implementation.</p><p>A sharp look at why the future of responsible AI will be shaped not just globally—but on city streets.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this episode of the Regulating AI Podcast, Sanjay Puri speaks with Giorgia Rambelli, Director of Mission Innovation’s Urban Transitions Mission, on why cities are emerging as the real frontlines of AI governance.</p><p>From data sovereignty and infrastructure challenges to AI in housing, mobility, and energy, they explore how municipalities worldwide are navigating power, policy, and implementation.</p><p>A sharp look at why the future of responsible AI will be shaped not just globally—but on city streets.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the Regulating AI Podcast, Sanjay Puri speaks with Giorgia Rambelli, Director of Mission Innovation’s Urban Transitions Mission, on why cities are emerging as the real frontlines of AI governance. From data sovereignty and infrast...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>165</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[f6de23bf-b639-4445-a970-d960ab86ec1d]]></guid>
  <title><![CDATA[Why AI Regulation Needs Every Nation at the Table | With Stefan Löfven, Former PM of Sweden]]></title>
  <description><![CDATA[<p>In this episode of the Regulating AI Podcast, Sanjay Puri speaks with Stefan Löfven—former Prime Minister of Sweden—on why AI governance must be global, inclusive, and grounded in workers’ realities.</p><p><br></p><p>From factory floors to global policy, Löfven shares insights on AI’s impact on jobs, the concentration of power, risks like autonomous weapons, and the urgent need for trust-led, multilateral cooperation.</p><p><br></p><p>A powerful conversation on shaping AI that works for everyone—not just a few.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/e05a8177-1c70-487b-9a68-6c3938032cef/382907e819.jpg" />
  <pubDate>Tue, 07 Apr 2026 10:57:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="26079285" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/e05a8177-1c70-487b-9a68-6c3938032cef/episode.mp3" />
  <itunes:title><![CDATA[Why AI Regulation Needs Every Nation at the Table | With Stefan Löfven, Former PM of Sweden]]></itunes:title>
  <itunes:duration>26:56</itunes:duration>
  <itunes:summary><![CDATA[<p>In this episode of the Regulating AI Podcast, Sanjay Puri speaks with Stefan Löfven—former Prime Minister of Sweden—on why AI governance must be global, inclusive, and grounded in workers’ realities.</p><p><br></p><p>From factory floors to global policy, Löfven shares insights on AI’s impact on jobs, the concentration of power, risks like autonomous weapons, and the urgent need for trust-led, multilateral cooperation.</p><p><br></p><p>A powerful conversation on shaping AI that works for everyone—not just a few.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this episode of the Regulating AI Podcast, Sanjay Puri speaks with Stefan Löfven—former Prime Minister of Sweden—on why AI governance must be global, inclusive, and grounded in workers’ realities.</p><p><br></p><p>From factory floors to global policy, Löfven shares insights on AI’s impact on jobs, the concentration of power, risks like autonomous weapons, and the urgent need for trust-led, multilateral cooperation.</p><p><br></p><p>A powerful conversation on shaping AI that works for everyone—not just a few.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the Regulating AI Podcast, Sanjay Puri speaks with Stefan Löfven—former Prime Minister of Sweden—on why AI governance must be global, inclusive, and grounded in workers’ realities. From factory floors to global policy, Löfven shar...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>164</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[f9f6f1af-3c95-4b92-b851-1e38f38a695a]]></guid>
  <title><![CDATA[The Future of Journalism in the Age of AI & Platform Power with Raju Narisetti]]></title>
  <description><![CDATA[<p>What does journalism look like in the age of AI, algorithms, and declining trust?</p><p><br></p><p>In this episode, Raju Narisetti—former senior editor at <em>The Wall Street Journal</em> and <em>The Washington Post</em>, founding editor of <em>Mint</em>, and Partner at McKinsey—breaks down how AI is transforming newsrooms, the collapse of traditional business models, and the growing tension between platforms and editorial independence.</p><p><br></p><p>A sharp, insider perspective on trust, misinformation, and what the next generation of journalists must navigate.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/9254c540-0f94-4073-b7b1-43e6dbd7b851/176b50c43f.jpg" />
  <pubDate>Tue, 07 Apr 2026 10:36:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="19055742" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/9254c540-0f94-4073-b7b1-43e6dbd7b851/episode.mp3" />
  <itunes:title><![CDATA[The Future of Journalism in the Age of AI & Platform Power with Raju Narisetti]]></itunes:title>
  <itunes:duration>19:50</itunes:duration>
  <itunes:summary><![CDATA[<p>What does journalism look like in the age of AI, algorithms, and declining trust?</p><p><br></p><p>In this episode, Raju Narisetti—former senior editor at <em>The Wall Street Journal</em> and <em>The Washington Post</em>, founding editor of <em>Mint</em>, and Partner at McKinsey—breaks down how AI is transforming newsrooms, the collapse of traditional business models, and the growing tension between platforms and editorial independence.</p><p><br></p><p>A sharp, insider perspective on trust, misinformation, and what the next generation of journalists must navigate.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>What does journalism look like in the age of AI, algorithms, and declining trust?</p><p><br></p><p>In this episode, Raju Narisetti—former senior editor at <em>The Wall Street Journal</em> and <em>The Washington Post</em>, founding editor of <em>Mint</em>, and Partner at McKinsey—breaks down how AI is transforming newsrooms, the collapse of traditional business models, and the growing tension between platforms and editorial independence.</p><p><br></p><p>A sharp, insider perspective on trust, misinformation, and what the next generation of journalists must navigate.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[What does journalism look like in the age of AI, algorithms, and declining trust? In this episode, Raju Narisetti—former senior editor at The Wall Street Journal and The Washington Post, founding editor of Mint, and Partner at McKinsey—breaks down h...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>true</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>163</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[db646d19-eac6-4ba9-a605-e6d42c8ff14d]]></guid>
  <title><![CDATA[The AI Funding Gap No One Talks About]]></title>
  <description><![CDATA[<p>In this International Women’s Day episode, <strong>Sanjay Puri</strong>, Founder &amp; Chairman of Knowledge Networks, speaks with <strong>Dr. Hoda A. Alkhzaimi</strong> of NYU Abu Dhabi, a global expert in cybersecurity and emerging technologies. They discuss the AI funding gap for women founders, the importance of stronger innovation pipelines, and why greater representation in leadership is critical to shaping a more inclusive and responsible AI future.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/c708b8ce-b621-4ee8-a08d-c9a31aca260a/b42a8cedf7.jpg" />
  <pubDate>Sun, 08 Mar 2026 09:30:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="22578589" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/c708b8ce-b621-4ee8-a08d-c9a31aca260a/episode.mp3" />
  <itunes:title><![CDATA[The AI Funding Gap No One Talks About]]></itunes:title>
  <itunes:duration>22:34</itunes:duration>
  <itunes:summary><![CDATA[<p>In this International Women’s Day episode, <strong>Sanjay Puri</strong>, Founder &amp; Chairman of Knowledge Networks, speaks with <strong>Dr. Hoda A. Alkhzaimi</strong> of NYU Abu Dhabi, a global expert in cybersecurity and emerging technologies. They discuss the AI funding gap for women founders, the importance of stronger innovation pipelines, and why greater representation in leadership is critical to shaping a more inclusive and responsible AI future.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this International Women’s Day episode, <strong>Sanjay Puri</strong>, Founder &amp; Chairman of Knowledge Networks, speaks with <strong>Dr. Hoda A. Alkhzaimi</strong> of NYU Abu Dhabi, a global expert in cybersecurity and emerging technologies. They discuss the AI funding gap for women founders, the importance of stronger innovation pipelines, and why greater representation in leadership is critical to shaping a more inclusive and responsible AI future.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this International Women’s Day episode, Sanjay Puri, Founder & Chairman of Knowledge Networks, speaks with Dr. Hoda A. Alkhzaimi of NYU Abu Dhabi, a global expert in cybersecurity and emerging technologies. They discuss the AI funding gap fo...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>162</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[a8c43b6c-5ae4-4547-95be-456817e91e9a]]></guid>
  <title><![CDATA[How AI Policy Meets the Real World | With Maya Sherman, Embassy of Israel, in India]]></title>
  <description><![CDATA[<p>In this episode of the Regulating AI Podcast, Sanjay Puri speaks with Maya Sherman, Innovation Attaché at the Israeli Embassy in India and an AI policy researcher focused on responsible AI and global governance. They discuss the evolving landscape of AI regulation, ethics, AI literacy, and India–Israel collaboration in emerging technologies, and what it will take to build responsible AI systems for the future.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/0d418e61-55e1-4eba-8c55-4ebaa8d5bef8/5f7f692699.jpg" />
  <pubDate>Sun, 08 Mar 2026 09:30:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="40498978" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/0d418e61-55e1-4eba-8c55-4ebaa8d5bef8/episode.mp3" />
  <itunes:title><![CDATA[How AI Policy Meets the Real World | With Maya Sherman, Embassy of Israel, in India]]></itunes:title>
  <itunes:duration>41:29</itunes:duration>
  <itunes:summary><![CDATA[<p>In this episode of the Regulating AI Podcast, Sanjay Puri speaks with Maya Sherman, Innovation Attaché at the Israeli Embassy in India and an AI policy researcher focused on responsible AI and global governance. They discuss the evolving landscape of AI regulation, ethics, AI literacy, and India–Israel collaboration in emerging technologies, and what it will take to build responsible AI systems for the future.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this episode of the Regulating AI Podcast, Sanjay Puri speaks with Maya Sherman, Innovation Attaché at the Israeli Embassy in India and an AI policy researcher focused on responsible AI and global governance. They discuss the evolving landscape of AI regulation, ethics, AI literacy, and India–Israel collaboration in emerging technologies, and what it will take to build responsible AI systems for the future.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the Regulating AI Podcast, Sanjay Puri speaks with Maya Sherman, Innovation Attaché at the Israeli Embassy in India and an AI policy researcher focused on responsible AI and global governance. They discuss the evolving landscape ...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>162</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[b51a6aac-b0db-4b8b-9ab9-d2ae34a93993]]></guid>
  <title><![CDATA[From Tunisia's Democratic Transition to Global AI Governance]]></title>
  <description><![CDATA[<p>In this episode of the Regulating AI Podcast, we speak with His Excellency Mehdi Jomaa, former Prime Minister of Tunisia, about the future of AI governance. Drawing on his experience in government and industry, he shares insights on how nations can balance innovation, regulation, and global cooperation in the age of artificial intelligence.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/dff1997f-7047-471a-a6f1-bdf879d6a579/fa70b019f6.jpg" />
  <pubDate>Thu, 05 Mar 2026 16:54:02 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="38901712" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/dff1997f-7047-471a-a6f1-bdf879d6a579/episode.mp3" />
  <itunes:title><![CDATA[From Tunisia's Democratic Transition to Global AI Governance]]></itunes:title>
  <itunes:duration>39:45</itunes:duration>
  <itunes:summary><![CDATA[<p>In this episode of the Regulating AI Podcast, we speak with His Excellency Mehdi Jomaa, former Prime Minister of Tunisia, about the future of AI governance. Drawing on his experience in government and industry, he shares insights on how nations can balance innovation, regulation, and global cooperation in the age of artificial intelligence.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this episode of the Regulating AI Podcast, we speak with His Excellency Mehdi Jomaa, former Prime Minister of Tunisia, about the future of AI governance. Drawing on his experience in government and industry, he shares insights on how nations can balance innovation, regulation, and global cooperation in the age of artificial intelligence.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the Regulating AI Podcast, we speak with His Excellency Mehdi Jomaa, former Prime Minister of Tunisia, about the future of AI governance. Drawing on his experience in government and industry, he shares insights on how nations can...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>161</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[f32473ba-9400-404b-9a9a-d5ad697d8f04]]></guid>
  <title><![CDATA[Collaborating Across Sectors for AI Impact | Sovereign AI, PPP & AI Skilling]]></title>
  <description><![CDATA[<p>Recorded live at the <em>India AI Impact Summit</em>, this session brings together industry leaders, policymakers, and technology experts to explore how cross-sector collaboration is shaping the future of AI.</p><p><br></p><p>The conversation examines the rise of <strong>Sovereign AI</strong>, the role of <strong>public–private partnerships</strong> in accelerating innovation, and the urgent need for <strong>AI skilling and workforce transformation</strong> to build a future-ready talent ecosystem.</p><p><br></p><p>From policy and strategy to real-world implementation, the discussion looks at how governments, enterprises, academia, and innovators can work together to unlock AI’s full potential responsibly. 🎙️</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/ac4feca7-022b-4bec-829c-c069ecc29e24/c171e95010.jpg" />
  <pubDate>Thu, 05 Mar 2026 08:15:56 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="40149452" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/ac4feca7-022b-4bec-829c-c069ecc29e24/episode.mp3" />
  <itunes:title><![CDATA[Collaborating Across Sectors for AI Impact | Sovereign AI, PPP & AI Skilling]]></itunes:title>
  <itunes:duration>41:40</itunes:duration>
  <itunes:summary><![CDATA[<p>Recorded live at the <em>India AI Impact Summit</em>, this session brings together industry leaders, policymakers, and technology experts to explore how cross-sector collaboration is shaping the future of AI.</p><p><br></p><p>The conversation examines the rise of <strong>Sovereign AI</strong>, the role of <strong>public–private partnerships</strong> in accelerating innovation, and the urgent need for <strong>AI skilling and workforce transformation</strong> to build a future-ready talent ecosystem.</p><p><br></p><p>From policy and strategy to real-world implementation, the discussion looks at how governments, enterprises, academia, and innovators can work together to unlock AI’s full potential responsibly. 🎙️</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Recorded live at the <em>India AI Impact Summit</em>, this session brings together industry leaders, policymakers, and technology experts to explore how cross-sector collaboration is shaping the future of AI.</p><p><br></p><p>The conversation examines the rise of <strong>Sovereign AI</strong>, the role of <strong>public–private partnerships</strong> in accelerating innovation, and the urgent need for <strong>AI skilling and workforce transformation</strong> to build a future-ready talent ecosystem.</p><p><br></p><p>From policy and strategy to real-world implementation, the discussion looks at how governments, enterprises, academia, and innovators can work together to unlock AI’s full potential responsibly. 🎙️</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Recorded live at the India AI Impact Summit, this session brings together industry leaders, policymakers, and technology experts to explore how cross-sector collaboration is shaping the future of AI. The conversation examines the rise of Sovereig...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>160</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[6cc9b649-bd01-4a66-bd57-4e4dcbfd48ea]]></guid>
  <title><![CDATA[Human-Centered AI Leadership I Dr. Ravi Pendse, University of Michigan I IndiaAI Impact Summit’26]]></title>
  <description><![CDATA[<p><em>What’s your AIQ?</em></p><p><br></p><p>Live from the AI Impact Summit India, this episode of Regulating AI features Dr. Ravi Pendse, Vice President for IT &amp; CIO at the University of Michigan.</p><p><br></p><p>He shares how institutions can deploy AI at scale — responsibly, securely, and with people at the center — reminding us that AI adoption isn’t a technology challenge, but a human one.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/1e04327e-20d3-4c1a-a32a-b6abc780b7de/59eece4be9.jpg" />
  <pubDate>Tue, 03 Mar 2026 08:20:20 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="20987390" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/1e04327e-20d3-4c1a-a32a-b6abc780b7de/episode.mp3?v=eccf17080e" />
  <itunes:title><![CDATA[Human-Centered AI Leadership I Dr. Ravi Pendse, University of Michigan I IndiaAI Impact Summit’26]]></itunes:title>
  <itunes:duration>20:58</itunes:duration>
  <itunes:summary><![CDATA[<p><em>What’s your AIQ?</em></p><p><br></p><p>Live from the AI Impact Summit India, this episode of Regulating AI features Dr. Ravi Pendse, Vice President for IT &amp; CIO at the University of Michigan.</p><p><br></p><p>He shares how institutions can deploy AI at scale — responsibly, securely, and with people at the center — reminding us that AI adoption isn’t a technology challenge, but a human one.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><em>What’s your AIQ?</em></p><p><br></p><p>Live from the AI Impact Summit India, this episode of Regulating AI features Dr. Ravi Pendse, Vice President for IT &amp; CIO at the University of Michigan.</p><p><br></p><p>He shares how institutions can deploy AI at scale — responsibly, securely, and with people at the center — reminding us that AI adoption isn’t a technology challenge, but a human one.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[What’s your AIQ? Live from the AI Impact Summit India, this episode of Regulating AI features Dr. Ravi Pendse, Vice President for IT & CIO at the University of Michigan. He shares how institutions can deploy AI at scale — responsibly, securely, and...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>159</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[14235665-c626-46b0-87c9-d961c59bcaf1]]></guid>
  <title><![CDATA[Designing Digital Futures That Work For All | Thomas Davin, UNICEF | LIVE at IndiaAI Impact Summit]]></title>
  <description><![CDATA[<p><strong style="background-color: transparent;">Live from the India AI Impact Summit 2026</strong><span style="background-color: transparent;">, Thomas Davin, Global Director of Innovation at UNICEF, joins </span><em style="background-color: transparent;">Regulating AI</em><span style="background-color: transparent;"> to explore what responsible AI means for children and underserved communities.</span></p><p><br></p><p><span style="background-color: transparent;">When we regulate AI, who are we protecting — and who might we be leaving behind?</span></p><p><span style="background-color: transparent;">Tune in.</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/e917a284-bb7a-44c4-b52e-cf22f1956962/138bf050fd.jpg" />
  <pubDate>Tue, 24 Feb 2026 15:57:34 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="18488064" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/e917a284-bb7a-44c4-b52e-cf22f1956962/episode.mp3" />
  <itunes:title><![CDATA[Designing Digital Futures That Work For All | Thomas Davin, UNICEF | LIVE at IndiaAI Impact Summit]]></itunes:title>
  <itunes:duration>18:53</itunes:duration>
  <itunes:summary><![CDATA[<p><strong style="background-color: transparent;">Live from the India AI Impact Summit 2026</strong><span style="background-color: transparent;">, Thomas Davin, Global Director of Innovation at UNICEF, joins </span><em style="background-color: transparent;">Regulating AI</em><span style="background-color: transparent;"> to explore what responsible AI means for children and underserved communities.</span></p><p><br></p><p><span style="background-color: transparent;">When we regulate AI, who are we protecting — and who might we be leaving behind?</span></p><p><span style="background-color: transparent;">Tune in.</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><strong style="background-color: transparent;">Live from the India AI Impact Summit 2026</strong><span style="background-color: transparent;">, Thomas Davin, Global Director of Innovation at UNICEF, joins </span><em style="background-color: transparent;">Regulating AI</em><span style="background-color: transparent;"> to explore what responsible AI means for children and underserved communities.</span></p><p><br></p><p><span style="background-color: transparent;">When we regulate AI, who are we protecting — and who might we be leaving behind?</span></p><p><span style="background-color: transparent;">Tune in.</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Live from the India AI Impact Summit 2026, Thomas Davin, Global Director of Innovation at UNICEF, joins Regulating AI to explore what responsible AI means for children and underserved communities. When we regulate AI, who are we protecting — and who...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>158</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[2706bcbf-7344-44b7-ae96-25156fcf628b]]></guid>
  <title><![CDATA[Regulation vs Innovation | Anu Bradford, Columbia Law School | Live at IndiaAI Impact Summit’26]]></title>
  <description><![CDATA[<p><strong style="background-color: transparent;">Live from the India AI Impact Summit 2026</strong><span style="background-color: transparent;">, Anu Bradford, Henry L. Moses Professor of Law and International Organization at Columbia Law School, joins </span><em style="background-color: transparent;">Regulating AI</em><span style="background-color: transparent;"> to unpack the global power dynamics behind AI regulation.</span></p><p><br></p><p><span style="background-color: transparent;">Whose rules will govern the future of artificial intelligence?</span></p><p><span style="background-color: transparent;">Tune in.</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/d156dd6f-8f2e-4c79-a9b4-6980a3148a5b/b0c60e72bd.jpg" />
  <pubDate>Tue, 24 Feb 2026 14:24:24 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="19999397" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/d156dd6f-8f2e-4c79-a9b4-6980a3148a5b/episode.mp3" />
  <itunes:title><![CDATA[Regulation vs Innovation | Anu Bradford, Columbia Law School | Live at IndiaAI Impact Summit’26]]></itunes:title>
  <itunes:duration>20:24</itunes:duration>
  <itunes:summary><![CDATA[<p><strong style="background-color: transparent;">Live from the India AI Impact Summit 2026</strong><span style="background-color: transparent;">, Anu Bradford, Henry L. Moses Professor of Law and International Organization at Columbia Law School, joins </span><em style="background-color: transparent;">Regulating AI</em><span style="background-color: transparent;"> to unpack the global power dynamics behind AI regulation.</span></p><p><br></p><p><span style="background-color: transparent;">Whose rules will govern the future of artificial intelligence?</span></p><p><span style="background-color: transparent;">Tune in.</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><strong style="background-color: transparent;">Live from the India AI Impact Summit 2026</strong><span style="background-color: transparent;">, Anu Bradford, Henry L. Moses Professor of Law and International Organization at Columbia Law School, joins </span><em style="background-color: transparent;">Regulating AI</em><span style="background-color: transparent;"> to unpack the global power dynamics behind AI regulation.</span></p><p><br></p><p><span style="background-color: transparent;">Whose rules will govern the future of artificial intelligence?</span></p><p><span style="background-color: transparent;">Tune in.</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Live from the India AI Impact Summit 2026, Anu Bradford, Henry L. Moses Professor of Law and International Organization at Columbia Law School, joins Regulating AI to unpack the global power dynamics behind AI regulation. Whose rules will govern the ...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>157</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[8db90714-55d2-497c-ab36-10c8df6a7f1c]]></guid>
  <title><![CDATA[Raju Narisetti: The Future of Journalism in the Age of AI & Platform Power]]></title>
  <description><![CDATA[<p>Journalism is at an inflection point — and few people understand that shift better than Raju Narisetti.</p><p><br></p><p>With leadership roles at The Wall Street Journal and The Washington Post, and now as a Partner at McKinsey, Raju brings a rare perspective on how media is evolving in the age of AI, digital platforms, and declining public trust.</p><p><br></p><p>In this episode, we unpack the structural transformation of news, the economics of attention, and what responsible media leadership looks like today.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/f595c8e2-4877-40af-b6f1-03280f4e2b0e/7c25b856ff.jpg" />
  <pubDate>Fri, 20 Feb 2026 18:21:00 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="19970031" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/f595c8e2-4877-40af-b6f1-03280f4e2b0e/episode.mp3?v=498d32955f" />
  <itunes:title><![CDATA[Raju Narisetti: The Future of Journalism in the Age of AI & Platform Power]]></itunes:title>
  <itunes:duration>19:50</itunes:duration>
  <itunes:summary><![CDATA[<p>Journalism is at an inflection point — and few people understand that shift better than Raju Narisetti.</p><p><br></p><p>With leadership roles at The Wall Street Journal and The Washington Post, and now as a Partner at McKinsey, Raju brings a rare perspective on how media is evolving in the age of AI, digital platforms, and declining public trust.</p><p><br></p><p>In this episode, we unpack the structural transformation of news, the economics of attention, and what responsible media leadership looks like today.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Journalism is at an inflection point — and few people understand that shift better than Raju Narisetti.</p><p><br></p><p>With leadership roles at The Wall Street Journal and The Washington Post, and now as a Partner at McKinsey, Raju brings a rare perspective on how media is evolving in the age of AI, digital platforms, and declining public trust.</p><p><br></p><p>In this episode, we unpack the structural transformation of news, the economics of attention, and what responsible media leadership looks like today.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Journalism is at an inflection point — and few people understand that shift better than Raju Narisetti. With leadership roles at The Wall Street Journal and The Washington Post, and now as a Partner at McKinsey, Raju brings a rare perspective on how...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>156</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[f50d28d3-322d-4aac-884d-e43a5864b192]]></guid>
  <title><![CDATA[Addressing the AI Skills Gap | Fred Werner, ITU I Live at IndiaAI Impact Summit’26]]></title>
  <description><![CDATA[<p>Recorded live at the India AI Impact Summit, this special episode of <em>Regulating AI – Voices of Impact</em> features Sanjay Puri in conversation with Fred Werner of the International Telecommunication Union (ITU).</p><p>They explore the evolution of AI — from machine learning to generative AI and AI agents — and why the Global South has a critical role in shaping AI governance. From sovereign AI and open source to jobs, gender impact, and the global AI skills gap, this is a sharp, global perspective on where AI is headed.</p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/d0775db6-52ba-40b1-ae97-0b621b83a533/d5bee74c05.jpg" />
  <pubDate>Fri, 20 Feb 2026 14:29:39 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="18459545" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/d0775db6-52ba-40b1-ae97-0b621b83a533/episode.mp3" />
  <itunes:title><![CDATA[Addressing the AI Skills Gap | Fred Werner, ITU I Live at IndiaAI Impact Summit’26]]></itunes:title>
  <itunes:duration>18:42</itunes:duration>
  <itunes:summary><![CDATA[<p>Recorded live at the India AI Impact Summit, this special episode of <em>Regulating AI – Voices of Impact</em> features Sanjay Puri in conversation with Fred Werner of the International Telecommunication Union (ITU).</p><p>They explore the evolution of AI — from machine learning to generative AI and AI agents — and why the Global South has a critical role in shaping AI governance. From sovereign AI and open source to jobs, gender impact, and the global AI skills gap, this is a sharp, global perspective on where AI is headed.</p><p><br></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Recorded live at the India AI Impact Summit, this special episode of <em>Regulating AI – Voices of Impact</em> features Sanjay Puri in conversation with Fred Werner of the International Telecommunication Union (ITU).</p><p>They explore the evolution of AI — from machine learning to generative AI and AI agents — and why the Global South has a critical role in shaping AI governance. From sovereign AI and open source to jobs, gender impact, and the global AI skills gap, this is a sharp, global perspective on where AI is headed.</p><p><br></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Recorded live at the India AI Impact Summit, this special episode of Regulating AI – Voices of Impact features Sanjay Puri in conversation with Fred Werner of the International Telecommunication Union (ITU). They explore the evolution of AI — from m...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>155</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[cc9aafea-f51a-4069-8521-538b358b4077]]></guid>
  <title><![CDATA[Transforming Tax, Audit & Corporate Governance with AI I Rahul Patni, EY I Live at IndiaAI Impact Summit’26]]></title>
  <description><![CDATA[<p>Recorded live at the India AI Impact Summit, this episode of <em>RegulatingAI</em> features Rahul Patni, Leader – Digital Tax Practice at EY India, on how Artificial Intelligence is transforming tax, governance, and board-level oversight. From AI agents in compliance and litigation to Responsible AI frameworks and human-in-the-loop decision-making, this conversation explores how tax functions are evolving from process-driven compliance units into strategic, data-powered engines for CFOs and boards. A sharp, practical discussion for leaders navigating AI in regulated environments.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/f3d8e207-2af5-4d8c-9bcc-88ebd576fc0f/736dd50460.jpg" />
  <pubDate>Thu, 19 Feb 2026 17:38:13 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="14808714" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/f3d8e207-2af5-4d8c-9bcc-88ebd576fc0f/episode.mp3" />
  <itunes:title><![CDATA[Transforming Tax, Audit & Corporate Governance with AI I Rahul Patni, EY I Live at IndiaAI Impact Summit’26]]></itunes:title>
  <itunes:duration>14:58</itunes:duration>
  <itunes:summary><![CDATA[<p>Recorded live at the India AI Impact Summit, this episode of <em>RegulatingAI</em> features Rahul Patni, Leader – Digital Tax Practice at EY India, on how Artificial Intelligence is transforming tax, governance, and board-level oversight. From AI agents in compliance and litigation to Responsible AI frameworks and human-in-the-loop decision-making, this conversation explores how tax functions are evolving from process-driven compliance units into strategic, data-powered engines for CFOs and boards. A sharp, practical discussion for leaders navigating AI in regulated environments.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Recorded live at the India AI Impact Summit, this episode of <em>RegulatingAI</em> features Rahul Patni, Leader – Digital Tax Practice at EY India, on how Artificial Intelligence is transforming tax, governance, and board-level oversight. From AI agents in compliance and litigation to Responsible AI frameworks and human-in-the-loop decision-making, this conversation explores how tax functions are evolving from process-driven compliance units into strategic, data-powered engines for CFOs and boards. A sharp, practical discussion for leaders navigating AI in regulated environments.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Recorded live at the India AI Impact Summit, this episode of RegulatingAI features Rahul Patni, Leader – Digital Tax Practice at EY India, on how Artificial Intelligence is transforming tax, governance, and board-level oversight. From AI agents in ...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>154</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[27bafeed-4723-4416-a6cc-1efef66b8501]]></guid>
  <title><![CDATA[The Missing Piece in Enterprise AI I Amit Zavery, ServiceNow I Live at IndiaAI Impact Summit’26]]></title>
  <description><![CDATA[<p>Recorded live at the India AI Impact Summit, this episode of <em>RegulatingAI</em> features Amit Zavery, President, CPO and COO at ServiceNow, on what it really takes to move AI from experimentation to enterprise-wide transformation. Drawing on leadership experience across Oracle and Google Cloud, Amit shares a pragmatic view on autonomous workflows, governance as a prerequisite for scale, and why enterprises must focus on outcomes — not models — to drive real impact.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/04cd8ac7-e91a-4af1-95ca-477777488048/4e0c7eb727.jpg" />
  <pubDate>Thu, 19 Feb 2026 15:39:12 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="15635194" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/04cd8ac7-e91a-4af1-95ca-477777488048/episode.mp3" />
  <itunes:title><![CDATA[The Missing Piece in Enterprise AI I Amit Zavery, ServiceNow I Live at IndiaAI Impact Summit’26]]></itunes:title>
  <itunes:duration>16:01</itunes:duration>
  <itunes:summary><![CDATA[<p>Recorded live at the India AI Impact Summit, this episode of <em>RegulatingAI</em> features Amit Zavery, President, CPO and COO at ServiceNow, on what it really takes to move AI from experimentation to enterprise-wide transformation. Drawing on leadership experience across Oracle and Google Cloud, Amit shares a pragmatic view on autonomous workflows, governance as a prerequisite for scale, and why enterprises must focus on outcomes — not models — to drive real impact.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Recorded live at the India AI Impact Summit, this episode of <em>RegulatingAI</em> features Amit Zavery, President, CPO and COO at ServiceNow, on what it really takes to move AI from experimentation to enterprise-wide transformation. Drawing on leadership experience across Oracle and Google Cloud, Amit shares a pragmatic view on autonomous workflows, governance as a prerequisite for scale, and why enterprises must focus on outcomes — not models — to drive real impact.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Recorded live at the India AI Impact Summit, this episode of RegulatingAI features Amit Zavery, President, CPO and COO at ServiceNow, on what it really takes to move AI from experimentation to enterprise-wide transformation. Drawing on leadership e...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>153</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[2c5965a5-fc26-4b46-bc46-69d2d22e24be]]></guid>
  <title><![CDATA[Brian Poe on Building the Philippines AI Framework | Live at Davos’26 with Sanjay Puri]]></title>
  <description><![CDATA[<p class="ql-align-justify">Live from Davos: Is the Philippines about to become Southeast Asia’s next AI powerhouse?</p><p class="ql-align-justify"><br></p><p class="ql-align-justify">In this special episode of <em>RegulatingAI</em>, Sanjay Puri sits down with Congressman Brian Poe, architect of the Philippines’ AI Framework Bill (House Bill 1196). Rather than racing to overregulate, the country is taking a bold “early adopter” approach—balancing innovation, workforce transformation, and smart governance.</p><p class="ql-align-justify"><br></p><p class="ql-align-justify">From AI sandboxes and startup incentives to protecting local dialects and reshaping the BPO workforce, this conversation reveals how a 120M-strong nation is writing its AI future in real time.</p><p class="ql-align-justify">If you care about global AI policy, emerging markets, or the future of responsible innovation—don’t miss this one.</p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/2c42c40a-fe0c-429a-8f90-cf02e14ffbed/7100934963.jpg" />
  <pubDate>Thu, 12 Feb 2026 14:11:23 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="19684982" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/2c42c40a-fe0c-429a-8f90-cf02e14ffbed/episode.mp3" />
  <itunes:title><![CDATA[Brian Poe on Building the Philippines AI Framework | Live at Davos’26 with Sanjay Puri]]></itunes:title>
  <itunes:duration>19:48</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">Live from Davos: Is the Philippines about to become Southeast Asia’s next AI powerhouse?</p><p class="ql-align-justify"><br></p><p class="ql-align-justify">In this special episode of <em>RegulatingAI</em>, Sanjay Puri sits down with Congressman Brian Poe, architect of the Philippines’ AI Framework Bill (House Bill 1196). Rather than racing to overregulate, the country is taking a bold “early adopter” approach—balancing innovation, workforce transformation, and smart governance.</p><p class="ql-align-justify"><br></p><p class="ql-align-justify">From AI sandboxes and startup incentives to protecting local dialects and reshaping the BPO workforce, this conversation reveals how a 120M-strong nation is writing its AI future in real time.</p><p class="ql-align-justify">If you care about global AI policy, emerging markets, or the future of responsible innovation—don’t miss this one.</p><p><br></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">Live from Davos: Is the Philippines about to become Southeast Asia’s next AI powerhouse?</p><p class="ql-align-justify"><br></p><p class="ql-align-justify">In this special episode of <em>RegulatingAI</em>, Sanjay Puri sits down with Congressman Brian Poe, architect of the Philippines’ AI Framework Bill (House Bill 1196). Rather than racing to overregulate, the country is taking a bold “early adopter” approach—balancing innovation, workforce transformation, and smart governance.</p><p class="ql-align-justify"><br></p><p class="ql-align-justify">From AI sandboxes and startup incentives to protecting local dialects and reshaping the BPO workforce, this conversation reveals how a 120M-strong nation is writing its AI future in real time.</p><p class="ql-align-justify">If you care about global AI policy, emerging markets, or the future of responsible innovation—don’t miss this one.</p><p><br></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Live from Davos: Is the Philippines about to become Southeast Asia’s next AI powerhouse? In this special episode of RegulatingAI, Sanjay Puri sits down with Congressman Brian Poe, architect of the Philippines’ AI Framework Bill (House Bill 1196). Ra...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>152</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[a45971ca-ae3f-474a-b9c3-d9b641d1ec4d]]></guid>
  <title><![CDATA[AI Governance Crisis: Can Democracy Survive the Multipolar AI Revolution? | Davos 2026]]></title>
  <description><![CDATA[<p>Recorded live at the House of Kosovo during Davos 2026, this <em>RegulatingAI</em> episode brings together leaders from policy, government, and technology to tackle <strong>AI governance in a multipolar world</strong>.</p><p><br></p><p>Moderated by Sanjay Puri, the panel—Dr. Jess Conser, Dr. Clara Guerra, Brando Benifei (Member of the European Parliament), and Combiz Abdolrahimi (ServiceNow)—explores how democratic societies can govern AI amid geopolitical fragmentation, and what it takes to move from principles to real-world implementation.</p><p><br></p><p>From public–private collaboration to AI literacy and execution at scale, this conversation unpacks what responsible AI governance requires in 2026—and why getting it right is essential for innovation, trust, and democracy.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/4f047f69-dd33-479c-a26d-a8187d0d207f/b8be65751d.jpg" />
  <pubDate>Thu, 05 Feb 2026 10:48:02 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="37670216" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/4f047f69-dd33-479c-a26d-a8187d0d207f/episode.mp3" />
  <itunes:title><![CDATA[AI Governance Crisis: Can Democracy Survive the Multipolar AI Revolution? | Davos 2026]]></itunes:title>
  <itunes:duration>39:06</itunes:duration>
  <itunes:summary><![CDATA[<p>Recorded live at the House of Kosovo during Davos 2026, this <em>RegulatingAI</em> episode brings together leaders from policy, government, and technology to tackle <strong>AI governance in a multipolar world</strong>.</p><p><br></p><p>Moderated by Sanjay Puri, the panel—Dr. Jess Conser, Dr. Clara Guerra, Brando Benifei (Member of the European Parliament), and Combiz Abdolrahimi (ServiceNow)—explores how democratic societies can govern AI amid geopolitical fragmentation, and what it takes to move from principles to real-world implementation.</p><p><br></p><p>From public–private collaboration to AI literacy and execution at scale, this conversation unpacks what responsible AI governance requires in 2026—and why getting it right is essential for innovation, trust, and democracy.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Recorded live at the House of Kosovo during Davos 2026, this <em>RegulatingAI</em> episode brings together leaders from policy, government, and technology to tackle <strong>AI governance in a multipolar world</strong>.</p><p><br></p><p>Moderated by Sanjay Puri, the panel—Dr. Jess Conser, Dr. Clara Guerra, Brando Benifei (Member of the European Parliament), and Combiz Abdolrahimi (ServiceNow)—explores how democratic societies can govern AI amid geopolitical fragmentation, and what it takes to move from principles to real-world implementation.</p><p><br></p><p>From public–private collaboration to AI literacy and execution at scale, this conversation unpacks what responsible AI governance requires in 2026—and why getting it right is essential for innovation, trust, and democracy.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Recorded live at the House of Kosovo during Davos 2026, this RegulatingAI episode brings together leaders from policy, government, and technology to tackle AI governance in a multipolar world. Moderated by Sanjay Puri, the panel—Dr. Jess Conser, Dr....]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>151</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[d989f6dc-ebd9-429d-a003-82e9a60fd20f]]></guid>
  <title><![CDATA[Why India’s AI Mission Matters with Abhishek Singh, CEO, IndiaAI Mission]]></title>
  <description><![CDATA[<p>In this Davos-recorded episode of the <strong>Regulating AI Podcast</strong>, we speak with <strong>Abhishek Singh</strong>, CEO of the <strong>IndiaAI Mission</strong> and Additional Secretary, Ministry of Electronics &amp; Information Technology, ahead of the <strong>India AI Impact Summit 2026</strong> on <strong>February 19th &amp; 20th</strong>.</p><p><br></p><p>He shares how India is building a full-stack AI ecosystem, democratizing access to AI, and bringing Global South leadership into the future of AI governance.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/f8d9d344-1a1d-4389-8246-5fa3cf1b1663/c0b6266fa5.jpg" />
  <pubDate>Thu, 29 Jan 2026 16:42:46 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="24231487" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/f8d9d344-1a1d-4389-8246-5fa3cf1b1663/episode.mp3" />
  <itunes:title><![CDATA[Why India’s AI Mission Matters with Abhishek Singh, CEO, IndiaAI Mission]]></itunes:title>
  <itunes:duration>24:18</itunes:duration>
  <itunes:summary><![CDATA[<p>In this Davos-recorded episode of the <strong>Regulating AI Podcast</strong>, we speak with <strong>Abhishek Singh</strong>, CEO of the <strong>IndiaAI Mission</strong> and Additional Secretary, Ministry of Electronics &amp; Information Technology, ahead of the <strong>India AI Impact Summit 2026</strong> on <strong>February 19th &amp; 20th</strong>.</p><p><br></p><p>He shares how India is building a full-stack AI ecosystem, democratizing access to AI, and bringing Global South leadership into the future of AI governance.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this Davos-recorded episode of the <strong>Regulating AI Podcast</strong>, we speak with <strong>Abhishek Singh</strong>, CEO of the <strong>IndiaAI Mission</strong> and Additional Secretary, Ministry of Electronics &amp; Information Technology, ahead of the <strong>India AI Impact Summit 2026</strong> on <strong>February 19th &amp; 20th</strong>.</p><p><br></p><p>He shares how India is building a full-stack AI ecosystem, democratizing access to AI, and bringing Global South leadership into the future of AI governance.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this Davos-recorded episode of the Regulating AI Podcast, we speak with Abhishek Singh, CEO of the IndiaAI Mission and Additional Secretary, Ministry of Electronics & Information Technology, ahead of the India AI Impact Summit 2026 on February 19th & 20...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>150</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[5c062f89-e4ce-4096-833c-2060201cecf2]]></guid>
  <title><![CDATA[How Enterprise AI Standards Are Shaping Regulation]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">How do enterprise AI standards influence global AI regulation?</span></p><p><br></p><p><span style="background-color: transparent;">In this episode of </span><em style="background-color: transparent;">Regulating AI</em><span style="background-color: transparent;">, Dr. James H. Dickerson, Director at ASCET, explains how technical standards shape governance, compliance, and policy long before laws are enforced. A must-listen for leaders navigating responsible and regulated AI.</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/888ea80a-82e4-4f44-8a09-1fedfb2f2a9a/9d72587abd.jpg" />
  <pubDate>Thu, 22 Jan 2026 12:10:27 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="37932903" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/888ea80a-82e4-4f44-8a09-1fedfb2f2a9a/episode.mp3" />
  <itunes:title><![CDATA[How Enterprise AI Standards Are Shaping Regulation]]></itunes:title>
  <itunes:duration>39:27</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">How do enterprise AI standards influence global AI regulation?</span></p><p><br></p><p><span style="background-color: transparent;">In this episode of </span><em style="background-color: transparent;">Regulating AI</em><span style="background-color: transparent;">, Dr. James H. Dickerson, Director at ASCET, explains how technical standards shape governance, compliance, and policy long before laws are enforced. A must-listen for leaders navigating responsible and regulated AI.</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">How do enterprise AI standards influence global AI regulation?</span></p><p><br></p><p><span style="background-color: transparent;">In this episode of </span><em style="background-color: transparent;">Regulating AI</em><span style="background-color: transparent;">, Dr. James H. Dickerson, Director at ASCET, explains how technical standards shape governance, compliance, and policy long before laws are enforced. A must-listen for leaders navigating responsible and regulated AI.</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[How do enterprise AI standards influence global AI regulation? In this episode of Regulating AI, Dr. James H. Dickerson, Director at ASCET, explains how technical standards shape governance, compliance, and policy long before laws are enforced. A m...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>149</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[b42b8ca3-e809-4410-ac17-c87048d98249]]></guid>
  <title><![CDATA[Rethinking AI Governance: Through Government, Big Tech & Academia]]></title>
  <description><![CDATA[<p>Roy Austin brings a rare, end-to-end perspective on AI governance—from co-authoring President Obama’s 2014 civil rights and big data report to building Meta’s first civil rights team and now leading Howard Law’s AI initiative.</p><p>In this episode of <em>Regulating AI</em>, he unpacks why self-regulation fails, why data quality still defines AI outcomes, and how states are stepping in where federal policy has stalled. We also explore how unchecked wealth concentration and weak oversight threaten democracy in the age of AI. A must-listen conversation on what real accountability in AI governance should look like.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/b0351510-2bbf-4050-bbd2-ae5b1fee41d9/ccef889032.jpg" />
  <pubDate>Thu, 15 Jan 2026 13:28:00 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="38936182" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/b0351510-2bbf-4050-bbd2-ae5b1fee41d9/episode.mp3" />
  <itunes:title><![CDATA[Rethinking AI Governance: Through Government, Big Tech & Academia]]></itunes:title>
  <itunes:duration>40:08</itunes:duration>
  <itunes:summary><![CDATA[<p>Roy Austin brings a rare, end-to-end perspective on AI governance—from co-authoring President Obama’s 2014 civil rights and big data report to building Meta’s first civil rights team and now leading Howard Law’s AI initiative.</p><p>In this episode of <em>Regulating AI</em>, he unpacks why self-regulation fails, why data quality still defines AI outcomes, and how states are stepping in where federal policy has stalled. We also explore how unchecked wealth concentration and weak oversight threaten democracy in the age of AI. A must-listen conversation on what real accountability in AI governance should look like.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Roy Austin brings a rare, end-to-end perspective on AI governance—from co-authoring President Obama’s 2014 civil rights and big data report to building Meta’s first civil rights team and now leading Howard Law’s AI initiative.</p><p>In this episode of <em>Regulating AI</em>, he unpacks why self-regulation fails, why data quality still defines AI outcomes, and how states are stepping in where federal policy has stalled. We also explore how unchecked wealth concentration and weak oversight threaten democracy in the age of AI. A must-listen conversation on what real accountability in AI governance should look like.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Roy Austin brings a rare, end-to-end perspective on AI governance—from co-authoring President Obama’s 2014 civil rights and big data report to building Meta’s first civil rights team and now leading Howard Law’s AI initiative.In this episode of Reg...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>148</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[01f437ce-a533-47d4-baa8-868d7496c1b2]]></guid>
  <title><![CDATA[Can AI Be Both Sovereign and Global? With Anne Bouverot]]></title>
  <description><![CDATA[<p>In this episode of Regulating AI Talk, we sit down with Anne Bouverot, France’s Special Envoy for AI, to unpack one of the defining tensions of our time. As nations race to protect democratic values, economic competitiveness, and technological autonomy, AI refuses to respect borders. Anne explores how governments can balance AI sovereignty with global cooperation, why fragmented regulation could backfire, and what it will take to build shared rules for a technology shaping geopolitics, markets, and society itself. A must-listen conversation on power, policy, and the future of AI governance.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/23a7daf3-f848-4c1a-80a5-1cab197317d0/73b29c0063.jpg" />
  <pubDate>Wed, 07 Jan 2026 15:35:44 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="32386373" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/23a7daf3-f848-4c1a-80a5-1cab197317d0/episode.mp3" />
  <itunes:title><![CDATA[Can AI Be Both Sovereign and Global? With Anne Bouverot]]></itunes:title>
  <itunes:duration>33:09</itunes:duration>
  <itunes:summary><![CDATA[<p>In this episode of Regulating AI Talk, we sit down with Anne Bouverot, France’s Special Envoy for AI, to unpack one of the defining tensions of our time. As nations race to protect democratic values, economic competitiveness, and technological autonomy, AI refuses to respect borders. Anne explores how governments can balance AI sovereignty with global cooperation, why fragmented regulation could backfire, and what it will take to build shared rules for a technology shaping geopolitics, markets, and society itself. A must-listen conversation on power, policy, and the future of AI governance.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this episode of Regulating AI Talk, we sit down with Anne Bouverot, France’s Special Envoy for AI, to unpack one of the defining tensions of our time. As nations race to protect democratic values, economic competitiveness, and technological autonomy, AI refuses to respect borders. Anne explores how governments can balance AI sovereignty with global cooperation, why fragmented regulation could backfire, and what it will take to build shared rules for a technology shaping geopolitics, markets, and society itself. A must-listen conversation on power, policy, and the future of AI governance.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of Regulating AI Talk, we sit down with Anne Bouverot, France’s Special Envoy for AI, to unpack one of the defining tensions of our time. As nations race to protect democratic values, economic competitiveness, and technological auto...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>147</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[efe1030a-7cb3-4539-905a-9c7959cbb71c]]></guid>
  <title><![CDATA[AI Governance & Global Policy at ASEAN | Sanjay Puri in Conversation with Congressman Jay Obernolte]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">Artificial Intelligence is reshaping economies, governments, and global cooperation.</span></p><p><span style="background-color: transparent;">At the ASEAN platform, </span><strong style="background-color: transparent;">Sanjay Puri</strong><span style="background-color: transparent;">, Founder &amp; Chairperson, sits down with </span><strong style="background-color: transparent;">U.S. Congressman Jay Obernolte</strong><span style="background-color: transparent;"> to discuss the evolving landscape of </span><strong style="background-color: transparent;">AI governance, AI policy, and international collaboration</strong><span style="background-color: transparent;">.</span></p><p><br></p><p><span style="background-color: transparent;">This insightful conversation explores:</span></p><ul><li><span style="background-color: transparent;">The future of </span><strong style="background-color: transparent;">AI regulation and governance</strong></li><li><span style="background-color: transparent;">How governments can balance </span><strong style="background-color: transparent;">innovation and responsibility</strong></li><li><span style="background-color: transparent;">The role of ASEAN and global partnerships in shaping AI policy</span></li><li><span style="background-color: transparent;">The importance of ethical, transparent, and inclusive AI frameworks</span></li></ul><p><br></p><p><span style="background-color: transparent;">📌 Listen to the full discussion to understand how policymakers and industry leaders are working together to shape the future of AI.</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/dd815f31-480d-48e2-b190-d43f2b6783d8/3713824298.jpg" />
  <pubDate>Fri, 02 Jan 2026 03:38:12 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="11792443" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/dd815f31-480d-48e2-b190-d43f2b6783d8/episode.mp3" />
  <itunes:title><![CDATA[AI Governance & Global Policy at ASEAN | Sanjay Puri in Conversation with Congressman Jay Obernolte]]></itunes:title>
  <itunes:duration>12:16</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">Artificial Intelligence is reshaping economies, governments, and global cooperation.</span></p><p><span style="background-color: transparent;">At the ASEAN platform, </span><strong style="background-color: transparent;">Sanjay Puri</strong><span style="background-color: transparent;">, Founder &amp; Chairperson, sits down with </span><strong style="background-color: transparent;">U.S. Congressman Jay Obernolte</strong><span style="background-color: transparent;"> to discuss the evolving landscape of </span><strong style="background-color: transparent;">AI governance, AI policy, and international collaboration</strong><span style="background-color: transparent;">.</span></p><p><br></p><p><span style="background-color: transparent;">This insightful conversation explores:</span></p><ul><li><span style="background-color: transparent;">The future of </span><strong style="background-color: transparent;">AI regulation and governance</strong></li><li><span style="background-color: transparent;">How governments can balance </span><strong style="background-color: transparent;">innovation and responsibility</strong></li><li><span style="background-color: transparent;">The role of ASEAN and global partnerships in shaping AI policy</span></li><li><span style="background-color: transparent;">The importance of ethical, transparent, and inclusive AI frameworks</span></li></ul><p><br></p><p><span style="background-color: transparent;">📌 Listen to the full discussion to understand how policymakers and industry leaders are working together to shape the future of AI.</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">Artificial Intelligence is reshaping economies, governments, and global cooperation.</span></p><p><span style="background-color: transparent;">At the ASEAN platform, </span><strong style="background-color: transparent;">Sanjay Puri</strong><span style="background-color: transparent;">, Founder &amp; Chairperson, sits down with </span><strong style="background-color: transparent;">U.S. Congressman Jay Obernolte</strong><span style="background-color: transparent;"> to discuss the evolving landscape of </span><strong style="background-color: transparent;">AI governance, AI policy, and international collaboration</strong><span style="background-color: transparent;">.</span></p><p><br></p><p><span style="background-color: transparent;">This insightful conversation explores:</span></p><ul><li><span style="background-color: transparent;">The future of </span><strong style="background-color: transparent;">AI regulation and governance</strong></li><li><span style="background-color: transparent;">How governments can balance </span><strong style="background-color: transparent;">innovation and responsibility</strong></li><li><span style="background-color: transparent;">The role of ASEAN and global partnerships in shaping AI policy</span></li><li><span style="background-color: transparent;">The importance of ethical, transparent, and inclusive AI frameworks</span></li></ul><p><br></p><p><span style="background-color: transparent;">📌 Listen to the full discussion to understand how policymakers and industry leaders are working together to shape the future of AI.</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Artificial Intelligence is reshaping economies, governments, and global cooperation.At the ASEAN platform, Sanjay Puri, Founder & Chairperson, sits down with U.S. Congressman Jay Obernolte to discuss the evolving landscape of AI governance, AI poli...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[7ae87398-ca39-4a64-8e12-ff265608ad88]]></guid>
  <title><![CDATA[Camille Carlton on the Hidden Dangers of Chatbots & AI Governance | RegulatingAI Podcast ]]></title>
  <description><![CDATA[<p class="ql-align-justify">In this episode of the&nbsp;<strong>Regulating AI Podcast</strong>, we speak with&nbsp;<strong>Camille Carlton</strong>,&nbsp;<strong>Director of Policy at the Center for Humane Technology</strong>, a leading voice in&nbsp;<strong>AI regulation, chatbot safety, and public-interest technology</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify">Camille is directly involved in&nbsp;<strong>landmark lawsuits against&nbsp;CharacterAI&nbsp;and OpenAI CEO Sam Altman</strong>, placing her at the forefront of debates around&nbsp;<strong>AI accountability, AI companions, and platform liability</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify">This conversation examines the&nbsp;<strong>mental-health risks of AI chatbots</strong>, the rise of&nbsp;<strong>AI companions</strong>, and why certain conversational systems may pose&nbsp;<strong>public-health concerns</strong>, especially for younger and socially isolated users. Camille also breaks down how&nbsp;<strong>AI governance frameworks</strong>&nbsp;differ across&nbsp;<strong>U.S. states, Congress, and the EU AI Act</strong>, and outlines what&nbsp;<strong>practical, enforceable AI policy</strong>&nbsp;could look like in the years ahead.&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Key Takeaways</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>AI Chatbots as a Public-Health Risk</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">Why AI companions may intensify loneliness, emotional dependency, and psychological harm—raising urgent&nbsp;<strong>mental-health&nbsp;and safety concerns</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Regulating Chatbots vs. 
Foundation Models</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">Why&nbsp;<strong>high-risk conversational AI systems</strong>&nbsp;require different regulatory treatment than&nbsp;<strong>general-purpose LLMs and foundation&nbsp;models</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Global AI Governance Lessons</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">What the&nbsp;<strong>EU AI Act</strong>, U.S. states, and Congress can learn from each other when designing&nbsp;<strong>balanced, risk-based AI regulation</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Transparency, Design &amp; Accountability</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">How a&nbsp;<strong>light-touch but firm AI policy approach</strong>&nbsp;can improve transparency, platform accountability, and data access without slowing innovation.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Why AI Personhood Is a Dangerous Idea</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">How&nbsp;framing AI systems as “persons” undermines liability, weakens accountability, and complicates enforcement.&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Subscribe to Regulating AI</strong>&nbsp;for expert conversations on&nbsp;<strong>AI governance, responsible AI, technology policy, and the future of regulation</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify">#RegulatingAIpodcast #camillecarlton #AIGovernance #ChatbotSafety #Knowledgenetworks&nbsp;&nbsp;</p><p><br></p><p class="ql-align-justify">#AICompanions&nbsp;&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/camille-carlton" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/camille-carlton</a>&nbsp;&nbsp;</p><p><br></p><p><a href="https://www.humanetech.com/" target="_blank" style="color: rgb(70, 120, 
134);">https://www.humanetech.com/</a>&nbsp;&nbsp;</p><p><a href="https://www.humanetech.com/substack" target="_blank" style="color: rgb(70, 120, 134);">https://www.humanetech.com/substack</a>&nbsp;</p><p><br></p><p><a href="https://www.humanetech.com/podcast" target="_blank" style="color: rgb(70, 120, 134);">https://www.humanetech.com/podcast</a>&nbsp;</p><p><a href="https://www.humanetech.com/landing/the-ai-dilemma" target="_blank" style="color: rgb(70, 120, 134);">https://www.humanetech.com/landing/the-ai-dilemma</a>&nbsp;&nbsp;</p><p><a href="https://centerforhumanetechnology.substack.com/p/ai-product-liability" target="_blank" style="color: rgb(70, 120, 134);">https://centerforhumanetechnology.substack.com/p/ai-product-liability</a>&nbsp;</p><p><br></p><p><a href="https://www.humanetech.com/case-study/policy-in-action-strategic-litigation-that-helps-govern-ai" target="_blank" style="color: rgb(70, 120, 134);">https://www.humanetech.com/case-study/policy-in-action-strategic-litigation-that-helps-govern-ai</a>&nbsp;</p><p>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/d41bc43e-b625-4fab-b5c9-26295ae2ffe6/c7d509013f.jpg" />
  <pubDate>Thu, 18 Dec 2025 14:08:03 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="39272742" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/d41bc43e-b625-4fab-b5c9-26295ae2ffe6/episode.mp3" />
  <itunes:title><![CDATA[Camille Carlton on the Hidden Dangers of Chatbots & AI Governance | RegulatingAI Podcast ]]></itunes:title>
  <itunes:duration>40:54</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">In this episode of the&nbsp;<strong>Regulating AI Podcast</strong>, we speak with&nbsp;<strong>Camille Carlton</strong>,&nbsp;<strong>Director of Policy at the Center for Humane Technology</strong>, a leading voice in&nbsp;<strong>AI regulation, chatbot safety, and public-interest technology</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify">Camille is directly involved in&nbsp;<strong>landmark lawsuits against&nbsp;CharacterAI&nbsp;and OpenAI CEO Sam Altman</strong>, placing her at the forefront of debates around&nbsp;<strong>AI accountability, AI companions, and platform liability</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify">This conversation examines the&nbsp;<strong>mental-health risks of AI chatbots</strong>, the rise of&nbsp;<strong>AI companions</strong>, and why certain conversational systems may pose&nbsp;<strong>public-health concerns</strong>, especially for younger and socially isolated users. Camille also breaks down how&nbsp;<strong>AI governance frameworks</strong>&nbsp;differ across&nbsp;<strong>U.S. states, Congress, and the EU AI Act</strong>, and outlines what&nbsp;<strong>practical, enforceable AI policy</strong>&nbsp;could look like in the years ahead.&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Key Takeaways</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>AI Chatbots as a Public-Health Risk</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">Why AI companions may intensify loneliness, emotional dependency, and psychological harm—raising urgent&nbsp;<strong>mental-health&nbsp;and safety concerns</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Regulating Chatbots vs. 
Foundation Models</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">Why&nbsp;<strong>high-risk conversational AI systems</strong>&nbsp;require different regulatory treatment than&nbsp;<strong>general-purpose LLMs and foundation&nbsp;models</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Global AI Governance Lessons</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">What the&nbsp;<strong>EU AI Act</strong>, U.S. states, and Congress can learn from each other when designing&nbsp;<strong>balanced, risk-based AI regulation</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Transparency, Design &amp; Accountability</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">How a&nbsp;<strong>light-touch but firm AI policy approach</strong>&nbsp;can improve transparency, platform accountability, and data access without slowing innovation.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Why AI Personhood Is a Dangerous Idea</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">How&nbsp;framing AI systems as “persons” undermines liability, weakens accountability, and complicates enforcement.&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Subscribe to Regulating AI</strong>&nbsp;for expert conversations on&nbsp;<strong>AI governance, responsible AI, technology policy, and the future of regulation</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify">#RegulatingAIpodcast #camillecarlton #AIGovernance #ChatbotSafety #Knowledgenetworks&nbsp;&nbsp;</p><p><br></p><p class="ql-align-justify">#AICompanions&nbsp;&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/camille-carlton" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/camille-carlton</a>&nbsp;&nbsp;</p><p><br></p><p><a href="https://www.humanetech.com/" target="_blank" style="color: rgb(70, 120, 
134);">https://www.humanetech.com/</a>&nbsp;&nbsp;</p><p><a href="https://www.humanetech.com/substack" target="_blank" style="color: rgb(70, 120, 134);">https://www.humanetech.com/substack</a>&nbsp;</p><p><br></p><p><a href="https://www.humanetech.com/podcast" target="_blank" style="color: rgb(70, 120, 134);">https://www.humanetech.com/podcast</a>&nbsp;</p><p><a href="https://www.humanetech.com/landing/the-ai-dilemma" target="_blank" style="color: rgb(70, 120, 134);">https://www.humanetech.com/landing/the-ai-dilemma</a>&nbsp;&nbsp;</p><p><a href="https://centerforhumanetechnology.substack.com/p/ai-product-liability" target="_blank" style="color: rgb(70, 120, 134);">https://centerforhumanetechnology.substack.com/p/ai-product-liability</a>&nbsp;</p><p><br></p><p><a href="https://www.humanetech.com/case-study/policy-in-action-strategic-litigation-that-helps-govern-ai" target="_blank" style="color: rgb(70, 120, 134);">https://www.humanetech.com/case-study/policy-in-action-strategic-litigation-that-helps-govern-ai</a>&nbsp;</p><p>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">In this episode of the&nbsp;<strong>Regulating AI Podcast</strong>, we speak with&nbsp;<strong>Camille Carlton</strong>,&nbsp;<strong>Director of Policy at the Center for Humane Technology</strong>, a leading voice in&nbsp;<strong>AI regulation, chatbot safety, and public-interest technology</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify">Camille is directly involved in&nbsp;<strong>landmark lawsuits against&nbsp;CharacterAI&nbsp;and OpenAI CEO Sam Altman</strong>, placing her at the forefront of debates around&nbsp;<strong>AI accountability, AI companions, and platform liability</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify">This conversation examines the&nbsp;<strong>mental-health risks of AI chatbots</strong>, the rise of&nbsp;<strong>AI companions</strong>, and why certain conversational systems may pose&nbsp;<strong>public-health concerns</strong>, especially for younger and socially isolated users. Camille also breaks down how&nbsp;<strong>AI governance frameworks</strong>&nbsp;differ across&nbsp;<strong>U.S. states, Congress, and the EU AI Act</strong>, and outlines what&nbsp;<strong>practical, enforceable AI policy</strong>&nbsp;could look like in the years ahead.&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Key Takeaways</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>AI Chatbots as a Public-Health Risk</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">Why AI companions may intensify loneliness, emotional dependency, and psychological harm—raising urgent&nbsp;<strong>mental-health&nbsp;and safety concerns</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Regulating Chatbots vs. 
Foundation Models</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">Why&nbsp;<strong>high-risk conversational AI systems</strong>&nbsp;require different regulatory treatment than&nbsp;<strong>general-purpose LLMs and foundation&nbsp;models</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Global AI Governance Lessons</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">What the&nbsp;<strong>EU AI Act</strong>, U.S. states, and Congress can learn from each other when designing&nbsp;<strong>balanced, risk-based AI regulation</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Transparency, Design &amp; Accountability</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">How a&nbsp;<strong>light-touch but firm AI policy approach</strong>&nbsp;can improve transparency, platform accountability, and data access without slowing innovation.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Why AI Personhood Is a Dangerous Idea</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">How&nbsp;framing AI systems as “persons” undermines liability, weakens accountability, and complicates enforcement.&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Subscribe to Regulating AI</strong>&nbsp;for expert conversations on&nbsp;<strong>AI governance, responsible AI, technology policy, and the future of regulation</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify">#RegulatingAIpodcast #camillecarlton #AIGovernance #ChatbotSafety #Knowledgenetworks&nbsp;&nbsp;</p><p><br></p><p class="ql-align-justify">#AICompanions&nbsp;&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/camille-carlton" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/camille-carlton</a>&nbsp;&nbsp;</p><p><br></p><p><a href="https://www.humanetech.com/" target="_blank" style="color: rgb(70, 120, 
134);">https://www.humanetech.com/</a>&nbsp;&nbsp;</p><p><a href="https://www.humanetech.com/substack" target="_blank" style="color: rgb(70, 120, 134);">https://www.humanetech.com/substack</a>&nbsp;</p><p><br></p><p><a href="https://www.humanetech.com/podcast" target="_blank" style="color: rgb(70, 120, 134);">https://www.humanetech.com/podcast</a>&nbsp;</p><p><a href="https://www.humanetech.com/landing/the-ai-dilemma" target="_blank" style="color: rgb(70, 120, 134);">https://www.humanetech.com/landing/the-ai-dilemma</a>&nbsp;&nbsp;</p><p><a href="https://centerforhumanetechnology.substack.com/p/ai-product-liability" target="_blank" style="color: rgb(70, 120, 134);">https://centerforhumanetechnology.substack.com/p/ai-product-liability</a>&nbsp;</p><p><br></p><p><a href="https://www.humanetech.com/case-study/policy-in-action-strategic-litigation-that-helps-govern-ai" target="_blank" style="color: rgb(70, 120, 134);">https://www.humanetech.com/case-study/policy-in-action-strategic-litigation-that-helps-govern-ai</a>&nbsp;</p><p>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the Regulating AI Podcast, we speak with Camille Carlton, Director of Policy at the Center for Humane Technology, a leading voice in AI regulation, chatbot safety, and public-interest technology. Camille is directly involved in l...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[65534332-ba7b-4d19-9046-3ef4644ff455]]></guid>
  <title><![CDATA[Karin Stephan on Building Emotionally Intelligent Technology | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p><strong>In this episode of&nbsp;<em>RegulatingAI</em>, host Sanjay Puri speaks with Karin Andrea-Stephan —&nbsp;COO &amp; Co-founder&nbsp;of&nbsp;Earkick, an AI-powered mental health platform redefining how technology supports emotional well-being.</strong>&nbsp;</p><p><br></p><p>With a career that spans music, psychology, and digital innovation, Karin shares how&nbsp;she’s&nbsp;building&nbsp;<strong>privacy-first AI tools</strong>&nbsp;designed to make mental health support accessible — especially for teens navigating loneliness and emotional stress.&nbsp;</p><p><br></p><p>Together, they unpack the delicate balance between&nbsp;<strong>AI innovation and human empathy</strong>, the ethics of&nbsp;<strong>AI chatbots for youth</strong>, and what it really takes to design technology that heals instead of harms.&nbsp;</p><p><br></p><p><strong>Key Takeaways:</strong>&nbsp;</p><p><br></p><p>•&nbsp;<em>AI and Empathy:</em>&nbsp;Why emotional intelligence—not algorithms—must guide the future of mental health tech.&nbsp;</p><p><br></p><p>•&nbsp;<em>Teens and Trust:</em>&nbsp;How technology exploits belonging, and what must change to rebuild digital trust.&nbsp;</p><p><br></p><p>•&nbsp;<em>Regulating Responsibly:</em>&nbsp;Why the answer&nbsp;isn’t&nbsp;bans, but thoughtful, transparent policy shaped with youth input.&nbsp;</p><p><br></p><p>•&nbsp;<em>Privacy by Design:</em>&nbsp;How ethical AI can protect privacy without compromising impact.&nbsp;</p><p><br></p><p>•&nbsp;<em>Bridging the Global Mental Health Gap:</em>&nbsp;Why collaboration and compassion matter as much as code.&nbsp;</p><p><br></p><p>If this conversation made you rethink the relationship between&nbsp;<strong>AI and mental health</strong>, hit&nbsp;<strong>like</strong>,&nbsp;<strong>share</strong>, and&nbsp;<strong>subscribe</strong>&nbsp;to&nbsp;<em>RegulatingAI</em>&nbsp;for more insights on building technology that serves humanity.&nbsp;</p><p><br></p><p><strong>Resources 
Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/karinstephan/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/karinstephan/</a>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/31a12094-4b51-48f5-962e-8d51375624f5/35f89482d0.jpg" />
  <pubDate>Wed, 10 Dec 2025 08:56:54 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="34526816" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/31a12094-4b51-48f5-962e-8d51375624f5/episode.mp3" />
  <itunes:title><![CDATA[Karin Stephan on Building Emotionally Intelligent Technology | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>35:57</itunes:duration>
  <itunes:summary><![CDATA[<p><strong>In this episode of&nbsp;<em>RegulatingAI</em>, host Sanjay Puri speaks with Karin Andrea-Stephan —&nbsp;COO &amp; Co-founder&nbsp;of&nbsp;Earkick, an AI-powered mental health platform redefining how technology supports emotional well-being.</strong>&nbsp;</p><p><br></p><p>With a career that spans music, psychology, and digital innovation, Karin shares how&nbsp;she’s&nbsp;building&nbsp;<strong>privacy-first AI tools</strong>&nbsp;designed to make mental health support accessible — especially for teens navigating loneliness and emotional stress.&nbsp;</p><p><br></p><p>Together, they unpack the delicate balance between&nbsp;<strong>AI innovation and human empathy</strong>, the ethics of&nbsp;<strong>AI chatbots for youth</strong>, and what it really takes to design technology that heals instead of harms.&nbsp;</p><p><br></p><p><strong>Key Takeaways:</strong>&nbsp;</p><p><br></p><p>•&nbsp;<em>AI and Empathy:</em>&nbsp;Why emotional intelligence—not algorithms—must guide the future of mental health tech.&nbsp;</p><p><br></p><p>•&nbsp;<em>Teens and Trust:</em>&nbsp;How technology exploits belonging, and what must change to rebuild digital trust.&nbsp;</p><p><br></p><p>•&nbsp;<em>Regulating Responsibly:</em>&nbsp;Why the answer&nbsp;isn’t&nbsp;bans, but thoughtful, transparent policy shaped with youth input.&nbsp;</p><p><br></p><p>•&nbsp;<em>Privacy by Design:</em>&nbsp;How ethical AI can protect privacy without compromising impact.&nbsp;</p><p><br></p><p>•&nbsp;<em>Bridging the Global Mental Health Gap:</em>&nbsp;Why collaboration and compassion matter as much as code.&nbsp;</p><p><br></p><p>If this conversation made you rethink the relationship between&nbsp;<strong>AI and mental health</strong>, hit&nbsp;<strong>like</strong>,&nbsp;<strong>share</strong>, and&nbsp;<strong>subscribe</strong>&nbsp;to&nbsp;<em>RegulatingAI</em>&nbsp;for more insights on building technology that serves humanity.&nbsp;</p><p><br></p><p><strong>Resources 
Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/karinstephan/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/karinstephan/</a>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><strong>In this episode of&nbsp;<em>RegulatingAI</em>, host Sanjay Puri speaks with Karin Andrea-Stephan —&nbsp;COO &amp; Co-founder&nbsp;of&nbsp;Earkick, an AI-powered mental health platform redefining how technology supports emotional well-being.</strong>&nbsp;</p><p><br></p><p>With a career that spans music, psychology, and digital innovation, Karin shares how&nbsp;she’s&nbsp;building&nbsp;<strong>privacy-first AI tools</strong>&nbsp;designed to make mental health support accessible — especially for teens navigating loneliness and emotional stress.&nbsp;</p><p><br></p><p>Together, they unpack the delicate balance between&nbsp;<strong>AI innovation and human empathy</strong>, the ethics of&nbsp;<strong>AI chatbots for youth</strong>, and what it really takes to design technology that heals instead of harms.&nbsp;</p><p><br></p><p><strong>Key Takeaways:</strong>&nbsp;</p><p><br></p><p>•&nbsp;<em>AI and Empathy:</em>&nbsp;Why emotional intelligence—not algorithms—must guide the future of mental health tech.&nbsp;</p><p><br></p><p>•&nbsp;<em>Teens and Trust:</em>&nbsp;How technology exploits belonging, and what must change to rebuild digital trust.&nbsp;</p><p><br></p><p>•&nbsp;<em>Regulating Responsibly:</em>&nbsp;Why the answer&nbsp;isn’t&nbsp;bans, but thoughtful, transparent policy shaped with youth input.&nbsp;</p><p><br></p><p>•&nbsp;<em>Privacy by Design:</em>&nbsp;How ethical AI can protect privacy without compromising impact.&nbsp;</p><p><br></p><p>•&nbsp;<em>Bridging the Global Mental Health Gap:</em>&nbsp;Why collaboration and compassion matter as much as code.&nbsp;</p><p><br></p><p>If this conversation made you rethink the relationship between&nbsp;<strong>AI and mental health</strong>, hit&nbsp;<strong>like</strong>,&nbsp;<strong>share</strong>, and&nbsp;<strong>subscribe</strong>&nbsp;to&nbsp;<em>RegulatingAI</em>&nbsp;for more insights on building technology that serves humanity.&nbsp;</p><p><br></p><p><strong>Resources 
Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/karinstephan/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/karinstephan/</a>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of RegulatingAI, host Sanjay Puri speaks with Karin Andrea-Stephan — COO & Co-founder of Earkick, an AI-powered mental health platform redefining how technology supports emotional well-being. With a career that spans music, psycholo...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[f09ec072-b7e4-4174-af07-ca9aa3cbe3dc]]></guid>
  <title><![CDATA[The Human Side of Machine Intelligence: Jeff McMillan on AI at Morgan Stanley – RegulatingAI Podcast]]></title>
  <description><![CDATA[<p><strong>In this episode of&nbsp;RegulatingAI</strong>, host&nbsp;<strong>Sanjay Puri</strong>&nbsp;sits down with&nbsp;<strong>Jeff McMillan</strong>, Head of Firmwide Artificial Intelligence at&nbsp;<strong>Morgan Stanley</strong>. With over 25 years of experience leading digital transformation and responsible AI adoption in one of the world’s most regulated industries, Jeff shares how large enterprises can harness&nbsp;<strong>generative AI</strong>&nbsp;responsibly, striking the right balance between&nbsp;<strong>innovation, governance, and ethics</strong>.&nbsp;</p><p><br></p><p><strong>Key Takeaways:</strong>&nbsp;</p><p><br></p><ul><li><strong>AI Governance:</strong>&nbsp;Why collaboration across business, legal, and compliance is the foundation of effective AI oversight.&nbsp;</li><li><strong>Human-in-the-Loop:</strong>&nbsp;Morgan Stanley’s core principle—keeping humans accountable and central in every AI decision.&nbsp;</li><li><strong>Education First:</strong>&nbsp;Jeff’s golden rule—spend&nbsp;<em>90% of your AI budget training people</em>&nbsp;before building tech.&nbsp;</li><li><strong>AI as a Risk Mitigator:</strong>&nbsp;How AI can&nbsp;actually strengthen&nbsp;compliance and risk management when designed right.&nbsp;</li><li><strong>Culture Over Code:</strong>&nbsp;Why successful AI transformation is less about algorithms and more about mindset, structure, and leadership.&nbsp;</li></ul><p>If you enjoyed this conversation,&nbsp;don’t&nbsp;forget to&nbsp;<strong>like, share, and subscribe</strong>&nbsp;to&nbsp;<em>RegulatingAI</em>&nbsp;for more insights from global leaders shaping the future of responsible AI.&nbsp;</p><p><br></p><p>#RegulatingAI #SanjayPuri #MorganStanley #JeffMcMillan #AIGovernance #AILeadership #EnterpriseAI&nbsp;</p><p><br></p><p>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/jeff-mcmillan-bb8b0a5/" target="_blank" style="color: 
rgb(70, 120, 134);">https://www.linkedin.com/in/jeff-mcmillan-bb8b0a5/</a>&nbsp;&nbsp;</p><p><strong>Recent Podcast</strong>&nbsp;</p><p><br></p><p><a href="https://podcasts.apple.com/fr/podcast/jeff-mcmillan-how-morgan-stanley-deploys-ai-at-scale/id1819622546?i=1000714786849" target="_blank" style="color: rgb(70, 120, 134);">https://podcasts.apple.com/fr/podcast/jeff-mcmillan-how-morgan-stanley-deploys-ai-at-scale/id1819622546?i=1000714786849</a>&nbsp;&nbsp;</p><p>Morgan Stanley’s external-facing website, highlighting some of the firm’s work on AI&nbsp;&nbsp;</p><p><a href="https://www.morganstanley.com/about-us/technology/artificial-intelligence-firmwide-team" target="_blank" style="color: rgb(70, 120, 134);">https://www.morganstanley.com/about-us/technology/artificial-intelligence-firmwide-team</a>&nbsp;</p><p>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/4d4f4b42-c453-4dc4-8ce2-eb7baf43dc26/c3861fe8ef.jpg" />
  <pubDate>Fri, 05 Dec 2025 05:46:41 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="50266323" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/4d4f4b42-c453-4dc4-8ce2-eb7baf43dc26/episode.mp3" />
  <itunes:title><![CDATA[The Human Side of Machine Intelligence: Jeff McMillan on AI at Morgan Stanley – RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>52:21</itunes:duration>
  <itunes:summary><![CDATA[<p><strong>In this episode of&nbsp;RegulatingAI</strong>, host&nbsp;<strong>Sanjay Puri</strong>&nbsp;sits down with&nbsp;<strong>Jeff McMillan</strong>, Head of Firmwide Artificial Intelligence at&nbsp;<strong>Morgan Stanley</strong>. With over 25 years of experience leading digital transformation and responsible AI adoption in one of the world’s most regulated industries, Jeff shares how large enterprises can harness&nbsp;<strong>generative AI</strong>&nbsp;responsibly, striking the right balance between&nbsp;<strong>innovation, governance, and ethics</strong>.&nbsp;</p><p><br></p><p><strong>Key Takeaways:</strong>&nbsp;</p><p><br></p><ul><li><strong>AI Governance:</strong>&nbsp;Why collaboration across business, legal, and compliance is the foundation of effective AI oversight.&nbsp;</li><li><strong>Human-in-the-Loop:</strong>&nbsp;Morgan Stanley’s core principle—keeping humans accountable and central in every AI decision.&nbsp;</li><li><strong>Education First:</strong>&nbsp;Jeff’s golden rule—spend&nbsp;<em>90% of your AI budget training people</em>&nbsp;before building tech.&nbsp;</li><li><strong>AI as a Risk Mitigator:</strong>&nbsp;How AI can&nbsp;actually strengthen&nbsp;compliance and risk management when designed right.&nbsp;</li><li><strong>Culture Over Code:</strong>&nbsp;Why successful AI transformation is less about algorithms and more about mindset, structure, and leadership.&nbsp;</li></ul><p>If you enjoyed this conversation,&nbsp;don’t&nbsp;forget to&nbsp;<strong>like, share, and subscribe</strong>&nbsp;to&nbsp;<em>RegulatingAI</em>&nbsp;for more insights from global leaders shaping the future of responsible AI.&nbsp;</p><p><br></p><p>#RegulatingAI #SanjayPuri #MorganStanley #JeffMcMillan #AIGovernance #AILeadership #EnterpriseAI&nbsp;</p><p><br></p><p>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/jeff-mcmillan-bb8b0a5/" target="_blank" style="color: 
rgb(70, 120, 134);">https://www.linkedin.com/in/jeff-mcmillan-bb8b0a5/</a>&nbsp;&nbsp;</p><p><strong>Recent Podcast</strong>&nbsp;</p><p><br></p><p><a href="https://podcasts.apple.com/fr/podcast/jeff-mcmillan-how-morgan-stanley-deploys-ai-at-scale/id1819622546?i=1000714786849" target="_blank" style="color: rgb(70, 120, 134);">https://podcasts.apple.com/fr/podcast/jeff-mcmillan-how-morgan-stanley-deploys-ai-at-scale/id1819622546?i=1000714786849</a>&nbsp;&nbsp;</p><p>Morgan Stanley’s external-facing website, highlighting some of the firm’s work on AI&nbsp;&nbsp;</p><p><a href="https://www.morganstanley.com/about-us/technology/artificial-intelligence-firmwide-team" target="_blank" style="color: rgb(70, 120, 134);">https://www.morganstanley.com/about-us/technology/artificial-intelligence-firmwide-team</a>&nbsp;</p><p>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><strong>In this episode of&nbsp;RegulatingAI</strong>, host&nbsp;<strong>Sanjay Puri</strong>&nbsp;sits down with&nbsp;<strong>Jeff McMillan</strong>, Head of Firmwide Artificial Intelligence at&nbsp;<strong>Morgan Stanley</strong>. With over 25 years of experience leading digital transformation and responsible AI adoption in one of the world’s most regulated industries, Jeff shares how large enterprises can harness&nbsp;<strong>generative AI</strong>&nbsp;responsibly, striking the right balance between&nbsp;<strong>innovation, governance, and ethics</strong>.&nbsp;</p><p><br></p><p><strong>Key Takeaways:</strong>&nbsp;</p><p><br></p><ul><li><strong>AI Governance:</strong>&nbsp;Why collaboration across business, legal, and compliance is the foundation of effective AI oversight.&nbsp;</li><li><strong>Human-in-the-Loop:</strong>&nbsp;Morgan Stanley’s core principle—keeping humans accountable and central in every AI decision.&nbsp;</li><li><strong>Education First:</strong>&nbsp;Jeff’s golden rule—spend&nbsp;<em>90% of your AI budget training people</em>&nbsp;before building tech.&nbsp;</li><li><strong>AI as a Risk Mitigator:</strong>&nbsp;How AI can&nbsp;actually strengthen&nbsp;compliance and risk management when designed right.&nbsp;</li><li><strong>Culture Over Code:</strong>&nbsp;Why successful AI transformation is less about algorithms and more about mindset, structure, and leadership.&nbsp;</li></ul><p>If you enjoyed this conversation,&nbsp;don’t&nbsp;forget to&nbsp;<strong>like, share, and subscribe</strong>&nbsp;to&nbsp;<em>RegulatingAI</em>&nbsp;for more insights from global leaders shaping the future of responsible AI.&nbsp;</p><p><br></p><p>#RegulatingAI #SanjayPuri #MorganStanley #JeffMcMillan #AIGovernance #AILeadership #EnterpriseAI&nbsp;</p><p><br></p><p>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/jeff-mcmillan-bb8b0a5/" target="_blank" style="color: 
rgb(70, 120, 134);">https://www.linkedin.com/in/jeff-mcmillan-bb8b0a5/</a>&nbsp;&nbsp;</p><p><strong>Recent Podcast</strong>&nbsp;</p><p><br></p><p><a href="https://podcasts.apple.com/fr/podcast/jeff-mcmillan-how-morgan-stanley-deploys-ai-at-scale/id1819622546?i=1000714786849" target="_blank" style="color: rgb(70, 120, 134);">https://podcasts.apple.com/fr/podcast/jeff-mcmillan-how-morgan-stanley-deploys-ai-at-scale/id1819622546?i=1000714786849</a>&nbsp;&nbsp;</p><p>Morgan Stanley’s external-facing website, highlighting some of the firm’s work on AI&nbsp;&nbsp;</p><p><a href="https://www.morganstanley.com/about-us/technology/artificial-intelligence-firmwide-team" target="_blank" style="color: rgb(70, 120, 134);">https://www.morganstanley.com/about-us/technology/artificial-intelligence-firmwide-team</a>&nbsp;</p><p>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of RegulatingAI, host Sanjay Puri sits down with Jeff McMillan, Head of Firmwide Artificial Intelligence at Morgan Stanley. With over 25 years of experience leading digital transformation and responsible AI adoption in one of the wo...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[12062448-4e2a-4bdf-a1ec-1611d6599e72]]></guid>
  <title><![CDATA[Trump’s AI Executive Order vs California: Senator Scott Wiener Responds | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p>In this episode of the&nbsp;<strong>RegulatingAI&nbsp;Podcast</strong>, we host&nbsp;<strong>California State Senator Scott Wiener</strong>, one of the most influential policymakers shaping the future of&nbsp;<strong>AI regulation, AI safety, and transparency standards in the United States</strong>.&nbsp;</p><p><br></p><p>As President&nbsp;<strong>Donald Trump’s new AI executive order</strong>&nbsp;pushes for federal control over AI regulation, Senator Wiener explains why states like&nbsp;<strong>California must&nbsp;retain&nbsp;the power to regulate artificial intelligence</strong>&nbsp;— and how California’s laws could influence global AI governance.&nbsp;</p><p><br></p><p>Senator Wiener is the author of:&nbsp;</p><p><br></p><p>&nbsp;•&nbsp;<strong>SB 1047</strong>&nbsp;– California’s proposed liability bill for high-risk AI systems&nbsp;</p><p><br></p><p>&nbsp;•&nbsp;<strong>SB 53</strong>&nbsp;– California’s new AI transparency law, now in effect&nbsp;</p><p><br></p><p>We dive deep into:&nbsp;</p><p><br></p><p>&nbsp;• The battle between&nbsp;<strong>federal vs. state AI regulation</strong>&nbsp;</p><p><br></p><p>&nbsp;• Why California&nbsp;remains&nbsp;the frontline of&nbsp;<strong>AI governance</strong>&nbsp;</p><p><br></p><p>&nbsp;• The real impact of&nbsp;<strong>Trump’s AI executive order</strong>&nbsp;</p><p><br></p><p>&nbsp;• Growing risks of&nbsp;<strong>AI-driven job displacement</strong>&nbsp;</p><p><br></p><p>&nbsp;• How governments can balance&nbsp;<strong>innovation with public safety</strong>&nbsp;</p><p><br></p><p>&nbsp;• The future of responsible and accountable AI development&nbsp;</p><p><br></p><p>&nbsp;</p><p><br></p><p><strong>🔑 KEY TAKEAWAYS</strong>&nbsp;</p><p><br></p><p><strong>1. California’s Policy Power</strong>&nbsp;</p><p><br></p><p>&nbsp;California’s tech dominance allows it to shape national and global AI standards even when Congress stalls.&nbsp;</p><p><br></p><p><strong>2. SB 1047 vs. 
SB 53 Explained</strong>&nbsp;</p><p><br></p><p>&nbsp;SB 1047 proposed legal liability for dangerous AI systems, while SB 53 — now law — requires AI companies to publicly&nbsp;disclose&nbsp;safety and risk practices.&nbsp;</p><p><br></p><p><strong>3. Why Transparency Won</strong>&nbsp;</p><p><br></p><p>&nbsp;After SB 1047 was vetoed, California shifted toward transparency as a regulatory first step through SB 53.&nbsp;</p><p><br></p><p><strong>4. AI Job Disruption Is Accelerating</strong>&nbsp;</p><p><br></p><p>&nbsp;Senator Wiener warns that workforce displacement from AI is happening faster than expected.&nbsp;</p><p><br></p><p><strong>5. A Realistic Middle Path</strong>&nbsp;</p><p><br></p><p>&nbsp;He advocates for smart AI guardrails — avoiding both overregulation and total deregulation.&nbsp;</p><p><br></p><p>If you found this conversation valuable,&nbsp;don’t&nbsp;forget to&nbsp;<strong>like, subscribe, and share</strong>&nbsp;to stay updated on global conversations shaping the future of AI governance.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/company/ascet-center-of-excellence" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/company/ascet-center-of-excellence</a>&nbsp;&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/james-h-dickerson-phd" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/james-h-dickerson-phd</a>&nbsp;&nbsp;&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/d59d1ae5-7361-402c-a0d0-86a6288134b8/7cb85867f2.jpg" />
  <pubDate>Thu, 27 Nov 2025 16:52:44 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="27146911" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/d59d1ae5-7361-402c-a0d0-86a6288134b8/episode.mp3" />
  <itunes:title><![CDATA[Trump’s AI Executive Order vs California: Senator Scott Wiener Responds | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>28:16</itunes:duration>
  <itunes:summary><![CDATA[<p>In this episode of the&nbsp;<strong>RegulatingAI&nbsp;Podcast</strong>, we host&nbsp;<strong>California State Senator Scott Wiener</strong>, one of the most influential policymakers shaping the future of&nbsp;<strong>AI regulation, AI safety, and transparency standards in the United States</strong>.&nbsp;</p><p><br></p><p>As President&nbsp;<strong>Donald Trump’s new AI executive order</strong>&nbsp;pushes for federal control over AI regulation, Senator Wiener explains why states like&nbsp;<strong>California must&nbsp;retain&nbsp;the power to regulate artificial intelligence</strong>&nbsp;— and how California’s laws could influence global AI governance.&nbsp;</p><p><br></p><p>Senator Wiener is the author of:&nbsp;</p><p><br></p><p>&nbsp;•&nbsp;<strong>SB 1047</strong>&nbsp;– California’s proposed liability bill for high-risk AI systems&nbsp;</p><p><br></p><p>&nbsp;•&nbsp;<strong>SB 53</strong>&nbsp;– California’s new AI transparency law, now in effect&nbsp;</p><p><br></p><p>We dive deep into:&nbsp;</p><p><br></p><p>&nbsp;• The battle between&nbsp;<strong>federal vs. state AI regulation</strong>&nbsp;</p><p><br></p><p>&nbsp;• Why California&nbsp;remains&nbsp;the frontline of&nbsp;<strong>AI governance</strong>&nbsp;</p><p><br></p><p>&nbsp;• The real impact of&nbsp;<strong>Trump’s AI executive order</strong>&nbsp;</p><p><br></p><p>&nbsp;• Growing risks of&nbsp;<strong>AI-driven job displacement</strong>&nbsp;</p><p><br></p><p>&nbsp;• How governments can balance&nbsp;<strong>innovation with public safety</strong>&nbsp;</p><p><br></p><p>&nbsp;• The future of responsible and accountable AI development&nbsp;</p><p><br></p><p>&nbsp;</p><p><br></p><p><strong>🔑 KEY TAKEAWAYS</strong>&nbsp;</p><p><br></p><p><strong>1. California’s Policy Power</strong>&nbsp;</p><p><br></p><p>&nbsp;California’s tech dominance allows it to shape national and global AI standards even when Congress stalls.&nbsp;</p><p><br></p><p><strong>2. SB 1047 vs. 
SB 53 Explained</strong>&nbsp;</p><p><br></p><p>&nbsp;SB 1047 proposed legal liability for dangerous AI systems, while SB 53 — now law — requires AI companies to publicly&nbsp;disclose&nbsp;safety and risk practices.&nbsp;</p><p><br></p><p><strong>3. Why Transparency Won</strong>&nbsp;</p><p><br></p><p>&nbsp;After SB 1047 was vetoed, California shifted toward transparency as a regulatory first step through SB 53.&nbsp;</p><p><br></p><p><strong>4. AI Job Disruption Is Accelerating</strong>&nbsp;</p><p><br></p><p>&nbsp;Senator Wiener warns that workforce displacement from AI is happening faster than expected.&nbsp;</p><p><br></p><p><strong>5. A Realistic Middle Path</strong>&nbsp;</p><p><br></p><p>&nbsp;He advocates for smart AI guardrails — avoiding both overregulation and total deregulation.&nbsp;</p><p><br></p><p>If you found this conversation valuable,&nbsp;don’t&nbsp;forget to&nbsp;<strong>like, subscribe, and share</strong>&nbsp;to stay updated on global conversations shaping the future of AI governance.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/company/ascet-center-of-excellence" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/company/ascet-center-of-excellence</a>&nbsp;&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/james-h-dickerson-phd" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/james-h-dickerson-phd</a>&nbsp;&nbsp;&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><br></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this episode of the&nbsp;<strong>RegulatingAI&nbsp;Podcast</strong>, we host&nbsp;<strong>California State Senator Scott Wiener</strong>, one of the most influential policymakers shaping the future of&nbsp;<strong>AI regulation, AI safety, and transparency standards in the United States</strong>.&nbsp;</p><p><br></p><p>As President&nbsp;<strong>Donald Trump’s new AI executive order</strong>&nbsp;pushes for federal control over AI regulation, Senator Wiener explains why states like&nbsp;<strong>California must&nbsp;retain&nbsp;the power to regulate artificial intelligence</strong>&nbsp;— and how California’s laws could influence global AI governance.&nbsp;</p><p><br></p><p>Senator Wiener is the author of:&nbsp;</p><p><br></p><p>&nbsp;•&nbsp;<strong>SB 1047</strong>&nbsp;– California’s proposed liability bill for high-risk AI systems&nbsp;</p><p><br></p><p>&nbsp;•&nbsp;<strong>SB 53</strong>&nbsp;– California’s new AI transparency law, now in effect&nbsp;</p><p><br></p><p>We dive deep into:&nbsp;</p><p><br></p><p>&nbsp;• The battle between&nbsp;<strong>federal vs. state AI regulation</strong>&nbsp;</p><p><br></p><p>&nbsp;• Why California&nbsp;remains&nbsp;the frontline of&nbsp;<strong>AI governance</strong>&nbsp;</p><p><br></p><p>&nbsp;• The real impact of&nbsp;<strong>Trump’s AI executive order</strong>&nbsp;</p><p><br></p><p>&nbsp;• Growing risks of&nbsp;<strong>AI-driven job displacement</strong>&nbsp;</p><p><br></p><p>&nbsp;• How governments can balance&nbsp;<strong>innovation with public safety</strong>&nbsp;</p><p><br></p><p>&nbsp;• The future of responsible and accountable AI development&nbsp;</p><p><br></p><p>&nbsp;</p><p><br></p><p><strong>🔑 KEY TAKEAWAYS</strong>&nbsp;</p><p><br></p><p><strong>1. California’s Policy Power</strong>&nbsp;</p><p><br></p><p>&nbsp;California’s tech dominance allows it to shape national and global AI standards even when Congress stalls.&nbsp;</p><p><br></p><p><strong>2. SB 1047 vs. 
SB 53 Explained</strong>&nbsp;</p><p><br></p><p>&nbsp;SB 1047 proposed legal liability for dangerous AI systems, while SB 53 — now law — requires AI companies to publicly&nbsp;disclose&nbsp;safety and risk practices.&nbsp;</p><p><br></p><p><strong>3. Why Transparency Won</strong>&nbsp;</p><p><br></p><p>&nbsp;After SB 1047 was vetoed, California shifted toward transparency as a regulatory first step through SB 53.&nbsp;</p><p><br></p><p><strong>4. AI Job Disruption Is Accelerating</strong>&nbsp;</p><p><br></p><p>&nbsp;Senator Wiener warns that workforce displacement from AI is happening faster than expected.&nbsp;</p><p><br></p><p><strong>5. A Realistic Middle Path</strong>&nbsp;</p><p><br></p><p>&nbsp;He advocates for smart AI guardrails — avoiding both overregulation and total deregulation.&nbsp;</p><p><br></p><p>If you found this conversation valuable,&nbsp;don’t&nbsp;forget to&nbsp;<strong>like, subscribe, and share</strong>&nbsp;to stay updated on global conversations shaping the future of AI governance.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/company/ascet-center-of-excellence" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/company/ascet-center-of-excellence</a>&nbsp;&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/james-h-dickerson-phd" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/james-h-dickerson-phd</a>&nbsp;&nbsp;&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><br></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the RegulatingAI Podcast, we host California State Senator Scott Wiener, one of the most influential policymakers shaping the future of AI regulation, AI safety, and transparency standards in the United States. As President Donal...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[6941867b-4eda-432a-bd25-97bbb7c409dc]]></guid>
  <title><![CDATA[#141 Inside AI Policy with Congresswoman Sarah McBride | RegulatingAI Podcast with Sanjay Puri ]]></title>
  <description><![CDATA[<p class="ql-align-justify">In this episode of&nbsp;<strong>RegulatingAI</strong>, host&nbsp;<strong>Sanjay Puri</strong>&nbsp;sits down with&nbsp;<strong>Congresswoman Sarah McBride</strong>&nbsp;of Delaware — a member of the U.S. Congressional AI Caucus — to talk about how America can&nbsp;<em>lead responsibly</em>&nbsp;in the global AI race.&nbsp;</p><p><br></p><p class="ql-align-justify">From finding the right balance between innovation and regulation to making sure AI truly benefits workers and small businesses, Rep. McBride shares her&nbsp;<strong>human-centered&nbsp;vision</strong>&nbsp;for how AI can advance democracy, fairness, and opportunity for everyone.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Here are 5 key takeaways from the conversation:</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">💡&nbsp;<strong>Finding the “Goldilocks” Zone:</strong>&nbsp;How to strike that just-right balance where AI regulation protects people&nbsp;<em>without</em>&nbsp;holding back innovation.&nbsp;</p><p><br></p><p class="ql-align-justify">🏛️&nbsp;<strong>Federal vs. State Regulation:</strong>&nbsp;Why McBride believes the U.S. needs a unified national AI framework — but one that still values state leadership and flexibility.&nbsp;</p><p><br></p><p class="ql-align-justify">👩‍💻&nbsp;<strong>AI and the Workforce:</strong>&nbsp;What policymakers can do to make sure AI&nbsp;<em>augments</em>&nbsp;human talent rather than replacing it.&nbsp;</p><p><br></p><p class="ql-align-justify">🌎&nbsp;<strong>Democracy vs. 
Authoritarianism:</strong>&nbsp;The U.S.’s role in leading with values and shaping AI that reflects openness, ethics, and democracy.&nbsp;</p><p><br></p><p class="ql-align-justify">🔔&nbsp;<strong>Delaware’s Legacy of Innovation:</strong>&nbsp;How Delaware’s collaborative approach to growth can be a model for responsible tech leadership.&nbsp;</p><p><br></p><p class="ql-align-justify">If you enjoyed this episode,&nbsp;don’t&nbsp;forget to&nbsp;<strong>like, comment, share, and subscribe</strong>&nbsp;to&nbsp;<em>RegulatingAI</em>&nbsp;for more conversations with global policymakers shaping the future of artificial intelligence.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify">mcbride.house.gov&nbsp;&nbsp;</p><p><a href="https://mcbride.house.gov/about" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://mcbride.house.gov/about</strong></a>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/b00bf2c1-70e7-4516-ba0f-dd6dae05f8a5/ea9053dce3.jpg" />
  <pubDate>Thu, 20 Nov 2025 11:14:00 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="24079508" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/b00bf2c1-70e7-4516-ba0f-dd6dae05f8a5/episode.mp3" />
  <itunes:title><![CDATA[#141 Inside AI Policy with Congresswoman Sarah McBride | RegulatingAI Podcast with Sanjay Puri ]]></itunes:title>
  <itunes:duration>25:04</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">In this episode of&nbsp;<strong>RegulatingAI</strong>, host&nbsp;<strong>Sanjay Puri</strong>&nbsp;sits down with&nbsp;<strong>Congresswoman Sarah McBride</strong>&nbsp;of Delaware — a member of the U.S. Congressional AI Caucus — to talk about how America can&nbsp;<em>lead responsibly</em>&nbsp;in the global AI race.&nbsp;</p><p><br></p><p class="ql-align-justify">From finding the right balance between innovation and regulation to making sure AI truly benefits workers and small businesses, Rep. McBride shares her&nbsp;<strong>human-centered&nbsp;vision</strong>&nbsp;for how AI can advance democracy, fairness, and opportunity for everyone.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Here are 5 key takeaways from the conversation:</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">💡&nbsp;<strong>Finding the “Goldilocks” Zone:</strong>&nbsp;How to strike that just-right balance where AI regulation protects people&nbsp;<em>without</em>&nbsp;holding back innovation.&nbsp;</p><p><br></p><p class="ql-align-justify">🏛️&nbsp;<strong>Federal vs. State Regulation:</strong>&nbsp;Why McBride believes the U.S. needs a unified national AI framework — but one that still values state leadership and flexibility.&nbsp;</p><p><br></p><p class="ql-align-justify">👩‍💻&nbsp;<strong>AI and the Workforce:</strong>&nbsp;What policymakers can do to make sure AI&nbsp;<em>augments</em>&nbsp;human talent rather than replacing it.&nbsp;</p><p><br></p><p class="ql-align-justify">🌎&nbsp;<strong>Democracy vs. 
Authoritarianism:</strong>&nbsp;The U.S.’s role in leading with values and shaping AI that reflects openness, ethics, and democracy.&nbsp;</p><p><br></p><p class="ql-align-justify">🔔&nbsp;<strong>Delaware’s Legacy of Innovation:</strong>&nbsp;How Delaware’s collaborative approach to growth can be a model for responsible tech leadership.&nbsp;</p><p><br></p><p class="ql-align-justify">If you enjoyed this episode,&nbsp;don’t&nbsp;forget to&nbsp;<strong>like, comment, share, and subscribe</strong>&nbsp;to&nbsp;<em>RegulatingAI</em>&nbsp;for more conversations with global policymakers shaping the future of artificial intelligence.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify">mcbride.house.gov&nbsp;&nbsp;</p><p><a href="https://mcbride.house.gov/about" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://mcbride.house.gov/about</strong></a>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">In this episode of&nbsp;<strong>RegulatingAI</strong>, host&nbsp;<strong>Sanjay Puri</strong>&nbsp;sits down with&nbsp;<strong>Congresswoman Sarah McBride</strong>&nbsp;of Delaware — a member of the U.S. Congressional AI Caucus — to talk about how America can&nbsp;<em>lead responsibly</em>&nbsp;in the global AI race.&nbsp;</p><p><br></p><p class="ql-align-justify">From finding the right balance between innovation and regulation to making sure AI truly benefits workers and small businesses, Rep. McBride shares her&nbsp;<strong>human-centered&nbsp;vision</strong>&nbsp;for how AI can advance democracy, fairness, and opportunity for everyone.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Here are 5 key takeaways from the conversation:</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">💡&nbsp;<strong>Finding the “Goldilocks” Zone:</strong>&nbsp;How to strike that just-right balance where AI regulation protects people&nbsp;<em>without</em>&nbsp;holding back innovation.&nbsp;</p><p><br></p><p class="ql-align-justify">🏛️&nbsp;<strong>Federal vs. State Regulation:</strong>&nbsp;Why McBride believes the U.S. needs a unified national AI framework — but one that still values state leadership and flexibility.&nbsp;</p><p><br></p><p class="ql-align-justify">👩‍💻&nbsp;<strong>AI and the Workforce:</strong>&nbsp;What policymakers can do to make sure AI&nbsp;<em>augments</em>&nbsp;human talent rather than replacing it.&nbsp;</p><p><br></p><p class="ql-align-justify">🌎&nbsp;<strong>Democracy vs. 
Authoritarianism:</strong>&nbsp;The U.S.’s role in leading with values and shaping AI that reflects openness, ethics, and democracy.&nbsp;</p><p><br></p><p class="ql-align-justify">🔔&nbsp;<strong>Delaware’s Legacy of Innovation:</strong>&nbsp;How Delaware’s collaborative approach to growth can be a model for responsible tech leadership.&nbsp;</p><p><br></p><p class="ql-align-justify">If you enjoyed this episode,&nbsp;don’t&nbsp;forget to&nbsp;<strong>like, comment, share, and subscribe</strong>&nbsp;to&nbsp;<em>RegulatingAI</em>&nbsp;for more conversations with global policymakers shaping the future of artificial intelligence.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify">mcbride.house.gov&nbsp;&nbsp;</p><p><a href="https://mcbride.house.gov/about" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://mcbride.house.gov/about</strong></a>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of RegulatingAI, host Sanjay Puri sits down with Congresswoman Sarah McBride of Delaware — a member of the U.S. Congressional AI Caucus — to talk about how America can lead responsibly in the global AI race. From finding the right b...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[ad019e81-1652-46ce-8ccc-2dad3659810b]]></guid>
  <title><![CDATA[Small Nations & Big AI Ideas]]></title>
  <description><![CDATA[<p>Armenia is quietly becoming one of the world's most interesting AI hubs—and you probably haven't heard about it yet.</p><p><br></p><p>In this episode, I sit down with Armenia's Minister of Finance to discuss:</p><p><br></p><p>~ Why Nvidia is building a massive AI factory in Armenia</p><p>~ How a country of 3 million is attracting Synopsys, Yandex, and other major tech companies</p><p>~ The secret advantage: abundant energy + Soviet-era engineering talent</p><p>~ Is the AI investment boom a bubble or the real deal?</p><p>~ How AI is already being used in tax collection and government services</p><p>~ The peace agreement with Azerbaijan and what it means for tech investment</p><p>~ Why the "Middle Corridor" could make Armenia the next tech destination</p><p><br></p><p>The Minister doesn't think AI investment is a bubble—he thinks we're just getting started. He shares honest insights about job displacement, efficiency gains, and why human connection still matters in an AI-driven world.</p><p><br></p><p>About the Guest:</p><p>Armenia's Minister of Finance is an economist who rose from bank accounting to leading the nation's fiscal policy. He oversees Armenia's economic transformation during a pivotal era of digital ambitions and AI development.</p><p><br></p><p>🎙️ Subscribe for conversations with global leaders at the intersection of AI, policy, and innovation</p><p><br></p><p>💬 Leave a comment: What surprised you most about Armenia's AI strategy?</p><p>🔔 Hit the bell to catch our next episode</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/37baec27-436d-4855-8c78-192e25a220db/3f28f48c31.jpg" />
  <pubDate>Fri, 07 Nov 2025 07:09:06 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="14945428" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/37baec27-436d-4855-8c78-192e25a220db/episode.mp3" />
  <itunes:title><![CDATA[Small Nations & Big AI Ideas]]></itunes:title>
  <itunes:duration>15:34</itunes:duration>
  <itunes:summary><![CDATA[<p>Armenia is quietly becoming one of the world's most interesting AI hubs—and you probably haven't heard about it yet.</p><p><br></p><p>In this episode, I sit down with Armenia's Minister of Finance to discuss:</p><p><br></p><p>~ Why Nvidia is building a massive AI factory in Armenia</p><p>~ How a country of 3 million is attracting Synopsys, Yandex, and other major tech companies</p><p>~ The secret advantage: abundant energy + Soviet-era engineering talent</p><p>~ Is the AI investment boom a bubble or the real deal?</p><p>~ How AI is already being used in tax collection and government services</p><p>~ The peace agreement with Azerbaijan and what it means for tech investment</p><p>~ Why the "Middle Corridor" could make Armenia the next tech destination</p><p><br></p><p>The Minister doesn't think AI investment is a bubble—he thinks we're just getting started. He shares honest insights about job displacement, efficiency gains, and why human connection still matters in an AI-driven world.</p><p><br></p><p>About the Guest:</p><p>Armenia's Minister of Finance is an economist who rose from bank accounting to leading the nation's fiscal policy. He oversees Armenia's economic transformation during a pivotal era of digital ambitions and AI development.</p><p><br></p><p>🎙️ Subscribe for conversations with global leaders at the intersection of AI, policy, and innovation</p><p><br></p><p>💬 Leave a comment: What surprised you most about Armenia's AI strategy?</p><p>🔔 Hit the bell to catch our next episode</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Armenia is quietly becoming one of the world's most interesting AI hubs—and you probably haven't heard about it yet.</p><p><br></p><p>In this episode, I sit down with Armenia's Minister of Finance to discuss:</p><p><br></p><p>~ Why Nvidia is building a massive AI factory in Armenia</p><p>~ How a country of 3 million is attracting Synopsys, Yandex, and other major tech companies</p><p>~ The secret advantage: abundant energy + Soviet-era engineering talent</p><p>~ Is the AI investment boom a bubble or the real deal?</p><p>~ How AI is already being used in tax collection and government services</p><p>~ The peace agreement with Azerbaijan and what it means for tech investment</p><p>~ Why the "Middle Corridor" could make Armenia the next tech destination</p><p><br></p><p>The Minister doesn't think AI investment is a bubble—he thinks we're just getting started. He shares honest insights about job displacement, efficiency gains, and why human connection still matters in an AI-driven world.</p><p><br></p><p>About the Guest:</p><p>Armenia's Minister of Finance is an economist who rose from bank accounting to leading the nation's fiscal policy. He oversees Armenia's economic transformation during a pivotal era of digital ambitions and AI development.</p><p><br></p><p>🎙️ Subscribe for conversations with global leaders at the intersection of AI, policy, and innovation</p><p><br></p><p>💬 Leave a comment: What surprised you most about Armenia's AI strategy?</p><p>🔔 Hit the bell to catch our next episode</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Armenia is quietly becoming one of the world's most interesting AI hubs—and you probably haven't heard about it yet.In this episode, I sit down with Armenia's Minister of Finance to discuss:~ Why Nvidia is building a massive AI factory in Armenia~ ...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[5a50f540-02d1-4868-81d3-3bffc523a306]]></guid>
  <title><![CDATA[Why the World Needs a UN AI Agency with Dr. Mark Robinson | RegulatingAI Podcast ]]></title>
  <description><![CDATA[<p>In this episode of <em>RegulatingAI</em>, host Sanjay Puri welcomes Dr. Mark Robinson — <strong>Senior Science Diplomacy Advisor, Oxford Martin AI Governance Initiative, University of Oxford</strong>. Drawing on decades of experience leading projects like ITER and the European Southern Observatory, Dr. Robinson shares his bold vision: establishing an international AI agency under the United Nations. Together, we explore the urgent need for global AI governance, parallels with past scientific collaborations, and the challenges of balancing innovation, safety, and sovereignty.&nbsp;</p><p><br></p><p><strong>5 Key Takeaways</strong>&nbsp;</p><p><br></p><ul><li>Why massive global science collaborations like ITER offer lessons for AI governance.&nbsp;</li><li>The case for a UN-backed International AI Agency to coordinate regulation.&nbsp;</li><li>How U.S.–China cooperation could unlock a global framework for AI oversight.&nbsp;</li><li>The risks of leaving governance solely to fragmented national initiatives and big tech.&nbsp;</li><li>Why timing, leadership, and inclusivity (including the Global South) are critical to shaping AI’s future.&nbsp;</li></ul><p>&nbsp;</p><p>If you found this conversation insightful, don’t forget to like, comment, and share — and subscribe to <em>RegulatingAI</em> for more global perspectives on building a trustworthy AI future.&nbsp;</p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://iaia4life.org/" target="_blank" style="color: rgb(70, 120, 134);">https://iaia4life.org/</a>&nbsp;&nbsp;</p><p><a href="https://www.linkedin.com/in/mark-robinson-3594132b/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/mark-robinson-3594132b/</a>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/8a76f75a-51fe-4d16-b108-fe462bb49b2f/7dd6f6bc7e.jpg" />
  <pubDate>Thu, 30 Oct 2025 14:09:16 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="38270058" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/8a76f75a-51fe-4d16-b108-fe462bb49b2f/episode.mp3" />
  <itunes:title><![CDATA[Why the World Needs a UN AI Agency with Dr. Mark Robinson | RegulatingAI Podcast ]]></itunes:title>
  <itunes:duration>39:51</itunes:duration>
  <itunes:summary><![CDATA[<p>In this episode of <em>RegulatingAI</em>, host Sanjay Puri welcomes Dr. Mark Robinson — <strong>Senior Science Diplomacy Advisor, Oxford Martin AI Governance Initiative, University of Oxford</strong>. Drawing on decades of experience leading projects like ITER and the European Southern Observatory, Dr. Robinson shares his bold vision: establishing an international AI agency under the United Nations. Together, we explore the urgent need for global AI governance, parallels with past scientific collaborations, and the challenges of balancing innovation, safety, and sovereignty.&nbsp;</p><p><br></p><p><strong>5 Key Takeaways</strong>&nbsp;</p><p><br></p><ul><li>Why massive global science collaborations like ITER offer lessons for AI governance.&nbsp;</li><li>The case for a UN-backed International AI Agency to coordinate regulation.&nbsp;</li><li>How U.S.–China cooperation could unlock a global framework for AI oversight.&nbsp;</li><li>The risks of leaving governance solely to fragmented national initiatives and big tech.&nbsp;</li><li>Why timing, leadership, and inclusivity (including the Global South) are critical to shaping AI’s future.&nbsp;</li></ul><p>&nbsp;</p><p>If you found this conversation insightful, don’t forget to like, comment, and share — and subscribe to <em>RegulatingAI</em> for more global perspectives on building a trustworthy AI future.&nbsp;</p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://iaia4life.org/" target="_blank" style="color: rgb(70, 120, 134);">https://iaia4life.org/</a>&nbsp;&nbsp;</p><p><a href="https://www.linkedin.com/in/mark-robinson-3594132b/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/mark-robinson-3594132b/</a>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this episode of <em>RegulatingAI</em>, host Sanjay Puri welcomes Dr. Mark Robinson — <strong>Senior Science Diplomacy Advisor, Oxford Martin AI Governance Initiative, University of Oxford</strong>. Drawing on decades of experience leading projects like ITER and the European Southern Observatory, Dr. Robinson shares his bold vision: establishing an international AI agency under the United Nations. Together, we explore the urgent need for global AI governance, parallels with past scientific collaborations, and the challenges of balancing innovation, safety, and sovereignty.&nbsp;</p><p><br></p><p><strong>5 Key Takeaways</strong>&nbsp;</p><p><br></p><ul><li>Why massive global science collaborations like ITER offer lessons for AI governance.&nbsp;</li><li>The case for a UN-backed International AI Agency to coordinate regulation.&nbsp;</li><li>How U.S.–China cooperation could unlock a global framework for AI oversight.&nbsp;</li><li>The risks of leaving governance solely to fragmented national initiatives and big tech.&nbsp;</li><li>Why timing, leadership, and inclusivity (including the Global South) are critical to shaping AI’s future.&nbsp;</li></ul><p>&nbsp;</p><p>If you found this conversation insightful, don’t forget to like, comment, and share — and subscribe to <em>RegulatingAI</em> for more global perspectives on building a trustworthy AI future.&nbsp;</p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://iaia4life.org/" target="_blank" style="color: rgb(70, 120, 134);">https://iaia4life.org/</a>&nbsp;&nbsp;</p><p><a href="https://www.linkedin.com/in/mark-robinson-3594132b/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/mark-robinson-3594132b/</a>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of RegulatingAI, host Sanjay Puri welcomes Dr. Mark Robinson — Senior Science Diplomacy Advisor, Oxford Martin AI Governance Initiative, University of Oxford. Drawing on decades of experience leading projects like ITER and the Europ...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[b9e094a1-0e47-4931-9481-888fed64df9b]]></guid>
  <title><![CDATA[Regulation Meets Revolution: Africa’s AI Story Ft. Dr. Nick Bradshaw | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p>🎙 While global AI conversations are dominated by the US, China, and Europe, Africa is crafting its own path. Dr. Nick Bradshaw, Founder of the South African AI Association, joins us to discuss how the continent can build sovereign AI systems, retain talent, and shape regulation rooted in local realities.</p><p><br></p><p>From data sovereignty to the “brain drain” challenge, we explore what responsible AI looks like for Africa—and how regulation can drive innovation, not restrict it.</p><p><br></p><p>Resources Mentioned: </p><p>https://www.linkedin.com/in/nickbradshaw/</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/8fd35e51-e30a-489d-821f-5a63dfd635aa/b57bc07d19.jpg" />
  <pubDate>Thu, 23 Oct 2025 20:00:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="28677059" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/8fd35e51-e30a-489d-821f-5a63dfd635aa/episode.mp3" />
  <itunes:title><![CDATA[Regulation Meets Revolution: Africa’s AI Story Ft. Dr. Nick Bradshaw | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>29:52</itunes:duration>
  <itunes:summary><![CDATA[<p>🎙 While global AI conversations are dominated by the US, China, and Europe, Africa is crafting its own path. Dr. Nick Bradshaw, Founder of the South African AI Association, joins us to discuss how the continent can build sovereign AI systems, retain talent, and shape regulation rooted in local realities.</p><p><br></p><p>From data sovereignty to the “brain drain” challenge, we explore what responsible AI looks like for Africa—and how regulation can drive innovation, not restrict it.</p><p><br></p><p>Resources Mentioned: </p><p>https://www.linkedin.com/in/nickbradshaw/</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>🎙 While global AI conversations are dominated by the US, China, and Europe, Africa is crafting its own path. Dr. Nick Bradshaw, Founder of the South African AI Association, joins us to discuss how the continent can build sovereign AI systems, retain talent, and shape regulation rooted in local realities.</p><p><br></p><p>From data sovereignty to the “brain drain” challenge, we explore what responsible AI looks like for Africa—and how regulation can drive innovation, not restrict it.</p><p><br></p><p>Resources Mentioned: </p><p>https://www.linkedin.com/in/nickbradshaw/</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[🎙 While global AI conversations are dominated by the US, China, and Europe, Africa is crafting its own path. Dr. Nick Bradshaw, Founder of the South African AI Association, joins us to discuss how the continent can build sovereign AI systems, retai...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[1a24135c-3e58-4f17-a3e1-37e24a905bd5]]></guid>
  <title><![CDATA[Governor Matt Meyer on Building America’s First AI-Ready State | RegulatingAI Podcast ]]></title>
  <description><![CDATA[<p>In this episode of the <strong>RegulatingAI Podcast</strong>, host Sanjay Puri has an engaging conversation with <strong>Governor Matt Meyer</strong>, Delaware’s 76th Governor and a national leader in AI governance. Governor Meyer shares how Delaware is pioneering responsible AI through initiatives like the AI sandbox, the OpenAI workforce certification partnership, and efforts to safeguard democracy from deepfakes. This masterclass in state-led AI regulation explores how innovation and accountability can—and must—go hand in hand.&nbsp;</p><p><br></p><p><strong>5 Key Takeaways:</strong>&nbsp;</p><p><br></p><ul><li><strong>AI as a Tool, Not Destiny</strong>: Governor Meyer emphasizes that AI’s value lies in how it improves lives—not in the technology itself.&nbsp;</li><li><strong>First to Value, Not First to Hype</strong>: Delaware is piloting and scaling AI responsibly, ensuring guardrails before mass adoption.&nbsp;</li><li><strong>Workforce First</strong>: With OpenAI certification programs, Delaware is leading in preparing workers and students for the AI-powered economy.&nbsp;</li><li><strong>Balancing Innovation &amp; Regulation</strong>: The state’s AI sandbox offers a safe testbed for companies to experiment responsibly.&nbsp;</li><li><strong>Protecting Democracy &amp; People</strong>: From tackling election deepfakes to ensuring job transitions, Meyer highlights human-centered governance.&nbsp;</li></ul><p>&nbsp;</p><p>If you found this conversation insightful, don’t forget to <strong>like, comment, share, and subscribe</strong> to the RegulatingAI Podcast for more expert perspectives on the future of AI.&nbsp;</p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/company/governor-delaware-matt-meyer/" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.linkedin.com/company/governor-delaware-matt-meyer/</strong></a><strong>&nbsp;</strong>&nbsp;</p><p><a 
href="https://governor.delaware.gov/" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://governor.delaware.gov/</strong></a>&nbsp;</p><p><a href="https://news.delaware.gov/2025/07/23/delaware-launches-bold-ai-sandbox-initiative-cementing-its-role-as-a-national-leader-in-responsible-tech-innovation/" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://news.delaware.gov/2025/07/23/delaware-launches-bold-ai-sandbox-initiative-cementing-its-role-as-a-national-leader-in-responsible-tech-innovation/</strong></a>&nbsp;</p><p><br></p><p><a href="https://news.delaware.gov/2025/09/04/delaware-first-state-in-the-nation-to-partner-with-openai-on-certification-program/" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://news.delaware.gov/2025/09/04/delaware-first-state-in-the-nation-to-partner-with-openai-on-certification-program/</strong></a> &nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/750f0c76-6e59-4e54-aa8f-bd63a256496b/a98b649938.jpg" />
  <pubDate>Wed, 15 Oct 2025 16:03:18 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="26358639" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/750f0c76-6e59-4e54-aa8f-bd63a256496b/episode.mp3" />
  <itunes:title><![CDATA[Governor Matt Meyer on Building America’s First AI-Ready State | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>27:27</itunes:duration>
  <itunes:summary><![CDATA[<p>In this episode of the <strong>RegulatingAI Podcast</strong>, host Sanjay Puri speaks with <strong>Governor Matt Meyer</strong>, Delaware’s 76th Governor and a national leader in AI governance. Governor Meyer shares how Delaware is pioneering responsible AI through initiatives like the AI sandbox, the OpenAI workforce certification partnership, and efforts to safeguard democracy from deepfakes. This masterclass in state-led AI regulation explores how innovation and accountability can—and must—go hand in hand.&nbsp;</p><p><br></p><p><strong>5 Key Takeaways:</strong>&nbsp;</p><p><br></p><ul><li><strong>AI as a Tool, Not Destiny</strong>: Governor Meyer emphasizes that AI’s value lies in how it improves lives—not in the technology itself.&nbsp;</li><li><strong>First to Value, Not First to Hype</strong>: Delaware is piloting and scaling AI responsibly, ensuring guardrails before mass adoption.&nbsp;</li><li><strong>Workforce First</strong>: With OpenAI certification programs, Delaware is leading in preparing workers and students for the AI-powered economy.&nbsp;</li><li><strong>Balancing Innovation &amp; Regulation</strong>: The state’s AI sandbox offers a safe testbed for companies to experiment responsibly.&nbsp;</li><li><strong>Protecting Democracy &amp; People</strong>: From tackling election deepfakes to ensuring job transitions, Meyer highlights human-centered governance.&nbsp;</li></ul><p>&nbsp;</p><p>If you found this conversation insightful, don’t forget to <strong>like, comment, share, and subscribe</strong> to the RegulatingAI Podcast for more expert perspectives on the future of AI.&nbsp;</p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/company/governor-delaware-matt-meyer/" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.linkedin.com/company/governor-delaware-matt-meyer/</strong></a><strong>&nbsp;</strong>&nbsp;</p><p><a
href="https://governor.delaware.gov/" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://governor.delaware.gov/</strong></a>&nbsp;</p><p><a href="https://news.delaware.gov/2025/07/23/delaware-launches-bold-ai-sandbox-initiative-cementing-its-role-as-a-national-leader-in-responsible-tech-innovation/" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://news.delaware.gov/2025/07/23/delaware-launches-bold-ai-sandbox-initiative-cementing-its-role-as-a-national-leader-in-responsible-tech-innovation/</strong></a>&nbsp;</p><p><br></p><p><a href="https://news.delaware.gov/2025/09/04/delaware-first-state-in-the-nation-to-partner-with-openai-on-certification-program/" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://news.delaware.gov/2025/09/04/delaware-first-state-in-the-nation-to-partner-with-openai-on-certification-program/</strong></a> &nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this episode of the <strong>RegulatingAI Podcast</strong>, host Sanjay Puri speaks with <strong>Governor Matt Meyer</strong>, Delaware’s 76th Governor and a national leader in AI governance. Governor Meyer shares how Delaware is pioneering responsible AI through initiatives like the AI sandbox, the OpenAI workforce certification partnership, and efforts to safeguard democracy from deepfakes. This masterclass in state-led AI regulation explores how innovation and accountability can—and must—go hand in hand.&nbsp;</p><p><br></p><p><strong>5 Key Takeaways:</strong>&nbsp;</p><p><br></p><ul><li><strong>AI as a Tool, Not Destiny</strong>: Governor Meyer emphasizes that AI’s value lies in how it improves lives—not in the technology itself.&nbsp;</li><li><strong>First to Value, Not First to Hype</strong>: Delaware is piloting and scaling AI responsibly, ensuring guardrails before mass adoption.&nbsp;</li><li><strong>Workforce First</strong>: With OpenAI certification programs, Delaware is leading in preparing workers and students for the AI-powered economy.&nbsp;</li><li><strong>Balancing Innovation &amp; Regulation</strong>: The state’s AI sandbox offers a safe testbed for companies to experiment responsibly.&nbsp;</li><li><strong>Protecting Democracy &amp; People</strong>: From tackling election deepfakes to ensuring job transitions, Meyer highlights human-centered governance.&nbsp;</li></ul><p>&nbsp;</p><p>If you found this conversation insightful, don’t forget to <strong>like, comment, share, and subscribe</strong> to the RegulatingAI Podcast for more expert perspectives on the future of AI.&nbsp;</p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/company/governor-delaware-matt-meyer/" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.linkedin.com/company/governor-delaware-matt-meyer/</strong></a><strong>&nbsp;</strong>&nbsp;</p><p><a
href="https://governor.delaware.gov/" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://governor.delaware.gov/</strong></a>&nbsp;</p><p><a href="https://news.delaware.gov/2025/07/23/delaware-launches-bold-ai-sandbox-initiative-cementing-its-role-as-a-national-leader-in-responsible-tech-innovation/" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://news.delaware.gov/2025/07/23/delaware-launches-bold-ai-sandbox-initiative-cementing-its-role-as-a-national-leader-in-responsible-tech-innovation/</strong></a>&nbsp;</p><p><br></p><p><a href="https://news.delaware.gov/2025/09/04/delaware-first-state-in-the-nation-to-partner-with-openai-on-certification-program/" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://news.delaware.gov/2025/09/04/delaware-first-state-in-the-nation-to-partner-with-openai-on-certification-program/</strong></a> &nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the RegulatingAI Podcast, host Sanjay Puri speaks with Governor Matt Meyer, Delaware’s 76th Governor and a national leader in AI governance. Governor Meyer shares how Delaware is pioneering responsible AI through...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[4ce78579-5053-4348-9ae8-90a5d7944219]]></guid>
  <title><![CDATA[Protecting Children from AI Exploitation with Attorney General Mike Hilgers | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p>In this episode of <strong>RegulatingAI</strong>, Sanjay Puri speaks with <strong>Nebraska Attorney General Mike Hilgers</strong>, who is leading efforts to combat AI-enabled child exploitation.&nbsp;</p><p><br></p><p>You’ll learn:&nbsp;</p><ul><li>Why AI-generated CSAM (child sexual abuse material) presents unprecedented risks&nbsp;</li><li>How Nebraska passed LB 383 to prohibit AI-generated CSAM&nbsp;</li><li>The challenges of prosecuting AI crimes compared to traditional crimes&nbsp;</li><li>Why bipartisan coalitions matter in AI governance&nbsp;</li><li>How innovation and child protection can coexist in law and policy&nbsp;</li></ul><p>Hilgers also shares his perspective on the U.S.–China AI race and why legal frameworks must adapt to fast-moving technologies.&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/company/nebraska-department-of-justice" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.linkedin.com/company/nebraska-department-of-justice</strong></a>&nbsp;</p><p>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/72d93f88-5b4e-43ad-a0c1-cf014de21927/19f9fac387.jpg" />
  <pubDate>Thu, 09 Oct 2025 12:04:13 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="36142646" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/72d93f88-5b4e-43ad-a0c1-cf014de21927/episode.mp3" />
  <itunes:title><![CDATA[Protecting Children from AI Exploitation with Attorney General Mike Hilgers | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>37:38</itunes:duration>
  <itunes:summary><![CDATA[<p>In this episode of <strong>RegulatingAI</strong>, Sanjay Puri speaks with <strong>Nebraska Attorney General Mike Hilgers</strong>, who is leading efforts to combat AI-enabled child exploitation.&nbsp;</p><p><br></p><p>You’ll learn:&nbsp;</p><ul><li>Why AI-generated CSAM (child sexual abuse material) presents unprecedented risks&nbsp;</li><li>How Nebraska passed LB 383 to prohibit AI-generated CSAM&nbsp;</li><li>The challenges of prosecuting AI crimes compared to traditional crimes&nbsp;</li><li>Why bipartisan coalitions matter in AI governance&nbsp;</li><li>How innovation and child protection can coexist in law and policy&nbsp;</li></ul><p>Hilgers also shares his perspective on the U.S.–China AI race and why legal frameworks must adapt to fast-moving technologies.&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/company/nebraska-department-of-justice" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.linkedin.com/company/nebraska-department-of-justice</strong></a>&nbsp;</p><p>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this episode of <strong>RegulatingAI</strong>, Sanjay Puri speaks with <strong>Nebraska Attorney General Mike Hilgers</strong>, who is leading efforts to combat AI-enabled child exploitation.&nbsp;</p><p><br></p><p>You’ll learn:&nbsp;</p><ul><li>Why AI-generated CSAM (child sexual abuse material) presents unprecedented risks&nbsp;</li><li>How Nebraska passed LB 383 to prohibit AI-generated CSAM&nbsp;</li><li>The challenges of prosecuting AI crimes compared to traditional crimes&nbsp;</li><li>Why bipartisan coalitions matter in AI governance&nbsp;</li><li>How innovation and child protection can coexist in law and policy&nbsp;</li></ul><p>Hilgers also shares his perspective on the U.S.–China AI race and why legal frameworks must adapt to fast-moving technologies.&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/company/nebraska-department-of-justice" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.linkedin.com/company/nebraska-department-of-justice</strong></a>&nbsp;</p><p>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of RegulatingAI, Sanjay Puri speaks with Nebraska Attorney General Mike Hilgers, who is leading efforts to combat AI-enabled child exploitation. You’ll learn: Why AI-generated CSAM (child sexual abuse material) presents unprecedente...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[73a3cc12-9f82-45bc-aeff-a3055f94681d]]></guid>
  <title><![CDATA[Rui Duarte on AI Diplomacy, Statecraft, and the Urgency of Global Governance | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p>In this episode of the <em>RegulatingAI Podcast</em>, Sanjay Puri speaks with <strong>Rui Pedro Duarte</strong>, Managing Director at Loop Future Switzerland and author of <em>The Age of AI Diplomacy</em>. A former Member of Parliament in Portugal, Rui shares a unique perspective on how political experience and technology collide in shaping AI governance.&nbsp;</p><p><br></p><p class="ql-align-justify">Key discussion points:&nbsp;</p><ul><li class="ql-align-justify">Why AI diplomacy must evolve to operate at machine speed&nbsp;</li><li class="ql-align-justify">The concept of “quantum diplomacy” and treaties that self-update&nbsp;</li><li class="ql-align-justify">The role of coders and open-source communities as new diplomats&nbsp;</li><li class="ql-align-justify">Why AI should be treated as critical infrastructure, not just a product&nbsp;</li><li class="ql-align-justify">How collaborative velocity can drive equity between the Global North and South&nbsp;</li></ul><p class="ql-align-justify"><br></p><p class="ql-align-justify">Watch now for a deep exploration of AI’s role in diplomacy and the urgent need for systemic, global cooperation.&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/rpgduarte" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.linkedin.com/in/rpgduarte</strong></a>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/d9766b8e-afdb-492f-91d3-d66348a4be8a/ba13a8c7c9.jpg" />
  <pubDate>Thu, 25 Sep 2025 11:54:03 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="28058062" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/d9766b8e-afdb-492f-91d3-d66348a4be8a/episode.mp3" />
  <itunes:title><![CDATA[Rui Duarte on AI Diplomacy, Statecraft, and the Urgency of Global Governance | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>29:13</itunes:duration>
  <itunes:summary><![CDATA[<p>In this episode of the <em>RegulatingAI Podcast</em>, Sanjay Puri speaks with <strong>Rui Pedro Duarte</strong>, Managing Director at Loop Future Switzerland and author of <em>The Age of AI Diplomacy</em>. A former Member of Parliament in Portugal, Rui shares a unique perspective on how political experience and technology collide in shaping AI governance.&nbsp;</p><p><br></p><p class="ql-align-justify">Key discussion points:&nbsp;</p><ul><li class="ql-align-justify">Why AI diplomacy must evolve to operate at machine speed&nbsp;</li><li class="ql-align-justify">The concept of “quantum diplomacy” and treaties that self-update&nbsp;</li><li class="ql-align-justify">The role of coders and open-source communities as new diplomats&nbsp;</li><li class="ql-align-justify">Why AI should be treated as critical infrastructure, not just a product&nbsp;</li><li class="ql-align-justify">How collaborative velocity can drive equity between the Global North and South&nbsp;</li></ul><p class="ql-align-justify"><br></p><p class="ql-align-justify">Watch now for a deep exploration of AI’s role in diplomacy and the urgent need for systemic, global cooperation.&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/rpgduarte" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.linkedin.com/in/rpgduarte</strong></a>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this episode of the <em>RegulatingAI Podcast</em>, Sanjay Puri speaks with <strong>Rui Pedro Duarte</strong>, Managing Director at Loop Future Switzerland and author of <em>The Age of AI Diplomacy</em>. A former Member of Parliament in Portugal, Rui shares a unique perspective on how political experience and technology collide in shaping AI governance.&nbsp;</p><p><br></p><p class="ql-align-justify">Key discussion points:&nbsp;</p><ul><li class="ql-align-justify">Why AI diplomacy must evolve to operate at machine speed&nbsp;</li><li class="ql-align-justify">The concept of “quantum diplomacy” and treaties that self-update&nbsp;</li><li class="ql-align-justify">The role of coders and open-source communities as new diplomats&nbsp;</li><li class="ql-align-justify">Why AI should be treated as critical infrastructure, not just a product&nbsp;</li><li class="ql-align-justify">How collaborative velocity can drive equity between the Global North and South&nbsp;</li></ul><p class="ql-align-justify"><br></p><p class="ql-align-justify">Watch now for a deep exploration of AI’s role in diplomacy and the urgent need for systemic, global cooperation.&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/rpgduarte" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.linkedin.com/in/rpgduarte</strong></a>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the RegulatingAI Podcast, Sanjay Puri speaks with Rui Pedro Duarte, Managing Director at Loop Future Switzerland and author of The Age of AI Diplomacy. A former Member of Parliament in Portugal, Rui shares a unique perspective on...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[6c83979d-cd52-46bf-8488-ea75e658a290]]></guid>
  <title><![CDATA[Brando Benifei on Balancing Transparency, IP, and Enforcement in the EU AI Act | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p class="ql-align-justify">In this episode of the <em>RegulatingAI Podcast</em>, Sanjay Puri speaks with <strong>Brando Benifei, Member of the European Parliament</strong>, and one of the lead architects of the EU AI Act—the world’s first binding legislation on artificial intelligence.&nbsp;</p><p class="ql-align-justify">Brando shares deep insights into the challenges of implementation, balancing transparency with intellectual property, and safeguarding freedoms in a rapidly evolving AI landscape.&nbsp;</p><p class="ql-align-justify">Key highlights include:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The role of transparency and auditability in AI governance&nbsp;</li><li class="ql-align-justify">Proportional fines and their impact on SMEs versus Big Tech&nbsp;</li><li class="ql-align-justify">Why certain AI practices, like predictive policing and mass surveillance, are prohibited&nbsp;</li><li class="ql-align-justify">How the EU AI Act integrates with global governance efforts&nbsp;</li><li class="ql-align-justify">The importance of education, sandboxes, and support for SMEs&nbsp;</li></ul><p class="ql-align-justify">🔗 Watch now to understand how Europe is shaping AI regulation—and what it means for the world.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://en.wikipedia.org/wiki/Brando_Benifei" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://en.wikipedia.org/wiki/Brando_Benifei</strong></a>&nbsp;</p><p class="ql-align-justify"><a href="https://www.europarl.europa.eu/meps/en/124867/BRANDO_BENIFEI/home" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.europarl.europa.eu/meps/en/124867/BRANDO_BENIFEI/home</strong></a>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/ee470657-9af1-4db8-bc17-812088a721c8/b81dacd7f1.jpg" />
  <pubDate>Tue, 16 Sep 2025 09:34:12 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="44172478" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/ee470657-9af1-4db8-bc17-812088a721c8/episode.mp3" />
  <itunes:title><![CDATA[Brando Benifei on Balancing Transparency, IP, and Enforcement in the EU AI Act | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>46:00</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">In this episode of the <em>RegulatingAI Podcast</em>, Sanjay Puri speaks with <strong>Brando Benifei, Member of the European Parliament</strong>, and one of the lead architects of the EU AI Act—the world’s first binding legislation on artificial intelligence.&nbsp;</p><p class="ql-align-justify">Brando shares deep insights into the challenges of implementation, balancing transparency with intellectual property, and safeguarding freedoms in a rapidly evolving AI landscape.&nbsp;</p><p class="ql-align-justify">Key highlights include:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The role of transparency and auditability in AI governance&nbsp;</li><li class="ql-align-justify">Proportional fines and their impact on SMEs versus Big Tech&nbsp;</li><li class="ql-align-justify">Why certain AI practices, like predictive policing and mass surveillance, are prohibited&nbsp;</li><li class="ql-align-justify">How the EU AI Act integrates with global governance efforts&nbsp;</li><li class="ql-align-justify">The importance of education, sandboxes, and support for SMEs&nbsp;</li></ul><p class="ql-align-justify">🔗 Watch now to understand how Europe is shaping AI regulation—and what it means for the world.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://en.wikipedia.org/wiki/Brando_Benifei" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://en.wikipedia.org/wiki/Brando_Benifei</strong></a>&nbsp;</p><p class="ql-align-justify"><a href="https://www.europarl.europa.eu/meps/en/124867/BRANDO_BENIFEI/home" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.europarl.europa.eu/meps/en/124867/BRANDO_BENIFEI/home</strong></a>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">In this episode of the <em>RegulatingAI Podcast</em>, Sanjay Puri speaks with <strong>Brando Benifei, Member of the European Parliament</strong>, and one of the lead architects of the EU AI Act—the world’s first binding legislation on artificial intelligence.&nbsp;</p><p class="ql-align-justify">Brando shares deep insights into the challenges of implementation, balancing transparency with intellectual property, and safeguarding freedoms in a rapidly evolving AI landscape.&nbsp;</p><p class="ql-align-justify">Key highlights include:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The role of transparency and auditability in AI governance&nbsp;</li><li class="ql-align-justify">Proportional fines and their impact on SMEs versus Big Tech&nbsp;</li><li class="ql-align-justify">Why certain AI practices, like predictive policing and mass surveillance, are prohibited&nbsp;</li><li class="ql-align-justify">How the EU AI Act integrates with global governance efforts&nbsp;</li><li class="ql-align-justify">The importance of education, sandboxes, and support for SMEs&nbsp;</li></ul><p class="ql-align-justify">🔗 Watch now to understand how Europe is shaping AI regulation—and what it means for the world.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://en.wikipedia.org/wiki/Brando_Benifei" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://en.wikipedia.org/wiki/Brando_Benifei</strong></a>&nbsp;</p><p class="ql-align-justify"><a href="https://www.europarl.europa.eu/meps/en/124867/BRANDO_BENIFEI/home" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.europarl.europa.eu/meps/en/124867/BRANDO_BENIFEI/home</strong></a>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the RegulatingAI Podcast, Sanjay Puri speaks with Brando Benifei, Member of the European Parliament, and one of the lead architects of the EU AI Act—the world’s first binding legislation on artificial intelligence. Brando shares ...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[7927d95d-081a-487d-bdc2-2a3e67384778]]></guid>
  <title><![CDATA[How the UN’s ITU Is Shaping Global AI Standards Ft. Tomas Lamanauskas | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p class="ql-align-justify">RegulatingAI Podcast: How the UN’s ITU Is Shaping Global AI Standards | Tomas Lamanauskas&nbsp;</p><p class="ql-align-justify">In this compelling episode, host Sanjay Puri sits down with <strong>Tomas Lamanauskas</strong>, Deputy Secretary-General of the <strong>International Telecommunication Union (ITU)</strong>, to explore the global architecture of AI governance.&nbsp;</p><p class="ql-align-justify">🔍 <strong>What you’ll learn:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How ITU transitioned from regulating telegraphs to AI governance&nbsp;</li><li class="ql-align-justify">Why AI standardization is <em>not</em> a barrier to innovation&nbsp;</li><li class="ql-align-justify">The ITU’s pivotal role in connecting 8 billion people&nbsp;</li><li class="ql-align-justify">The balance between innovation, regulation, and inclusion&nbsp;</li><li class="ql-align-justify">Behind-the-scenes of the AI for Good Global Summit&nbsp;</li></ul><p class="ql-align-justify">🌍 <strong>A must-watch for:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">Policymakers and AI regulators&nbsp;</li><li class="ql-align-justify">Tech entrepreneurs and infrastructure investors&nbsp;</li><li class="ql-align-justify">Anyone who cares about global equity in the age of AI&nbsp;</li></ul><p class="ql-align-justify"><strong>Subscribe</strong> for future episodes diving deep into global AI governance.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/tlamanauskas/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/tlamanauskas/</a>&nbsp;</p><p class="ql-align-justify"><a href="https://www.itu.int/en/osg/Pages/biography-itu-dsg-tomas.aspx" target="_blank" style="color: rgb(70, 120, 134);">https://www.itu.int/en/osg/Pages/biography-itu-dsg-tomas.aspx</a>&nbsp;</p><p><br></p><p 
class="ql-align-justify"><br></p><p class="ql-align-justify"><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p class="ql-align-justify">0:00 Podcast Highlights &amp; Introduction &nbsp;</p><p class="ql-align-justify">2:00 What is the ITU and its role in AI regulation? &nbsp;</p><p class="ql-align-justify">2:45 From telegraph to AI: A history of the ITU &nbsp;</p><p class="ql-align-justify">8:42 Standardizing AI in a rapidly moving world &nbsp;</p><p class="ql-align-justify">14:03 The ITU's role in enforcing standards &nbsp;</p><p class="ql-align-justify">18:51 Three approaches to AI governance: EU, US, and China &nbsp;</p><p class="ql-align-justify">25:01 Geopolitics and national security in AI &nbsp;</p><p class="ql-align-justify">30:24 The importance of undersea cables &nbsp;</p><p class="ql-align-justify">34:41 Ensuring AI benefits everyone and bridging the digital divide &nbsp;</p><p class="ql-align-justify">43:21 The AI for Good Global Summit &nbsp;</p><p class="ql-align-justify">48:28 Conclusion and farewell&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/549dffda-75d3-405a-ad1b-f287231d2db8/913e7e8948.jpg" />
  <pubDate>Thu, 04 Sep 2025 14:31:26 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="48994473" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/549dffda-75d3-405a-ad1b-f287231d2db8/episode.mp3" />
  <itunes:title><![CDATA[How the UN’s ITU Is Shaping Global AI Standards Ft. Tomas Lamanauskas | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>51:02</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">RegulatingAI Podcast: How the UN’s ITU Is Shaping Global AI Standards | Tomas Lamanauskas&nbsp;</p><p class="ql-align-justify">In this compelling episode, host Sanjay Puri sits down with <strong>Tomas Lamanauskas</strong>, Deputy Secretary-General of the <strong>International Telecommunication Union (ITU)</strong>, to explore the global architecture of AI governance.&nbsp;</p><p class="ql-align-justify">🔍 <strong>What you’ll learn:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How ITU transitioned from regulating telegraphs to AI governance&nbsp;</li><li class="ql-align-justify">Why AI standardization is <em>not</em> a barrier to innovation&nbsp;</li><li class="ql-align-justify">The ITU’s pivotal role in connecting 8 billion people&nbsp;</li><li class="ql-align-justify">The balance between innovation, regulation, and inclusion&nbsp;</li><li class="ql-align-justify">Behind-the-scenes of the AI for Good Global Summit&nbsp;</li></ul><p class="ql-align-justify">🌍 <strong>A must-watch for:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">Policymakers and AI regulators&nbsp;</li><li class="ql-align-justify">Tech entrepreneurs and infrastructure investors&nbsp;</li><li class="ql-align-justify">Anyone who cares about global equity in the age of AI&nbsp;</li></ul><p class="ql-align-justify"><strong>Subscribe</strong> for future episodes diving deep into global AI governance.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/tlamanauskas/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/tlamanauskas/</a>&nbsp;</p><p class="ql-align-justify"><a href="https://www.itu.int/en/osg/Pages/biography-itu-dsg-tomas.aspx" target="_blank" style="color: rgb(70, 120, 134);">https://www.itu.int/en/osg/Pages/biography-itu-dsg-tomas.aspx</a>&nbsp;</p><p><br></p><p 
class="ql-align-justify"><br></p><p class="ql-align-justify"><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p class="ql-align-justify">0:00 Podcast Highlights &amp; Introduction &nbsp;</p><p class="ql-align-justify">2:00 What is the ITU and its role in AI regulation? &nbsp;</p><p class="ql-align-justify">2:45 From telegraph to AI: A history of the ITU &nbsp;</p><p class="ql-align-justify">8:42 Standardizing AI in a rapidly moving world &nbsp;</p><p class="ql-align-justify">14:03 The ITU's role in enforcing standards &nbsp;</p><p class="ql-align-justify">18:51 Three approaches to AI governance: EU, US, and China &nbsp;</p><p class="ql-align-justify">25:01 Geopolitics and national security in AI &nbsp;</p><p class="ql-align-justify">30:24 The importance of undersea cables &nbsp;</p><p class="ql-align-justify">34:41 Ensuring AI benefits everyone and bridging the digital divide &nbsp;</p><p class="ql-align-justify">43:21 The AI for Good Global Summit &nbsp;</p><p class="ql-align-justify">48:28 Conclusion and farewell&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">RegulatingAI Podcast: How the UN’s ITU Is Shaping Global AI Standards | Tomas Lamanauskas&nbsp;</p><p class="ql-align-justify">In this compelling episode, host Sanjay Puri sits down with <strong>Tomas Lamanauskas</strong>, Deputy Secretary-General of the <strong>International Telecommunication Union (ITU)</strong>, to explore the global architecture of AI governance.&nbsp;</p><p class="ql-align-justify">🔍 <strong>What you’ll learn:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How ITU transitioned from regulating telegraphs to AI governance&nbsp;</li><li class="ql-align-justify">Why AI standardization is <em>not</em> a barrier to innovation&nbsp;</li><li class="ql-align-justify">The ITU’s pivotal role in connecting 8 billion people&nbsp;</li><li class="ql-align-justify">The balance between innovation, regulation, and inclusion&nbsp;</li><li class="ql-align-justify">Behind-the-scenes of the AI for Good Global Summit&nbsp;</li></ul><p class="ql-align-justify">🌍 <strong>A must-watch for:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">Policymakers and AI regulators&nbsp;</li><li class="ql-align-justify">Tech entrepreneurs and infrastructure investors&nbsp;</li><li class="ql-align-justify">Anyone who cares about global equity in the age of AI&nbsp;</li></ul><p class="ql-align-justify"><strong>Subscribe</strong> for future episodes diving deep into global AI governance.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/tlamanauskas/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/tlamanauskas/</a>&nbsp;</p><p class="ql-align-justify"><a href="https://www.itu.int/en/osg/Pages/biography-itu-dsg-tomas.aspx" target="_blank" style="color: rgb(70, 120, 134);">https://www.itu.int/en/osg/Pages/biography-itu-dsg-tomas.aspx</a>&nbsp;</p><p><br></p><p 
class="ql-align-justify"><br></p><p class="ql-align-justify"><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p class="ql-align-justify">0:00 Podcast Highlights &amp; Introduction &nbsp;</p><p class="ql-align-justify">2:00 What is the ITU and its role in AI regulation? &nbsp;</p><p class="ql-align-justify">2:45 From telegraph to AI: A history of the ITU &nbsp;</p><p class="ql-align-justify">8:42 Standardizing AI in a rapidly moving world &nbsp;</p><p class="ql-align-justify">14:03 The ITU's role in enforcing standards &nbsp;</p><p class="ql-align-justify">18:51 Three approaches to AI governance: EU, US, and China &nbsp;</p><p class="ql-align-justify">25:01 Geopolitics and national security in AI &nbsp;</p><p class="ql-align-justify">30:24 The importance of undersea cables &nbsp;</p><p class="ql-align-justify">34:41 Ensuring AI benefits everyone and bridging the digital divide &nbsp;</p><p class="ql-align-justify">43:21 The AI for Good Global Summit &nbsp;</p><p class="ql-align-justify">48:28 Conclusion and farewell&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[RegulatingAI Podcast: How the UN’s ITU Is Shaping Global AI Standards | Tomas Lamanauskas In this compelling episode, host Sanjay Puri sits down with Tomas Lamanauskas, Deputy Secretary-General of the International Telecommunication Union (ITU), to...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[4c34eaca-84f6-4b4e-aff5-f0958e8b32ac]]></guid>
  <title><![CDATA[Andrew Reiskind (Mastercard CDO) on Trust, AI, and the Future of Agentic Commerce | Regulating AI Podcast]]></title>
  <description><![CDATA[<p>Live from the AI4 Conference in Las Vegas, <strong>Andrew Reiskind, Chief Data Officer at Mastercard</strong>, joins the <em>Regulating AI Podcast</em> to discuss the critical intersection of <strong>data, AI, and trust</strong>. From AI-powered fraud detection to personalization, responsible AI governance, and the rise of agentic commerce, Andrew shares how Mastercard is navigating global challenges in data sovereignty while keeping safety and security at the core.&nbsp;</p><p><br></p><p>Topics Covered:&nbsp;</p><p><br></p><ul><li>How Mastercard has used AI for 20+ years in fraud prevention &amp; personalization&nbsp;</li><li>The role of <strong>agentic AI</strong> in customer service &amp; commerce&nbsp;</li><li>Why <strong>trust and security</strong> must guide AI innovation&nbsp;</li><li>Frameworks for responsible AI &amp; governance across global markets&nbsp;</li></ul><p>Subscribe for more insights from AI leaders shaping the future.&nbsp;</p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/andrew-reiskind-53a743/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/andrew-reiskind-53a743/</a>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/17acea09-9ac5-407e-b1d5-4dc50ae78ff9/b23ba5e5be.jpg" />
  <pubDate>Sat, 23 Aug 2025 10:10:48 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="8490048" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/17acea09-9ac5-407e-b1d5-4dc50ae78ff9/episode.mp3" />
  <itunes:title><![CDATA[Andrew Reiskind (Mastercard CDO) on Trust, AI, and the Future of Agentic Commerce | Regulating AI Podcast]]></itunes:title>
  <itunes:duration>8:50</itunes:duration>
  <itunes:summary><![CDATA[<p>Live from the AI4 Conference in Las Vegas, <strong>Andrew Reiskind, Chief Data Officer at Mastercard</strong>, joins the <em>Regulating AI Podcast</em> to discuss the critical intersection of <strong>data, AI, and trust</strong>. From AI-powered fraud detection to personalization, responsible AI governance, and the rise of agentic commerce, Andrew shares how Mastercard is navigating global challenges in data sovereignty while keeping safety and security at the core.&nbsp;</p><p><br></p><p>Topics Covered:&nbsp;</p><p><br></p><ul><li>How Mastercard has used AI for 20+ years in fraud prevention &amp; personalization&nbsp;</li><li>The role of <strong>agentic AI</strong> in customer service &amp; commerce&nbsp;</li><li>Why <strong>trust and security</strong> must guide AI innovation&nbsp;</li><li>Frameworks for responsible AI &amp; governance across global markets&nbsp;</li></ul><p>Subscribe for more insights from AI leaders shaping the future.&nbsp;</p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/andrew-reiskind-53a743/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/andrew-reiskind-53a743/</a>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Live from the AI4 Conference in Las Vegas, <strong>Andrew Reiskind, Chief Data Officer at Mastercard</strong>, joins the <em>Regulating AI Podcast</em> to discuss the critical intersection of <strong>data, AI, and trust</strong>. From AI-powered fraud detection to personalization, responsible AI governance, and the rise of agentic commerce, Andrew shares how Mastercard is navigating global challenges in data sovereignty while keeping safety and security at the core.&nbsp;</p><p><br></p><p>Topics Covered:&nbsp;</p><p><br></p><ul><li>How Mastercard has used AI for 20+ years in fraud prevention &amp; personalization&nbsp;</li><li>The role of <strong>agentic AI</strong> in customer service &amp; commerce&nbsp;</li><li>Why <strong>trust and security</strong> must guide AI innovation&nbsp;</li><li>Frameworks for responsible AI &amp; governance across global markets&nbsp;</li></ul><p>Subscribe for more insights from AI leaders shaping the future.&nbsp;</p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/andrew-reiskind-53a743/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/andrew-reiskind-53a743/</a>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Live from the AI4 Conference in Las Vegas, Andrew Reiskind, Chief Data Officer at Mastercard, joins the Regulating AI Podcast to discuss the critical intersection of data, AI, and trust. From AI-powered fraud detection to personalization, responsib...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[741c0b65-e864-42a6-b86c-f2b7d4f462a9]]></guid>
  <title><![CDATA[Edward Santow on Predictive Policing, Racial Bias and AI’s Impact on Human Rights | RegulatingAI Podcast ]]></title>
  <description><![CDATA[<p class="ql-align-justify">In this episode of the RegulatingAI podcast, host Sanjay Puri speaks with Professor Edward Santow, former Australian Human Rights Commissioner and co-director of the Human Technology Institute. Together, they explore how algorithms intended to support justice can actually perpetuate discrimination.&nbsp;</p><p><br></p><p class="ql-align-justify">Key topics include:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How Australia’s largest police force used AI to profile Indigenous youth&nbsp;</li><li class="ql-align-justify">The consequences of using historical data without correcting historical bias&nbsp;</li><li class="ql-align-justify">Why system-level harms from AI demand policy-level responses&nbsp;</li><li class="ql-align-justify">What governments must do to protect rights while embracing innovation&nbsp;</li></ul><p class="ql-align-justify">A sobering and essential conversation about AI, justice, and what ethical governance looks like in practice.&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/esantow/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/esantow/</a>&nbsp;</p><p>&nbsp;</p><p><br></p><p><strong style="color: rgb(0, 112, 192);">⏱</strong><strong style="color: rgb(15, 71, 97);">️ Timestamps:</strong><span style="color: rgb(15, 71, 97);">&nbsp;</span></p><p>0:00 Podcast Highlights &nbsp;</p><p>1:34 Ed’s background and journey into technology governance &nbsp;</p><p>2:12 The 'aha' moment: an algorithm targeting young people based on race &nbsp;</p><p>5:36 Finding a balance between AI's dystopian problems and positive use cases &nbsp;</p><p>9:07 The global fear of missing out (FOMO) and the trade-off with fundamental rights &nbsp;</p><p>11:12 Why innovation and regulation are not a trade-off &nbsp;</p><p>12:22 Comparing the AI regulatory approaches of the EU, 
US, and China &nbsp;</p><p>13:57 Australia's practical, non-ideological approach to AI &nbsp;</p><p>15:45 How Australia is building its niche on liberal democratic values &nbsp;</p><p>19:22 The shift from "fluffy principles" to practical AI safety standards &nbsp;</p><p>22:37 The three most common issues for corporate leaders in AI governance &nbsp;</p><p>23:08 The problem with the "AI guru" model of governance &nbsp;</p><p>25:08 The "dirty secret" of AI and the importance of engaging workers &nbsp;</p><p>35:24 The impact of AI on jobs and the workplace &nbsp;</p><p>40:28 The Asia-Pacific region's role in AI governance &nbsp;</p><p>44:07 Preserving indigenous cultures and languages in AI training data &nbsp;</p><p>47:14 The concentration of power in a handful of AI companies &nbsp;</p><p>50:09 Facial recognition: good uses vs. bad uses &nbsp;</p><p>53:57 Lightning round of questions &nbsp;</p><p>55:22 Conclusion and farewell&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/fd8a8a4b-7e28-4b79-8f3d-bce1a417f0e8/f5ef77070a.jpg" />
  <pubDate>Wed, 20 Aug 2025 12:01:11 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="55278907" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/fd8a8a4b-7e28-4b79-8f3d-bce1a417f0e8/episode.mp3" />
  <itunes:title><![CDATA[Edward Santow on Predictive Policing, Racial Bias and AI’s Impact on Human Rights | RegulatingAI Podcast ]]></itunes:title>
  <itunes:duration>57:34</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">In this episode of the RegulatingAI podcast, host Sanjay Puri speaks with Professor Edward Santow, former Australian Human Rights Commissioner and co-director of the Human Technology Institute. Together, they explore how algorithms intended to support justice can actually perpetuate discrimination.&nbsp;</p><p><br></p><p class="ql-align-justify">Key topics include:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How Australia’s largest police force used AI to profile Indigenous youth&nbsp;</li><li class="ql-align-justify">The consequences of using historical data without correcting historical bias&nbsp;</li><li class="ql-align-justify">Why system-level harms from AI demand policy-level responses&nbsp;</li><li class="ql-align-justify">What governments must do to protect rights while embracing innovation&nbsp;</li></ul><p class="ql-align-justify">A sobering and essential conversation about AI, justice, and what ethical governance looks like in practice.&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/esantow/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/esantow/</a>&nbsp;</p><p>&nbsp;</p><p><br></p><p><strong style="color: rgb(0, 112, 192);">⏱</strong><strong style="color: rgb(15, 71, 97);">️ Timestamps:</strong><span style="color: rgb(15, 71, 97);">&nbsp;</span></p><p>0:00 Podcast Highlights &nbsp;</p><p>1:34 Ed’s background and journey into technology governance &nbsp;</p><p>2:12 The 'aha' moment: an algorithm targeting young people based on race &nbsp;</p><p>5:36 Finding a balance between AI's dystopian problems and positive use cases &nbsp;</p><p>9:07 The global fear of missing out (FOMO) and the trade-off with fundamental rights &nbsp;</p><p>11:12 Why innovation and regulation are not a trade-off &nbsp;</p><p>12:22 Comparing the AI regulatory approaches of the 
EU, US, and China &nbsp;</p><p>13:57 Australia's practical, non-ideological approach to AI &nbsp;</p><p>15:45 How Australia is building its niche on liberal democratic values &nbsp;</p><p>19:22 The shift from "fluffy principles" to practical AI safety standards &nbsp;</p><p>22:37 The three most common issues for corporate leaders in AI governance &nbsp;</p><p>23:08 The problem with the "AI guru" model of governance &nbsp;</p><p>25:08 The "dirty secret" of AI and the importance of engaging workers &nbsp;</p><p>35:24 The impact of AI on jobs and the workplace &nbsp;</p><p>40:28 The Asia-Pacific region's role in AI governance &nbsp;</p><p>44:07 Preserving indigenous cultures and languages in AI training data &nbsp;</p><p>47:14 The concentration of power in a handful of AI companies &nbsp;</p><p>50:09 Facial recognition: good uses vs. bad uses &nbsp;</p><p>53:57 Lightning round of questions &nbsp;</p><p>55:22 Conclusion and farewell&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">In this episode of the RegulatingAI podcast, host Sanjay Puri speaks with Professor Edward Santow, former Australian Human Rights Commissioner and co-director of the Human Technology Institute. Together, they explore how algorithms intended to support justice can actually perpetuate discrimination.&nbsp;</p><p><br></p><p class="ql-align-justify">Key topics include:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How Australia’s largest police force used AI to profile Indigenous youth&nbsp;</li><li class="ql-align-justify">The consequences of using historical data without correcting historical bias&nbsp;</li><li class="ql-align-justify">Why system-level harms from AI demand policy-level responses&nbsp;</li><li class="ql-align-justify">What governments must do to protect rights while embracing innovation&nbsp;</li></ul><p class="ql-align-justify">A sobering and essential conversation about AI, justice, and what ethical governance looks like in practice.&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/esantow/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/esantow/</a>&nbsp;</p><p>&nbsp;</p><p><br></p><p><strong style="color: rgb(0, 112, 192);">⏱</strong><strong style="color: rgb(15, 71, 97);">️ Timestamps:</strong><span style="color: rgb(15, 71, 97);">&nbsp;</span></p><p>0:00 Podcast Highlights &nbsp;</p><p>1:34 Ed’s background and journey into technology governance &nbsp;</p><p>2:12 The 'aha' moment: an algorithm targeting young people based on race &nbsp;</p><p>5:36 Finding a balance between AI's dystopian problems and positive use cases &nbsp;</p><p>9:07 The global fear of missing out (FOMO) and the trade-off with fundamental rights &nbsp;</p><p>11:12 Why innovation and regulation are not a trade-off &nbsp;</p><p>12:22 Comparing the AI regulatory approaches of the 
EU, US, and China &nbsp;</p><p>13:57 Australia's practical, non-ideological approach to AI &nbsp;</p><p>15:45 How Australia is building its niche on liberal democratic values &nbsp;</p><p>19:22 The shift from "fluffy principles" to practical AI safety standards &nbsp;</p><p>22:37 The three most common issues for corporate leaders in AI governance &nbsp;</p><p>23:08 The problem with the "AI guru" model of governance &nbsp;</p><p>25:08 The "dirty secret" of AI and the importance of engaging workers &nbsp;</p><p>35:24 The impact of AI on jobs and the workplace &nbsp;</p><p>40:28 The Asia-Pacific region's role in AI governance &nbsp;</p><p>44:07 Preserving indigenous cultures and languages in AI training data &nbsp;</p><p>47:14 The concentration of power in a handful of AI companies &nbsp;</p><p>50:09 Facial recognition: good uses vs. bad uses &nbsp;</p><p>53:57 Lightning round of questions &nbsp;</p><p>55:22 Conclusion and farewell&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the RegulatingAI podcast, host Sanjay Puri speaks with Professor Edward Santow, former Australian Human Rights Commissioner and co-director of the Human Technology Institute. Together, they explore how algorithms intended to supp...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[3b9c7ec6-2e37-4be2-af0c-4fa5435017f7]]></guid>
  <title><![CDATA[Dr. Cari Miller: “The U.S. AI Action Plan & Why It May Be a Global Risk” | RegulatingAI Podcast ]]></title>
  <description><![CDATA[<p>Join host <strong>Sanjay Puri</strong> in conversation with <strong>Dr. Cari Miller</strong>, a leading voice in AI governance, as they unpack the recently announced <strong>America’s AI Action Plan</strong>.&nbsp;</p><p><br></p><p>🔍 What you'll learn:&nbsp;</p><p><br></p><ul><li>Why Pillar One of the U.S. plan may spark global misalignment&nbsp;</li><li>The risks of removing "misinformation" from AI frameworks&nbsp;</li><li>Why U.S. innovation might clash with the <strong>EU AI Act</strong> and global regulatory norms&nbsp;</li><li>How free speech and foundation models intersect with international policy&nbsp;</li></ul><p>🌍 Global policymakers, this one is for you.&nbsp;</p><p><br></p><p>🎯 <strong>Watch now</strong> to understand why the latest U.S. move could raise alarms worldwide.&nbsp;</p><p><br></p><p>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/cari-miller/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/cari-miller/</a>&nbsp;</p><p>&nbsp;</p><p><br></p><p><strong style="color: rgb(0, 112, 192);">⏱</strong><strong style="color: rgb(15, 71, 97);">️ Timestamps:</strong><span style="color: rgb(15, 71, 97);">&nbsp;</span></p><p>0:00 Introduction of Dr. Cari Miller &nbsp;</p><p>2:52 The three pillars of America's AI Action Plan &nbsp;</p><p>7:20 Comparing the AI Action Plan to the EU AI Act &nbsp;</p><p>8:27 "Hurry up and innovate" and the geopolitical dimension of AI &nbsp;</p><p>10:45 The dilemma between innovation and regulation &nbsp;</p><p>13:09 The moratorium on state-level AI regulation &nbsp;</p><p>15:50 A spectrum for regulation: reversible vs. 
irreversible harm &nbsp;</p><p>17:17 The EU's approach to regulation &nbsp;</p><p>19:10 Why AI procurement is the "gate of all gates" for governance &nbsp;</p><p>21:27 What makes AI procurement different &nbsp;</p><p>23:32 The need for augmented procurement practices and training &nbsp;</p><p>24:14 Accounting for hallucination and vendor disclaimers &nbsp;</p><p>27:55 Procurement for foundation models vs. fine-tuned solutions &nbsp;</p><p>29:39 The possibility of AI insurance &nbsp;</p><p>31:02 Distinguishing between trustworthy and "AI snake oil" vendors &nbsp;</p><p>33:23 Strengths and weaknesses of existing AI procurement frameworks &nbsp;</p><p>35:26 The three checkpoints before issuing an AI RFP &nbsp;</p><p>37:41 Sovereign AI and procurement for global south nations &nbsp;</p><p>40:20 Concerns about agents and agentic AI systems &nbsp;</p><p>44:02 The domain professional and complex multi-turn tasks &nbsp;</p><p>45:59 Procurement and pricing models for AI agents &nbsp;</p><p>49:00 The maturity of agents and the role of CISOs &nbsp;</p><p>52:35 Liability and governance for autonomous agents &nbsp;</p><p>55:33 The use of synthetic data: benefits and risks &nbsp;</p><p>58:50 Lightning round of questions &nbsp;</p><p>1:01:53 Concluding remarks&nbsp;&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/f8bc4f44-cb4e-4442-b234-feee4edb138a/b508e04355.jpg" />
  <pubDate>Wed, 13 Aug 2025 11:22:08 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="61377350" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/f8bc4f44-cb4e-4442-b234-feee4edb138a/episode.mp3" />
  <itunes:title><![CDATA[Dr. Cari Miller: “The U.S. AI Action Plan & Why It May Be a Global Risk” | RegulatingAI Podcast ]]></itunes:title>
  <itunes:duration>1:03:56</itunes:duration>
  <itunes:summary><![CDATA[<p>Join host <strong>Sanjay Puri</strong> in conversation with <strong>Dr. Cari Miller</strong>, a leading voice in AI governance, as they unpack the recently announced <strong>America’s AI Action Plan</strong>.&nbsp;</p><p><br></p><p>🔍 What you'll learn:&nbsp;</p><p><br></p><ul><li>Why Pillar One of the U.S. plan may spark global misalignment&nbsp;</li><li>The risks of removing "misinformation" from AI frameworks&nbsp;</li><li>Why U.S. innovation might clash with the <strong>EU AI Act</strong> and global regulatory norms&nbsp;</li><li>How free speech and foundation models intersect with international policy&nbsp;</li></ul><p>🌍 Global policymakers, this one is for you.&nbsp;</p><p><br></p><p>🎯 <strong>Watch now</strong> to understand why the latest U.S. move could raise alarms worldwide.&nbsp;</p><p><br></p><p>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/cari-miller/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/cari-miller/</a>&nbsp;</p><p>&nbsp;</p><p><br></p><p><strong style="color: rgb(0, 112, 192);">⏱</strong><strong style="color: rgb(15, 71, 97);">️ Timestamps:</strong><span style="color: rgb(15, 71, 97);">&nbsp;</span></p><p>0:00 Introduction of Dr. Cari Miller &nbsp;</p><p>2:52 The three pillars of America's AI Action Plan &nbsp;</p><p>7:20 Comparing the AI Action Plan to the EU AI Act &nbsp;</p><p>8:27 "Hurry up and innovate" and the geopolitical dimension of AI &nbsp;</p><p>10:45 The dilemma between innovation and regulation &nbsp;</p><p>13:09 The moratorium on state-level AI regulation &nbsp;</p><p>15:50 A spectrum for regulation: reversible vs. 
irreversible harm &nbsp;</p><p>17:17 The EU's approach to regulation &nbsp;</p><p>19:10 Why AI procurement is the "gate of all gates" for governance &nbsp;</p><p>21:27 What makes AI procurement different &nbsp;</p><p>23:32 The need for augmented procurement practices and training &nbsp;</p><p>24:14 Accounting for hallucination and vendor disclaimers &nbsp;</p><p>27:55 Procurement for foundation models vs. fine-tuned solutions &nbsp;</p><p>29:39 The possibility of AI insurance &nbsp;</p><p>31:02 Distinguishing between trustworthy and "AI snake oil" vendors &nbsp;</p><p>33:23 Strengths and weaknesses of existing AI procurement frameworks &nbsp;</p><p>35:26 The three checkpoints before issuing an AI RFP &nbsp;</p><p>37:41 Sovereign AI and procurement for global south nations &nbsp;</p><p>40:20 Concerns about agents and agentic AI systems &nbsp;</p><p>44:02 The domain professional and complex multi-turn tasks &nbsp;</p><p>45:59 Procurement and pricing models for AI agents &nbsp;</p><p>49:00 The maturity of agents and the role of CISOs &nbsp;</p><p>52:35 Liability and governance for autonomous agents &nbsp;</p><p>55:33 The use of synthetic data: benefits and risks &nbsp;</p><p>58:50 Lightning round of questions &nbsp;</p><p>1:01:53 Concluding remarks&nbsp;&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Join host <strong>Sanjay Puri</strong> in conversation with <strong>Dr. Cari Miller</strong>, a leading voice in AI governance, as they unpack the recently announced <strong>America’s AI Action Plan</strong>.&nbsp;</p><p><br></p><p>🔍 What you'll learn:&nbsp;</p><p><br></p><ul><li>Why Pillar One of the U.S. plan may spark global misalignment&nbsp;</li><li>The risks of removing "misinformation" from AI frameworks&nbsp;</li><li>Why U.S. innovation might clash with the <strong>EU AI Act</strong> and global regulatory norms&nbsp;</li><li>How free speech and foundation models intersect with international policy&nbsp;</li></ul><p>🌍 Global policymakers, this one is for you.&nbsp;</p><p><br></p><p>🎯 <strong>Watch now</strong> to understand why the latest U.S. move could raise alarms worldwide.&nbsp;</p><p><br></p><p>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/cari-miller/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/cari-miller/</a>&nbsp;</p><p>&nbsp;</p><p><br></p><p><strong style="color: rgb(0, 112, 192);">⏱</strong><strong style="color: rgb(15, 71, 97);">️ Timestamps:</strong><span style="color: rgb(15, 71, 97);">&nbsp;</span></p><p>0:00 Introduction of Dr. Cari Miller &nbsp;</p><p>2:52 The three pillars of America's AI Action Plan &nbsp;</p><p>7:20 Comparing the AI Action Plan to the EU AI Act &nbsp;</p><p>8:27 "Hurry up and innovate" and the geopolitical dimension of AI &nbsp;</p><p>10:45 The dilemma between innovation and regulation &nbsp;</p><p>13:09 The moratorium on state-level AI regulation &nbsp;</p><p>15:50 A spectrum for regulation: reversible vs. 
irreversible harm &nbsp;</p><p>17:17 The EU's approach to regulation &nbsp;</p><p>19:10 Why AI procurement is the "gate of all gates" for governance &nbsp;</p><p>21:27 What makes AI procurement different &nbsp;</p><p>23:32 The need for augmented procurement practices and training &nbsp;</p><p>24:14 Accounting for hallucination and vendor disclaimers &nbsp;</p><p>27:55 Procurement for foundation models vs. fine-tuned solutions &nbsp;</p><p>29:39 The possibility of AI insurance &nbsp;</p><p>31:02 Distinguishing between trustworthy and "AI snake oil" vendors &nbsp;</p><p>33:23 Strengths and weaknesses of existing AI procurement frameworks &nbsp;</p><p>35:26 The three checkpoints before issuing an AI RFP &nbsp;</p><p>37:41 Sovereign AI and procurement for global south nations &nbsp;</p><p>40:20 Concerns about agents and agentic AI systems &nbsp;</p><p>44:02 The domain professional and complex multi-turn tasks &nbsp;</p><p>45:59 Procurement and pricing models for AI agents &nbsp;</p><p>49:00 The maturity of agents and the role of CISOs &nbsp;</p><p>52:35 Liability and governance for autonomous agents &nbsp;</p><p>55:33 The use of synthetic data: benefits and risks &nbsp;</p><p>58:50 Lightning round of questions &nbsp;</p><p>1:01:53 Concluding remarks&nbsp;&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Join host Sanjay Puri in conversation with Dr. Cari Miller, a leading voice in AI governance, as they unpack the recently announced America’s AI Action Plan. 🔍 What you'll learn: Why Pillar One of the U.S. plan may spark global misalignment The ris...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[da54e00a-c159-4b8e-b3a2-491848830212]]></guid>
  <title><![CDATA[Can Constitutional Law Protect Us From AI? | Prof. Raquel Brízida Castro | RegulatingAI Podcast ]]></title>
  <description><![CDATA[<p>The RegulatingAI Podcast welcomes Prof. Raquel Brízida Castro to examine how Europe's AI regulatory framework measures up against core constitutional protections. </p><p><br></p><p>📌 Topics Covered: </p><p>~ The EU AI Act’s categorisation of risk – does it go far enough? </p><p>~ The collision between data sovereignty, latency, and user rights </p><p>~ Why current legal remedies like GDPR aren't enough for generative AI </p><p>~ Does the Brussels effect stand a chance against the Washington effect? </p><p>~ Will national courts lose relevance in the age of EU digital regulation? </p><p>~ Raquel's legal insight warns of a quiet constitutional revolution underway and why citizen protection must evolve urgently. </p><p><br></p><p>🎧 Watch Now: This conversation is vital for anyone navigating AI governance in democratic societies. </p><p><br></p><p>Resources Mentioned: </p><p>https://www.linkedin.com/in/raquel-a-br%C3%ADzida-castro-15317a105/ </p><p><br></p><p>⏱️ Timestamps: </p><p>0:00 Introduction to the podcast and guest, Raquel Brízida Castro  </p><p>2:21 Magnificent Introduction  </p><p>2:58 The EU AI Act from a Constitutional Law Perspective  </p><p>3:20 Constitutional Challenges and the Digital Social Democratic Rule of Law  </p><p>5:59 New Fundamental Rights in the AI Age  </p><p>8:27 The Right to Explainability: Rule of Law vs. Rule of Algorithm  </p><p>11:34 Is the EU AI Act's Risk-Based Approach Adequate?  </p><p>12:05 The Impact of AI on Fundamental Rights  </p><p>14:52 Regulation vs. Bureaucracy and Self-Regulation  </p><p>16:26 The Implementation of the AI Act and its Challenges  </p><p>21:58 The EU vs. US Approach: Regulation vs. 
Innovation  </p><p>23:55 The False Dilemma Between Regulating and Innovation  </p><p>27:09 The Washington Effect  </p><p>30:51 Implications for American Companies in Europe  </p><p>31:49 Digital Sovereignty and the Problem of Latency  </p><p>35:28 Constitutional Safeguards and Regulatory Overreach  </p><p>35:40 The Primacy of European Law and the Role of Constitutional Courts  </p><p>38:58 The Two-Year Moratorium on the EU Act  </p><p>40:30 Lightning Round of Questions  </p><p>43:24 Final thoughts </p><p> </p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/5827af4a-6384-4754-a601-439ee3e13c70/52206b32f1.jpg" />
  <pubDate>Fri, 08 Aug 2025 15:11:20 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="43577304" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/5827af4a-6384-4754-a601-439ee3e13c70/episode.mp3" />
  <itunes:title><![CDATA[Can Constitutional Law Protect Us From AI? | Prof. Raquel Brízida Castro | RegulatingAI Podcast ]]></itunes:title>
  <itunes:duration>45:23</itunes:duration>
  <itunes:summary><![CDATA[<p>The RegulatingAI Podcast welcomes Prof. Raquel Brízida Castro to examine how Europe's AI regulatory framework measures up against core constitutional protections. </p><p><br></p><p>📌 Topics Covered: </p><p>~ The EU AI Act’s categorisation of risk – does it go far enough? </p><p>~ The collision between data sovereignty, latency, and user rights </p><p>~ Why current legal remedies like GDPR aren't enough for generative AI </p><p>~ Does the Brussels effect stand a chance against the Washington effect? </p><p>~ Will national courts lose relevance in the age of EU digital regulation? </p><p>~ Raquel warns of a quiet constitutional revolution underway and explains why citizen protection must evolve urgently. </p><p><br></p><p>🎧 Listen Now: This conversation is vital for anyone navigating AI governance in democratic societies. </p><p><br></p><p>Resources Mentioned: </p><p>https://www.linkedin.com/in/raquel-a-br%C3%ADzida-castro-15317a105/ </p><p><br></p><p>⏱️ Timestamps: </p><p>0:00 Introduction to the podcast and guest, Raquel Brízida Castro  </p><p>2:21 Magnificent Introduction  </p><p>2:58 The EU AI Act from a Constitutional Law Perspective  </p><p>3:20 Constitutional Challenges and the Digital Social Democratic Rule of Law  </p><p>5:59 New Fundamental Rights in the AI Age  </p><p>8:27 The Right to Explainability: Rule of Law vs. Rule of Algorithm  </p><p>11:34 Is the EU AI Act's Risk-Based Approach Adequate?  </p><p>12:05 The Impact of AI on Fundamental Rights  </p><p>14:52 Regulation vs. Bureaucracy and Self-Regulation  </p><p>16:26 The Implementation of the AI Act and its Challenges  </p><p>21:58 The EU vs. US Approach: Regulation vs. Innovation  </p><p>23:55 The False Dilemma Between Regulation and Innovation  </p><p>27:09 The Washington Effect  </p><p>30:51 Implications for American Companies in Europe  </p><p>31:49 Digital Sovereignty and the Problem of Latency  </p><p>35:28 Constitutional Safeguards and Regulatory Overreach  </p><p>35:40 The Primacy of European Law and the Role of Constitutional Courts  </p><p>38:58 The Two-Year Moratorium on the EU AI Act  </p><p>40:30 Lightning Round of Questions  </p><p>43:24 Final thoughts </p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>The RegulatingAI Podcast welcomes Prof. Raquel Brízida Castro to examine how Europe's AI regulatory framework measures up against core constitutional protections. </p><p><br></p><p>📌 Topics Covered: </p><p>~ The EU AI Act’s categorisation of risk – does it go far enough? </p><p>~ The collision between data sovereignty, latency, and user rights </p><p>~ Why current legal remedies like GDPR aren't enough for generative AI </p><p>~ Does the Brussels effect stand a chance against the Washington effect? </p><p>~ Will national courts lose relevance in the age of EU digital regulation? </p><p>~ Raquel warns of a quiet constitutional revolution underway and explains why citizen protection must evolve urgently. </p><p><br></p><p>🎧 Listen Now: This conversation is vital for anyone navigating AI governance in democratic societies. </p><p><br></p><p>Resources Mentioned: </p><p>https://www.linkedin.com/in/raquel-a-br%C3%ADzida-castro-15317a105/ </p><p><br></p><p>⏱️ Timestamps: </p><p>0:00 Introduction to the podcast and guest, Raquel Brízida Castro  </p><p>2:21 Magnificent Introduction  </p><p>2:58 The EU AI Act from a Constitutional Law Perspective  </p><p>3:20 Constitutional Challenges and the Digital Social Democratic Rule of Law  </p><p>5:59 New Fundamental Rights in the AI Age  </p><p>8:27 The Right to Explainability: Rule of Law vs. Rule of Algorithm  </p><p>11:34 Is the EU AI Act's Risk-Based Approach Adequate?  </p><p>12:05 The Impact of AI on Fundamental Rights  </p><p>14:52 Regulation vs. Bureaucracy and Self-Regulation  </p><p>16:26 The Implementation of the AI Act and its Challenges  </p><p>21:58 The EU vs. US Approach: Regulation vs. Innovation  </p><p>23:55 The False Dilemma Between Regulation and Innovation  </p><p>27:09 The Washington Effect  </p><p>30:51 Implications for American Companies in Europe  </p><p>31:49 Digital Sovereignty and the Problem of Latency  </p><p>35:28 Constitutional Safeguards and Regulatory Overreach  </p><p>35:40 The Primacy of European Law and the Role of Constitutional Courts  </p><p>38:58 The Two-Year Moratorium on the EU AI Act  </p><p>40:30 Lightning Round of Questions  </p><p>43:24 Final thoughts </p>]]></content:encoded>
  <itunes:subtitle><![CDATA[The RegulatingAI Podcast welcomes Prof. Raquel Brízida Castro to examine how Europe's AI regulatory framework measures up against core constitutional protections. 📌 Topics Covered: ~ The EU AI Act’s categorisation of risk – does it go far enough? ~...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[dfb69d70-67ba-4066-b8c4-6644b5deb73e]]></guid>
  <title><![CDATA[Trump's AI Action Plan Decoded: Fair Use, Export Controls & US-China Competition with Joshua Geltzer ]]></title>
  <description><![CDATA[<p><strong>🚨 BREAKING: Former Deputy&nbsp;White House Counsel's Latest Interview on Trump's AI Strategy&nbsp;</strong></p><p><br></p><p>In this episode of the RegulatingAI Podcast, we sit down with Joshua Geltzer, who advised President Biden,&nbsp;to discuss&nbsp;the details behind America's new AI Action Plan. This is the definitive breakdown every tech executive, investor, and policymaker needs to watch.&nbsp;</p><p><br></p><p><strong>🎯 CRITICAL TAKEAWAYS:</strong>&nbsp;</p><ul><li>Why some states may LOSE federal AI funding&nbsp;&nbsp;</li><li>How fair use laws could save AI companies billions&nbsp;&nbsp;</li><li>The infrastructure revolution coming to your state&nbsp;&nbsp;</li><li>Export control&nbsp;politics&nbsp;that will reshape global tech&nbsp;&nbsp;</li><li>Why Trump chose to back&nbsp;open source&nbsp;</li></ul><p><strong>About the Guest:</strong> Joshua Geltzer is a partner at WilmerHale focusing on AI, cybersecurity, and national security litigation. Until January 2025, he served as Deputy Assistant to the President, Deputy White House Counsel, and Legal Adviser to the National Security Council.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/joshua-geltzer-6209b3198/" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.linkedin.com/in/joshua-geltzer-6209b3198/</strong></a>&nbsp;</p><p class="ql-align-justify"><a href="https://www.wilmerhale.com/en/people/joshua-geltzer" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.wilmerhale.com/en/people/joshua-geltzer</strong></a>&nbsp;</p><p>&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱</strong><strong style="color: rgb(15, 71, 97);">️ Timestamps:</strong><span style="color: rgb(15, 71, 97);">&nbsp;</span></p><p>0:00 Introduction to the podcast and guest Joshua Geltzer &nbsp;</p><p>4:29 Welcome to Regulating AI: The Podcast &nbsp;</p><p>5:44 The Three Pillars of the AI Action Plan &nbsp;</p><p>6:37 Fair Use, Training Data, and the Courts &nbsp;</p><p>8:17 Power, Land, and Permitting for Data Centers &nbsp;</p><p>10:19 Countering Synthetic Media and Deepfakes &nbsp;</p><p>11:45 The Effectiveness and Limitations of Export Controls &nbsp;</p><p>13:39 Leading International AI Governance While Prioritizing National Dominance &nbsp;</p><p>15:28 Federal-State Dynamics in AI Governance &nbsp;</p><p>19:00 The Open Source vs. Closed Model Debate &nbsp;</p><p>20:45 The Competitive Framing with China and National Security &nbsp;</p><p>22:54 Global AI Regulation and the Future &nbsp;</p><p>23:41 Concluding the discussion&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/5edd3c7c-3cb2-41de-bf08-32fd4f766a2b/c297c66b1c.jpg" />
  <pubDate>Wed, 06 Aug 2025 13:36:35 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="20430724" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/5edd3c7c-3cb2-41de-bf08-32fd4f766a2b/episode.mp3" />
  <itunes:title><![CDATA[Trump's AI Action Plan Decoded: Fair Use, Export Controls & US-China Competition with Joshua Geltzer ]]></itunes:title>
  <itunes:duration>21:16</itunes:duration>
  <itunes:summary><![CDATA[<p><strong>🚨 BREAKING: Former Deputy&nbsp;White House Counsel's Latest Interview on Trump's AI Strategy&nbsp;</strong></p><p><br></p><p>In this episode of the RegulatingAI Podcast, we sit down with Joshua Geltzer, who advised President Biden,&nbsp;to discuss&nbsp;the details behind America's new AI Action Plan. This is the definitive breakdown every tech executive, investor, and policymaker needs to watch.&nbsp;</p><p><br></p><p><strong>🎯 CRITICAL TAKEAWAYS:</strong>&nbsp;</p><ul><li>Why some states may LOSE federal AI funding&nbsp;&nbsp;</li><li>How fair use laws could save AI companies billions&nbsp;&nbsp;</li><li>The infrastructure revolution coming to your state&nbsp;&nbsp;</li><li>Export control&nbsp;politics&nbsp;that will reshape global tech&nbsp;&nbsp;</li><li>Why Trump chose to back&nbsp;open source&nbsp;</li></ul><p><strong>About the Guest:</strong> Joshua Geltzer is a partner at WilmerHale focusing on AI, cybersecurity, and national security litigation. Until January 2025, he served as Deputy Assistant to the President, Deputy White House Counsel, and Legal Adviser to the National Security Council.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/joshua-geltzer-6209b3198/" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.linkedin.com/in/joshua-geltzer-6209b3198/</strong></a>&nbsp;</p><p class="ql-align-justify"><a href="https://www.wilmerhale.com/en/people/joshua-geltzer" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.wilmerhale.com/en/people/joshua-geltzer</strong></a>&nbsp;</p><p>&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱</strong><strong style="color: rgb(15, 71, 97);">️ Timestamps:</strong><span style="color: rgb(15, 71, 97);">&nbsp;</span></p><p>0:00 Introduction to the podcast and guest Joshua Geltzer &nbsp;</p><p>4:29 Welcome to Regulating AI: The Podcast &nbsp;</p><p>5:44 The Three Pillars of the AI Action Plan &nbsp;</p><p>6:37 Fair Use, Training Data, and the Courts &nbsp;</p><p>8:17 Power, Land, and Permitting for Data Centers &nbsp;</p><p>10:19 Countering Synthetic Media and Deepfakes &nbsp;</p><p>11:45 The Effectiveness and Limitations of Export Controls &nbsp;</p><p>13:39 Leading International AI Governance While Prioritizing National Dominance &nbsp;</p><p>15:28 Federal-State Dynamics in AI Governance &nbsp;</p><p>19:00 The Open Source vs. Closed Model Debate &nbsp;</p><p>20:45 The Competitive Framing with China and National Security &nbsp;</p><p>22:54 Global AI Regulation and the Future &nbsp;</p><p>23:41 Concluding the discussion&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><strong>🚨 BREAKING: Former Deputy&nbsp;White House Counsel's Latest Interview on Trump's AI Strategy&nbsp;</strong></p><p><br></p><p>In this episode of the RegulatingAI Podcast, we sit down with Joshua Geltzer, who advised President Biden,&nbsp;to discuss&nbsp;the details behind America's new AI Action Plan. This is the definitive breakdown every tech executive, investor, and policymaker needs to watch.&nbsp;</p><p><br></p><p><strong>🎯 CRITICAL TAKEAWAYS:</strong>&nbsp;</p><ul><li>Why some states may LOSE federal AI funding&nbsp;&nbsp;</li><li>How fair use laws could save AI companies billions&nbsp;&nbsp;</li><li>The infrastructure revolution coming to your state&nbsp;&nbsp;</li><li>Export control&nbsp;politics&nbsp;that will reshape global tech&nbsp;&nbsp;</li><li>Why Trump chose to back&nbsp;open source&nbsp;</li></ul><p><strong>About the Guest:</strong> Joshua Geltzer is a partner at WilmerHale focusing on AI, cybersecurity, and national security litigation. Until January 2025, he served as Deputy Assistant to the President, Deputy White House Counsel, and Legal Adviser to the National Security Council.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/joshua-geltzer-6209b3198/" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.linkedin.com/in/joshua-geltzer-6209b3198/</strong></a>&nbsp;</p><p class="ql-align-justify"><a href="https://www.wilmerhale.com/en/people/joshua-geltzer" target="_blank" style="color: rgb(70, 120, 134);"><strong>https://www.wilmerhale.com/en/people/joshua-geltzer</strong></a>&nbsp;</p><p>&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱</strong><strong style="color: rgb(15, 71, 97);">️ Timestamps:</strong><span style="color: rgb(15, 71, 97);">&nbsp;</span></p><p>0:00 Introduction to the podcast and guest Joshua Geltzer &nbsp;</p><p>4:29 Welcome to Regulating AI: The Podcast &nbsp;</p><p>5:44 The Three Pillars of the AI Action Plan &nbsp;</p><p>6:37 Fair Use, Training Data, and the Courts &nbsp;</p><p>8:17 Power, Land, and Permitting for Data Centers &nbsp;</p><p>10:19 Countering Synthetic Media and Deepfakes &nbsp;</p><p>11:45 The Effectiveness and Limitations of Export Controls &nbsp;</p><p>13:39 Leading International AI Governance While Prioritizing National Dominance &nbsp;</p><p>15:28 Federal-State Dynamics in AI Governance &nbsp;</p><p>19:00 The Open Source vs. Closed Model Debate &nbsp;</p><p>20:45 The Competitive Framing with China and National Security &nbsp;</p><p>22:54 Global AI Regulation and the Future &nbsp;</p><p>23:41 Concluding the discussion&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[🚨 BREAKING: Former Deputy White House Counsel's Latest Interview on Trump's AI Strategy In this episode of the RegulatingAI Podcast, we sit down with Joshua Geltzer, who advised President Biden, to discuss the details behind America's new AI Action...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[7c1a7baa-ee3b-4746-9778-dcff2f7c2c5a]]></guid>
  <title><![CDATA[The Security Risks in America’s AI Action Plan – Rob T. Lee | RegulatingAI Podcast ]]></title>
  <description><![CDATA[<p class="ql-align-justify">In this episode of RegulatingAI, Sanjay speaks with Rob T. Lee, Chief AI Officer at the SANS Institute and advisor to the U.S. Foreign Intelligence Surveillance Court.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>What you’ll learn:&nbsp;</strong></p><p><br></p><ul><li class="ql-align-justify">Why Rob believes America’s AI systems are already under attack&nbsp;</li><li class="ql-align-justify">How adversaries are leveraging generative AI without regulatory constraints&nbsp;</li><li class="ql-align-justify">Why current cybersecurity approaches are inadequate for AI-based threats&nbsp;</li><li class="ql-align-justify">The challenge of balancing speed with safety in federal AI deployments&nbsp;</li><li class="ql-align-justify">Insights into critical gaps in open-source model evaluations&nbsp;</li></ul><p class="ql-align-justify">This conversation is a wake-up call for regulators, enterprise leaders, and anyone navigating AI implementation at scale.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify">Rob T. Lee, Chief of Research and Chief AI Officer, SANS Institute&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/leerob/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/leerob/</a>&nbsp;</p><p class="ql-align-justify">Substack: <a href="https://robtlee73.substack.com/" target="_blank" style="color: rgb(70, 120, 134);">https://robtlee73.substack.com/</a>&nbsp;&nbsp;</p><p class="ql-align-justify">X: <a href="https://x.com/robtlee" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/robtlee</a>&nbsp;&nbsp;</p><p class="ql-align-justify">YouTube: <a href="https://www.youtube.com/@RobLee96" target="_blank" style="color: rgb(70, 120, 134);">https://www.youtube.com/@RobLee96</a>&nbsp;</p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/c63c2089-e9a2-4898-afcc-e33b5569ca3a/343057bab2.jpg" />
  <pubDate>Mon, 04 Aug 2025 18:10:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="32656866" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/c63c2089-e9a2-4898-afcc-e33b5569ca3a/episode.mp3" />
  <itunes:title><![CDATA[The Security Risks in America’s AI Action Plan – Rob T. Lee | RegulatingAI Podcast ]]></itunes:title>
  <itunes:duration>34:01</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">In this episode of RegulatingAI, Sanjay speaks with Rob T. Lee, Chief AI Officer at the SANS Institute and advisor to the U.S. Foreign Intelligence Surveillance Court.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>What you’ll learn:&nbsp;</strong></p><p><br></p><ul><li class="ql-align-justify">Why Rob believes America’s AI systems are already under attack&nbsp;</li><li class="ql-align-justify">How adversaries are leveraging generative AI without regulatory constraints&nbsp;</li><li class="ql-align-justify">Why current cybersecurity approaches are inadequate for AI-based threats&nbsp;</li><li class="ql-align-justify">The challenge of balancing speed with safety in federal AI deployments&nbsp;</li><li class="ql-align-justify">Insights into critical gaps in open-source model evaluations&nbsp;</li></ul><p class="ql-align-justify">This conversation is a wake-up call for regulators, enterprise leaders, and anyone navigating AI implementation at scale.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify">Rob T. Lee, Chief of Research and Chief AI Officer, SANS Institute&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/leerob/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/leerob/</a>&nbsp;</p><p class="ql-align-justify">Substack: <a href="https://robtlee73.substack.com/" target="_blank" style="color: rgb(70, 120, 134);">https://robtlee73.substack.com/</a>&nbsp;&nbsp;</p><p class="ql-align-justify">X: <a href="https://x.com/robtlee" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/robtlee</a>&nbsp;&nbsp;</p><p class="ql-align-justify">YouTube: <a href="https://www.youtube.com/@RobLee96" target="_blank" style="color: rgb(70, 120, 134);">https://www.youtube.com/@RobLee96</a>&nbsp;</p><p><br></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">In this episode of RegulatingAI, Sanjay speaks with Rob T. Lee, Chief AI Officer at the SANS Institute and advisor to the U.S. Foreign Intelligence Surveillance Court.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>What you’ll learn:&nbsp;</strong></p><p><br></p><ul><li class="ql-align-justify">Why Rob believes America’s AI systems are already under attack&nbsp;</li><li class="ql-align-justify">How adversaries are leveraging generative AI without regulatory constraints&nbsp;</li><li class="ql-align-justify">Why current cybersecurity approaches are inadequate for AI-based threats&nbsp;</li><li class="ql-align-justify">The challenge of balancing speed with safety in federal AI deployments&nbsp;</li><li class="ql-align-justify">Insights into critical gaps in open-source model evaluations&nbsp;</li></ul><p class="ql-align-justify">This conversation is a wake-up call for regulators, enterprise leaders, and anyone navigating AI implementation at scale.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify">Rob T. Lee, Chief of Research and Chief AI Officer, SANS Institute&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/leerob/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/leerob/</a>&nbsp;</p><p class="ql-align-justify">Substack: <a href="https://robtlee73.substack.com/" target="_blank" style="color: rgb(70, 120, 134);">https://robtlee73.substack.com/</a>&nbsp;&nbsp;</p><p class="ql-align-justify">X: <a href="https://x.com/robtlee" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/robtlee</a>&nbsp;&nbsp;</p><p class="ql-align-justify">YouTube: <a href="https://www.youtube.com/@RobLee96" target="_blank" style="color: rgb(70, 120, 134);">https://www.youtube.com/@RobLee96</a>&nbsp;</p><p><br></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of RegulatingAI, Sanjay speaks with Rob T. Lee, Chief AI Officer at the SANS Institute and advisor to the U.S. Foreign Intelligence Surveillance Court. What you’ll learn: Why Rob believes America’s AI systems are already under attac...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[7fb627d1-7a42-4960-905d-2e9a6d213cec]]></guid>
  <title><![CDATA[Peter Sands & Sania Nishtar on Revolutionizing Global Health | AI for Good]]></title>
  <description><![CDATA[<p class="ql-align-justify">In this compelling ‘AI for Good’ panel, moderator <strong>Sanjay Puri</strong> brings together two of the most influential voices in global health: <strong>Peter Sands</strong>, Executive Director of <strong>The Global Fund</strong>, and <strong>Sania Nishtar</strong>, CEO of <strong>Gavi, the Vaccine Alliance</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify">Together, they dive deep into the <strong>transformative power of artificial intelligence</strong> in addressing some of the world’s most pressing health challenges. From improving disease surveillance and accelerating vaccine delivery to enhancing decision-making in underserved regions, this discussion highlights the real-world impact and ethical considerations of AI in global health.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Key topics covered:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How AI is being applied to strengthen health systems globally&nbsp;</li><li class="ql-align-justify">Real-life examples of AI driving change in low- and middle-income countries&nbsp;</li><li class="ql-align-justify">The role of public-private partnerships in scaling AI for health&nbsp;</li><li class="ql-align-justify">Challenges around data, equity, and governance in AI adoption&nbsp;</li></ul><p class="ql-align-justify">Whether you're a policymaker, health professional, technologist, or simply interested in how AI can serve humanity, this conversation offers critical insights and bold visions for the future.&nbsp;</p><p><br></p><p class="ql-align-justify">🔔 Don’t forget to <strong>like, comment, and subscribe</strong> for more discussions at the intersection of <strong>technology and social impact</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify">#AIforGood #GlobalHealth #PeterSands #SaniaNishtar&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/sania-nishtar-bb2a8123a" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/sania-nishtar-bb2a8123a</a>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/peter-sands-0808bb6b" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/peter-sands-0808bb6b</a>&nbsp;</p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/ac08e700-1e3b-4704-bffa-53ac33c778fe/6028ab4e60.jpg" />
  <pubDate>Thu, 31 Jul 2025 15:48:59 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="21765268" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/ac08e700-1e3b-4704-bffa-53ac33c778fe/episode.mp3" />
  <itunes:title><![CDATA[Peter Sands & Sania Nishtar on Revolutionizing Global Health | AI for Good]]></itunes:title>
  <itunes:duration>22:40</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">In this compelling ‘AI for Good’ panel, moderator <strong>Sanjay Puri</strong> brings together two of the most influential voices in global health: <strong>Peter Sands</strong>, Executive Director of <strong>The Global Fund</strong>, and <strong>Sania Nishtar</strong>, CEO of <strong>Gavi, the Vaccine Alliance</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify">Together, they dive deep into the <strong>transformative power of artificial intelligence</strong> in addressing some of the world’s most pressing health challenges. From improving disease surveillance and accelerating vaccine delivery to enhancing decision-making in underserved regions, this discussion highlights the real-world impact and ethical considerations of AI in global health.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Key topics covered:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How AI is being applied to strengthen health systems globally&nbsp;</li><li class="ql-align-justify">Real-life examples of AI driving change in low- and middle-income countries&nbsp;</li><li class="ql-align-justify">The role of public-private partnerships in scaling AI for health&nbsp;</li><li class="ql-align-justify">Challenges around data, equity, and governance in AI adoption&nbsp;</li></ul><p class="ql-align-justify">Whether you're a policymaker, health professional, technologist, or simply interested in how AI can serve humanity, this conversation offers critical insights and bold visions for the future.&nbsp;</p><p><br></p><p class="ql-align-justify">🔔 Don’t forget to <strong>like, comment, and subscribe</strong> for more discussions at the intersection of <strong>technology and social impact</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify">#AIforGood #GlobalHealth #PeterSands #SaniaNishtar&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/sania-nishtar-bb2a8123a" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/sania-nishtar-bb2a8123a</a>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/peter-sands-0808bb6b" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/peter-sands-0808bb6b</a>&nbsp;</p><p><br></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">In this compelling ‘AI for Good’ panel, moderator <strong>Sanjay Puri</strong> brings together two of the most influential voices in global health: <strong>Peter Sands</strong>, Executive Director of <strong>The Global Fund</strong>, and <strong>Sania Nishtar</strong>, CEO of <strong>Gavi, the Vaccine Alliance</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify">Together, they dive deep into the <strong>transformative power of artificial intelligence</strong> in addressing some of the world’s most pressing health challenges. From improving disease surveillance and accelerating vaccine delivery to enhancing decision-making in underserved regions, this discussion highlights the real-world impact and ethical considerations of AI in global health.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Key topics covered:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How AI is being applied to strengthen health systems globally&nbsp;</li><li class="ql-align-justify">Real-life examples of AI driving change in low- and middle-income countries&nbsp;</li><li class="ql-align-justify">The role of public-private partnerships in scaling AI for health&nbsp;</li><li class="ql-align-justify">Challenges around data, equity, and governance in AI adoption&nbsp;</li></ul><p class="ql-align-justify">Whether you're a policymaker, health professional, technologist, or simply interested in how AI can serve humanity, this conversation offers critical insights and bold visions for the future.&nbsp;</p><p><br></p><p class="ql-align-justify">🔔 Don’t forget to <strong>like, comment, and subscribe</strong> for more discussions at the intersection of <strong>technology and social impact</strong>.&nbsp;</p><p><br></p><p class="ql-align-justify">#AIforGood #GlobalHealth #PeterSands #SaniaNishtar&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/sania-nishtar-bb2a8123a" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/sania-nishtar-bb2a8123a</a>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/peter-sands-0808bb6b" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/peter-sands-0808bb6b</a>&nbsp;</p><p><br></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this compelling ‘AI for Good’ panel, moderator Sanjay Puri brings together two of the most influential voices in global health: Peter Sands, Executive Director of The Global Fund, and Sania Nishtar, CEO of Gavi, the Vaccine Alliance. Together, t...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[0e136d5a-7772-42fe-8822-0ddd7ee1e141]]></guid>
  <title><![CDATA[11,000 Attendees, 169 Countries: Inside AI for Good Summit 2025 | Frederic Werner  | RegulatingAI Podcast ]]></title>
  <description><![CDATA[<p class="ql-align-justify">In this special episode of the <strong>Regulating AI Podcast</strong>, live from the <strong>AI for Good Summit in Geneva</strong>, host Sanjay Puri sits down with <strong>Frederic Werner</strong>, Chief of Strategy and Operations at AI for Good, to explore how the initiative has evolved into a global movement touching every corner of society.&nbsp;</p><p><br></p><p class="ql-align-justify">🔍 <strong>Key topics discussed:</strong>&nbsp;</p><ul><li class="ql-align-justify">The origin and growth of AI for Good&nbsp;</li><li class="ql-align-justify">Why AI for Good is more than just a summit—it’s a year-round platform&nbsp;</li><li class="ql-align-justify">Building community and capacity through inclusivity and innovation&nbsp;</li><li class="ql-align-justify">Engaging youth, startups, governments, and NGOs alike&nbsp;</li><li class="ql-align-justify">AI for Good’s partnerships with 53 UN sister agencies&nbsp;</li></ul><p class="ql-align-justify">🧠 Whether you're in policy, tech, education, or just AI-curious, this episode will show how AI can be a force for equity and progress.&nbsp;</p><p><br></p><p class="ql-align-justify">📢 <strong>Subscribe for more deep dives into AI policy, governance, and innovation.</strong>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/groups/8567748/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/groups/8567748/</a> &nbsp;</p><p><a href="https://x.com/FredericWerner" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/FredericWerner</a>&nbsp;&nbsp;</p><p><a href="https://www.linkedin.com/in/fredericwerner/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/fredericwerner/</a>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/6a2aa949-b35f-40d7-a7f9-0e4f54bd9c6c/0cdb14c593.jpg" />
  <pubDate>Fri, 18 Jul 2025 17:20:22 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="6551972" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/6a2aa949-b35f-40d7-a7f9-0e4f54bd9c6c/episode.mp3" />
  <itunes:title><![CDATA[11,000 Attendees, 169 Countries: Inside AI for Good Summit 2025 | Frederic Werner  | RegulatingAI Podcast ]]></itunes:title>
  <itunes:duration>6:49</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">In this special episode of the <strong>Regulating AI Podcast</strong>, live from the <strong>AI for Good Summit in Geneva</strong>, host Sanjay Puri sits down with <strong>Frederic Werner</strong>, Chief of Strategy and Operations at AI for Good, to explore how the initiative has evolved into a global movement touching every corner of society.&nbsp;</p><p><br></p><p class="ql-align-justify">🔍 <strong>Key topics discussed:</strong>&nbsp;</p><ul><li class="ql-align-justify">The origin and growth of AI for Good&nbsp;</li><li class="ql-align-justify">Why AI for Good is more than just a summit—it’s a year-round platform&nbsp;</li><li class="ql-align-justify">Building community and capacity through inclusivity and innovation&nbsp;</li><li class="ql-align-justify">Engaging youth, startups, governments, and NGOs alike&nbsp;</li><li class="ql-align-justify">AI for Good’s partnerships with 53 UN sister agencies&nbsp;</li></ul><p class="ql-align-justify">🧠 Whether you're in policy, tech, education, or just AI-curious, this episode will show how AI can be a force for equity and progress.&nbsp;</p><p><br></p><p class="ql-align-justify">📢 <strong>Subscribe for more deep dives into AI policy, governance, and innovation.</strong>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/groups/8567748/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/groups/8567748/</a> &nbsp;</p><p><a href="https://x.com/FredericWerner" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/FredericWerner</a>&nbsp;&nbsp;</p><p><a href="https://www.linkedin.com/in/fredericwerner/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/fredericwerner/</a>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">In this special episode of the <strong>Regulating AI Podcast</strong>, live from the <strong>AI for Good Summit in Geneva</strong>, host Sanjay Puri sits down with <strong>Frederic Werner</strong>, Chief of Strategy and Operations at AI for Good, to explore how the initiative has evolved into a global movement touching every corner of society.&nbsp;</p><p><br></p><p class="ql-align-justify">🔍 <strong>Key topics discussed:</strong>&nbsp;</p><ul><li class="ql-align-justify">The origin and growth of AI for Good&nbsp;</li><li class="ql-align-justify">Why AI for Good is more than just a summit—it’s a year-round platform&nbsp;</li><li class="ql-align-justify">Building community and capacity through inclusivity and innovation&nbsp;</li><li class="ql-align-justify">Engaging youth, startups, governments, and NGOs alike&nbsp;</li><li class="ql-align-justify">AI for Good’s partnerships with 53 UN sister agencies&nbsp;</li></ul><p class="ql-align-justify">🧠 Whether you're in policy, tech, education, or just AI-curious, this episode will show how AI can be a force for equity and progress.&nbsp;</p><p><br></p><p class="ql-align-justify">📢 <strong>Subscribe for more deep dives into AI policy, governance, and innovation.</strong>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/groups/8567748/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/groups/8567748/</a> &nbsp;</p><p><a href="https://x.com/FredericWerner" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/FredericWerner</a>&nbsp;&nbsp;</p><p><a href="https://www.linkedin.com/in/fredericwerner/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/fredericwerner/</a>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this special episode of the Regulating AI Podcast, live from the AI for Good Summit in Geneva, host Sanjay Puri sits down with Frederic Werner, Chief of Strategy and Operations at AI for Good, to explore how the initiative has evolved into a glo...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[b96c86fb-8f32-4c77-805c-c4fdc34fc45a]]></guid>
  <title><![CDATA[How Salesforce Balances AI Innovation with Responsibility | Eric Loeb on Policy & Governance | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p>Can responsibility, innovation, and success truly coexist in the age of AI? Salesforce's Eric Loeb believes they must—and shares how the company is putting that vision into action through agentic AI and values-led governance.</p><p><br></p><p><strong><em>💡 You’ll learn:</em></strong></p><p>· What agentic AI is and why it changes enterprise workflows</p><p>· Why AI agents should always augment—not replace—humans</p><p>· The role of internal governance, "job descriptions" for agents, and ethical oversight</p><p>· Why shared responsibility will define AI liability in the future</p><p>· How Salesforce integrates safety, trust, and innovation into every layer of its AI stack</p><p>📌 A rare look into how one of the world’s most respected tech companies handles AI governance. </p><p><br></p><p>#AIgovernance #AgenticAI #RegulatingAI</p><p><br></p><p>Resources Mentioned: </p><p>https://www.linkedin.com/in/eric-loeb-33a86b/</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/4c8c662b-0326-4b6f-af64-56d237eb633f/090a20b5f5.jpg" />
  <pubDate>Mon, 14 Jul 2025 03:40:51 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="27561944" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/4c8c662b-0326-4b6f-af64-56d237eb633f/episode.mp3" />
  <itunes:title><![CDATA[How Salesforce Balances AI Innovation with Responsibility | Eric Loeb on Policy & Governance | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>28:42</itunes:duration>
  <itunes:summary><![CDATA[<p>Can responsibility, innovation, and success truly coexist in the age of AI? Salesforce's Eric Loeb believes they must—and shares how the company is putting that vision into action through agentic AI and values-led governance.</p><p><br></p><p><strong><em>💡 You’ll learn:</em></strong></p><p>· What agentic AI is and why it changes enterprise workflows</p><p>· Why AI agents should always augment—not replace—humans</p><p>· The role of internal governance, "job descriptions" for agents, and ethical oversight</p><p>· Why shared responsibility will define AI liability in the future</p><p>· How Salesforce integrates safety, trust, and innovation into every layer of its AI stack</p><p>📌 A rare look into how one of the world’s most respected tech companies handles AI governance. </p><p><br></p><p>#AIgovernance #AgenticAI #RegulatingAI</p><p><br></p><p>Resources Mentioned: </p><p>https://www.linkedin.com/in/eric-loeb-33a86b/</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Can responsibility, innovation, and success truly coexist in the age of AI? Salesforce's Eric Loeb believes they must—and shares how the company is putting that vision into action through agentic AI and values-led governance.</p><p><br></p><p><strong><em>💡 You’ll learn:</em></strong></p><p>· What agentic AI is and why it changes enterprise workflows</p><p>· Why AI agents should always augment—not replace—humans</p><p>· The role of internal governance, "job descriptions" for agents, and ethical oversight</p><p>· Why shared responsibility will define AI liability in the future</p><p>· How Salesforce integrates safety, trust, and innovation into every layer of its AI stack</p><p>📌 A rare look into how one of the world’s most respected tech companies handles AI governance. </p><p><br></p><p>#AIgovernance #AgenticAI #RegulatingAI</p><p><br></p><p>Resources Mentioned: </p><p>https://www.linkedin.com/in/eric-loeb-33a86b/</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Can responsibility, innovation, and success truly coexist in the age of AI? Salesforce's Eric Loeb believes they must—and shares how the company is putting that vision into action through agentic AI and values-led governance.💡 You’ll learn:· What a...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[b9145d83-31a7-4827-bf40-7a8d188224f5]]></guid>
  <title><![CDATA[How David Sinclair uses AI to reverse the effects of aging and develop life-extending drugs | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p>Join us for a groundbreaking conversation with <strong>Harvard Professor David A. Sinclair</strong>, a global authority on aging and longevity. Live from the ITU AI for Good conference in Geneva, Dr. Sinclair explains how his lab is leveraging AI to identify molecules that may <strong>reverse the aging process</strong>.&nbsp;</p><p><br></p><p><strong>In this episode:</strong>&nbsp;</p><p><br></p><ul><li>How generative AI is transforming drug discovery timelines and cost&nbsp;</li><li>Real-world examples of AI identifying age-reversing molecules&nbsp;</li><li>The future of age-resetting gene therapies&nbsp;</li><li>Why Sinclair believes AI labs can now function like pharma companies&nbsp;</li><li>The urgent need for regulatory reform to accelerate innovation&nbsp;</li></ul><p>📣 Support Sinclair’s research: <a href="https://friendsofsinclairlab.org/" target="_blank" style="color: rgb(70, 120, 134);">friendsofsinclairlab.org</a>&nbsp;</p><p><br></p><p><strong>Know our guest</strong>: <a href="https://davidasinclair.com/" target="_blank" style="color: rgb(70, 120, 134);">https://davidasinclair.com/</a>&nbsp;&nbsp;</p><p><strong>Read his book at:</strong> <a href="https://www.amazon.com/dp/0008380325/?bestFormat=true&amp;k=lifespan%20book&amp;ref_=nb_sb_ss_w_scx-ent-pd-bk-d_de_k0_1_7&amp;crid=GMIFBPN6CS8J&amp;sprefix=lifespn" target="_blank" style="color: rgb(70, 120, 134);">https://www.amazon.com/dp/0008380325</a>&nbsp;</p><p><br></p><p>Listen to his podcast: <a href="http://www.youtube.com/@LifespanOfficial" target="_blank" style="color: rgb(70, 120, 134);">http://www.youtube.com/@LifespanOfficial</a>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/hovig-etyemezian-9b33994/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/hovig-etyemezian-9b33994/</a>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/34e07391-33b9-452f-9631-77196eb45b72/6c74cd456d.jpg" />
  <pubDate>Fri, 11 Jul 2025 17:12:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="20757568" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/34e07391-33b9-452f-9631-77196eb45b72/episode.mp3" />
  <itunes:title><![CDATA[How David Sinclair uses AI to reverse the effects of aging and develop life-extending drugs | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>21:37</itunes:duration>
  <itunes:summary><![CDATA[<p>Join us for a groundbreaking conversation with <strong>Harvard Professor David A. Sinclair</strong>, a global authority on aging and longevity. Live from the ITU AI for Good conference in Geneva, Dr. Sinclair explains how his lab is leveraging AI to identify molecules that may <strong>reverse the aging process</strong>.&nbsp;</p><p><br></p><p><strong>In this episode:</strong>&nbsp;</p><p><br></p><ul><li>How generative AI is transforming drug discovery timelines and cost&nbsp;</li><li>Real-world examples of AI identifying age-reversing molecules&nbsp;</li><li>The future of age-resetting gene therapies&nbsp;</li><li>Why Sinclair believes AI labs can now function like pharma companies&nbsp;</li><li>The urgent need for regulatory reform to accelerate innovation&nbsp;</li></ul><p>📣 Support Sinclair’s research: <a href="https://friendsofsinclairlab.org/" target="_blank" style="color: rgb(70, 120, 134);">friendsofsinclairlab.org</a>&nbsp;</p><p><br></p><p><strong>Know our guest</strong>: <a href="https://davidasinclair.com/" target="_blank" style="color: rgb(70, 120, 134);">https://davidasinclair.com/</a>&nbsp;&nbsp;</p><p><strong>Read his book at:</strong> <a href="https://www.amazon.com/dp/0008380325/?bestFormat=true&amp;k=lifespan%20book&amp;ref_=nb_sb_ss_w_scx-ent-pd-bk-d_de_k0_1_7&amp;crid=GMIFBPN6CS8J&amp;sprefix=lifespn" target="_blank" style="color: rgb(70, 120, 134);">https://www.amazon.com/dp/0008380325</a>&nbsp;</p><p><br></p><p>Listen to his podcast: <a href="http://www.youtube.com/@LifespanOfficial" target="_blank" style="color: rgb(70, 120, 134);">http://www.youtube.com/@LifespanOfficial</a>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/hovig-etyemezian-9b33994/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/hovig-etyemezian-9b33994/</a>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Join us for a groundbreaking conversation with <strong>Harvard Professor David A. Sinclair</strong>, a global authority on aging and longevity. Live from the ITU AI for Good conference in Geneva, Dr. Sinclair explains how his lab is leveraging AI to identify molecules that may <strong>reverse the aging process</strong>.&nbsp;</p><p><br></p><p><strong>In this episode:</strong>&nbsp;</p><p><br></p><ul><li>How generative AI is transforming drug discovery timelines and cost&nbsp;</li><li>Real-world examples of AI identifying age-reversing molecules&nbsp;</li><li>The future of age-resetting gene therapies&nbsp;</li><li>Why Sinclair believes AI labs can now function like pharma companies&nbsp;</li><li>The urgent need for regulatory reform to accelerate innovation&nbsp;</li></ul><p>📣 Support Sinclair’s research: <a href="https://friendsofsinclairlab.org/" target="_blank" style="color: rgb(70, 120, 134);">friendsofsinclairlab.org</a>&nbsp;</p><p><br></p><p><strong>Know our guest</strong>: <a href="https://davidasinclair.com/" target="_blank" style="color: rgb(70, 120, 134);">https://davidasinclair.com/</a>&nbsp;&nbsp;</p><p><strong>Read his book at:</strong> <a href="https://www.amazon.com/dp/0008380325/?bestFormat=true&amp;k=lifespan%20book&amp;ref_=nb_sb_ss_w_scx-ent-pd-bk-d_de_k0_1_7&amp;crid=GMIFBPN6CS8J&amp;sprefix=lifespn" target="_blank" style="color: rgb(70, 120, 134);">https://www.amazon.com/dp/0008380325</a>&nbsp;</p><p><br></p><p>Listen to his podcast: <a href="http://www.youtube.com/@LifespanOfficial" target="_blank" style="color: rgb(70, 120, 134);">http://www.youtube.com/@LifespanOfficial</a>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/hovig-etyemezian-9b33994/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/hovig-etyemezian-9b33994/</a>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Join us for a groundbreaking conversation with Harvard Professor David A. Sinclair, a global authority on aging and longevity. Live from the ITU AI for Good conference in Geneva, Dr. Sinclair explains how his lab is leveraging AI to identify molecu...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[528b36ca-0c7f-46fc-b559-f6f4bbf9c83b]]></guid>
  <title><![CDATA[Why AI Degrees May Be Meaningless Without Certification – Dr. Kathleen Kramer, IEEE | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p>Dr. Kathleen Kramer doesn’t hold back. As IEEE President and a renowned professor, she shares blunt truths on AI education and credentials in this powerful RegulatingAI episode from Geneva’s AI for Good Summit.&nbsp;</p><p><br></p><p>💥 In this episode:&nbsp;</p><p><br></p><ul><li>Why saying "I have a master’s in AI" means nothing without recognized standards&nbsp;</li><li>The importance of grit, resilience, and doing the hard things in education&nbsp;</li><li>Why certifications—not degrees—are the future of AI talent validation&nbsp;</li><li>How IEEE's 141-year history positions it to shape tomorrow’s ethical AI&nbsp;</li><li>What it means to "advance technology for humanity" in a rapidly shifting workforce&nbsp;</li></ul><p>This is a call to rethink how we educate, certify, and empower the next generation of AI leaders.&nbsp;</p><p>🎙️ Real talk. Real insight. Only on RegulatingAI.&nbsp;</p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.ieee.org/kathleen-a-kramer" target="_blank" style="color: rgb(70, 120, 134);">https://www.ieee.org/kathleen-a-kramer</a>&nbsp;&nbsp;</p><p><a href="https://www.linkedin.com/company/ieee/posts/?feedView=all" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/company/ieee/posts/?feedView=all</a>&nbsp;&nbsp;</p><p><a href="https://www.facebook.com/IEEE.org" target="_blank" style="color: rgb(70, 120, 134);">https://www.facebook.com/IEEE.org</a>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/daa6658f-54fe-4861-a279-613a46217861/fdbd03b710.jpg" />
  <pubDate>Thu, 10 Jul 2025 15:56:09 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="33711795" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/daa6658f-54fe-4861-a279-613a46217861/episode.mp3" />
  <itunes:title><![CDATA[Why AI Degrees May Be Meaningless Without Certification – Dr. Kathleen Kramer, IEEE | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>35:06</itunes:duration>
  <itunes:summary><![CDATA[<p>Dr. Kathleen Kramer doesn’t hold back. As IEEE President and a renowned professor, she shares blunt truths on AI education and credentials in this powerful RegulatingAI episode from Geneva’s AI for Good Summit.&nbsp;</p><p><br></p><p>💥 In this episode:&nbsp;</p><p><br></p><ul><li>Why saying "I have a master’s in AI" means nothing without recognized standards&nbsp;</li><li>The importance of grit, resilience, and doing the hard things in education&nbsp;</li><li>Why certifications—not degrees—are the future of AI talent validation&nbsp;</li><li>How IEEE's 141-year history positions it to shape tomorrow’s ethical AI&nbsp;</li><li>What it means to "advance technology for humanity" in a rapidly shifting workforce&nbsp;</li></ul><p>This is a call to rethink how we educate, certify, and empower the next generation of AI leaders.&nbsp;</p><p>🎙️ Real talk. Real insight. Only on RegulatingAI.&nbsp;</p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.ieee.org/kathleen-a-kramer" target="_blank" style="color: rgb(70, 120, 134);">https://www.ieee.org/kathleen-a-kramer</a>&nbsp;&nbsp;</p><p><a href="https://www.linkedin.com/company/ieee/posts/?feedView=all" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/company/ieee/posts/?feedView=all</a>&nbsp;&nbsp;</p><p><a href="https://www.facebook.com/IEEE.org" target="_blank" style="color: rgb(70, 120, 134);">https://www.facebook.com/IEEE.org</a>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Dr. Kathleen Kramer doesn’t hold back. As IEEE President and a renowned professor, she shares blunt truths on AI education and credentials in this powerful RegulatingAI episode from Geneva’s AI for Good Summit.&nbsp;</p><p><br></p><p>💥 In this episode:&nbsp;</p><p><br></p><ul><li>Why saying "I have a master’s in AI" means nothing without recognized standards&nbsp;</li><li>The importance of grit, resilience, and doing the hard things in education&nbsp;</li><li>Why certifications—not degrees—are the future of AI talent validation&nbsp;</li><li>How IEEE's 141-year history positions it to shape tomorrow’s ethical AI&nbsp;</li><li>What it means to "advance technology for humanity" in a rapidly shifting workforce&nbsp;</li></ul><p>This is a call to rethink how we educate, certify, and empower the next generation of AI leaders.&nbsp;</p><p>🎙️ Real talk. Real insight. Only on RegulatingAI.&nbsp;</p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.ieee.org/kathleen-a-kramer" target="_blank" style="color: rgb(70, 120, 134);">https://www.ieee.org/kathleen-a-kramer</a>&nbsp;&nbsp;</p><p><a href="https://www.linkedin.com/company/ieee/posts/?feedView=all" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/company/ieee/posts/?feedView=all</a>&nbsp;&nbsp;</p><p><a href="https://www.facebook.com/IEEE.org" target="_blank" style="color: rgb(70, 120, 134);">https://www.facebook.com/IEEE.org</a>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Dr. Kathleen Kramer doesn’t hold back. As IEEE President and a renowned professor, she shares blunt truths on AI education and credentials in this powerful RegulatingAI episode from Geneva’s AI for Good Summit. 💥 In this episode: Why saying "I have...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[ead0d2e5-01b3-49fc-ab37-a1e84ec2a4cb]]></guid>
  <title><![CDATA[How UNHCR Uses AI to Transform Refugee Services with Hovig Etyemezian | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p class="ql-align-justify">In this episode of the <em>RegulatingAI Podcast</em>, we speak to <strong>Hovig Etyemezian</strong>, Head of Innovation at UNHCR, the UN Refugee Agency. From fieldwork in Mosul to AI-powered systems in Geneva, Hovig shares a compelling narrative of innovation, ethics, and resilience in refugee services.&nbsp;</p><p><br></p><p class="ql-align-justify">🎯 Key Takeaways:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How UNHCR uses AI to process refugee feedback at scale&nbsp;</li><li class="ql-align-justify">Why chatbots, messaging apps, and call centers are critical digital tools&nbsp;</li><li class="ql-align-justify">The balance between automation and “human in the loop” care&nbsp;</li><li class="ql-align-justify">Refugee-led innovation programs and grassroots solutions&nbsp;</li><li class="ql-align-justify">Ethical safeguards and the importance of not “parachuting” tech solutions&nbsp;</li></ul><p class="ql-align-justify">💬 A conversation that humanizes AI and shows how responsible innovation can restore dignity to displaced communities.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/hovig-etyemezian-9b33994/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/hovig-etyemezian-9b33994/</a>&nbsp;</p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/32ddca13-7523-41bf-9d63-cc8223a6dd11/88fbef457b.jpg" />
  <pubDate>Thu, 10 Jul 2025 05:43:02 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="21081905" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/32ddca13-7523-41bf-9d63-cc8223a6dd11/episode.mp3" />
  <itunes:title><![CDATA[How UNHCR Uses AI to Transform Refugee Services with Hovig Etyemezian | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>21:57</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">In this episode of the <em>RegulatingAI Podcast</em>, we speak to <strong>Hovig Etyemezian</strong>, Head of Innovation at UNHCR, the UN Refugee Agency. From fieldwork in Mosul to AI-powered systems in Geneva, Hovig shares a compelling narrative of innovation, ethics, and resilience in refugee services.&nbsp;</p><p><br></p><p class="ql-align-justify">🎯 Key Takeaways:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How UNHCR uses AI to process refugee feedback at scale&nbsp;</li><li class="ql-align-justify">Why chatbots, messaging apps, and call centers are critical digital tools&nbsp;</li><li class="ql-align-justify">The balance between automation and “human in the loop” care&nbsp;</li><li class="ql-align-justify">Refugee-led innovation programs and grassroots solutions&nbsp;</li><li class="ql-align-justify">Ethical safeguards and the importance of not “parachuting” tech solutions&nbsp;</li></ul><p class="ql-align-justify">💬 A conversation that humanizes AI and shows how responsible innovation can restore dignity to displaced communities.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/hovig-etyemezian-9b33994/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/hovig-etyemezian-9b33994/</a>&nbsp;</p><p><br></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">In this episode of the <em>RegulatingAI Podcast</em>, we speak to <strong>Hovig Etyemezian</strong>, Head of Innovation at UNHCR, the UN Refugee Agency. From fieldwork in Mosul to AI-powered systems in Geneva, Hovig shares a compelling narrative of innovation, ethics, and resilience in refugee services.&nbsp;</p><p><br></p><p class="ql-align-justify">🎯 Key Takeaways:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How UNHCR uses AI to process refugee feedback at scale&nbsp;</li><li class="ql-align-justify">Why chatbots, messaging apps, and call centers are critical digital tools&nbsp;</li><li class="ql-align-justify">The balance between automation and “human in the loop” care&nbsp;</li><li class="ql-align-justify">Refugee-led innovation programs and grassroots solutions&nbsp;</li><li class="ql-align-justify">Ethical safeguards and the importance of not “parachuting” tech solutions&nbsp;</li></ul><p class="ql-align-justify">💬 A conversation that humanizes AI and shows how responsible innovation can restore dignity to displaced communities.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/hovig-etyemezian-9b33994/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/hovig-etyemezian-9b33994/</a>&nbsp;</p><p><br></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the RegulatingAI Podcast, we speak to Hovig Etyemezian, Head of Innovation at UNHCR, the UN Refugee Agency. From fieldwork in Mosul to AI-powered systems in Geneva, Hovig shares a compelling narrative of innovation, ethics, and r...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[9e162ebc-bf06-4d64-81a3-7ebfa8307b16]]></guid>
  <title><![CDATA[Nicholas Thompson on Open Source, China, and AI Power Games | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p class="ql-align-justify">In this episode of the RegulatingAI Podcast, host Sanjay Puri&nbsp;is joined by Nicholas Thompson, CEO of The Atlantic, to talk about one of the most pressing issues in AI today: the scraping of content and the future of journalism in an AI-first world. &nbsp;</p><p><br></p><p class="ql-align-justify">✅ Topics covered:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The “original sin” of AI companies and scraped content&nbsp;</li><li class="ql-align-justify">How The Atlantic is navigating AI disruption in publishing&nbsp;</li><li class="ql-align-justify">Legal and ethical paths forward: lawsuits, licensing, and collaboration&nbsp;</li><li class="ql-align-justify">Why Thompson thinks AI companies should <em>drive traffic</em> to journalism&nbsp;</li><li class="ql-align-justify">A peek into The Atlantic's deal with OpenAI&nbsp;</li></ul><p class="ql-align-justify">🔍 Nicholas shares candid takes on balancing innovation and fairness and what the future might look like if we don’t course correct.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/nicholasxthompson/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/nicholasxthompson/</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p class="ql-align-justify"><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p class="ql-align-justify">00:00 - Podcast Highlights&nbsp;</p><p class="ql-align-justify">02:56 - The "Original Sin" of AI Companies - Data Scraping &amp; Compensation&nbsp;</p><p class="ql-align-justify">05:04 - Media vs AI Companies: Finding Fair Value Exchange&nbsp;</p><p class="ql-align-justify">08:19 - Recent Court Rulings: Anthropic &amp; Meta Cases Analysis&nbsp;</p><p class="ql-align-justify">12:19 - The Future of Search &amp; Web Architecture&nbsp;</p><p class="ql-align-justify">16:45 - Generational Impact: How Young People Consume Information&nbsp;</p><p class="ql-align-justify">17:47 - Federal vs State AI Regulation Debate&nbsp;</p><p class="ql-align-justify">21:47 - What AI Developers Actually Want from Regulation&nbsp;</p><p class="ql-align-justify">22:05 - EU AI Act: Over-regulation Concerns&nbsp;</p><p class="ql-align-justify">23:45 - Open Source vs Closed Source AI Models&nbsp;</p><p class="ql-align-justify">24:43 - China's Open Source AI Strategy &amp; US-China Relations&nbsp;</p><p class="ql-align-justify">26:19 - Chip Export Restrictions: Effectiveness &amp; Consequences&nbsp;</p><p class="ql-align-justify">28:41 - US-China AI Cooperation Needs&nbsp;</p><p class="ql-align-justify">29:08 - The "Job Apocalypse" Debate: Dario vs Jensen&nbsp;</p><p class="ql-align-justify">32:48 - Government Role in AI Transition &amp; Retraining&nbsp;</p><p class="ql-align-justify">34:43 - The "First Rung" Problem: Entry-Level Jobs at Risk&nbsp;</p><p class="ql-align-justify">35:51 - AI Medical Diagnosis: Outperforming Human Doctors&nbsp;</p><p class="ql-align-justify">39:07 - AI Companionship: Solution or Danger for Loneliness?&nbsp;</p><p class="ql-align-justify">41:10 - The "Westworld" Risk: AI-Powered Social Media Dystopia&nbsp;</p><p class="ql-align-justify">42:46 - Key AI Thinkers: Audrey Tang &amp; The Vatican's AI Paper&nbsp;</p><p class="ql-align-justify">44:59 - Lightning Round: Quick Takes on AI's Future&nbsp;</p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/2281ea63-254b-4d1f-8b66-60677b453ea4/936d5f0e8d.jpg" />
  <pubDate>Fri, 04 Jul 2025 14:59:19 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="46720357" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/2281ea63-254b-4d1f-8b66-60677b453ea4/episode.mp3" />
  <itunes:title><![CDATA[Nicholas Thompson on Open Source, China, and AI Power Games | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>48:39</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">In this episode of the RegulatingAI Podcast, host Sanjay Puri&nbsp;is joined by Nicholas Thompson, CEO of The Atlantic, to talk about one of the most pressing issues in AI today: the scraping of content and the future of journalism in an AI-first world. &nbsp;</p><p><br></p><p class="ql-align-justify">✅ Topics covered:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The “original sin” of AI companies and scraped content&nbsp;</li><li class="ql-align-justify">How The Atlantic is navigating AI disruption in publishing&nbsp;</li><li class="ql-align-justify">Legal and ethical paths forward: lawsuits, licensing, and collaboration&nbsp;</li><li class="ql-align-justify">Why Thompson thinks AI companies should <em>drive traffic</em> to journalism&nbsp;</li><li class="ql-align-justify">A peek into The Atlantic's deal with OpenAI&nbsp;</li></ul><p class="ql-align-justify">🔍 Nicholas shares candid takes on balancing innovation and fairness and what the future might look like if we don’t course correct.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/nicholasxthompson/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/nicholasxthompson/</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p class="ql-align-justify"><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p class="ql-align-justify">00:00 - Podcast Highlights&nbsp;</p><p class="ql-align-justify">02:56 - The "Original Sin" of AI Companies - Data Scraping &amp; Compensation&nbsp;</p><p class="ql-align-justify">05:04 - Media vs AI Companies: Finding Fair Value Exchange&nbsp;</p><p class="ql-align-justify">08:19 - Recent Court Rulings: Anthropic &amp; Meta Cases Analysis&nbsp;</p><p class="ql-align-justify">12:19 - The Future of Search &amp; Web Architecture&nbsp;</p><p class="ql-align-justify">16:45 - Generational Impact: How Young People Consume Information&nbsp;</p><p class="ql-align-justify">17:47 - Federal vs State AI Regulation Debate&nbsp;</p><p class="ql-align-justify">21:47 - What AI Developers Actually Want from Regulation&nbsp;</p><p class="ql-align-justify">22:05 - EU AI Act: Over-regulation Concerns&nbsp;</p><p class="ql-align-justify">23:45 - Open Source vs Closed Source AI Models&nbsp;</p><p class="ql-align-justify">24:43 - China's Open Source AI Strategy &amp; US-China Relations&nbsp;</p><p class="ql-align-justify">26:19 - Chip Export Restrictions: Effectiveness &amp; Consequences&nbsp;</p><p class="ql-align-justify">28:41 - US-China AI Cooperation Needs&nbsp;</p><p class="ql-align-justify">29:08 - The "Job Apocalypse" Debate: Dario vs Jensen&nbsp;</p><p class="ql-align-justify">32:48 - Government Role in AI Transition &amp; Retraining&nbsp;</p><p class="ql-align-justify">34:43 - The "First Rung" Problem: Entry-Level Jobs at Risk&nbsp;</p><p class="ql-align-justify">35:51 - AI Medical Diagnosis: Outperforming Human Doctors&nbsp;</p><p class="ql-align-justify">39:07 - AI Companionship: Solution or Danger for Loneliness?&nbsp;</p><p class="ql-align-justify">41:10 - The "Westworld" Risk: AI-Powered Social Media Dystopia&nbsp;</p><p class="ql-align-justify">42:46 - Key AI Thinkers: Audrey Tang &amp; The Vatican's AI Paper&nbsp;</p><p class="ql-align-justify">44:59 - Lightning Round: Quick Takes on AI's Future&nbsp;</p><p><br></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">In this episode of the RegulatingAI Podcast, host Sanjay Puri&nbsp;is joined by Nicholas Thompson, CEO of The Atlantic, to talk about one of the most pressing issues in AI today: the scraping of content and the future of journalism in an AI-first world. &nbsp;</p><p><br></p><p class="ql-align-justify">✅ Topics covered:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The “original sin” of AI companies and scraped content&nbsp;</li><li class="ql-align-justify">How The Atlantic is navigating AI disruption in publishing&nbsp;</li><li class="ql-align-justify">Legal and ethical paths forward: lawsuits, licensing, and collaboration&nbsp;</li><li class="ql-align-justify">Why Thompson thinks AI companies should <em>drive traffic</em> to journalism&nbsp;</li><li class="ql-align-justify">A peek into The Atlantic's deal with OpenAI&nbsp;</li></ul><p class="ql-align-justify">🔍 Nicholas shares candid takes on balancing innovation and fairness and what the future might look like if we don’t course correct.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/nicholasxthompson/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/nicholasxthompson/</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p class="ql-align-justify"><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p class="ql-align-justify">00:00 - Podcast Highlights&nbsp;</p><p class="ql-align-justify">02:56 - The "Original Sin" of AI Companies - Data Scraping &amp; Compensation&nbsp;</p><p class="ql-align-justify">05:04 - Media vs AI Companies: Finding Fair Value Exchange&nbsp;</p><p class="ql-align-justify">08:19 - Recent Court Rulings: Anthropic &amp; Meta Cases Analysis&nbsp;</p><p class="ql-align-justify">12:19 - The Future of Search &amp; Web 
Architecture&nbsp;</p><p class="ql-align-justify">16:45 - Generational Impact: How Young People Consume Information&nbsp;</p><p class="ql-align-justify">17:47 - Federal vs State AI Regulation Debate&nbsp;</p><p class="ql-align-justify">21:47 - What AI Developers Actually Want from Regulation&nbsp;</p><p class="ql-align-justify">22:05 - EU AI Act: Over-regulation Concerns&nbsp;</p><p class="ql-align-justify">23:45 - Open Source vs Closed Source AI Models&nbsp;</p><p class="ql-align-justify">24:43 - China's Open Source AI Strategy &amp; US-China Relations&nbsp;</p><p class="ql-align-justify">26:19 - Chip Export Restrictions: Effectiveness &amp; Consequences&nbsp;</p><p class="ql-align-justify">28:41 - US-China AI Cooperation Needs&nbsp;</p><p class="ql-align-justify">29:08 - The "Job Apocalypse" Debate: Dario vs Jensen&nbsp;</p><p class="ql-align-justify">32:48 - Government Role in AI Transition &amp; Retraining&nbsp;</p><p class="ql-align-justify">34:43 - The "First Rung" Problem: Entry-Level Jobs at Risk&nbsp;</p><p class="ql-align-justify">35:51 - AI Medical Diagnosis: Outperforming Human Doctors&nbsp;</p><p class="ql-align-justify">39:07 - AI Companionship: Solution or Danger for Loneliness?&nbsp;</p><p class="ql-align-justify">41:10 - The "Westworld" Risk: AI-Powered Social Media Dystopia&nbsp;</p><p class="ql-align-justify">42:46 - Key AI Thinkers: Audrey Tang &amp; The Vatican's AI Paper&nbsp;</p><p class="ql-align-justify">44:59 - Lightning Round: Quick Takes on AI's Future&nbsp;</p><p><br></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the RegulatingAI Podcast, host Sanjay Puri is joined by Nicholas Thompson, CEO of The Atlantic, to talk about one of the most pressing issues in AI today: the scraping of content and the future of journalism in an AI-first world....]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[90f9a9dc-bdab-4356-9d55-9d50d0ba1c15]]></guid>
  <title><![CDATA[What Milan Teaches Us About Scalable Digital Transformation with Roberta Cocco | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p class="ql-align-justify">In this compelling episode of RegulatingAI, Sanjay Puri speaks with Roberta Cocco—Digital Transformation Advisor, University Professor and Board Member—about why human oversight in AI is essential for preserving democracy and civic trust.&nbsp;</p><p><br></p><p class="ql-align-justify">🔍 Key Takeaways:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How AI can both empower and threaten democratic institutions&nbsp;</li><li class="ql-align-justify">The risk of obscured accountability in automated decision-making&nbsp;</li><li class="ql-align-justify">Why explainability and auditability must be embedded into governance frameworks&nbsp;</li><li class="ql-align-justify">Roberta’s firsthand experience in digitizing Milan’s public services&nbsp;</li></ul><p class="ql-align-justify">With insights from her roles at Microsoft, the Italian government, and academia, Roberta argues that “AI should never replace human responsibility—it should enable it.”&nbsp;</p><p><br></p><p class="ql-align-justify"><br></p><p class="ql-align-justify">🎧 Essential listening for policymakers, civic technologists, and digital ethicists.&nbsp;</p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/robertacocco/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/robertacocco/</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/40976833-702a-4f60-8e00-75fa601b55ce/b2997ed0f5.jpg" />
  <pubDate>Wed, 18 Jun 2025 14:47:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="44661490" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/40976833-702a-4f60-8e00-75fa601b55ce/episode.mp3" />
  <itunes:title><![CDATA[What Milan Teaches Us About Scalable Digital Transformation with Roberta Cocco | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>46:31</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">In this compelling episode of RegulatingAI, Sanjay Puri speaks with Roberta Cocco—Digital Transformation Advisor, University Professor and Board Member—about why human oversight in AI is essential for preserving democracy and civic trust.&nbsp;</p><p><br></p><p class="ql-align-justify">🔍 Key Takeaways:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How AI can both empower and threaten democratic institutions&nbsp;</li><li class="ql-align-justify">The risk of obscured accountability in automated decision-making&nbsp;</li><li class="ql-align-justify">Why explainability and auditability must be embedded into governance frameworks&nbsp;</li><li class="ql-align-justify">Roberta’s firsthand experience in digitizing Milan’s public services&nbsp;</li></ul><p class="ql-align-justify">With insights from her roles at Microsoft, the Italian government, and academia, Roberta argues that “AI should never replace human responsibility—it should enable it.”&nbsp;</p><p><br></p><p class="ql-align-justify"><br></p><p class="ql-align-justify">🎧 Essential listening for policymakers, civic technologists, and digital ethicists.&nbsp;</p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/robertacocco/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/robertacocco/</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">In this compelling episode of RegulatingAI, Sanjay Puri speaks with Roberta Cocco—Digital Transformation Advisor, University Professor and Board Member—about why human oversight in AI is essential for preserving democracy and civic trust.&nbsp;</p><p><br></p><p class="ql-align-justify">🔍 Key Takeaways:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How AI can both empower and threaten democratic institutions&nbsp;</li><li class="ql-align-justify">The risk of obscured accountability in automated decision-making&nbsp;</li><li class="ql-align-justify">Why explainability and auditability must be embedded into governance frameworks&nbsp;</li><li class="ql-align-justify">Roberta’s firsthand experience in digitizing Milan’s public services&nbsp;</li></ul><p class="ql-align-justify">With insights from her roles at Microsoft, the Italian government, and academia, Roberta argues that “AI should never replace human responsibility—it should enable it.”&nbsp;</p><p><br></p><p class="ql-align-justify"><br></p><p class="ql-align-justify">🎧 Essential listening for policymakers, civic technologists, and digital ethicists.&nbsp;</p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/robertacocco/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/robertacocco/</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this compelling episode of RegulatingAI, Sanjay Puri speaks with Roberta Cocco—Digital Transformation Advisor, University Professor and Board Member—about why human oversight in AI is essential for preserving democracy and civic trust. 🔍...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[a6b9de99-b7c8-41bb-aecb-6a866f094e5a]]></guid>
  <title><![CDATA[Dr. Bilel Jamoussi on AI for Good, Global Standards and Infrastructure at ITU | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p class="ql-align-justify"><strong>Join Us at AI for Good in Geneva – Booth #4282!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">We’re excited to be part of the global conversation on the future of artificial intelligence! Don’t miss the chance to connect with us at Booth 4282, where we’ll be showcasing insights from thought leaders like Dr. Bilel Jamoussi, Deputy Director of ITU’s Telecommunication Standardization Bureau.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>🎙️ Tune into discussions on:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The AI for Good initiative and its global impact&nbsp;</li><li class="ql-align-justify">The role of international standards in AI safety and interoperability&nbsp;</li><li class="ql-align-justify">Advancing digital infrastructure and closing the global connectivity gap&nbsp;</li><li class="ql-align-justify">Real-world AI applications in health, agriculture, and climate action&nbsp;</li></ul><p class="ql-align-justify">👉 Whether you're a policymaker, innovator, or researcher, stop by to engage in meaningful talks and explore how we can shape a responsible AI future—together. See you in Geneva!&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/bilel-jamoussi/?originalSubdomain=ch" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/bilel-jamoussi/</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p class="ql-align-justify"><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p class="ql-align-justify">00:00 – Podcast Episode Highlights&nbsp;</p><p class="ql-align-justify">&nbsp;01:37 - Meet Dr. Bilel Jamoussi, ITU Deputy Director&nbsp;</p><p class="ql-align-justify">&nbsp;03:22 - What is ITU? 
160 Years of Evolution&nbsp;</p><p class="ql-align-justify">&nbsp;05:46 - Origin of "AI for Good" - Why This Name?&nbsp;</p><p class="ql-align-justify">&nbsp;09:10 - Healthcare Revolution: WHO &amp; WIPO Partnerships&nbsp;</p><p class="ql-align-justify">&nbsp;14:00 - Real Impact: Rwanda's Healthcare Success Story&nbsp;</p><p class="ql-align-justify">&nbsp;15:28 - Connecting the Unconnected: 2.6B People Offline&nbsp;</p><p class="ql-align-justify">&nbsp;17:51 - AI Skills Coalition: From Kids to Diplomats&nbsp;</p><p class="ql-align-justify">&nbsp;21:35 - Climate Action: Predicting Natural Disasters&nbsp;</p><p class="ql-align-justify">&nbsp;24:44 - Food Security: AI in Agriculture with FAO&nbsp;</p><p class="ql-align-justify">&nbsp;26:49 - Quantum for Good: The Future of Computing&nbsp;</p><p class="ql-align-justify">&nbsp;29:25 - Why International Standards Matter&nbsp;</p><p class="ql-align-justify">&nbsp;31:18 - Fighting Deepfakes: 70 Countries' Election Challenge&nbsp;</p><p class="ql-align-justify">&nbsp;36:55 - Innovation vs Regulation: Finding Balance&nbsp;</p><p class="ql-align-justify">&nbsp;40:47 - Sovereign AI: Reality Check for Small Countries&nbsp;</p><p class="ql-align-justify">&nbsp;43:44 - Art Meets AI: Creative Canvas Platform&nbsp;</p><p class="ql-align-justify">&nbsp;45:04 - Accessibility: 1 Billion People with Disabilities&nbsp;</p><p class="ql-align-justify">&nbsp;46:31 - Summit Preview: Flying Cars &amp; Brain Interfaces&nbsp;</p><p class="ql-align-justify">&nbsp;47:30 - Lightning Round: One-Word Insights&nbsp;</p><p class="ql-align-justify">&nbsp;49:29 - Call to Action &amp; Summit Details&nbsp;</p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/7da1544f-4dc3-4f09-b45d-51a16e4bc547/d267a8ffdd.jpg" />
  <pubDate>Wed, 11 Jun 2025 15:50:54 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="47381151" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/7da1544f-4dc3-4f09-b45d-51a16e4bc547/episode.mp3" />
  <itunes:title><![CDATA[Dr. Bilel Jamoussi on AI for Good, Global Standards and Infrastructure at ITU | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>49:21</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify"><strong>Join Us at AI for Good in Geneva – Booth #4282!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">We’re excited to be part of the global conversation on the future of artificial intelligence! Don’t miss the chance to connect with us at Booth 4282, where we’ll be showcasing insights from thought leaders like Dr. Bilel Jamoussi, Deputy Director of ITU’s Telecommunication Standardization Bureau.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>🎙️ Tune into discussions on:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The AI for Good initiative and its global impact&nbsp;</li><li class="ql-align-justify">The role of international standards in AI safety and interoperability&nbsp;</li><li class="ql-align-justify">Advancing digital infrastructure and closing the global connectivity gap&nbsp;</li><li class="ql-align-justify">Real-world AI applications in health, agriculture, and climate action&nbsp;</li></ul><p class="ql-align-justify">👉 Whether you're a policymaker, innovator, or researcher, stop by to engage in meaningful talks and explore how we can shape a responsible AI future—together. See you in Geneva!&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/bilel-jamoussi/?originalSubdomain=ch" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/bilel-jamoussi/</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p class="ql-align-justify"><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p class="ql-align-justify">00:00 – Podcast Episode Highlights&nbsp;</p><p class="ql-align-justify">&nbsp;01:37 - Meet Dr. Bilel Jamoussi, ITU Deputy Director&nbsp;</p><p class="ql-align-justify">&nbsp;03:22 - What is ITU? 
160 Years of Evolution&nbsp;</p><p class="ql-align-justify">&nbsp;05:46 - Origin of "AI for Good" - Why This Name?&nbsp;</p><p class="ql-align-justify">&nbsp;09:10 - Healthcare Revolution: WHO &amp; WIPO Partnerships&nbsp;</p><p class="ql-align-justify">&nbsp;14:00 - Real Impact: Rwanda's Healthcare Success Story&nbsp;</p><p class="ql-align-justify">&nbsp;15:28 - Connecting the Unconnected: 2.6B People Offline&nbsp;</p><p class="ql-align-justify">&nbsp;17:51 - AI Skills Coalition: From Kids to Diplomats&nbsp;</p><p class="ql-align-justify">&nbsp;21:35 - Climate Action: Predicting Natural Disasters&nbsp;</p><p class="ql-align-justify">&nbsp;24:44 - Food Security: AI in Agriculture with FAO&nbsp;</p><p class="ql-align-justify">&nbsp;26:49 - Quantum for Good: The Future of Computing&nbsp;</p><p class="ql-align-justify">&nbsp;29:25 - Why International Standards Matter&nbsp;</p><p class="ql-align-justify">&nbsp;31:18 - Fighting Deepfakes: 70 Countries' Election Challenge&nbsp;</p><p class="ql-align-justify">&nbsp;36:55 - Innovation vs Regulation: Finding Balance&nbsp;</p><p class="ql-align-justify">&nbsp;40:47 - Sovereign AI: Reality Check for Small Countries&nbsp;</p><p class="ql-align-justify">&nbsp;43:44 - Art Meets AI: Creative Canvas Platform&nbsp;</p><p class="ql-align-justify">&nbsp;45:04 - Accessibility: 1 Billion People with Disabilities&nbsp;</p><p class="ql-align-justify">&nbsp;46:31 - Summit Preview: Flying Cars &amp; Brain Interfaces&nbsp;</p><p class="ql-align-justify">&nbsp;47:30 - Lightning Round: One-Word Insights&nbsp;</p><p class="ql-align-justify">&nbsp;49:29 - Call to Action &amp; Summit Details&nbsp;</p><p><br></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify"><strong>Join Us at AI for Good in Geneva – Booth #4282!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">We’re excited to be part of the global conversation on the future of artificial intelligence! Don’t miss the chance to connect with us at Booth 4282, where we’ll be showcasing insights from thought leaders like Dr. Bilel Jamoussi, Deputy Director of ITU’s Telecommunication Standardization Bureau.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>🎙️ Tune into discussions on:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The AI for Good initiative and its global impact&nbsp;</li><li class="ql-align-justify">The role of international standards in AI safety and interoperability&nbsp;</li><li class="ql-align-justify">Advancing digital infrastructure and closing the global connectivity gap&nbsp;</li><li class="ql-align-justify">Real-world AI applications in health, agriculture, and climate action&nbsp;</li></ul><p class="ql-align-justify">👉 Whether you're a policymaker, innovator, or researcher, stop by to engage in meaningful talks and explore how we can shape a responsible AI future—together. See you in Geneva!&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/bilel-jamoussi/?originalSubdomain=ch" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/bilel-jamoussi/</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p class="ql-align-justify"><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p class="ql-align-justify">00:00 – Podcast Episode Highlights&nbsp;</p><p class="ql-align-justify">&nbsp;01:37 - Meet Dr. Bilel Jamoussi, ITU Deputy Director&nbsp;</p><p class="ql-align-justify">&nbsp;03:22 - What is ITU? 
160 Years of Evolution&nbsp;</p><p class="ql-align-justify">&nbsp;05:46 - Origin of "AI for Good" - Why This Name?&nbsp;</p><p class="ql-align-justify">&nbsp;09:10 - Healthcare Revolution: WHO &amp; WIPO Partnerships&nbsp;</p><p class="ql-align-justify">&nbsp;14:00 - Real Impact: Rwanda's Healthcare Success Story&nbsp;</p><p class="ql-align-justify">&nbsp;15:28 - Connecting the Unconnected: 2.6B People Offline&nbsp;</p><p class="ql-align-justify">&nbsp;17:51 - AI Skills Coalition: From Kids to Diplomats&nbsp;</p><p class="ql-align-justify">&nbsp;21:35 - Climate Action: Predicting Natural Disasters&nbsp;</p><p class="ql-align-justify">&nbsp;24:44 - Food Security: AI in Agriculture with FAO&nbsp;</p><p class="ql-align-justify">&nbsp;26:49 - Quantum for Good: The Future of Computing&nbsp;</p><p class="ql-align-justify">&nbsp;29:25 - Why International Standards Matter&nbsp;</p><p class="ql-align-justify">&nbsp;31:18 - Fighting Deepfakes: 70 Countries' Election Challenge&nbsp;</p><p class="ql-align-justify">&nbsp;36:55 - Innovation vs Regulation: Finding Balance&nbsp;</p><p class="ql-align-justify">&nbsp;40:47 - Sovereign AI: Reality Check for Small Countries&nbsp;</p><p class="ql-align-justify">&nbsp;43:44 - Art Meets AI: Creative Canvas Platform&nbsp;</p><p class="ql-align-justify">&nbsp;45:04 - Accessibility: 1 Billion People with Disabilities&nbsp;</p><p class="ql-align-justify">&nbsp;46:31 - Summit Preview: Flying Cars &amp; Brain Interfaces&nbsp;</p><p class="ql-align-justify">&nbsp;47:30 - Lightning Round: One-Word Insights&nbsp;</p><p class="ql-align-justify">&nbsp;49:29 - Call to Action &amp; Summit Details&nbsp;</p><p><br></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Join Us at AI for Good in Geneva – Booth #4282! We’re excited to be part of the global conversation on the future of artificial intelligence! Don’t miss the chance to connect with us at Booth 4282, where we’ll be showcasing insights from thought le...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[7744a17e-1a83-4928-9c71-ccfd79235c8e]]></guid>
  <title><![CDATA[From Regulated Industries to AI Governance: Naresh Dulam on Building Ethical AI at Scale | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p class="ql-align-justify"><span style="background-color: transparent;">In this episode of </span><strong style="background-color: transparent;">The</strong><span style="background-color: transparent;"> </span><strong style="background-color: transparent;">RegulatingAI Podcast</strong><span style="background-color: transparent;">, Sanjay Puri speaks with </span><strong style="background-color: transparent;">Naresh Dulam</strong><span style="background-color: transparent;">, SVP at J.P. Morgan Chase, about the </span><strong style="background-color: transparent;">regulatory voids in today’s AI landscape</strong><span style="background-color: transparent;">. From biased data to unexplainable black boxes, Naresh reveals what’s missing—and what must change.&nbsp;</span></p><p class="ql-align-justify"><span style="background-color: transparent;">🎯 Key Takeaways:&nbsp;</span></p><ul><li><span style="background-color: transparent;">The 3 critical policy gaps in AI today: explainability, bias, and accountability&nbsp;</span></li><li><span style="background-color: transparent;">Why relying on a developer's goodwill is no longer sustainable&nbsp;</span></li><li><span style="background-color: transparent;">The case for outcome-based, risk-tiered regulation&nbsp;</span></li><li><span style="background-color: transparent;">Real-world examples from financial services and beyond&nbsp;</span></li></ul><p class="ql-align-justify"><span style="background-color: transparent;">📺 Watch now to understand what’s broken—and how we fix it.&nbsp;</span></p><p><span style="background-color: transparent;">&nbsp;</span></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong><span style="background-color: transparent;"> &nbsp;</span></p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/naresh-dulam/" target="_blank" style="background-color: transparent; color: rgb(70, 120, 
134);">https://www.linkedin.com/in/naresh-dulam/</a><span style="background-color: transparent;">&nbsp;&nbsp;</span></p><p><span style="background-color: transparent;">&nbsp;</span></p><p><strong style="background-color: transparent; color: rgb(76, 148, 216);">Timestamps:&nbsp;</strong></p><p class="ql-align-justify"><span style="background-color: transparent;">00:00 Introduction </span></p><p class="ql-align-justify"><span style="background-color: transparent;">01:33 AI Regulatory Gaps </span></p><p class="ql-align-justify"><span style="background-color: transparent;">03:46 Three Main Problems with AI </span></p><p class="ql-align-justify"><span style="background-color: transparent;">06:01 AI Explainability Issues </span></p><p class="ql-align-justify"><span style="background-color: transparent;">09:10 AI Bias and Fairness </span></p><p class="ql-align-justify"><span style="background-color: transparent;">12:37 Who's Accountable for AI Mistakes </span></p><p class="ql-align-justify"><span style="background-color: transparent;">17:30 Outcome-Based Regulation </span></p><p class="ql-align-justify"><span style="background-color: transparent;">19:40 EU vs US AI Regulation </span></p><p class="ql-align-justify"><span style="background-color: transparent;">23:45 Financial Services AI Rules </span></p><p class="ql-align-justify"><span style="background-color: transparent;">28:11 Innovation vs Safety </span></p><p class="ql-align-justify"><span style="background-color: transparent;">31:35 Open Source AI Discussion </span></p><p class="ql-align-justify"><span style="background-color: transparent;">39:05 AI Impact on Jobs </span></p><p class="ql-align-justify"><span style="background-color: transparent;">42:10 Career Advice for AI Era </span></p><p class="ql-align-justify"><span style="background-color: transparent;">45:50 Lightning Round Questions </span></p><p class="ql-align-justify"><span style="background-color: transparent;">46:58 Conclusion</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/2e3b448b-61b0-45dc-aefc-0d3585c246d4/25f465d05b.jpg" />
  <pubDate>Tue, 03 Jun 2025 14:25:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="46942293" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/2e3b448b-61b0-45dc-aefc-0d3585c246d4/episode.mp3" />
  <itunes:title><![CDATA[From Regulated Industries to AI Governance: Naresh Dulam on Building Ethical AI at Scale | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>48:53</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify"><span style="background-color: transparent;">In this episode of </span><strong style="background-color: transparent;">The</strong><span style="background-color: transparent;"> </span><strong style="background-color: transparent;">RegulatingAI Podcast</strong><span style="background-color: transparent;">, Sanjay Puri speaks with </span><strong style="background-color: transparent;">Naresh Dulam</strong><span style="background-color: transparent;">, SVP at J.P. Morgan Chase, about the </span><strong style="background-color: transparent;">regulatory voids in today’s AI landscape</strong><span style="background-color: transparent;">. From biased data to unexplainable black boxes, Naresh reveals what’s missing—and what must change.&nbsp;</span></p><p class="ql-align-justify"><span style="background-color: transparent;">🎯 Key Takeaways:&nbsp;</span></p><ul><li><span style="background-color: transparent;">The 3 critical policy gaps in AI today: explainability, bias, and accountability&nbsp;</span></li><li><span style="background-color: transparent;">Why relying on a developer's goodwill is no longer sustainable&nbsp;</span></li><li><span style="background-color: transparent;">The case for outcome-based, risk-tiered regulation&nbsp;</span></li><li><span style="background-color: transparent;">Real-world examples from financial services and beyond&nbsp;</span></li></ul><p class="ql-align-justify"><span style="background-color: transparent;">📺 Watch now to understand what’s broken—and how we fix it.&nbsp;</span></p><p><span style="background-color: transparent;">&nbsp;</span></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong><span style="background-color: transparent;"> &nbsp;</span></p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/naresh-dulam/" target="_blank" style="background-color: transparent; color: rgb(70, 120, 
134);">https://www.linkedin.com/in/naresh-dulam/</a><span style="background-color: transparent;">&nbsp;&nbsp;</span></p><p><span style="background-color: transparent;">&nbsp;</span></p><p><strong style="background-color: transparent; color: rgb(76, 148, 216);">Timestamps:&nbsp;</strong></p><p class="ql-align-justify"><span style="background-color: transparent;">00:00 Introduction </span></p><p class="ql-align-justify"><span style="background-color: transparent;">01:33 AI Regulatory Gaps </span></p><p class="ql-align-justify"><span style="background-color: transparent;">03:46 Three Main Problems with AI </span></p><p class="ql-align-justify"><span style="background-color: transparent;">06:01 AI Explainability Issues </span></p><p class="ql-align-justify"><span style="background-color: transparent;">09:10 AI Bias and Fairness </span></p><p class="ql-align-justify"><span style="background-color: transparent;">12:37 Who's Accountable for AI Mistakes </span></p><p class="ql-align-justify"><span style="background-color: transparent;">17:30 Outcome-Based Regulation </span></p><p class="ql-align-justify"><span style="background-color: transparent;">19:40 EU vs US AI Regulation </span></p><p class="ql-align-justify"><span style="background-color: transparent;">23:45 Financial Services AI Rules </span></p><p class="ql-align-justify"><span style="background-color: transparent;">28:11 Innovation vs Safety </span></p><p class="ql-align-justify"><span style="background-color: transparent;">31:35 Open Source AI Discussion </span></p><p class="ql-align-justify"><span style="background-color: transparent;">39:05 AI Impact on Jobs </span></p><p class="ql-align-justify"><span style="background-color: transparent;">42:10 Career Advice for AI Era </span></p><p class="ql-align-justify"><span style="background-color: transparent;">45:50 Lightning Round Questions </span></p><p class="ql-align-justify"><span style="background-color: transparent;">46:58 
Conclusion</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify"><span style="background-color: transparent;">In this episode of </span><strong style="background-color: transparent;">The</strong><span style="background-color: transparent;"> </span><strong style="background-color: transparent;">RegulatingAI Podcast</strong><span style="background-color: transparent;">, Sanjay Puri speaks with </span><strong style="background-color: transparent;">Naresh Dulam</strong><span style="background-color: transparent;">, SVP at J.P. Morgan Chase, about the </span><strong style="background-color: transparent;">regulatory voids in today’s AI landscape</strong><span style="background-color: transparent;">. From biased data to unexplainable black boxes, Naresh reveals what’s missing—and what must change.&nbsp;</span></p><p class="ql-align-justify"><span style="background-color: transparent;">🎯 Key Takeaways:&nbsp;</span></p><ul><li><span style="background-color: transparent;">The 3 critical policy gaps in AI today: explainability, bias, and accountability&nbsp;</span></li><li><span style="background-color: transparent;">Why relying on a developer's goodwill is no longer sustainable&nbsp;</span></li><li><span style="background-color: transparent;">The case for outcome-based, risk-tiered regulation&nbsp;</span></li><li><span style="background-color: transparent;">Real-world examples from financial services and beyond&nbsp;</span></li></ul><p class="ql-align-justify"><span style="background-color: transparent;">📺 Watch now to understand what’s broken—and how we fix it.&nbsp;</span></p><p><span style="background-color: transparent;">&nbsp;</span></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong><span style="background-color: transparent;"> &nbsp;</span></p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/naresh-dulam/" target="_blank" style="background-color: transparent; color: rgb(70, 120, 
134);">https://www.linkedin.com/in/naresh-dulam/</a><span style="background-color: transparent;">&nbsp;&nbsp;</span></p><p><span style="background-color: transparent;">&nbsp;</span></p><p><strong style="background-color: transparent; color: rgb(76, 148, 216);">Timestamps:&nbsp;</strong></p><p class="ql-align-justify"><span style="background-color: transparent;">00:00 Introduction </span></p><p class="ql-align-justify"><span style="background-color: transparent;">01:33 AI Regulatory Gaps </span></p><p class="ql-align-justify"><span style="background-color: transparent;">03:46 Three Main Problems with AI </span></p><p class="ql-align-justify"><span style="background-color: transparent;">06:01 AI Explainability Issues </span></p><p class="ql-align-justify"><span style="background-color: transparent;">09:10 AI Bias and Fairness </span></p><p class="ql-align-justify"><span style="background-color: transparent;">12:37 Who's Accountable for AI Mistakes </span></p><p class="ql-align-justify"><span style="background-color: transparent;">17:30 Outcome-Based Regulation </span></p><p class="ql-align-justify"><span style="background-color: transparent;">19:40 EU vs US AI Regulation </span></p><p class="ql-align-justify"><span style="background-color: transparent;">23:45 Financial Services AI Rules </span></p><p class="ql-align-justify"><span style="background-color: transparent;">28:11 Innovation vs Safety </span></p><p class="ql-align-justify"><span style="background-color: transparent;">31:35 Open Source AI Discussion </span></p><p class="ql-align-justify"><span style="background-color: transparent;">39:05 AI Impact on Jobs </span></p><p class="ql-align-justify"><span style="background-color: transparent;">42:10 Career Advice for AI Era </span></p><p class="ql-align-justify"><span style="background-color: transparent;">45:50 Lightning Round Questions </span></p><p class="ql-align-justify"><span style="background-color: transparent;">46:58 
Conclusion</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of The RegulatingAI Podcast, Sanjay Puri speaks with Naresh Dulam, SVP at J.P. Morgan Chase, about the regulatory voids in today’s AI landscape. From biased data to unexplainable black boxes, Naresh reveals what’s missing—and what m...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[9e8e1d71-96b5-419c-8d75-66abdc74c904]]></guid>
  <title><![CDATA[The Geopolitical Stakes of AI with Congressman Ben Cline | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p>In this thought-provoking episode, Congressman Ben Cline addresses one of the most pressing issues of our time – the geopolitical implications of artificial intelligence. Host Sanjay Puri navigates this critical conversation, shedding light on AI’s role in international relations and national security. &nbsp;</p><p>Key Insights: &nbsp;</p><ul><li>How China’s AI advancements challenge U.S. technological dominance. &nbsp;</li><li>The importance of maintaining a strategic edge in AI through policy and innovation. &nbsp;</li><li>Trade regulations on AI chips and the significance of the AI Diffusion Rule. &nbsp;</li><li>Balancing global trade while protecting U.S. intellectual property and national security. &nbsp;</li><li>The role of Congress in fostering competitive yet secure AI development. &nbsp;</li></ul><p>🌍 <strong>Watch Now:</strong> Understand the strategic dimensions of AI and what it means for the global landscape!&nbsp;</p><p>&nbsp;</p><p><strong>Resources Mentioned:</strong> &nbsp;</p><p><a href="https://x.com/RepBenCline" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/RepBenCline</a>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/ee54e438-adac-4058-89d3-1b76f585e7f5/8acffde3c0.jpg" />
  <pubDate>Thu, 29 May 2025 13:54:07 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="33600192" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/ee54e438-adac-4058-89d3-1b76f585e7f5/episode.mp3" />
  <itunes:title><![CDATA[The Geopolitical Stakes of AI with Congressman Ben Cline | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>34:31</itunes:duration>
  <itunes:summary><![CDATA[<p>In this thought-provoking episode, Congressman Ben Cline addresses one of the most pressing issues of our time – the geopolitical implications of artificial intelligence. Host Sanjay Puri navigates this critical conversation, shedding light on AI’s role in international relations and national security. &nbsp;</p><p>Key Insights: &nbsp;</p><ul><li>How China’s AI advancements challenge U.S. technological dominance. &nbsp;</li><li>The importance of maintaining a strategic edge in AI through policy and innovation. &nbsp;</li><li>Trade regulations on AI chips and the significance of the AI Diffusion Rule. &nbsp;</li><li>Balancing global trade while protecting U.S. intellectual property and national security. &nbsp;</li><li>The role of Congress in fostering competitive yet secure AI development. &nbsp;</li></ul><p>🌍 <strong>Watch Now:</strong> Understand the strategic dimensions of AI and what it means for the global landscape!&nbsp;</p><p>&nbsp;</p><p><strong>Resources Mentioned:</strong> &nbsp;</p><p><a href="https://x.com/RepBenCline" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/RepBenCline</a>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this thought-provoking episode, Congressman Ben Cline addresses one of the most pressing issues of our time – the geopolitical implications of artificial intelligence. Host Sanjay Puri navigates this critical conversation, shedding light on AI’s role in international relations and national security. &nbsp;</p><p>Key Insights: &nbsp;</p><ul><li>How China’s AI advancements challenge U.S. technological dominance. &nbsp;</li><li>The importance of maintaining a strategic edge in AI through policy and innovation. &nbsp;</li><li>Trade regulations on AI chips and the significance of the AI Diffusion Rule. &nbsp;</li><li>Balancing global trade while protecting U.S. intellectual property and national security. &nbsp;</li><li>The role of Congress in fostering competitive yet secure AI development. &nbsp;</li></ul><p>🌍 <strong>Watch Now:</strong> Understand the strategic dimensions of AI and what it means for the global landscape!&nbsp;</p><p>&nbsp;</p><p><strong>Resources Mentioned:</strong> &nbsp;</p><p><a href="https://x.com/RepBenCline" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/RepBenCline</a>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this thought-provoking episode, Congressman Ben Cline addresses one of the most pressing issues of our time – the geopolitical implications of artificial intelligence. Host Sanjay Puri navigates this critical conversation, shedding light on AI’s...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[46781218-152c-4aa9-b8f2-638ae2a188a0]]></guid>
  <title><![CDATA[Can Connecticut become the next AI powerhouse? - RegulatingAI Podcast]]></title>
  <description><![CDATA[<p>Discover how Connecticut is becoming a thriving hub for AI advancements! In this episode, Gov. Ned Lamont and Dan O'Keefe share insights into:&nbsp;</p><ul><li>Fostering AI startups and positioning Connecticut as a leader in the AI space.&nbsp;</li><li>Building a skilled workforce through comprehensive education and training initiatives.&nbsp;</li><li>Balancing innovation with public safety through thoughtful policies.&nbsp;</li><li>Navigating challenges faced by small AI companies in a complex regulatory landscape.&nbsp;</li><li>The importance of inclusive AI education to secure Connecticut’s innovative future.&nbsp;</li></ul><p>💡 Stay tuned for more insightful discussions on AI and its evolving role!&nbsp;</p><p> &nbsp;</p><p><strong>Resources Mentioned:</strong> &nbsp;</p><p><a href="https://x.com/govnedlamont?lang=en" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/govnedlamont?lang=en</a>  &nbsp;</p><p><a href="https://www.youtube.com/@GovNedLamont/videos" target="_blank" style="color: rgb(70, 120, 134);">https://www.youtube.com/@GovNedLamont/videos</a>  &nbsp;</p><p><a href="https://www.instagram.com/govnedlamont/?hl=en" target="_blank" style="color: rgb(70, 120, 134);">https://www.instagram.com/govnedlamont/?hl=en</a>&nbsp;</p><p><a href="https://www.linkedin.com/in/dokeefe/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/dokeefe/</a>  &nbsp;</p><p><a href="https://x.com/okeefe" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/okeefe</a>  &nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/59ecf6a8-444c-45ee-b624-edeaf0af1f49/f7fb7676fd.jpg" />
  <pubDate>Thu, 22 May 2025 09:37:59 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="47342579" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/59ecf6a8-444c-45ee-b624-edeaf0af1f49/episode.mp3" />
  <itunes:title><![CDATA[Can Connecticut become the next AI powerhouse? - RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>48:35</itunes:duration>
  <itunes:summary><![CDATA[<p>Discover how Connecticut is becoming a thriving hub for AI advancements! In this episode, Gov. Ned Lamont and Dan O'Keefe share insights into:&nbsp;</p><ul><li>Fostering AI startups and positioning Connecticut as a leader in the AI space.&nbsp;</li><li>Building a skilled workforce through comprehensive education and training initiatives.&nbsp;</li><li>Balancing innovation with public safety through thoughtful policies.&nbsp;</li><li>Navigating challenges faced by small AI companies in a complex regulatory landscape.&nbsp;</li><li>The importance of inclusive AI education to secure Connecticut’s innovative future.&nbsp;</li></ul><p>💡 Stay tuned for more insightful discussions on AI and its evolving role!&nbsp;</p><p> &nbsp;</p><p><strong>Resources Mentioned:</strong> &nbsp;</p><p><a href="https://x.com/govnedlamont?lang=en" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/govnedlamont?lang=en</a>  &nbsp;</p><p><a href="https://www.youtube.com/@GovNedLamont/videos" target="_blank" style="color: rgb(70, 120, 134);">https://www.youtube.com/@GovNedLamont/videos</a>  &nbsp;</p><p><a href="https://www.instagram.com/govnedlamont/?hl=en" target="_blank" style="color: rgb(70, 120, 134);">https://www.instagram.com/govnedlamont/?hl=en</a>&nbsp;</p><p><a href="https://www.linkedin.com/in/dokeefe/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/dokeefe/</a>  &nbsp;</p><p><a href="https://x.com/okeefe" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/okeefe</a>  &nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Discover how Connecticut is becoming a thriving hub for AI advancements! In this episode, Gov. Ned Lamont and Dan O'Keefe share insights into:&nbsp;</p><ul><li>Fostering AI startups and positioning Connecticut as a leader in the AI space.&nbsp;</li><li>Building a skilled workforce through comprehensive education and training initiatives.&nbsp;</li><li>Balancing innovation with public safety through thoughtful policies.&nbsp;</li><li>Navigating challenges faced by small AI companies in a complex regulatory landscape.&nbsp;</li><li>The importance of inclusive AI education to secure Connecticut’s innovative future.&nbsp;</li></ul><p>💡 Stay tuned for more insightful discussions on AI and its evolving role!&nbsp;</p><p> &nbsp;</p><p><strong>Resources Mentioned:</strong> &nbsp;</p><p><a href="https://x.com/govnedlamont?lang=en" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/govnedlamont?lang=en</a>  &nbsp;</p><p><a href="https://www.youtube.com/@GovNedLamont/videos" target="_blank" style="color: rgb(70, 120, 134);">https://www.youtube.com/@GovNedLamont/videos</a>  &nbsp;</p><p><a href="https://www.instagram.com/govnedlamont/?hl=en" target="_blank" style="color: rgb(70, 120, 134);">https://www.instagram.com/govnedlamont/?hl=en</a>&nbsp;</p><p><a href="https://www.linkedin.com/in/dokeefe/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/dokeefe/</a>  &nbsp;</p><p><a href="https://x.com/okeefe" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/okeefe</a>  &nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Discover how Connecticut is becoming a thriving hub for AI advancements! In this episode, Gov. Ned Lamont and Dan O'Keefe share insights into: Fostering AI startups and positioning Connecticut as a leader in the AI space. Building a skilled workfor...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[56a3d7a7-9832-4bf2-87b2-bc18ab14d3d5]]></guid>
  <title><![CDATA[AI Regulation and Governance: Congresswoman Kat Cammack Breaks Down the Challenges | RAI Podcast]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">Discover how AI is transforming politics and governance in this insightful episode of the Regulating AI Podcast with Congresswoman Kat Cammack, a member of the House AI Task Force. </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">➡️ AI’s growing role in political decision-making.  </span></p><p><span style="color: rgb(13, 13, 13);">➡️ Challenges in regulating AI while fostering innovation.  </span></p><p><span style="color: rgb(13, 13, 13);">➡️ The importance of bipartisan cooperation in shaping AI policy.  </span></p><p><span style="color: rgb(13, 13, 13);">➡️ How AI is influencing voter behavior and campaign strategies. </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">👉 Watch the full conversation for expert insights on the future of AI and politics</span></p><p><br></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources Mentioned:  </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">https://x.com/RepKatCammack </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">https://cammack.house.gov/</span></p><p><br></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Timestamps:  </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">00:00 — Highlights </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">01:33 — Intro to podcast and Kat’s leadership roles</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">04:16 — Why Kat got involved in AI regulation</span></p><p><span style="color: rgb(13, 13, 13);"> </span></p><p><span style="color: rgb(13, 13, 13);">08:01 — Warning against patchwork state AI laws</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">10:09 — Blockchain as a tool for AI oversight</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">13:43 — AI, chips, and national security risks</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">18:29 — Kat introduces the REINS 
Act</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">22:34 — AI’s impact on farms and rural health</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/2cfcac7b-4dab-4ab4-843f-fd01e048ef9b/857cf74c25.jpg" />
  <pubDate>Thu, 15 May 2025 13:35:52 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="26292743" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/2cfcac7b-4dab-4ab4-843f-fd01e048ef9b/episode.mp3" />
  <itunes:title><![CDATA[AI Regulation and Governance: Congresswoman Kat Cammack Breaks Down the Challenges | RAI Podcast]]></itunes:title>
  <itunes:duration>27:02</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">Discover how AI is transforming politics and governance in this insightful episode of the Regulating AI Podcast with Congresswoman Kat Cammack, a member of the House AI Task Force. </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">➡️ AI’s growing role in political decision-making.  </span></p><p><span style="color: rgb(13, 13, 13);">➡️ Challenges in regulating AI while fostering innovation.  </span></p><p><span style="color: rgb(13, 13, 13);">➡️ The importance of bipartisan cooperation in shaping AI policy.  </span></p><p><span style="color: rgb(13, 13, 13);">➡️ How AI is influencing voter behavior and campaign strategies. </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">👉 Watch the full conversation for expert insights on the future of AI and politics</span></p><p><br></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources Mentioned:  </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">https://x.com/RepKatCammack </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">https://cammack.house.gov/</span></p><p><br></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Timestamps:  </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">00:00 — Highlights </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">01:33 — Intro to podcast and Kat’s leadership roles</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">04:16 — Why Kat got involved in AI regulation</span></p><p><span style="color: rgb(13, 13, 13);"> </span></p><p><span style="color: rgb(13, 13, 13);">08:01 — Warning against patchwork state AI laws</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">10:09 — Blockchain as a tool for AI oversight</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">13:43 — AI, chips, and national security risks</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">18:29 — Kat introduces the REINS 
Act</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">22:34 — AI’s impact on farms and rural health</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">Discover how AI is transforming politics and governance in this insightful episode of the Regulating AI Podcast with Congresswoman Kat Cammack, a member of the House AI Task Force. </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">➡️ AI’s growing role in political decision-making.  </span></p><p><span style="color: rgb(13, 13, 13);">➡️ Challenges in regulating AI while fostering innovation.  </span></p><p><span style="color: rgb(13, 13, 13);">➡️ The importance of bipartisan cooperation in shaping AI policy.  </span></p><p><span style="color: rgb(13, 13, 13);">➡️ How AI is influencing voter behavior and campaign strategies. </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">👉 Watch the full conversation for expert insights on the future of AI and politics</span></p><p><br></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources Mentioned:  </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">https://x.com/RepKatCammack </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">https://cammack.house.gov/</span></p><p><br></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Timestamps:  </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">00:00 — Highlights </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">01:33 — Intro to podcast and Kat’s leadership roles</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">04:16 — Why Kat got involved in AI regulation</span></p><p><span style="color: rgb(13, 13, 13);"> </span></p><p><span style="color: rgb(13, 13, 13);">08:01 — Warning against patchwork state AI laws</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">10:09 — Blockchain as a tool for AI oversight</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">13:43 — AI, chips, and national security risks</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">18:29 — Kat introduces the REINS 
Act</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">22:34 — AI’s impact on farms and rural health</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Discover how AI is transforming politics and governance in this insightful episode of the Regulating AI Podcast with Congresswoman Kat Cammack, a member of the House AI Task Force. ➡️ AI’s growing role in political decision-making.  ➡️ Challenges i...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[06bcb369-b958-4c9e-892f-049fefe02180]]></guid>
  <title><![CDATA[How AI Is Transforming Gambling — Product Innovation, Player Protection and Regulation in the Spotlight]]></title>
  <description><![CDATA[<p>The gaming industry is being transformed by AI, from <strong>personalized player experiences</strong> to <strong>advanced fraud detection</strong>. But is regulation keeping pace? In this episode of the <strong>RegulatingAI Podcast</strong>, <strong>Kasra Ghaharian &amp; Simo Dragicevic</strong> break down:&nbsp;</p><p><br></p><ul><li>How AI is <strong>reshaping the gambling industry</strong> and influencing player behavior.&nbsp;</li><li>The rise of <strong>multimodal AI &amp; digital twins</strong> and what they mean for gaming.&nbsp;</li><li><strong>Regulatory blind spots</strong> and how policymakers can bridge the gap.&nbsp;</li><li>AI’s role in <strong>fraud prevention and responsible gaming</strong>—where it succeeds and where risks remain.&nbsp;</li></ul><p>📺 <strong>Watch the full episode here:</strong> [YouTube Link]&nbsp;</p><p><br></p><p>#AI #Gaming #FraudDetection #EthicalAI #RegulatingAI&nbsp;</p><p class="ql-align-justify"><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/simo-dragicevic-54469b13/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/simo-dragicevic-54469b13/</a>&nbsp;</p><p><a href="https://www.linkedin.com/in/kasrag/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/kasrag/</a>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/f9900ec8-a59c-4784-a56c-66b2ef79ec92/3dcc5acd68.jpg" />
  <pubDate>Mon, 12 May 2025 15:20:53 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="58200860" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/f9900ec8-a59c-4784-a56c-66b2ef79ec92/episode.mp3" />
  <itunes:title><![CDATA[How AI Is Transforming Gambling — Product Innovation, Player Protection and Regulation in the Spotlight]]></itunes:title>
  <itunes:duration>1:00:37</itunes:duration>
  <itunes:summary><![CDATA[<p>The gaming industry is being transformed by AI, from <strong>personalized player experiences</strong> to <strong>advanced fraud detection</strong>. But is regulation keeping pace? In this episode of the <strong>RegulatingAI Podcast</strong>, <strong>Kasra Ghaharian &amp; Simo Dragicevic</strong> break down:&nbsp;</p><p><br></p><ul><li>How AI is <strong>reshaping the gambling industry</strong> and influencing player behavior.&nbsp;</li><li>The rise of <strong>multimodal AI &amp; digital twins</strong> and what they mean for gaming.&nbsp;</li><li><strong>Regulatory blind spots</strong> and how policymakers can bridge the gap.&nbsp;</li><li>AI’s role in <strong>fraud prevention and responsible gaming</strong>—where it succeeds and where risks remain.&nbsp;</li></ul><p>📺 <strong>Watch the full episode here:</strong> [YouTube Link]&nbsp;</p><p><br></p><p>#AI #Gaming #FraudDetection #EthicalAI #RegulatingAI&nbsp;</p><p class="ql-align-justify"><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/simo-dragicevic-54469b13/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/simo-dragicevic-54469b13/</a>&nbsp;</p><p><a href="https://www.linkedin.com/in/kasrag/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/kasrag/</a>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>The gaming industry is being transformed by AI, from <strong>personalized player experiences</strong> to <strong>advanced fraud detection</strong>. But is regulation keeping pace? In this episode of the <strong>RegulatingAI Podcast</strong>, <strong>Kasra Ghaharian &amp; Simo Dragicevic</strong> break down:&nbsp;</p><p><br></p><ul><li>How AI is <strong>reshaping the gambling industry</strong> and influencing player behavior.&nbsp;</li><li>The rise of <strong>multimodal AI &amp; digital twins</strong> and what they mean for gaming.&nbsp;</li><li><strong>Regulatory blind spots</strong> and how policymakers can bridge the gap.&nbsp;</li><li>AI’s role in <strong>fraud prevention and responsible gaming</strong>—where it succeeds and where risks remain.&nbsp;</li></ul><p>📺 <strong>Watch the full episode here:</strong> [YouTube Link]&nbsp;</p><p><br></p><p>#AI #Gaming #FraudDetection #EthicalAI #RegulatingAI&nbsp;</p><p class="ql-align-justify"><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/simo-dragicevic-54469b13/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/simo-dragicevic-54469b13/</a>&nbsp;</p><p><a href="https://www.linkedin.com/in/kasrag/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/kasrag/</a>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[The gaming industry is being transformed by AI, from personalized player experiences to advanced fraud detection. But is regulation keeping pace? In this episode of the RegulatingAI Podcast, Kasra Ghaharian & Simo Dragicevic break down: How AI is r...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[899994e1-ff94-49b7-aea6-47968a7d304e]]></guid>
  <title><![CDATA[From Neuroscience to Global AI Policy: Dr. Ansgar Koene on Responsible Innovation | RAI Podcast]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode of the RegulatingAI Podcast, host Sanjay Puri speaks with Dr. Ansgar Koene, Global AI Ethics and Regulatory Leader at EY and former Chair of the IEEE working group for the IEEE 7003-2024 Standard for Algorithmic Bias Considerations. Dive into a deep discussion on: </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">How neuroscience and robotics shaped Dr. Koene’s unique approach to AI ethics </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Algorithmic bias: where it begins, and how we can fight it </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Challenges of aligning global AI regulations across continents </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Why internal and external ethics boards matter </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">The delicate balance between innovation and regulation </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">👉 A must-watch for policymakers, tech leaders, and AI governance advocates. </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources Mentioned: </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">https://www.linkedin.com/in/akoene/ </span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/ad6366c5-ba07-4067-93fc-842ac6a7ae41/8339fa6474.jpg" />
  <pubDate>Fri, 09 May 2025 12:07:51 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="53617868" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/ad6366c5-ba07-4067-93fc-842ac6a7ae41/episode.mp3" />
  <itunes:title><![CDATA[From Neuroscience to Global AI Policy: Dr. Ansgar Koene on Responsible Innovation | RAI Podcast]]></itunes:title>
  <itunes:duration>52:27</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode of the RegulatingAI Podcast, host Sanjay Puri speaks with Dr. Ansgar Koene, Global AI Ethics and Regulatory Leader at EY and former Chair of the IEEE working group for the IEEE 7003-2024 Standard for Algorithmic Bias Considerations. Dive into a deep discussion on: </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">How neuroscience and robotics shaped Dr. Koene’s unique approach to AI ethics </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Algorithmic bias: where it begins, and how we can fight it </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Challenges of aligning global AI regulations across continents </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Why internal and external ethics boards matter </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">The delicate balance between innovation and regulation </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">👉 A must-watch for policymakers, tech leaders, and AI governance advocates. </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources Mentioned: </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">https://www.linkedin.com/in/akoene/ </span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode of the RegulatingAI Podcast, host Sanjay Puri speaks with Dr. Ansgar Koene, Global AI Ethics and Regulatory Leader at EY and former Chair of the IEEE working group for the IEEE 7003-2024 Standard for Algorithmic Bias Considerations. Dive into a deep discussion on: </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">How neuroscience and robotics shaped Dr. Koene’s unique approach to AI ethics </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Algorithmic bias: where it begins, and how we can fight it </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Challenges of aligning global AI regulations across continents </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Why internal and external ethics boards matter </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">The delicate balance between innovation and regulation </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">👉 A must-listen for policymakers, tech leaders, and AI governance advocates. </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources Mentioned: </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">https://www.linkedin.com/in/akoene/ </span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the RegulatingAI Podcast, host Sanjay Puri speaks with Dr. Ansgar Koene, Global AI Ethics and Regulatory Leader at EY and former Chair of the IEEE working group for the IEEE 7003-2024 Standard for Algorithmic Bias Considerations....]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[c2e9d2f5-918c-4809-b3ad-c94777879637]]></guid>
  <title><![CDATA[Inside the EU AI Act: What It Means for Businesses & Global AI Policy | Axel Voss, MEP | RegulatingAI Podcast ]]></title>
  <description><![CDATA[<p>📢 <strong>Listen Now:</strong> A deep dive into the <strong>EU AI Act</strong> with <strong>Axel Voss, Member of the European Parliament (MEP)</strong>, one of the key architects behind this groundbreaking legislation.&nbsp;</p><p><br></p><p>🔹 <strong>Key Topics Covered:</strong>&nbsp;</p><p>✅ The <strong>core principles</strong> of the EU AI Act and its enforcement&nbsp;</p><p>✅ How this regulation <strong>impacts global businesses</strong> and AI startups&nbsp;</p><p>✅ The <strong>EU’s approach</strong> to AI liability, ethics, and innovation&nbsp;</p><p>✅ Comparing the EU AI Act with <strong>US and China’s AI policies</strong>&nbsp;</p><p>✅ Challenges of <strong>implementing AI laws across industries</strong>&nbsp;</p><p>📌 Don't miss this insightful conversation with one of Europe’s leading voices on <strong>AI regulation and digital policy.</strong>&nbsp;</p><p><br></p><p>🔔 <strong>Subscribe to RegulatingAI for more expert conversations on AI governance.</strong>&nbsp;</p><p class="ql-align-justify"><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/axel-voss-a1744969/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/axel-voss-a1744969/</a>&nbsp;&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/d13431cc-03eb-4cb2-a483-81ad4297c368/d37e8e3a04.jpg" />
  <pubDate>Wed, 07 May 2025 09:28:31 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="44995439" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/d13431cc-03eb-4cb2-a483-81ad4297c368/episode.mp3" />
  <itunes:title><![CDATA[Inside the EU AI Act: What It Means for Businesses & Global AI Policy | Axel Voss, MEP | RegulatingAI Podcast ]]></itunes:title>
  <itunes:duration>46:52</itunes:duration>
  <itunes:summary><![CDATA[<p>📢 <strong>Listen Now:</strong> A deep dive into the <strong>EU AI Act</strong> with <strong>Axel Voss, Member of the European Parliament (MEP)</strong>, one of the key architects behind this groundbreaking legislation.&nbsp;</p><p><br></p><p>🔹 <strong>Key Topics Covered:</strong>&nbsp;</p><p>✅ The <strong>core principles</strong> of the EU AI Act and its enforcement&nbsp;</p><p>✅ How this regulation <strong>impacts global businesses</strong> and AI startups&nbsp;</p><p>✅ The <strong>EU’s approach</strong> to AI liability, ethics, and innovation&nbsp;</p><p>✅ Comparing the EU AI Act with <strong>US and China’s AI policies</strong>&nbsp;</p><p>✅ Challenges of <strong>implementing AI laws across industries</strong>&nbsp;</p><p>📌 Don't miss this insightful conversation with one of Europe’s leading voices on <strong>AI regulation and digital policy.</strong>&nbsp;</p><p><br></p><p>🔔 <strong>Subscribe to RegulatingAI for more expert conversations on AI governance.</strong>&nbsp;</p><p class="ql-align-justify"><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/axel-voss-a1744969/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/axel-voss-a1744969/</a>&nbsp;&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><br></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>📢 <strong>Listen Now:</strong> A deep dive into the <strong>EU AI Act</strong> with <strong>Axel Voss, Member of the European Parliament (MEP)</strong>, one of the key architects behind this groundbreaking legislation.&nbsp;</p><p><br></p><p>🔹 <strong>Key Topics Covered:</strong>&nbsp;</p><p>✅ The <strong>core principles</strong> of the EU AI Act and its enforcement&nbsp;</p><p>✅ How this regulation <strong>impacts global businesses</strong> and AI startups&nbsp;</p><p>✅ The <strong>EU’s approach</strong> to AI liability, ethics, and innovation&nbsp;</p><p>✅ Comparing the EU AI Act with <strong>US and China’s AI policies</strong>&nbsp;</p><p>✅ Challenges of <strong>implementing AI laws across industries</strong>&nbsp;</p><p>📌 Don't miss this insightful conversation with one of Europe’s leading voices on <strong>AI regulation and digital policy.</strong>&nbsp;</p><p><br></p><p>🔔 <strong>Subscribe to RegulatingAI for more expert conversations on AI governance.</strong>&nbsp;</p><p class="ql-align-justify"><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/axel-voss-a1744969/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/axel-voss-a1744969/</a>&nbsp;&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><br></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[📢 Listen Now: A deep dive into the EU AI Act with Axel Voss, Member of the European Parliament (MEP), one of the key architects behind this groundbreaking legislation. 🔹 Key Topics Covered: ✅ The core principles of the EU AI Act and its enforcement...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[823e3d85-f429-420c-9da3-a2d027593146]]></guid>
  <title><![CDATA[AI Privacy, Compliance & Trust: Mastercard’s JoAnn Stonier on Responsible AI]]></title>
  <description><![CDATA[<p>📌 In This Episode: In a world where AI is transforming industries, how do we ensure ethical AI governance? In this episode of RegulatingAI, Sanjay Puri sits down with JoAnn Stonier, EVP &amp; Fellow of Data and AI at Mastercard, to discuss:</p><p><br></p><p>✔ The evolving role of data governance and privacy in AI </p><p>✔ How companies like Mastercard are implementing ethical AI frameworks </p><p>✔ The challenges of AI regulation and global compliance </p><p>✔ The balance between AI innovation and consumer trust</p><p><br></p><p>Why Listen? </p><p>JoAnn brings decades of expertise in AI ethics, financial services, and regulatory compliance.</p><p>Whether you're an AI researcher, policymaker, or business leader, this conversation provides actionable insights on responsible AI development.</p><p><br></p><p>Resources Mentioned: https://www.linkedin.com/in/joann-stonier-5540b86/</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/c42af85c-7bfa-4ba1-9c2d-c0145b4e84cf/bea4bdfbbd.jpg" />
  <pubDate>Sun, 04 May 2025 12:01:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="42437111" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/c42af85c-7bfa-4ba1-9c2d-c0145b4e84cf/episode.mp3" />
  <itunes:title><![CDATA[AI Privacy, Compliance & Trust: Mastercard’s JoAnn Stonier on Responsible AI]]></itunes:title>
  <itunes:duration>43:48</itunes:duration>
  <itunes:summary><![CDATA[<p>📌 In This Episode: In a world where AI is transforming industries, how do we ensure ethical AI governance? In this episode of RegulatingAI, Sanjay Puri sits down with JoAnn Stonier, EVP &amp; Fellow of Data and AI at Mastercard, to discuss:</p><p><br></p><p>✔ The evolving role of data governance and privacy in AI </p><p>✔ How companies like Mastercard are implementing ethical AI frameworks </p><p>✔ The challenges of AI regulation and global compliance </p><p>✔ The balance between AI innovation and consumer trust</p><p><br></p><p>Why Listen? </p><p>JoAnn brings decades of expertise in AI ethics, financial services, and regulatory compliance.</p><p>Whether you're an AI researcher, policymaker, or business leader, this conversation provides actionable insights on responsible AI development.</p><p><br></p><p>Resources Mentioned: https://www.linkedin.com/in/joann-stonier-5540b86/</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>📌 In This Episode: In a world where AI is transforming industries, how do we ensure ethical AI governance? In this episode of RegulatingAI, Sanjay Puri sits down with JoAnn Stonier, EVP &amp; Fellow of Data and AI at Mastercard, to discuss:</p><p><br></p><p>✔ The evolving role of data governance and privacy in AI </p><p>✔ How companies like Mastercard are implementing ethical AI frameworks </p><p>✔ The challenges of AI regulation and global compliance </p><p>✔ The balance between AI innovation and consumer trust</p><p><br></p><p>Why Listen? </p><p>JoAnn brings decades of expertise in AI ethics, financial services, and regulatory compliance.</p><p>Whether you're an AI researcher, policymaker, or business leader, this conversation provides actionable insights on responsible AI development.</p><p><br></p><p>Resources Mentioned: https://www.linkedin.com/in/joann-stonier-5540b86/</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[📌 In This Episode: In a world where AI is transforming industries, how do we ensure ethical AI governance? In this episode of RegulatingAI, Sanjay Puri sits down with JoAnn Stonier, EVP & Fellow of Data and AI at Mastercard, to discuss:✔ The evolvi...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[a3387f51-7b95-4577-96ec-46ec78b2a65e]]></guid>
  <title><![CDATA[Beyond the Executive Order: Congressman Ted Lieu on Congress's AI Strategy | Part 2 - RegulatingAI Podcast]]></title>
  <description><![CDATA[<p>In this thought-provoking conversation, Congressman Ted Lieu explores the friction between open innovation and responsible oversight in AI. From open-source models to the rise of sovereign AI, we dive deep into the geopolitics of algorithms.</p><p><br></p><p>🔹 Open-source AI: A catalyst for innovation or a national security risk?</p><p>🔹 DeepSeek, LLaMA, and China's AI momentum</p><p>🔹 Export controls, chip access, and tech diplomacy</p><p>🔹 The myth and reality of Artificial General Intelligence (AGI)</p><p>🔹 Why U.S. policymakers must balance global collaboration with domestic resilience</p><p><br></p><p>Explore the intersection of regulation, innovation, and international power dynamics—straight from Capitol Hill.</p><p><br></p><p>Resources Mentioned:</p><p>https://lieu.house.gov/</p><p>https://x.com/RepTedLieu</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/4bf2ce41-05a4-41b4-ba68-67b0f45bab48/ebaea9aeb2.jpg" />
  <pubDate>Mon, 28 Apr 2025 13:57:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="43648667" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/4bf2ce41-05a4-41b4-ba68-67b0f45bab48/episode.mp3" />
  <itunes:title><![CDATA[Beyond the Executive Order: Congressman Ted Lieu on Congress's AI Strategy | Part 2 - RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>45:10</itunes:duration>
  <itunes:summary><![CDATA[<p>In this thought-provoking conversation, Congressman Ted Lieu explores the friction between open innovation and responsible oversight in AI. From open-source models to the rise of sovereign AI, we dive deep into the geopolitics of algorithms.</p><p><br></p><p>🔹 Open-source AI: A catalyst for innovation or a national security risk?</p><p>🔹 DeepSeek, LLaMA, and China's AI momentum</p><p>🔹 Export controls, chip access, and tech diplomacy</p><p>🔹 The myth and reality of Artificial General Intelligence (AGI)</p><p>🔹 Why U.S. policymakers must balance global collaboration with domestic resilience</p><p><br></p><p>Explore the intersection of regulation, innovation, and international power dynamics—straight from Capitol Hill.</p><p><br></p><p>Resources Mentioned:</p><p>https://lieu.house.gov/</p><p>https://x.com/RepTedLieu</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this thought-provoking conversation, Congressman Ted Lieu explores the friction between open innovation and responsible oversight in AI. From open-source models to the rise of sovereign AI, we dive deep into the geopolitics of algorithms.</p><p><br></p><p>🔹 Open-source AI: A catalyst for innovation or a national security risk?</p><p>🔹 DeepSeek, LLaMA, and China's AI momentum</p><p>🔹 Export controls, chip access, and tech diplomacy</p><p>🔹 The myth and reality of Artificial General Intelligence (AGI)</p><p>🔹 Why U.S. policymakers must balance global collaboration with domestic resilience</p><p><br></p><p>Explore the intersection of regulation, innovation, and international power dynamics—straight from Capitol Hill.</p><p><br></p><p>Resources Mentioned:</p><p>https://lieu.house.gov/</p><p>https://x.com/RepTedLieu</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this thought-provoking conversation, Congressman Ted Lieu explores the friction between open innovation and responsible oversight in AI. From open-source models to the rise of sovereign AI, we dive deep into the geopolitics of algorithms.🔹 Open-...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[e760ca89-b092-4a21-a326-a780331b4588]]></guid>
  <title><![CDATA[The Future of Ethical AI – Dr. Emmanuel R. Goffi on Governance, Fairness & Accountability]]></title>
  <description><![CDATA[<p>Artificial intelligence is transforming industries, societies, and daily life—but who ensures it remains ethical? In this episode of the RegulatingAI Podcast, host Sanjay Puri welcomes Dr. Emmanuel R. Goffi, AI ethicist and professor at the Paris Institute of Digital Technology, to discuss:</p><p>🔹 The role of AI ethics in shaping policy and regulation</p><p>🔹 Key concerns around AI bias, fairness, and transparency</p><p>🔹 The global impact of AI on human rights and democratic values</p><p>🔹 How policymakers and tech leaders can collaborate for responsible AI development</p><p>📢 Join the conversation and shape the future of AI ethics!</p><p><br></p><p>Resources Mentioned: https://www.linkedin.com/in/emmanuelgoffi/</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/288aa7d9-a1ca-4eb8-be2a-1fa4683a5bdf/61ce49f237.jpg" />
  <pubDate>Fri, 25 Apr 2025 12:17:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="39667714" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/288aa7d9-a1ca-4eb8-be2a-1fa4683a5bdf/episode.mp3" />
  <itunes:title><![CDATA[The Future of Ethical AI – Dr. Emmanuel R. Goffi on Governance, Fairness & Accountability]]></itunes:title>
  <itunes:duration>41:19</itunes:duration>
  <itunes:summary><![CDATA[<p>Artificial intelligence is transforming industries, societies, and daily life—but who ensures it remains ethical? In this episode of the RegulatingAI Podcast, host Sanjay Puri welcomes Dr. Emmanuel R. Goffi, AI ethicist and professor at the Paris Institute of Digital Technology, to discuss:</p><p>🔹 The role of AI ethics in shaping policy and regulation</p><p>🔹 Key concerns around AI bias, fairness, and transparency</p><p>🔹 The global impact of AI on human rights and democratic values</p><p>🔹 How policymakers and tech leaders can collaborate for responsible AI development</p><p>📢 Join the conversation and shape the future of AI ethics!</p><p><br></p><p>Resources Mentioned: https://www.linkedin.com/in/emmanuelgoffi/</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Artificial intelligence is transforming industries, societies, and daily life—but who ensures it remains ethical? In this episode of the RegulatingAI Podcast, host Sanjay Puri welcomes Dr. Emmanuel R. Goffi, AI ethicist and professor at the Paris Institute of Digital Technology, to discuss:</p><p>🔹 The role of AI ethics in shaping policy and regulation</p><p>🔹 Key concerns around AI bias, fairness, and transparency</p><p>🔹 The global impact of AI on human rights and democratic values</p><p>🔹 How policymakers and tech leaders can collaborate for responsible AI development</p><p>📢 Join the conversation and shape the future of AI ethics!</p><p><br></p><p>Resources Mentioned: https://www.linkedin.com/in/emmanuelgoffi/</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Artificial intelligence is transforming industries, societies, and daily life—but who ensures it remains ethical? In this episode of the RegulatingAI Podcast, host Sanjay Puri welcomes Dr. Emmanuel R. Goffi, AI ethicist and professor at the Paris I...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[f06b0eda-7119-44d0-b7bc-ec58a3705891]]></guid>
  <title><![CDATA[Cognitive Assistance and AI in Everyday Life | Prof. Antonio Krüger | RegulatingAI Podcast ]]></title>
  <description><![CDATA[<p class="ql-align-justify">Join us on the RegulatingAI Podcast as we welcome <strong>Professor Antonio Krüger, CEO of the German Research Center for Artificial Intelligence (DFKI) and Professor of Computer Science at Saarland University</strong>, to discuss the evolving relationship between artificial intelligence and human-computer interaction.&nbsp;</p><p><br></p><p class="ql-align-justify">🔹 <strong>Key Takeaways:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How AI is transforming the way humans interact with technology&nbsp;</li><li class="ql-align-justify">The future of cognitive assistance in everyday applications&nbsp;</li><li class="ql-align-justify">Ethical considerations in AI-driven interfaces&nbsp;</li></ul><p class="ql-align-justify">🎧 <strong>Watch now and explore AI's next frontier!</strong>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.dfki.de/~krueger/" target="_blank" style="color: rgb(70, 120, 134);">https://www.dfki.de/~krueger/</a>&nbsp;</p><p><a href="https://www.linkedin.com/in/antonio-kr%C3%BCger-3202b46/?originalSubdomain=de" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/antonio-kr%C3%BCger-3202b46/?originalSubdomain=de</a>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/717a61fb-e12f-4c42-804f-725421a4d087/54d0d85803.jpg" />
  <pubDate>Wed, 23 Apr 2025 13:15:03 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="56575835" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/717a61fb-e12f-4c42-804f-725421a4d087/episode.mp3" />
  <itunes:title><![CDATA[Cognitive Assistance and AI in Everyday Life | Prof. Antonio Krüger | RegulatingAI Podcast ]]></itunes:title>
  <itunes:duration>58:55</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">Join us on the RegulatingAI Podcast as we welcome <strong>Professor Antonio Krüger, CEO of the German Research Center for Artificial Intelligence (DFKI) and Professor of Computer Science at Saarland University</strong>, to discuss the evolving relationship between artificial intelligence and human-computer interaction.&nbsp;</p><p><br></p><p class="ql-align-justify">🔹 <strong>Key Takeaways:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How AI is transforming the way humans interact with technology&nbsp;</li><li class="ql-align-justify">The future of cognitive assistance in everyday applications&nbsp;</li><li class="ql-align-justify">Ethical considerations in AI-driven interfaces&nbsp;</li></ul><p class="ql-align-justify">🎧 <strong>Watch now and explore AI's next frontier!</strong>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.dfki.de/~krueger/" target="_blank" style="color: rgb(70, 120, 134);">https://www.dfki.de/~krueger/</a>&nbsp;</p><p><a href="https://www.linkedin.com/in/antonio-kr%C3%BCger-3202b46/?originalSubdomain=de" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/antonio-kr%C3%BCger-3202b46/?originalSubdomain=de</a>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">Join us on the RegulatingAI Podcast as we welcome <strong>Professor Antonio Krüger, CEO of the German Research Center for Artificial Intelligence (DFKI) and Professor of Computer Science at Saarland University</strong>, to discuss the evolving relationship between artificial intelligence and human-computer interaction.&nbsp;</p><p><br></p><p class="ql-align-justify">🔹 <strong>Key Takeaways:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">How AI is transforming the way humans interact with technology&nbsp;</li><li class="ql-align-justify">The future of cognitive assistance in everyday applications&nbsp;</li><li class="ql-align-justify">Ethical considerations in AI-driven interfaces&nbsp;</li></ul><p class="ql-align-justify">🎧 <strong>Watch now and explore AI's next frontier!</strong>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.dfki.de/~krueger/" target="_blank" style="color: rgb(70, 120, 134);">https://www.dfki.de/~krueger/</a>&nbsp;</p><p><a href="https://www.linkedin.com/in/antonio-kr%C3%BCger-3202b46/?originalSubdomain=de" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/antonio-kr%C3%BCger-3202b46/?originalSubdomain=de</a>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Join us on the RegulatingAI Podcast as we welcome Professor Antonio Krüger, CEO of the German Research Center for Artificial Intelligence (DFKI) and Professor of Computer Science at Saarland University, to discuss the evolving relationship between ...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[94bd127a-0d46-42b8-a700-e730e0452b0e]]></guid>
  <title><![CDATA[The Role of AI in Health Care: Innovation & Ethics with Dr. Ami B. Bhatt | RegulatingAI Podcast ]]></title>
  <description><![CDATA[<p>💡 In this episode of the <strong>RegulatingAI Podcast</strong>, host Sanjay Puri talks with <strong>Dr. Ami B. Bhatt, Chief Innovation Officer at the American College of Cardiology</strong>, to discuss how AI is revolutionizing cardiovascular care.&nbsp;</p><p><br></p><p>🔍 <strong>Key Topics Covered:</strong>&nbsp;</p><p><br></p><ul><li>The intersection of AI and cardiology: Where are we today?&nbsp;</li><li>How AI-driven insights are improving diagnosis and treatment in cardiovascular health.&nbsp;</li><li>The role of <strong>telemedicine and digital health</strong> in expanding patient care.&nbsp;</li><li>Ethical and regulatory challenges in using AI in medicine.&nbsp;</li></ul><p>📌 <strong>Dr. Bhatt shares her expertise on the innovations shaping the future of heart health and the need for responsible AI development in healthcare.</strong>&nbsp;</p><p>👉 <strong>Watch now and discover how AI is redefining cardiovascular care!</strong>&nbsp;</p><p>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/dramibhatt/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/dramibhatt/</a>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/6e530228-baab-479c-aad8-0c3ec3a5b2da/292ff41d9c.jpg" />
  <pubDate>Fri, 18 Apr 2025 11:42:17 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="58052066" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/6e530228-baab-479c-aad8-0c3ec3a5b2da/episode.mp3" />
  <itunes:title><![CDATA[The Role of AI in Health Care: Innovation & Ethics with Dr. Ami B. Bhatt | RegulatingAI Podcast ]]></itunes:title>
  <itunes:duration>1:00:28</itunes:duration>
  <itunes:summary><![CDATA[<p>💡 In this episode of the <strong>RegulatingAI Podcast</strong>, host Sanjay Puri talks with <strong>Dr. Ami B. Bhatt, Chief Innovation Officer at the American College of Cardiology</strong>, to discuss how AI is revolutionizing cardiovascular care.&nbsp;</p><p><br></p><p>🔍 <strong>Key Topics Covered:</strong>&nbsp;</p><p><br></p><ul><li>The intersection of AI and cardiology: Where are we today?&nbsp;</li><li>How AI-driven insights are improving diagnosis and treatment in cardiovascular health.&nbsp;</li><li>The role of <strong>telemedicine and digital health</strong> in expanding patient care.&nbsp;</li><li>Ethical and regulatory challenges in using AI in medicine.&nbsp;</li></ul><p>📌 <strong>Dr. Bhatt shares her expertise on the innovations shaping the future of heart health and the need for responsible AI development in healthcare.</strong>&nbsp;</p><p>👉 <strong>Watch now and discover how AI is redefining cardiovascular care!</strong>&nbsp;</p><p>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/dramibhatt/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/dramibhatt/</a>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>💡 In this episode of the <strong>RegulatingAI Podcast</strong>, host Sanjay Puri talks with <strong>Dr. Ami B. Bhatt, Chief Innovation Officer at the American College of Cardiology</strong>, to discuss how AI is revolutionizing cardiovascular care.&nbsp;</p><p><br></p><p>🔍 <strong>Key Topics Covered:</strong>&nbsp;</p><p><br></p><ul><li>The intersection of AI and cardiology: Where are we today?&nbsp;</li><li>How AI-driven insights are improving diagnosis and treatment in cardiovascular health.&nbsp;</li><li>The role of <strong>telemedicine and digital health</strong> in expanding patient care.&nbsp;</li><li>Ethical and regulatory challenges in using AI in medicine.&nbsp;</li></ul><p>📌 <strong>Dr. Bhatt shares her expertise on the innovations shaping the future of heart health and the need for responsible AI development in healthcare.</strong>&nbsp;</p><p>👉 <strong>Watch now and discover how AI is redefining cardiovascular care!</strong>&nbsp;</p><p>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/dramibhatt/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/dramibhatt/</a>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[💡 In this episode of the RegulatingAI Podcast, host Sanjay Puri talks with Dr. Ami B. Bhatt, Chief Innovation Officer at the American College of Cardiology, to discuss how AI is revolutionizing cardiovascular care. 🔍 Key Topics Covered: The interse...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[bb563db7-262a-40a6-bc72-636faa84c816]]></guid>
  <title><![CDATA[Balancing AI Innovation & Regulation | Congressman Nick Begich | RegulatingAI Podcast ]]></title>
  <description><![CDATA[<p class="ql-align-justify">AI is reshaping the world, but how do we regulate it without stifling innovation? Congressman Nick Begich joins <strong>RegulatingAI Podcast</strong> host Sanjay Puri to discuss:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The delicate balance between AI regulation and fostering innovation&nbsp;</li><li class="ql-align-justify">Why excessive AI regulation could push technological leadership to other countries&nbsp;</li><li class="ql-align-justify">The role of Congress in setting AI guardrails while ensuring global competitiveness&nbsp;</li><li class="ql-align-justify">How policymakers can keep up with the rapid advancements of AI&nbsp;</li></ul><p class="ql-align-justify">🎧 Watch now to gain insights into the future of AI governance!&nbsp;</p><p><br></p><p class="ql-align-justify">📌 Subscribe to RegulatingAI Podcast for more discussions with global AI leaders!&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://begich.house.gov/about" target="_blank" style="color: rgb(70, 120, 134);">https://begich.house.gov/about</a> &nbsp;</p><p class="ql-align-justify"><a href="https://x.com/RepNickBegich" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/RepNickBegich</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 - Podcast Episode Highlights&nbsp;</p><p>01:45 - Personal Journey into Politics&nbsp;</p><p>05:10 - The Role of Technology in Society&nbsp;</p><p>09:25 - AI and National Security&nbsp;</p><p>13:00 - Energy Independence and Policy&nbsp;</p><p>17:20 - Education and Workforce of the Future&nbsp;</p><p>21:15 - Federal Overreach and States’ Rights&nbsp;</p><p>25:40 - Economic Policy and Small Business&nbsp;</p><p>29:50 - U.S. Global Positioning and Competition&nbsp;</p><p>34:30 - Audience Q&amp;A and Reflections&nbsp;</p><p>38:45 - Closing Remarks&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/2c78566a-0706-4206-a47e-39069688cc84/72496550f6.jpg" />
  <pubDate>Wed, 16 Apr 2025 11:03:46 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="41288977" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/2c78566a-0706-4206-a47e-39069688cc84/episode.mp3" />
  <itunes:title><![CDATA[Balancing AI Innovation & Regulation | Congressman Nick Begich | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>43:00</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">AI is reshaping the world, but how do we regulate it without stifling innovation? Congressman Nick Begich joins <strong>RegulatingAI Podcast</strong> host Sanjay Puri to discuss:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The delicate balance between AI regulation and fostering innovation&nbsp;</li><li class="ql-align-justify">Why excessive AI regulation could push technological leadership to other countries&nbsp;</li><li class="ql-align-justify">The role of Congress in setting AI guardrails while ensuring global competitiveness&nbsp;</li><li class="ql-align-justify">How policymakers can keep up with the rapid advancements of AI&nbsp;</li></ul><p class="ql-align-justify">🎧 Watch now to gain insights into the future of AI governance!&nbsp;</p><p><br></p><p class="ql-align-justify">📌 Subscribe to RegulatingAI Podcast for more discussions with global AI leaders!&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://begich.house.gov/about" target="_blank" style="color: rgb(70, 120, 134);">https://begich.house.gov/about</a> &nbsp;</p><p class="ql-align-justify"><a href="https://x.com/RepNickBegich" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/RepNickBegich</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 - Podcast Episode Highlights&nbsp;</p><p>01:45 - Personal Journey into Politics&nbsp;</p><p>05:10 - The Role of Technology in Society&nbsp;</p><p>09:25 - AI and National Security&nbsp;</p><p>13:00 - Energy Independence and Policy&nbsp;</p><p>17:20 - Education and Workforce of the Future&nbsp;</p><p>21:15 - Federal Overreach and States’ Rights&nbsp;</p><p>25:40 - Economic Policy and Small Business&nbsp;</p><p>29:50 - U.S. Global Positioning and Competition&nbsp;</p><p>34:30 - Audience Q&amp;A and Reflections&nbsp;</p><p>38:45 - Closing Remarks&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">AI is reshaping the world, but how do we regulate it without stifling innovation? Congressman Nick Begich joins <strong>RegulatingAI Podcast</strong> host Sanjay Puri to discuss:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The delicate balance between AI regulation and fostering innovation&nbsp;</li><li class="ql-align-justify">Why excessive AI regulation could push technological leadership to other countries&nbsp;</li><li class="ql-align-justify">The role of Congress in setting AI guardrails while ensuring global competitiveness&nbsp;</li><li class="ql-align-justify">How policymakers can keep up with the rapid advancements of AI&nbsp;</li></ul><p class="ql-align-justify">🎧 Watch now to gain insights into the future of AI governance!&nbsp;</p><p><br></p><p class="ql-align-justify">📌 Subscribe to RegulatingAI Podcast for more discussions with global AI leaders!&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://begich.house.gov/about" target="_blank" style="color: rgb(70, 120, 134);">https://begich.house.gov/about</a> &nbsp;</p><p class="ql-align-justify"><a href="https://x.com/RepNickBegich" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/RepNickBegich</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 - Podcast Episode Highlights&nbsp;</p><p>01:45 - Personal Journey into Politics&nbsp;</p><p>05:10 - The Role of Technology in Society&nbsp;</p><p>09:25 - AI and National Security&nbsp;</p><p>13:00 - Energy Independence and Policy&nbsp;</p><p>17:20 - Education and Workforce of the Future&nbsp;</p><p>21:15 - Federal Overreach and States’ Rights&nbsp;</p><p>25:40 - Economic Policy and Small Business&nbsp;</p><p>29:50 - U.S. Global Positioning and Competition&nbsp;</p><p>34:30 - Audience Q&amp;A and Reflections&nbsp;</p><p>38:45 - Closing Remarks&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[AI is reshaping the world, but how do we regulate it without stifling innovation? Congressman Nick Begich joins RegulatingAI Podcast host Sanjay Puri to discuss: The delicate balance between AI regulation and fostering innovation Why excessive AI r...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[14f244e8-46a6-49c9-b744-2b98b8f5fd98]]></guid>
  <title><![CDATA[RegulatingAI Podcast: How AI Impacts Civil Rights – A Conversation with Koustubh "K.J." Bagchi]]></title>
  <description><![CDATA[<p class="ql-align-justify">👉 Listen Now: How can AI be regulated to protect civil rights? In this episode, Sanjay Puri sits down with <strong>Koustubh "K.J." Bagchi</strong>, Vice President of The Leadership Conference's Center for Civil Rights and Technology, to explore the intersection of AI and civil liberties.&nbsp;</p><p><br></p><p class="ql-align-justify">🔍 <strong>In this episode:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">Why civil rights must be at the core of AI policy&nbsp;</li><li class="ql-align-justify">The challenges of bias and discrimination in AI algorithms&nbsp;</li><li class="ql-align-justify">How policymakers and tech companies can collaborate to ensure fairness&nbsp;</li><li class="ql-align-justify">Strategies for improving AI accountability and transparency&nbsp;</li></ul><p class="ql-align-justify">📢 Don't miss this thought-provoking conversation on how to make AI more ethical and equitable.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/kjbagchi/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/kjbagchi/</a>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/59ae82d0-c82f-40b6-9ba5-fb05b1e131c0/c6ad890142.jpg" />
  <pubDate>Mon, 14 Apr 2025 10:33:18 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="61656997" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/59ae82d0-c82f-40b6-9ba5-fb05b1e131c0/episode.mp3" />
  <itunes:title><![CDATA[RegulatingAI Podcast: How AI Impacts Civil Rights – A Conversation with Koustubh "K.J." Bagchi]]></itunes:title>
  <itunes:duration>57:46</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">👉 Listen Now: How can AI be regulated to protect civil rights? In this episode, Sanjay Puri sits down with <strong>Koustubh "K.J." Bagchi</strong>, Vice President of The Leadership Conference's Center for Civil Rights and Technology, to explore the intersection of AI and civil liberties.&nbsp;</p><p><br></p><p class="ql-align-justify">🔍 <strong>In this episode:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">Why civil rights must be at the core of AI policy&nbsp;</li><li class="ql-align-justify">The challenges of bias and discrimination in AI algorithms&nbsp;</li><li class="ql-align-justify">How policymakers and tech companies can collaborate to ensure fairness&nbsp;</li><li class="ql-align-justify">Strategies for improving AI accountability and transparency&nbsp;</li></ul><p class="ql-align-justify">📢 Don't miss this thought-provoking conversation on how to make AI more ethical and equitable.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/kjbagchi/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/kjbagchi/</a>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">👉 Listen Now: How can AI be regulated to protect civil rights? In this episode, Sanjay Puri sits down with <strong>Koustubh "K.J." Bagchi</strong>, Vice President of The Leadership Conference's Center for Civil Rights and Technology, to explore the intersection of AI and civil liberties.&nbsp;</p><p><br></p><p class="ql-align-justify">🔍 <strong>In this episode:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">Why civil rights must be at the core of AI policy&nbsp;</li><li class="ql-align-justify">The challenges of bias and discrimination in AI algorithms&nbsp;</li><li class="ql-align-justify">How policymakers and tech companies can collaborate to ensure fairness&nbsp;</li><li class="ql-align-justify">Strategies for improving AI accountability and transparency&nbsp;</li></ul><p class="ql-align-justify">📢 Don't miss this thought-provoking conversation on how to make AI more ethical and equitable.&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/kjbagchi/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/kjbagchi/</a>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[👉 Listen Now: How can AI be regulated to protect civil rights? In this episode, Sanjay Puri sits down with Koustubh "K.J." Bagchi, Vice President of The Leadership Conference's Center for Civil Rights and Technology, to explore the intersection of ...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[6b743638-a43a-4b48-a6a3-f07adc650eba]]></guid>
  <title><![CDATA[The Future of AI Regulation with Congressman Mike Kennedy - RegulatingAI Podcast]]></title>
  <description><![CDATA[<p class="ql-align-justify">Artificial Intelligence is evolving at an unprecedented pace—can policies and regulations keep up? In this episode of the <strong>RegulatingAI Podcast</strong>, Congressman <strong>Mike Kennedy</strong> shares his perspective on:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The biggest AI governance challenges facing policymakers today&nbsp;</li><li class="ql-align-justify">Striking the right balance between innovation and ethical AI development&nbsp;</li><li class="ql-align-justify">How businesses can prepare for upcoming AI regulations&nbsp;</li><li class="ql-align-justify">The role of governments in shaping responsible AI adoption&nbsp;</li></ul><p class="ql-align-justify">🔴 <strong>Watch now for a policymaker’s take on the future of AI governance!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://x.com/RepMikeKennedy" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/RepMikeKennedy</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 – Podcast Episode Highlights&nbsp;</p><p>02:00 – Congressman Mike’s Background &amp; Mission&nbsp;</p><p>04:30 – Bridging the Urban-Rural Digital Divide&nbsp;</p><p>07:15 – AI in Education &amp; Skilling&nbsp;</p><p>10:00 – Congressman Mike Kennedy Foundation &amp; Youth Empowerment&nbsp;</p><p>13:30 – The Role of AI in Public Service Delivery&nbsp;</p><p>16:45 – Policy Recommendations for AI &amp; Innovation&nbsp;</p><p>20:00 – Women &amp; AI: Breaking Barriers&nbsp;</p><p>23:15 – Global AI Standards &amp; India’s Role&nbsp;</p><p>26:00 – Final Message: Youth Are the Future&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/d007ac6c-06d8-4f7c-9408-c56960e8402d/96c01c46b5.jpg" />
  <pubDate>Fri, 11 Apr 2025 10:58:54 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="31250434" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/d007ac6c-06d8-4f7c-9408-c56960e8402d/episode.mp3" />
  <itunes:title><![CDATA[The Future of AI Regulation with Congressman Mike Kennedy - RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>32:33</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">Artificial Intelligence is evolving at an unprecedented pace—can policies and regulations keep up? In this episode of the <strong>RegulatingAI Podcast</strong>, Congressman <strong>Mike Kennedy</strong> shares his perspective on:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The biggest AI governance challenges facing policymakers today&nbsp;</li><li class="ql-align-justify">Striking the right balance between innovation and ethical AI development&nbsp;</li><li class="ql-align-justify">How businesses can prepare for upcoming AI regulations&nbsp;</li><li class="ql-align-justify">The role of governments in shaping responsible AI adoption&nbsp;</li></ul><p class="ql-align-justify">🔴 <strong>Watch now for a policymaker’s take on the future of AI governance!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://x.com/RepMikeKennedy" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/RepMikeKennedy</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 – Podcast Episode Highlights&nbsp;</p><p>02:00 – Congressman Mike’s Background &amp; Mission&nbsp;</p><p>04:30 – Bridging the Urban-Rural Digital Divide&nbsp;</p><p>07:15 – AI in Education &amp; Skilling&nbsp;</p><p>10:00 – Congressman Mike Kennedy Foundation &amp; Youth Empowerment&nbsp;</p><p>13:30 – The Role of AI in Public Service Delivery&nbsp;</p><p>16:45 – Policy Recommendations for AI &amp; Innovation&nbsp;</p><p>20:00 – Women &amp; AI: Breaking Barriers&nbsp;</p><p>23:15 – Global AI Standards &amp; India’s Role&nbsp;</p><p>26:00 – Final Message: Youth Are the Future&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">Artificial Intelligence is evolving at an unprecedented pace—can policies and regulations keep up? In this episode of the <strong>RegulatingAI Podcast</strong>, Congressman <strong>Mike Kennedy</strong> shares his perspective on:&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The biggest AI governance challenges facing policymakers today&nbsp;</li><li class="ql-align-justify">Striking the right balance between innovation and ethical AI development&nbsp;</li><li class="ql-align-justify">How businesses can prepare for upcoming AI regulations&nbsp;</li><li class="ql-align-justify">The role of governments in shaping responsible AI adoption&nbsp;</li></ul><p class="ql-align-justify">🔴 <strong>Watch now for a policymaker’s take on the future of AI governance!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://x.com/RepMikeKennedy" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/RepMikeKennedy</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 – Podcast Episode Highlights&nbsp;</p><p>02:00 – Congressman Mike’s Background &amp; Mission&nbsp;</p><p>04:30 – Bridging the Urban-Rural Digital Divide&nbsp;</p><p>07:15 – AI in Education &amp; Skilling&nbsp;</p><p>10:00 – Congressman Mike Kennedy Foundation &amp; Youth Empowerment&nbsp;</p><p>13:30 – The Role of AI in Public Service Delivery&nbsp;</p><p>16:45 – Policy Recommendations for AI &amp; Innovation&nbsp;</p><p>20:00 – Women &amp; AI: Breaking Barriers&nbsp;</p><p>23:15 – Global AI Standards &amp; India’s Role&nbsp;</p><p>26:00 – Final Message: Youth Are the Future&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Artificial Intelligence is evolving at an unprecedented pace—can policies and regulations keep up? In this episode of the RegulatingAI Podcast, Congressman Mike Kennedy shares his perspective on: The biggest AI governance challenges facing policyma...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[01aa1aeb-08ca-4720-8695-74634809bd76]]></guid>
  <title><![CDATA[Empowering Africa Through AI: Dr. Shikoh Gitau's Vision for Equitable AI Development | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p>🌍 How can AI drive Africa’s digital future while maintaining fairness and accountability?&nbsp;</p><p>In this insightful episode of <em>RegulatingAI</em>, <strong>Dr. Shikoh Gitau</strong>, CEO of <strong>Qhala</strong>, sits down with Sanjay Puri to discuss Africa's growing influence in the AI sector.&nbsp;</p><p>&nbsp;</p><p><strong>Highlights:</strong>&nbsp;</p><p>✔️ Why African voices must shape global AI regulations&nbsp;</p><p>✔️ The challenges of developing ethical AI frameworks&nbsp;</p><p>✔️ Qhala’s innovative approach to AI development and deployment&nbsp;</p><p>👀 <strong>Don't miss this conversation about the future of AI in Africa!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/shikoh/?original_referer=https%3A%2F%2Fwww%2Egoogle%2Ecom%2F&amp;originalSubdomain=ke" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/shikoh/</a>&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 – Podcast Episode Highlights&nbsp;</p><p>02:00 – Dr. Gitau’s Journey in AI &amp; Data Science&nbsp;</p><p>05:00 – AI’s Potential for Emerging Markets&nbsp;</p><p>08:30 – Barriers to AI Adoption in the Global South&nbsp;</p><p>12:00 – Ethical AI &amp; Data Sovereignty&nbsp;</p><p>15:30 – The Role of Governments &amp; Policymakers&nbsp;</p><p>18:45 – Innovation vs. Regulation: Striking the Balance&nbsp;</p><p>22:00 – AI in Healthcare &amp; Public Services&nbsp;</p><p>26:30 – AI &amp; Financial Inclusion&nbsp;</p><p>30:00 – The Future of AI in Africa&nbsp;</p><p>34:00 – Final Thoughts &amp; Takeaways</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/64b24eaf-5bb9-45d4-849a-9dad423c2b3b/38697e97d2.jpg" />
  <pubDate>Thu, 10 Apr 2025 07:07:35 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="45468939" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/64b24eaf-5bb9-45d4-849a-9dad423c2b3b/episode.mp3" />
  <itunes:title><![CDATA[Empowering Africa Through AI: Dr. Shikoh Gitau's Vision for Equitable AI Development | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>45:43</itunes:duration>
  <itunes:summary><![CDATA[<p>🌍 How can AI drive Africa’s digital future while maintaining fairness and accountability?&nbsp;</p><p>In this insightful episode of <em>RegulatingAI</em>, <strong>Dr. Shikoh Gitau</strong>, CEO of <strong>Qhala</strong>, sits down with Sanjay Puri to discuss Africa's growing influence in the AI sector.&nbsp;</p><p>&nbsp;</p><p><strong>Highlights:</strong>&nbsp;</p><p>✔️ Why African voices must shape global AI regulations&nbsp;</p><p>✔️ The challenges of developing ethical AI frameworks&nbsp;</p><p>✔️ Qhala’s innovative approach to AI development and deployment&nbsp;</p><p>👀 <strong>Don't miss this conversation about the future of AI in Africa!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/shikoh/?original_referer=https%3A%2F%2Fwww%2Egoogle%2Ecom%2F&amp;originalSubdomain=ke" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/shikoh/</a>&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 – Podcast Episode Highlights&nbsp;</p><p>02:00 – Dr. Gitau’s Journey in AI &amp; Data Science&nbsp;</p><p>05:00 – AI’s Potential for Emerging Markets&nbsp;</p><p>08:30 – Barriers to AI Adoption in the Global South&nbsp;</p><p>12:00 – Ethical AI &amp; Data Sovereignty&nbsp;</p><p>15:30 – The Role of Governments &amp; Policymakers&nbsp;</p><p>18:45 – Innovation vs. Regulation: Striking the Balance&nbsp;</p><p>22:00 – AI in Healthcare &amp; Public Services&nbsp;</p><p>26:30 – AI &amp; Financial Inclusion&nbsp;</p><p>30:00 – The Future of AI in Africa&nbsp;</p><p>34:00 – Final Thoughts &amp; Takeaways</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>🌍 How can AI drive Africa’s digital future while maintaining fairness and accountability?&nbsp;</p><p>In this insightful episode of <em>RegulatingAI</em>, <strong>Dr. Shikoh Gitau</strong>, CEO of <strong>Qhala</strong>, sits down with Sanjay Puri to discuss Africa's growing influence in the AI sector.&nbsp;</p><p>&nbsp;</p><p><strong>Highlights:</strong>&nbsp;</p><p>✔️ Why African voices must shape global AI regulations&nbsp;</p><p>✔️ The challenges of developing ethical AI frameworks&nbsp;</p><p>✔️ Qhala’s innovative approach to AI development and deployment&nbsp;</p><p>👀 <strong>Don't miss this conversation about the future of AI in Africa!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/shikoh/?original_referer=https%3A%2F%2Fwww%2Egoogle%2Ecom%2F&amp;originalSubdomain=ke" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/shikoh/</a>&nbsp;</p><p><br></p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 – Podcast Episode Highlights&nbsp;</p><p>02:00 – Dr. Gitau’s Journey in AI &amp; Data Science&nbsp;</p><p>05:00 – AI’s Potential for Emerging Markets&nbsp;</p><p>08:30 – Barriers to AI Adoption in the Global South&nbsp;</p><p>12:00 – Ethical AI &amp; Data Sovereignty&nbsp;</p><p>15:30 – The Role of Governments &amp; Policymakers&nbsp;</p><p>18:45 – Innovation vs. Regulation: Striking the Balance&nbsp;</p><p>22:00 – AI in Healthcare &amp; Public Services&nbsp;</p><p>26:30 – AI &amp; Financial Inclusion&nbsp;</p><p>30:00 – The Future of AI in Africa&nbsp;</p><p>34:00 – Final Thoughts &amp; Takeaways</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[🌍 How can AI drive Africa’s digital future while maintaining fairness and accountability? In this insightful episode of RegulatingAI, Dr. Shikoh Gitau, CEO of Qhala, sits down with Sanjay Puri to discuss Africa's growing influence in the AI sector....]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[1e6d336f-3d66-4831-ba9b-4b7090dd238d]]></guid>
  <title><![CDATA[Why the US Needs to Lead AI Innovation – Congressman Jake Auchincloss Speaks Out | Regulating AI Podcast]]></title>
  <description><![CDATA[<p>🎙 How AI Will Shape Public Policy and National Security – A Conversation with Congressman Jake Auchincloss&nbsp;</p><p><br></p><p>Join host Sanjay Puri in this insightful episode of the <strong>RegulatingAI Podcast</strong> as he sits down with <strong>Congressman Jake Auchincloss</strong> to discuss the intersection of AI, public policy, and national security. Discover why AI regulation matters and how the US can stay ahead in the global AI race.&nbsp;</p><p><br></p><p>&nbsp;</p><p> ✅ <strong>Key topics discussed:</strong>&nbsp;</p><ul><li>Why Congressman Auchincloss used AI-generated speech in Congress&nbsp;</li><li>How AI regulation should be industry-specific, not one-size-fits-all&nbsp;</li><li>Why the US should avoid following the EU’s approach to AI regulation&nbsp;</li><li>How AI will influence healthcare, defense, and financial services&nbsp;</li><li>The role of the US in leading global AI innovation&nbsp;</li></ul><p>👉 <strong>Watch Now:</strong> Don’t miss this important discussion on AI policy and leadership!&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://auchincloss.house.gov/" target="_blank" style="color: rgb(70, 120, 134);">https://auchincloss.house.gov/</a>&nbsp;&nbsp;</p><p><br></p><p class="ql-align-justify"><a href="https://x.com/RepAuchincloss" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/RepAuchincloss</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 – Podcast Episode Highlights&nbsp;</p><p>01:30 – Why Use AI in Congress?&nbsp;</p><p>03:10 – Industry-Specific Regulation vs. Comprehensive Laws&nbsp;</p><p>06:00 – Critique of the EU AI Act&nbsp;</p><p>08:30 – Outcomes-Based Regulation Explained&nbsp;</p><p>12:00 – Gaps in Current U.S. Law&nbsp;</p><p>14:40 – Democratizing Access to AI&nbsp;</p><p>17:00 – Reforming Section 230&nbsp;</p><p>20:30 – Deepfake Legislation: The Intimate Privacy Protection Act&nbsp;</p><p>23:00 – The Three Pillars of AI Innovation&nbsp;</p><p>26:30 – The Rise of ‘Acquihires’ &amp; Antitrust Loopholes&nbsp;</p><p>30:00 – National Security &amp; the China Challenge&nbsp;</p><p>33:45 – AI &amp; Energy: The Nuclear Opportunity&nbsp;</p><p>36:30 – AI + Robotics = Future Defense&nbsp;</p><p>40:00 – Export Controls Aren’t Enough&nbsp;</p><p>43:00 – Rebuilding Global Trade Leadership&nbsp;</p><p>46:00 – AI Policy in the Next Congress&nbsp;</p><p>48:20 – The Deepfake Bill &amp; Bipartisan Momentum&nbsp;</p><p>50:30 – Keeping Up with AI’s Pace&nbsp;</p><p>53:00 – Open Source vs. Proprietary AI&nbsp;</p><p>54:00 – Final Advice: Support Local News&nbsp;</p><p>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/49dadc91-c3de-476c-9a6e-1c2caf6e0cee/dbc697e3e4.jpg" />
  <pubDate>Fri, 04 Apr 2025 12:23:52 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="32362205" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/49dadc91-c3de-476c-9a6e-1c2caf6e0cee/episode.mp3" />
  <itunes:title><![CDATA[Why the US Needs to Lead AI Innovation – Congressman Jake Auchincloss Speaks Out | Regulating AI Podcast]]></itunes:title>
  <itunes:duration>33:42</itunes:duration>
  <itunes:summary><![CDATA[<p>🎙 How AI Will Shape Public Policy and National Security – A Conversation with Congressman Jake Auchincloss&nbsp;</p><p><br></p><p>Join host Sanjay Puri in this insightful episode of the <strong>RegulatingAI Podcast</strong> as he sits down with <strong>Congressman Jake Auchincloss</strong> to discuss the intersection of AI, public policy, and national security. Discover why AI regulation matters and how the US can stay ahead in the global AI race.&nbsp;</p><p><br></p><p>&nbsp;</p><p> ✅ <strong>Key topics discussed:</strong>&nbsp;</p><ul><li>Why Congressman Auchincloss used AI-generated speech in Congress&nbsp;</li><li>How AI regulation should be industry-specific, not one-size-fits-all&nbsp;</li><li>Why the US should avoid following the EU’s approach to AI regulation&nbsp;</li><li>How AI will influence healthcare, defense, and financial services&nbsp;</li><li>The role of the US in leading global AI innovation&nbsp;</li></ul><p>👉 <strong>Watch Now:</strong> Don’t miss this important discussion on AI policy and leadership!&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://auchincloss.house.gov/" target="_blank" style="color: rgb(70, 120, 134);">https://auchincloss.house.gov/</a>&nbsp;&nbsp;</p><p><br></p><p class="ql-align-justify"><a href="https://x.com/RepAuchincloss" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/RepAuchincloss</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 – Podcast Episode Highlights&nbsp;</p><p>01:30 – Why Use AI in Congress?&nbsp;</p><p>03:10 – Industry-Specific Regulation vs. Comprehensive Laws&nbsp;</p><p>06:00 – Critique of the EU AI Act&nbsp;</p><p>08:30 – Outcomes-Based Regulation Explained&nbsp;</p><p>12:00 – Gaps in Current U.S. Law&nbsp;</p><p>14:40 – Democratizing Access to AI&nbsp;</p><p>17:00 – Reforming Section 230&nbsp;</p><p>20:30 – Deepfake Legislation: The Intimate Privacy Protection Act&nbsp;</p><p>23:00 – The Three Pillars of AI Innovation&nbsp;</p><p>26:30 – The Rise of ‘Acquihires’ &amp; Antitrust Loopholes&nbsp;</p><p>30:00 – National Security &amp; the China Challenge&nbsp;</p><p>33:45 – AI &amp; Energy: The Nuclear Opportunity&nbsp;</p><p>36:30 – AI + Robotics = Future Defense&nbsp;</p><p>40:00 – Export Controls Aren’t Enough&nbsp;</p><p>43:00 – Rebuilding Global Trade Leadership&nbsp;</p><p>46:00 – AI Policy in the Next Congress&nbsp;</p><p>48:20 – The Deepfake Bill &amp; Bipartisan Momentum&nbsp;</p><p>50:30 – Keeping Up with AI’s Pace&nbsp;</p><p>53:00 – Open Source vs. Proprietary AI&nbsp;</p><p>54:00 – Final Advice: Support Local News&nbsp;</p><p>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>🎙 How AI Will Shape Public Policy and National Security – A Conversation with Congressman Jake Auchincloss&nbsp;</p><p><br></p><p>Join host Sanjay Puri in this insightful episode of the <strong>RegulatingAI Podcast</strong> as he sits down with <strong>Congressman Jake Auchincloss</strong> to discuss the intersection of AI, public policy, and national security. Discover why AI regulation matters and how the US can stay ahead in the global AI race.&nbsp;</p><p><br></p><p>&nbsp;</p><p> ✅ <strong>Key topics discussed:</strong>&nbsp;</p><ul><li>Why Congressman Auchincloss used AI-generated speech in Congress&nbsp;</li><li>How AI regulation should be industry-specific, not one-size-fits-all&nbsp;</li><li>Why the US should avoid following the EU’s approach to AI regulation&nbsp;</li><li>How AI will influence healthcare, defense, and financial services&nbsp;</li><li>The role of the US in leading global AI innovation&nbsp;</li></ul><p>👉 <strong>Watch Now:</strong> Don’t miss this important discussion on AI policy and leadership!&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://auchincloss.house.gov/" target="_blank" style="color: rgb(70, 120, 134);">https://auchincloss.house.gov/</a>&nbsp;&nbsp;</p><p><br></p><p class="ql-align-justify"><a href="https://x.com/RepAuchincloss" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/RepAuchincloss</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 – Podcast Episode Highlights&nbsp;</p><p>01:30 – Why Use AI in Congress?&nbsp;</p><p>03:10 – Industry-Specific Regulation vs. Comprehensive Laws&nbsp;</p><p>06:00 – Critique of the EU AI Act&nbsp;</p><p>08:30 – Outcomes-Based Regulation Explained&nbsp;</p><p>12:00 – Gaps in Current U.S. Law&nbsp;</p><p>14:40 – Democratizing Access to AI&nbsp;</p><p>17:00 – Reforming Section 230&nbsp;</p><p>20:30 – Deepfake Legislation: The Intimate Privacy Protection Act&nbsp;</p><p>23:00 – The Three Pillars of AI Innovation&nbsp;</p><p>26:30 – The Rise of ‘Acquihires’ &amp; Antitrust Loopholes&nbsp;</p><p>30:00 – National Security &amp; the China Challenge&nbsp;</p><p>33:45 – AI &amp; Energy: The Nuclear Opportunity&nbsp;</p><p>36:30 – AI + Robotics = Future Defense&nbsp;</p><p>40:00 – Export Controls Aren’t Enough&nbsp;</p><p>43:00 – Rebuilding Global Trade Leadership&nbsp;</p><p>46:00 – AI Policy in the Next Congress&nbsp;</p><p>48:20 – The Deepfake Bill &amp; Bipartisan Momentum&nbsp;</p><p>50:30 – Keeping Up with AI’s Pace&nbsp;</p><p>53:00 – Open Source vs. Proprietary AI&nbsp;</p><p>54:00 – Final Advice: Support Local News&nbsp;</p><p>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[🎙 How AI Will Shape Public Policy and National Security – A Conversation with Congressman Jake Auchincloss Join host Sanjay Puri in this insightful episode of the RegulatingAI Podcast as he sits down with Congressman Jake Auchincloss to discuss the...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[ebee31d2-3607-4494-8da1-802553e8bef2]]></guid>
  <title><![CDATA[Congressman Gabe Amo on AI Policy and the Future of Responsible Regulation | RegulatingAI Podcast ]]></title>
  <description><![CDATA[<p>🎙 RegulatingAI Podcast | Congressman Gabe Amo on Public Service and AI Regulation&nbsp;</p><p>In this episode of the RegulatingAI Podcast, host Sanjay Puri welcomes Congressman Gabe Amo to discuss his journey in public service and how it connects with AI regulation. Congressman Amo shares valuable insights on:&nbsp;</p><ul><li>The importance of fair and responsible AI policies&nbsp;</li><li>Challenges in regulating AI while ensuring innovation&nbsp;</li><li>His experience in Rhode Island and Washington, DC, shaping AI governance&nbsp;</li></ul><p>🎧 <strong>Watch now</strong> to learn how Congressman Amo is working to balance technological growth with ethical guidelines.&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://amo.house.gov/about" target="_blank" style="color: rgb(70, 120, 134);">https://amo.house.gov/about</a>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/9a465257-d832-4b00-aa8b-988600181268/f473895da7.jpg" />
  <pubDate>Mon, 31 Mar 2025 10:46:35 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="30409082" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/9a465257-d832-4b00-aa8b-988600181268/episode.mp3" />
  <itunes:title><![CDATA[Congressman Gabe Amo on AI Policy and the Future of Responsible Regulation | RegulatingAI Podcast ]]></itunes:title>
  <itunes:duration>31:40</itunes:duration>
  <itunes:summary><![CDATA[<p>🎙 RegulatingAI Podcast | Congressman Gabe Amo on Public Service and AI Regulation&nbsp;</p><p>In this episode of the RegulatingAI Podcast, host Sanjay Puri welcomes Congressman Gabe Amo to discuss his journey in public service and how it connects with AI regulation. Congressman Amo shares valuable insights on:&nbsp;</p><ul><li>The importance of fair and responsible AI policies&nbsp;</li><li>Challenges in regulating AI while ensuring innovation&nbsp;</li><li>His experience in Rhode Island and Washington, DC, shaping AI governance&nbsp;</li></ul><p>🎧 <strong>Watch now</strong> to learn how Congressman Amo is working to balance technological growth with ethical guidelines.&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://amo.house.gov/about" target="_blank" style="color: rgb(70, 120, 134);">https://amo.house.gov/about</a>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>🎙 RegulatingAI Podcast | Congressman Gabe Amo on Public Service and AI Regulation&nbsp;</p><p>In this episode of the RegulatingAI Podcast, host Sanjay Puri welcomes Congressman Gabe Amo to discuss his journey in public service and how it connects with AI regulation. Congressman Amo shares valuable insights on:&nbsp;</p><ul><li>The importance of fair and responsible AI policies&nbsp;</li><li>Challenges in regulating AI while ensuring innovation&nbsp;</li><li>His experience in Rhode Island and Washington, DC, shaping AI governance&nbsp;</li></ul><p>🎧 <strong>Watch now</strong> to learn how Congressman Amo is working to balance technological growth with ethical guidelines.&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://amo.house.gov/about" target="_blank" style="color: rgb(70, 120, 134);">https://amo.house.gov/about</a>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[🎙 RegulatingAI Podcast | Congressman Gabe Amo on Public Service and AI Regulation  In this episode of the RegulatingAI Podcast, host Sanjay Puri welcomes Congressman Gabe Amo to discuss his journey in public service and how it connects with AI regu...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[0d3862fe-4168-4a56-a96c-e509d14dcbd8]]></guid>
  <title><![CDATA[Bipartisan AI Regulation and the Future of AI with Congressman Ted Lieu | Regulating AI Podcast ]]></title>
  <description><![CDATA[<p> In this episode of the <strong>RegulatingAI Podcast</strong>, Sanjay Puri sits down with <strong>Congressman Ted Lieu</strong> to explore how bipartisan efforts are shaping the future of AI regulation. Congressman Lieu shares valuable insights on:&nbsp;</p><p><br></p><ul><li>Why bipartisan collaboration is essential for AI governance&nbsp;</li><li>Striking a balance between innovation and responsible regulation&nbsp;</li><li>The challenges and opportunities of AI in national security and privacy&nbsp;</li></ul><p>📺 <strong>Watch now</strong> and discover how AI policy decisions today will impact the future!&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://lieu.house.gov/" target="_blank" style="color: rgb(70, 120, 134);">https://lieu.house.gov/</a>&nbsp;</p><p class="ql-align-justify"><a href="https://x.com/RepTedLieu" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/RepTedLieu</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 – Introduction&nbsp;</p><p>01:15 – Why AI Regulation is Urgent Now&nbsp;</p><p>03:30 – National Security &amp; AI&nbsp;</p><p>06:10 – Balancing Innovation with Guardrails&nbsp;</p><p>08:30 – Who Should Be Regulated?&nbsp;</p><p>10:45 – The Role of Congress in a Fast-Moving Field&nbsp;</p><p>13:00 – The Need for a Federal AI Agency&nbsp;</p><p>15:20 – AI &amp; Democratic Values&nbsp;</p><p>17:45 – Risks of Deepfakes &amp; Misinformation&nbsp;</p><p>20:15 – Ensuring Equity &amp; Preventing Bias&nbsp;</p><p>22:30 – The Role of Public Engagement in AI Governance&nbsp;</p><p>24:30 – Final Thoughts &amp; A Call to Action&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/d3f903cb-a2dc-4559-af7e-8366706be316/4c4bcf72cd.jpg" />
  <pubDate>Fri, 28 Mar 2025 12:25:18 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="34523472" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/d3f903cb-a2dc-4559-af7e-8366706be316/episode.mp3" />
  <itunes:title><![CDATA[Bipartisan AI Regulation and the Future of AI with Congressman Ted Lieu | Regulating AI Podcast ]]></itunes:title>
  <itunes:duration>35:57</itunes:duration>
  <itunes:summary><![CDATA[<p> In this episode of the <strong>RegulatingAI Podcast</strong>, Sanjay Puri sits down with <strong>Congressman Ted Lieu</strong> to explore how bipartisan efforts are shaping the future of AI regulation. Congressman Lieu shares valuable insights on:&nbsp;</p><p><br></p><ul><li>Why bipartisan collaboration is essential for AI governance&nbsp;</li><li>Striking a balance between innovation and responsible regulation&nbsp;</li><li>The challenges and opportunities of AI in national security and privacy&nbsp;</li></ul><p>📺 <strong>Watch now</strong> and discover how AI policy decisions today will impact the future!&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://lieu.house.gov/" target="_blank" style="color: rgb(70, 120, 134);">https://lieu.house.gov/</a>&nbsp;</p><p class="ql-align-justify"><a href="https://x.com/RepTedLieu" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/RepTedLieu</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 – Introduction&nbsp;</p><p>01:15 – Why AI Regulation is Urgent Now&nbsp;</p><p>03:30 – National Security &amp; AI&nbsp;</p><p>06:10 – Balancing Innovation with Guardrails&nbsp;</p><p>08:30 – Who Should Be Regulated?&nbsp;</p><p>10:45 – The Role of Congress in a Fast-Moving Field&nbsp;</p><p>13:00 – The Need for a Federal AI Agency&nbsp;</p><p>15:20 – AI &amp; Democratic Values&nbsp;</p><p>17:45 – Risks of Deepfakes &amp; Misinformation&nbsp;</p><p>20:15 – Ensuring Equity &amp; Preventing Bias&nbsp;</p><p>22:30 – The Role of Public Engagement in AI Governance&nbsp;</p><p>24:30 – Final Thoughts &amp; A Call to Action&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p> In this episode of the <strong>RegulatingAI Podcast</strong>, Sanjay Puri sits down with <strong>Congressman Ted Lieu</strong> to explore how bipartisan efforts are shaping the future of AI regulation. Congressman Lieu shares valuable insights on:&nbsp;</p><p><br></p><ul><li>Why bipartisan collaboration is essential for AI governance&nbsp;</li><li>Striking a balance between innovation and responsible regulation&nbsp;</li><li>The challenges and opportunities of AI in national security and privacy&nbsp;</li></ul><p>📺 <strong>Watch now</strong> and discover how AI policy decisions today will impact the future!&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://lieu.house.gov/" target="_blank" style="color: rgb(70, 120, 134);">https://lieu.house.gov/</a>&nbsp;</p><p class="ql-align-justify"><a href="https://x.com/RepTedLieu" target="_blank" style="color: rgb(70, 120, 134);">https://x.com/RepTedLieu</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 – Introduction&nbsp;</p><p>01:15 – Why AI Regulation is Urgent Now&nbsp;</p><p>03:30 – National Security &amp; AI&nbsp;</p><p>06:10 – Balancing Innovation with Guardrails&nbsp;</p><p>08:30 – Who Should Be Regulated?&nbsp;</p><p>10:45 – The Role of Congress in a Fast-Moving Field&nbsp;</p><p>13:00 – The Need for a Federal AI Agency&nbsp;</p><p>15:20 – AI &amp; Democratic Values&nbsp;</p><p>17:45 – Risks of Deepfakes &amp; Misinformation&nbsp;</p><p>20:15 – Ensuring Equity &amp; Preventing Bias&nbsp;</p><p>22:30 – The Role of Public Engagement in AI Governance&nbsp;</p><p>24:30 – Final Thoughts &amp; A Call to Action&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[ In this episode of the RegulatingAI Podcast, Sanjay Puri sits down with Congressman Ted Lieu to explore how bipartisan efforts are shaping the future of AI regulation. Congressman Lieu shares valuable insights on: Why bipartisan collaboration is e...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[a0f93f43-4e82-4bb8-ab60-c1031c029868]]></guid>
  <title><![CDATA[The Intersection of AI, Regulation, and Economic Growth with David Schweikert | Regulating AI Podcast ]]></title>
  <description><![CDATA[<p>🌍 As AI reshapes the global landscape, <strong>how should governments respond?</strong> In this thought-provoking episode of <em>Regulating AI</em>, Sanjay Puri engages in a deep dive with <strong>Congressman David Schweikert</strong> on:&nbsp;</p><p><br></p><ul><li>The <strong>critical role of AI regulation</strong> in national security &amp; economic resilience&nbsp;</li><li>How AI can <strong>bridge gaps in public policy</strong> rather than widen them&nbsp;</li><li>The <strong>biggest ethical concerns</strong> surrounding AI implementation&nbsp;</li><li>The <strong>future of AI governance in the U.S. and beyond</strong>&nbsp;</li></ul><p>🚀 <strong>Stay ahead of the curve—hit play and join the conversation!</strong>&nbsp;</p><p><br></p><p>👍 <strong>Like, comment, and subscribe for more AI policy insights!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/david-schweikert-54ab0218/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/david-schweikert-54ab0218/</a>&nbsp;</p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/ab3d00b3-2095-4e11-a57f-f59db031bc3c/41680eac63.jpg" />
  <pubDate>Wed, 26 Mar 2025 09:54:47 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="47493582" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/ab3d00b3-2095-4e11-a57f-f59db031bc3c/episode.mp3" />
  <itunes:title><![CDATA[The Intersection of AI, Regulation, and Economic Growth with David Schweikert | Regulating AI Podcast ]]></itunes:title>
  <itunes:duration>49:28</itunes:duration>
  <itunes:summary><![CDATA[<p>🌍 As AI reshapes the global landscape, <strong>how should governments respond?</strong> In this thought-provoking episode of <em>Regulating AI</em>, Sanjay Puri engages in a deep dive with <strong>Congressman David Schweikert</strong> on:&nbsp;</p><p><br></p><ul><li>The <strong>critical role of AI regulation</strong> in national security &amp; economic resilience&nbsp;</li><li>How AI can <strong>bridge gaps in public policy</strong> rather than widen them&nbsp;</li><li>The <strong>biggest ethical concerns</strong> surrounding AI implementation&nbsp;</li><li>The <strong>future of AI governance in the U.S. and beyond</strong>&nbsp;</li></ul><p>🚀 <strong>Stay ahead of the curve—hit play and join the conversation!</strong>&nbsp;</p><p><br></p><p>👍 <strong>Like, comment, and subscribe for more AI policy insights!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/david-schweikert-54ab0218/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/david-schweikert-54ab0218/</a>&nbsp;</p><p><br></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>🌍 As AI reshapes the global landscape, <strong>how should governments respond?</strong> In this thought-provoking episode of <em>Regulating AI</em>, Sanjay Puri engages in a deep dive with <strong>Congressman David Schweikert</strong> on:&nbsp;</p><p><br></p><ul><li>The <strong>critical role of AI regulation</strong> in national security &amp; economic resilience&nbsp;</li><li>How AI can <strong>bridge gaps in public policy</strong> rather than widen them&nbsp;</li><li>The <strong>biggest ethical concerns</strong> surrounding AI implementation&nbsp;</li><li>The <strong>future of AI governance in the U.S. and beyond</strong>&nbsp;</li></ul><p>🚀 <strong>Stay ahead of the curve—hit play and join the conversation!</strong>&nbsp;</p><p><br></p><p>👍 <strong>Like, comment, and subscribe for more AI policy insights!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/david-schweikert-54ab0218/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/david-schweikert-54ab0218/</a>&nbsp;</p><p><br></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[🌍 As AI reshapes the global landscape, how should governments respond? In this thought-provoking episode of Regulating AI, Sanjay Puri engages in a deep dive with Congressman David Schweikert on: The critical role of AI regulation in national secur...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[06ea8b38-3b7e-498b-9a06-be2158816057]]></guid>
  <title><![CDATA[AI and International Law Governing Armed Conflict: Challenges and Future Implications by Jonathan Horowitz | Regulating AI Podcast ]]></title>
  <description><![CDATA[<p>In this episode of <em>RegulatingAI</em>, host Sanjay Puri sits down with Jonathan Horowitz, Legal Advisor at the International Committee of the Red Cross, to explore the complex legal landscape surrounding AI regulation in situations of armed conflict. Jonathan shares his deep insights into how AI intersects with international law and war.&nbsp;</p><p><br></p><p>Key Highlights:&nbsp;</p><p><br></p><ul><li>The growing influence of AI on global legal frameworks&nbsp;</li><li>How the laws that govern armed conflict are shaping AI governance&nbsp;</li><li>Challenges faced by governments and institutions in regulating AI&nbsp;</li><li>Why balancing innovation with legal and ethical AI development is critical&nbsp;</li></ul><p>👉 Watch now to uncover expert insights on the future of AI regulation!&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><br></p><p><a href="https://www.linkedin.com/in/jonathan-horowitz-b78b6026/" target="_blank" style="color: rgb(15, 158, 213);">https://www.linkedin.com/in/jonathan-horowitz-b78b6026/</a><span style="color: rgb(15, 158, 213);">&nbsp;</span></p><p><br></p><p>ICRC Position Paper: Artificial intelligence and machine learning in armed conflict: A human-centred approach | International Review of the Red Cross: <a href="https://international-review.icrc.org/articles/ai-and-machine-learning-in-armed-conflict-a-human-centred-approach-913" target="_blank" style="color: rgb(15, 158, 213);">https://international-review.icrc.org/articles/ai-and-machine-learning-in-armed-conflict-a-human-centred-approach-913</a><span style="color: rgb(15, 158, 213);">&nbsp;</span></p><p><br></p><p>What you need to know about artificial intelligence and armed conflict: <a href="https://www.icrc.org/en/document/what-you-need-know-about-artificial-intelligence-armed-conflict" target="_blank" style="color: rgb(15, 158, 213);">https://www.icrc.org/en/document/what-you-need-know-about-artificial-intelligence-armed-conflict</a><span style="color: rgb(15, 158, 213);">&nbsp;</span></p><p><br></p><p>Expert Consultation report – Artificial intelligence and Related Technologies in Military Decision-Making on the Use of Force in Armed conflicts: Current Developments and Potential Implications:<span style="color: rgb(15, 158, 213);"> </span><a href="https://shop.icrc.org/expert-consultation-report-artificial-intelligence-and-related-technologies-in-military-decision-making-on-the-use-of-force-in-armed-conflicts-current-developments-and-potential-implications-pdf-en.html" target="_blank" style="color: rgb(15, 158, 213);">https://shop.icrc.org/expert-consultation-report-artificial-intelligence-and-related-technologies-in-military-decision-making-on-the-use-of-force-in-armed-conflicts-current-developments-and-potential-implications-pdf-en.html</a><span style="color: rgb(15, 158, 213);">&nbsp;</span></p><p><br></p><p>Decisions, Decisions, Decisions: computation and Artificial Intelligence in military decision-making: <a href="https://shop.icrc.org/decisions-decisions-decisions-computation-and-artificial-intelligence-in-military-decision-making-pdf-en.html" target="_blank" style="color: rgb(15, 158, 213);">https://shop.icrc.org/decisions-decisions-decisions-computation-and-artificial-intelligence-in-military-decision-making-pdf-en.html</a><span style="color: rgb(15, 158, 213);">&nbsp;</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/94314ea5-a0d7-4dc1-b712-b5966c87c5a1/9d7b08e35c.jpg" />
  <pubDate>Tue, 25 Mar 2025 14:47:35 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="57888644" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/94314ea5-a0d7-4dc1-b712-b5966c87c5a1/episode.mp3" />
  <itunes:title><![CDATA[AI and International Law Governing Armed Conflict: Challenges and Future Implications by Jonathan Horowitz | Regulating AI Podcast ]]></itunes:title>
  <itunes:duration>1:00:18</itunes:duration>
  <itunes:summary><![CDATA[<p>In this episode of <em>RegulatingAI</em>, host Sanjay Puri sits down with Jonathan Horowitz, Legal Advisor at the International Committee of the Red Cross, to explore the complex legal landscape surrounding AI regulation in situations of armed conflict. Jonathan shares his deep insights into how AI intersects with international law and war.&nbsp;</p><p><br></p><p>Key Highlights:&nbsp;</p><p><br></p><ul><li>The growing influence of AI on global legal frameworks&nbsp;</li><li>How the laws that govern armed conflict are shaping AI governance&nbsp;</li><li>Challenges faced by governments and institutions in regulating AI&nbsp;</li><li>Why balancing innovation with legal and ethical AI development is critical&nbsp;</li></ul><p>👉 Watch now to uncover expert insights on the future of AI regulation!&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><br></p><p><a href="https://www.linkedin.com/in/jonathan-horowitz-b78b6026/" target="_blank" style="color: rgb(15, 158, 213);">https://www.linkedin.com/in/jonathan-horowitz-b78b6026/</a><span style="color: rgb(15, 158, 213);">&nbsp;</span></p><p><br></p><p>ICRC Position Paper: Artificial intelligence and machine learning in armed conflict: A human-centred approach | International Review of the Red Cross: <a href="https://international-review.icrc.org/articles/ai-and-machine-learning-in-armed-conflict-a-human-centred-approach-913" target="_blank" style="color: rgb(15, 158, 213);">https://international-review.icrc.org/articles/ai-and-machine-learning-in-armed-conflict-a-human-centred-approach-913</a><span style="color: rgb(15, 158, 213);">&nbsp;</span></p><p><br></p><p>What you need to know about artificial intelligence and armed conflict: <a href="https://www.icrc.org/en/document/what-you-need-know-about-artificial-intelligence-armed-conflict" target="_blank" style="color: rgb(15, 158, 213);">https://www.icrc.org/en/document/what-you-need-know-about-artificial-intelligence-armed-conflict</a><span style="color: rgb(15, 158, 213);">&nbsp;</span></p><p><br></p><p>Expert Consultation report – Artificial intelligence and Related Technologies in Military Decision-Making on the Use of Force in Armed conflicts: Current Developments and Potential Implications:<span style="color: rgb(15, 158, 213);"> </span><a href="https://shop.icrc.org/expert-consultation-report-artificial-intelligence-and-related-technologies-in-military-decision-making-on-the-use-of-force-in-armed-conflicts-current-developments-and-potential-implications-pdf-en.html" target="_blank" style="color: rgb(15, 158, 213);">https://shop.icrc.org/expert-consultation-report-artificial-intelligence-and-related-technologies-in-military-decision-making-on-the-use-of-force-in-armed-conflicts-current-developments-and-potential-implications-pdf-en.html</a><span style="color: rgb(15, 158, 213);">&nbsp;</span></p><p><br></p><p>Decisions, Decisions, Decisions: computation and Artificial Intelligence in military decision-making: <a href="https://shop.icrc.org/decisions-decisions-decisions-computation-and-artificial-intelligence-in-military-decision-making-pdf-en.html" target="_blank" style="color: rgb(15, 158, 213);">https://shop.icrc.org/decisions-decisions-decisions-computation-and-artificial-intelligence-in-military-decision-making-pdf-en.html</a><span style="color: rgb(15, 158, 213);">&nbsp;</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this episode of <em>RegulatingAI</em>, host Sanjay Puri sits down with Jonathan Horowitz, Legal Advisor at the International Committee of the Red Cross, to explore the complex legal landscape surrounding AI regulation in situations of armed conflict. Jonathan shares his deep insights into how AI intersects with international law and war.&nbsp;</p><p><br></p><p>Key Highlights:&nbsp;</p><p><br></p><ul><li>The growing influence of AI on global legal frameworks&nbsp;</li><li>How the laws that govern armed conflict are shaping AI governance&nbsp;</li><li>Challenges faced by governments and institutions in regulating AI&nbsp;</li><li>Why balancing innovation with legal and ethical AI development is critical&nbsp;</li></ul><p>👉 Watch now to uncover expert insights on the future of AI regulation!&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><br></p><p><a href="https://www.linkedin.com/in/jonathan-horowitz-b78b6026/" target="_blank" style="color: rgb(15, 158, 213);">https://www.linkedin.com/in/jonathan-horowitz-b78b6026/</a><span style="color: rgb(15, 158, 213);">&nbsp;</span></p><p><br></p><p>ICRC Position Paper: Artificial intelligence and machine learning in armed conflict: A human-centred approach | International Review of the Red Cross: <a href="https://international-review.icrc.org/articles/ai-and-machine-learning-in-armed-conflict-a-human-centred-approach-913" target="_blank" style="color: rgb(15, 158, 213);">https://international-review.icrc.org/articles/ai-and-machine-learning-in-armed-conflict-a-human-centred-approach-913</a><span style="color: rgb(15, 158, 213);">&nbsp;</span></p><p><br></p><p>What you need to know about artificial intelligence and armed conflict: <a href="https://www.icrc.org/en/document/what-you-need-know-about-artificial-intelligence-armed-conflict" target="_blank" style="color: rgb(15, 158, 213);">https://www.icrc.org/en/document/what-you-need-know-about-artificial-intelligence-armed-conflict</a><span style="color: rgb(15, 158, 213);">&nbsp;</span></p><p><br></p><p>Expert Consultation report – Artificial intelligence and Related Technologies in Military Decision-Making on the Use of Force in Armed conflicts: Current Developments and Potential Implications:<span style="color: rgb(15, 158, 213);"> </span><a href="https://shop.icrc.org/expert-consultation-report-artificial-intelligence-and-related-technologies-in-military-decision-making-on-the-use-of-force-in-armed-conflicts-current-developments-and-potential-implications-pdf-en.html" target="_blank" style="color: rgb(15, 158, 213);">https://shop.icrc.org/expert-consultation-report-artificial-intelligence-and-related-technologies-in-military-decision-making-on-the-use-of-force-in-armed-conflicts-current-developments-and-potential-implications-pdf-en.html</a><span style="color: rgb(15, 158, 213);">&nbsp;</span></p><p><br></p><p>Decisions, Decisions, Decisions: computation and Artificial Intelligence in military decision-making: <a href="https://shop.icrc.org/decisions-decisions-decisions-computation-and-artificial-intelligence-in-military-decision-making-pdf-en.html" target="_blank" style="color: rgb(15, 158, 213);">https://shop.icrc.org/decisions-decisions-decisions-computation-and-artificial-intelligence-in-military-decision-making-pdf-en.html</a><span style="color: rgb(15, 158, 213);">&nbsp;</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of RegulatingAI, host Sanjay Puri sits down with Jonathan Horowitz, Legal Advisor at the International Committee of the Red Cross, to explore the complex legal landscape surrounding AI regulation in situations of armed conflict. Jo...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[738690b6-ec1e-441c-83aa-bcec9857ae32]]></guid>
  <title><![CDATA[The Future of Responsible AI with Dr. Richard Benjamins | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p>AI is shaping the future, but <strong>who ensures it remains responsible and ethical?</strong>&nbsp;</p><p><br></p><p>In this compelling conversation, <strong>Dr. Richard Benjamins</strong> shares insights on:&nbsp;</p><p><br></p><p>✔️ His work in AI ethics and policy advocacy&nbsp;</p><p> ✔️ How companies like Telefonica approach responsible AI&nbsp;</p><p> ✔️ The role of international regulatory bodies in AI governance&nbsp;</p><p> ✔️ Key challenges in enforcing AI compliance across industries&nbsp;</p><p>🌍 As AI adoption accelerates, <strong>ensuring ethical oversight is more critical than ever</strong>. This episode provides essential insights for business leaders, policymakers, and AI enthusiasts.&nbsp;</p><p><br></p><p>📢 <strong>Tune in to gain expert perspectives on responsible AI!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/richard-benjamins/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/richard-benjamins/</a><span style="color: rgb(70, 120, 134);">&nbsp;</span></p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 - Podcast Episode Highlights&nbsp;</p><p>02:00 - Richard’s Journey into AI Ethics&nbsp;</p><p>05:15 - Defining Responsible AI&nbsp;</p><p>08:30 - Global AI Regulation: Differences &amp; Challenges&nbsp;</p><p>12:50 - Need for International Collaboration in AI Regulation&nbsp;</p><p>17:20 - AI Ethics Boards &amp; Their Role in Companies&nbsp;</p><p>22:10 - Who Should Implement AI Ethics in a Company?&nbsp;</p><p>26:40 - Addressing Bias &amp; Privacy in AI Development&nbsp;</p><p>31:50 - AI’s Impact on Smaller Languages &amp; Cultures&nbsp;</p><p>36:20 - 
The Monopoly Risk in AI&nbsp;</p><p>41:00 - Ethical Principles for AI Development&nbsp;</p><p>46:00 - Future-Proofing AI Regulation&nbsp;</p><p>50:10 - AI’s Disruptive Impact on Jobs &amp; Workforce&nbsp;</p><p>55:20 - Final Advice for Policymakers &amp; Researchers&nbsp;</p><p>58:00 - Closing Remarks&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/262c55be-a7b4-4738-b5cf-2dbb75d86376/f3ac191eb6.jpg" />
  <pubDate>Fri, 21 Mar 2025 05:44:02 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="44471319" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/262c55be-a7b4-4738-b5cf-2dbb75d86376/episode.mp3" />
  <itunes:title><![CDATA[The Future of Responsible AI with Dr. Richard Benjamins | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>46:19</itunes:duration>
  <itunes:summary><![CDATA[<p>AI is shaping the future, but <strong>who ensures it remains responsible and ethical?</strong>&nbsp;</p><p><br></p><p>In this compelling conversation, <strong>Dr. Richard Benjamins</strong> shares insights on:&nbsp;</p><p><br></p><p>✔️ His work in AI ethics and policy advocacy&nbsp;</p><p> ✔️ How companies like Telefonica approach responsible AI&nbsp;</p><p> ✔️ The role of international regulatory bodies in AI governance&nbsp;</p><p> ✔️ Key challenges in enforcing AI compliance across industries&nbsp;</p><p>🌍 As AI adoption accelerates, <strong>ensuring ethical oversight is more critical than ever</strong>. This episode provides essential insights for business leaders, policymakers, and AI enthusiasts.&nbsp;</p><p><br></p><p>📢 <strong>Tune in to gain expert perspectives on responsible AI!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/richard-benjamins/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/richard-benjamins/</a><span style="color: rgb(70, 120, 134);">&nbsp;</span></p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 - Podcast Episode Highlights&nbsp;</p><p>02:00 - Richard’s Journey into AI Ethics&nbsp;</p><p>05:15 - Defining Responsible AI&nbsp;</p><p>08:30 - Global AI Regulation: Differences &amp; Challenges&nbsp;</p><p>12:50 - Need for International Collaboration in AI Regulation&nbsp;</p><p>17:20 - AI Ethics Boards &amp; Their Role in Companies&nbsp;</p><p>22:10 - Who Should Implement AI Ethics in a Company?&nbsp;</p><p>26:40 - Addressing Bias &amp; Privacy in AI Development&nbsp;</p><p>31:50 - AI’s Impact on Smaller Languages &amp; Cultures&nbsp;</p><p>36:20 - 
The Monopoly Risk in AI&nbsp;</p><p>41:00 - Ethical Principles for AI Development&nbsp;</p><p>46:00 - Future-Proofing AI Regulation&nbsp;</p><p>50:10 - AI’s Disruptive Impact on Jobs &amp; Workforce&nbsp;</p><p>55:20 - Final Advice for Policymakers &amp; Researchers&nbsp;</p><p>58:00 - Closing Remarks&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>AI is shaping the future, but <strong>who ensures it remains responsible and ethical?</strong>&nbsp;</p><p><br></p><p>In this compelling conversation, <strong>Dr. Richard Benjamins</strong> shares insights on:&nbsp;</p><p><br></p><p>✔️ His work in AI ethics and policy advocacy&nbsp;</p><p> ✔️ How companies like Telefonica approach responsible AI&nbsp;</p><p> ✔️ The role of international regulatory bodies in AI governance&nbsp;</p><p> ✔️ Key challenges in enforcing AI compliance across industries&nbsp;</p><p>🌍 As AI adoption accelerates, <strong>ensuring ethical oversight is more critical than ever</strong>. This episode provides essential insights for business leaders, policymakers, and AI enthusiasts.&nbsp;</p><p><br></p><p>📢 <strong>Tune in to gain expert perspectives on responsible AI!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/richard-benjamins/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/richard-benjamins/</a><span style="color: rgb(70, 120, 134);">&nbsp;</span></p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00 - Podcast Episode Highlights&nbsp;</p><p>02:00 - Richard’s Journey into AI Ethics&nbsp;</p><p>05:15 - Defining Responsible AI&nbsp;</p><p>08:30 - Global AI Regulation: Differences &amp; Challenges&nbsp;</p><p>12:50 - Need for International Collaboration in AI Regulation&nbsp;</p><p>17:20 - AI Ethics Boards &amp; Their Role in Companies&nbsp;</p><p>22:10 - Who Should Implement AI Ethics in a Company?&nbsp;</p><p>26:40 - Addressing Bias &amp; Privacy in AI Development&nbsp;</p><p>31:50 - AI’s Impact on Smaller Languages &amp; Cultures&nbsp;</p><p>36:20 
- The Monopoly Risk in AI&nbsp;</p><p>41:00 - Ethical Principles for AI Development&nbsp;</p><p>46:00 - Future-Proofing AI Regulation&nbsp;</p><p>50:10 - AI’s Disruptive Impact on Jobs &amp; Workforce&nbsp;</p><p>55:20 - Final Advice for Policymakers &amp; Researchers&nbsp;</p><p>58:00 - Closing Remarks&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[AI is shaping the future, but who ensures it remains responsible and ethical? In this compelling conversation, Dr. Richard Benjamins shares insights on: ✔️ His work in AI ethics and policy advocacy  ✔️ How companies like Telefonica approach respons...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[1ab9d3fa-b6f2-4201-a87f-8dc19e1888d5]]></guid>
  <title><![CDATA[The Future of AI in Higher Education and Scientific Research with Nicholas Dirks | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p>Artificial Intelligence is reshaping the landscape of higher education and research. In this episode of the RegulatingAI Podcast, <strong>Sanjay Puri sits down with Nicholas Dirks, President of the New York Academy of Sciences</strong>, to discuss the profound impact of AI on academia and scientific exploration.&nbsp;</p><p><br></p><p>🔍 <strong>Topics Covered:</strong>&nbsp;</p><p><br></p><ul><li>How AI is revolutionizing research methodologies and knowledge dissemination&nbsp;</li><li>The ethical dilemmas of integrating AI into education and scientific discovery&nbsp;</li><li>Challenges in regulating AI while maintaining academic freedom&nbsp;</li><li>The role of AI in shaping the future workforce and redefining critical thinking skills&nbsp;</li></ul><p>&nbsp;</p><p>📢 <strong>Listen now and discover how AI is influencing the future of learning and research!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.nicholasbdirks.com/" target="_blank" style="color: rgb(70, 120, 134);">https://www.nicholasbdirks.com/</a>&nbsp;&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/nicholas-dirks-84a1ab149/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/nicholas-dirks-84a1ab149/</a>&nbsp;</p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/601acf0a-e37d-42da-b56a-231d41e87816/aafa01e431.jpg" />
  <pubDate>Mon, 17 Mar 2025 10:55:44 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="54845066" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/601acf0a-e37d-42da-b56a-231d41e87816/episode.mp3" />
  <itunes:title><![CDATA[The Future of AI in Higher Education and Scientific Research with Nicholas Dirks | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>57:07</itunes:duration>
  <itunes:summary><![CDATA[<p>Artificial Intelligence is reshaping the landscape of higher education and research. In this episode of the RegulatingAI Podcast, <strong>Sanjay Puri sits down with Nicholas Dirks, President of the New York Academy of Sciences</strong>, to discuss the profound impact of AI on academia and scientific exploration.&nbsp;</p><p><br></p><p>🔍 <strong>Topics Covered:</strong>&nbsp;</p><p><br></p><ul><li>How AI is revolutionizing research methodologies and knowledge dissemination&nbsp;</li><li>The ethical dilemmas of integrating AI into education and scientific discovery&nbsp;</li><li>Challenges in regulating AI while maintaining academic freedom&nbsp;</li><li>The role of AI in shaping the future workforce and redefining critical thinking skills&nbsp;</li></ul><p>&nbsp;</p><p>📢 <strong>Listen now and discover how AI is influencing the future of learning and research!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.nicholasbdirks.com/" target="_blank" style="color: rgb(70, 120, 134);">https://www.nicholasbdirks.com/</a>&nbsp;&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/nicholas-dirks-84a1ab149/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/nicholas-dirks-84a1ab149/</a>&nbsp;</p><p><br></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Artificial Intelligence is reshaping the landscape of higher education and research. In this episode of the RegulatingAI Podcast, <strong>Sanjay Puri sits down with Nicholas Dirks, President of the New York Academy of Sciences</strong>, to discuss the profound impact of AI on academia and scientific exploration.&nbsp;</p><p><br></p><p>🔍 <strong>Topics Covered:</strong>&nbsp;</p><p><br></p><ul><li>How AI is revolutionizing research methodologies and knowledge dissemination&nbsp;</li><li>The ethical dilemmas of integrating AI into education and scientific discovery&nbsp;</li><li>Challenges in regulating AI while maintaining academic freedom&nbsp;</li><li>The role of AI in shaping the future workforce and redefining critical thinking skills&nbsp;</li></ul><p>&nbsp;</p><p>📢 <strong>Listen now and discover how AI is influencing the future of learning and research!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.nicholasbdirks.com/" target="_blank" style="color: rgb(70, 120, 134);">https://www.nicholasbdirks.com/</a>&nbsp;&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/nicholas-dirks-84a1ab149/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/nicholas-dirks-84a1ab149/</a>&nbsp;</p><p><br></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Artificial Intelligence is reshaping the landscape of higher education and research. In this episode of the RegulatingAI Podcast, Sanjay Puri sits down with Nicholas Dirks, President of the New York Academy of Sciences, to discuss the profound impa...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[2730e2dc-1976-460b-aac7-5eeb9e186294]]></guid>
  <title><![CDATA[The Future of AI Policy with US Congresswoman Suzan DelBene | The RegulatingAI Podcast]]></title>
  <description><![CDATA[<p class="ql-align-justify">📢 <strong>AI is advancing faster than ever—but can policy keep up?</strong>&nbsp;</p><p class="ql-align-justify">In this episode of <strong>Regulating AI Podcast</strong>, US Congresswoman <strong>Suzan DelBene</strong> discusses the future of AI governance and how policymakers are shaping the landscape.&nbsp;</p><p class="ql-align-justify">🔹 <strong>Key discussion points:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The <strong>current state of AI regulation</strong> and where it’s headed.&nbsp;</li><li class="ql-align-justify">How AI policies impact industries, businesses, and consumers.&nbsp;</li><li class="ql-align-justify">The <strong>delicate balance</strong> between fostering AI innovation and enforcing ethical safeguards.&nbsp;</li><li class="ql-align-justify">The role of <strong>transparency, accountability, and fairness</strong> in AI legislation.&nbsp;</li><li class="ql-align-justify">Why collaboration between <strong>tech leaders, government, and global organizations</strong> is crucial for responsible AI growth.&nbsp;</li></ul><p class="ql-align-justify">🎥 <strong>Join the conversation and stay ahead of AI regulations! 
Watch now!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://delbene.house.gov/" target="_blank" style="color: rgb(70, 120, 134);">https://delbene.house.gov/</a>&nbsp;&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/suzan-delbene-752a174/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/suzan-delbene-752a174/</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00:00 - Podcast Episode Highlights&nbsp;</p><p>00:01:13 - Rep. DelBene’s Journey to Congress&nbsp;</p><p>00:03:20 - Why Privacy is the Foundation for AI Regulation&nbsp;</p><p>00:06:56 - Challenges in Passing U.S. Privacy Laws&nbsp;</p><p>00:09:02 - AI Bias &amp; Discrimination Risks&nbsp;</p><p>00:11:09 - Congress’s Role in AI Guardrails&nbsp;</p><p>00:14:21 - Balancing Innovation &amp; Regulation&nbsp;</p><p>00:16:56 - International AI Policy Leadership&nbsp;</p><p>00:19:04 - Centering Marginalized Communities&nbsp;</p><p>00:20:52 - Preventing AI Monopolies&nbsp;</p><p>00:24:30 - Workforce Readiness &amp; Education&nbsp;</p><p>00:27:35 - Closing Thoughts&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/8dad96b1-c113-4d73-ac85-b685f2e7dcd0/e1641b9605.jpg" />
  <pubDate>Sat, 15 Mar 2025 12:58:13 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="27253072" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/8dad96b1-c113-4d73-ac85-b685f2e7dcd0/episode.mp3" />
  <itunes:title><![CDATA[The Future of AI Policy with US Congresswoman Suzan DelBene | The RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>28:23</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">📢 <strong>AI is advancing faster than ever—but can policy keep up?</strong>&nbsp;</p><p class="ql-align-justify">In this episode of <strong>Regulating AI Podcast</strong>, US Congresswoman <strong>Suzan DelBene</strong> discusses the future of AI governance and how policymakers are shaping the landscape.&nbsp;</p><p class="ql-align-justify">🔹 <strong>Key discussion points:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The <strong>current state of AI regulation</strong> and where it’s headed.&nbsp;</li><li class="ql-align-justify">How AI policies impact industries, businesses, and consumers.&nbsp;</li><li class="ql-align-justify">The <strong>delicate balance</strong> between fostering AI innovation and enforcing ethical safeguards.&nbsp;</li><li class="ql-align-justify">The role of <strong>transparency, accountability, and fairness</strong> in AI legislation.&nbsp;</li><li class="ql-align-justify">Why collaboration between <strong>tech leaders, government, and global organizations</strong> is crucial for responsible AI growth.&nbsp;</li></ul><p class="ql-align-justify">🎥 <strong>Join the conversation and stay ahead of AI regulations! 
Watch now!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://delbene.house.gov/" target="_blank" style="color: rgb(70, 120, 134);">https://delbene.house.gov/</a>&nbsp;&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/suzan-delbene-752a174/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/suzan-delbene-752a174/</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00:00 - Podcast Episode Highlights&nbsp;</p><p>00:01:13 - Rep. DelBene’s Journey to Congress&nbsp;</p><p>00:03:20 - Why Privacy is the Foundation for AI Regulation&nbsp;</p><p>00:06:56 - Challenges in Passing U.S. Privacy Laws&nbsp;</p><p>00:09:02 - AI Bias &amp; Discrimination Risks&nbsp;</p><p>00:11:09 - Congress’s Role in AI Guardrails&nbsp;</p><p>00:14:21 - Balancing Innovation &amp; Regulation&nbsp;</p><p>00:16:56 - International AI Policy Leadership&nbsp;</p><p>00:19:04 - Centering Marginalized Communities&nbsp;</p><p>00:20:52 - Preventing AI Monopolies&nbsp;</p><p>00:24:30 - Workforce Readiness &amp; Education&nbsp;</p><p>00:27:35 - Closing Thoughts&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">📢 <strong>AI is advancing faster than ever—but can policy keep up?</strong>&nbsp;</p><p class="ql-align-justify">In this episode of <strong>Regulating AI Podcast</strong>, US Congresswoman <strong>Suzan DelBene</strong> discusses the future of AI governance and how policymakers are shaping the landscape.&nbsp;</p><p class="ql-align-justify">🔹 <strong>Key discussion points:</strong>&nbsp;</p><p><br></p><ul><li class="ql-align-justify">The <strong>current state of AI regulation</strong> and where it’s headed.&nbsp;</li><li class="ql-align-justify">How AI policies impact industries, businesses, and consumers.&nbsp;</li><li class="ql-align-justify">The <strong>delicate balance</strong> between fostering AI innovation and enforcing ethical safeguards.&nbsp;</li><li class="ql-align-justify">The role of <strong>transparency, accountability, and fairness</strong> in AI legislation.&nbsp;</li><li class="ql-align-justify">Why collaboration between <strong>tech leaders, government, and global organizations</strong> is crucial for responsible AI growth.&nbsp;</li></ul><p class="ql-align-justify">🎥 <strong>Join the conversation and stay ahead of AI regulations! 
Watch now!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://delbene.house.gov/" target="_blank" style="color: rgb(70, 120, 134);">https://delbene.house.gov/</a>&nbsp;&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/suzan-delbene-752a174/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/suzan-delbene-752a174/</a>&nbsp;</p><p class="ql-align-justify">&nbsp;</p><p><strong style="color: rgb(0, 112, 192);">⏱️ Timestamps:</strong><span style="color: rgb(0, 112, 192);">&nbsp;</span></p><p>00:00:00 - Podcast Episode Highlights&nbsp;</p><p>00:01:13 - Rep. DelBene’s Journey to Congress&nbsp;</p><p>00:03:20 - Why Privacy is the Foundation for AI Regulation&nbsp;</p><p>00:06:56 - Challenges in Passing U.S. Privacy Laws&nbsp;</p><p>00:09:02 - AI Bias &amp; Discrimination Risks&nbsp;</p><p>00:11:09 - Congress’s Role in AI Guardrails&nbsp;</p><p>00:14:21 - Balancing Innovation &amp; Regulation&nbsp;</p><p>00:16:56 - International AI Policy Leadership&nbsp;</p><p>00:19:04 - Centering Marginalized Communities&nbsp;</p><p>00:20:52 - Preventing AI Monopolies&nbsp;</p><p>00:24:30 - Workforce Readiness &amp; Education&nbsp;</p><p>00:27:35 - Closing Thoughts&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[📢 AI is advancing faster than ever—but can policy keep up? In this episode of Regulating AI Podcast, US Congresswoman Suzan DelBene discusses the future of AI governance and how policymakers are shaping the landscape. 🔹 Key discussion points: The c...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[b4993427-315e-4783-946d-1b1e58940e00]]></guid>
  <title><![CDATA[Understanding Why AI Governance Is a Business Imperative Ft. Emre Kazim, Holistic AI | Regulating AI Podcast]]></title>
  <description><![CDATA[<p><em>How do enterprises scale AI responsibly?</em> Join us as <strong>Sanjay Puri</strong> sits down with <strong>Emre Kazim, Co-CEO of Holistic AI</strong>, to explore the critical role of <strong>AI governance</strong> in building <strong>trustworthy AI systems</strong>.&nbsp;</p><p><br></p><p>🔹 <strong>Key Topics Discussed:</strong>&nbsp;</p><p>✔️ Why AI <strong>governance</strong> is more than just <strong>compliance</strong>&nbsp;</p><p>✔️ How <strong>trust and accountability</strong> impact AI adoption&nbsp;</p><p>✔️ The <strong>biggest risks enterprises face</strong> with AI deployment&nbsp;</p><p>✔️ AI <strong>policy and governance models</strong> across different regions&nbsp;</p><p>✔️ How businesses can scale AI while maintaining <strong>safety and oversight</strong>&nbsp;</p><p>💡 <strong>Holistic AI’s Vision:</strong> Enabling enterprises to <strong>adopt AI with confidence</strong> through governance frameworks that align with business needs.&nbsp;</p><p><br></p><p>🔴 <strong>Watch now</strong> and gain insights into how governance can shape the <strong>future of AI!</strong>&nbsp;</p><p><br></p><p>🔔 Don’t forget to <strong>LIKE, SHARE, and SUBSCRIBE</strong> for more expert conversations on AI governance.&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/emre-kazim-21784b21" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/emre-kazim-21784b21</a>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/3a8002c8-e974-4d1f-ab8d-ddd5d0107f23/ffc6dd9b31.jpg" />
  <pubDate>Wed, 12 Mar 2025 10:17:28 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="46901751" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/3a8002c8-e974-4d1f-ab8d-ddd5d0107f23/episode.mp3" />
  <itunes:title><![CDATA[Understanding Why AI Governance Is a Business Imperative Ft. Emre Kazim, Holistic AI | Regulating AI Podcast]]></itunes:title>
  <itunes:duration>48:51</itunes:duration>
  <itunes:summary><![CDATA[<p><em>How do enterprises scale AI responsibly?</em> Join us as <strong>Sanjay Puri</strong> sits down with <strong>Emre Kazim, Co-CEO of Holistic AI</strong>, to explore the critical role of <strong>AI governance</strong> in building <strong>trustworthy AI systems</strong>.&nbsp;</p><p><br></p><p>🔹 <strong>Key Topics Discussed:</strong>&nbsp;</p><p>✔️ Why AI <strong>governance</strong> is more than just <strong>compliance</strong>&nbsp;</p><p>✔️ How <strong>trust and accountability</strong> impact AI adoption&nbsp;</p><p>✔️ The <strong>biggest risks enterprises face</strong> with AI deployment&nbsp;</p><p>✔️ AI <strong>policy and governance models</strong> across different regions&nbsp;</p><p>✔️ How businesses can scale AI while maintaining <strong>safety and oversight</strong>&nbsp;</p><p>💡 <strong>Holistic AI’s Vision:</strong> Enabling enterprises to <strong>adopt AI with confidence</strong> through governance frameworks that align with business needs.&nbsp;</p><p><br></p><p>🔴 <strong>Watch now</strong> and gain insights into how governance can shape the <strong>future of AI!</strong>&nbsp;</p><p><br></p><p>🔔 Don’t forget to <strong>LIKE, SHARE, and SUBSCRIBE</strong> for more expert conversations on AI governance.&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/emre-kazim-21784b21" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/emre-kazim-21784b21</a>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><em>How do enterprises scale AI responsibly?</em> Join us as <strong>Sanjay Puri</strong> sits down with <strong>Emre Kazim, Co-CEO of Holistic AI</strong>, to explore the critical role of <strong>AI governance</strong> in building <strong>trustworthy AI systems</strong>.&nbsp;</p><p><br></p><p>🔹 <strong>Key Topics Discussed:</strong>&nbsp;</p><p>✔️ Why AI <strong>governance</strong> is more than just <strong>compliance</strong>&nbsp;</p><p>✔️ How <strong>trust and accountability</strong> impact AI adoption&nbsp;</p><p>✔️ The <strong>biggest risks enterprises face</strong> with AI deployment&nbsp;</p><p>✔️ AI <strong>policy and governance models</strong> across different regions&nbsp;</p><p>✔️ How businesses can scale AI while maintaining <strong>safety and oversight</strong>&nbsp;</p><p>💡 <strong>Holistic AI’s Vision:</strong> Enabling enterprises to <strong>adopt AI with confidence</strong> through governance frameworks that align with business needs.&nbsp;</p><p><br></p><p>🔴 <strong>Watch now</strong> and gain insights into how governance can shape the <strong>future of AI!</strong>&nbsp;</p><p><br></p><p>🔔 Don’t forget to <strong>LIKE, SHARE, and SUBSCRIBE</strong> for more expert conversations on AI governance.&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p><a href="https://www.linkedin.com/in/emre-kazim-21784b21" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/emre-kazim-21784b21</a>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[How do enterprises scale AI responsibly? Join us as Sanjay Puri sits down with Emre Kazim, Co-CEO of Holistic AI, to explore the critical role of AI governance in building trustworthy AI systems. 🔹 Key Topics Discussed: ✔️ Why AI governance is more...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[c85ada30-1482-41a6-ba26-af2463270d6d]]></guid>
  <title><![CDATA[Innovation vs. Regulation: Balancing AI Growth with Ethics | Generative AI Summit]]></title>
  <description><![CDATA[<p>How do we balance AI innovation with responsible regulation? 🤖⚖️</p><p>&nbsp;</p><p>Sanjay Puri, Chairman &amp; Founder of Knowledge Networks, joined an expert panel at the Generative AI Summit in Washington, D.C., hosted by the AI Accelerator Institute, to discuss "Innovation vs. Regulation – Ethically Balancing Rapid Development with a Safety-First Approach."</p><p>&nbsp;</p><p>In this engaging conversation, industry leaders Daniel Fenton, Zorina Alliata, and Zachary Hanif shared insights on:</p><p>✅ Why AI innovation and regulation must go hand in hand.</p><p>✅ How collaboration between policymakers, industry leaders, and AI practitioners is key to responsible AI.</p><p>✅ Why ethical compliance is not a roadblock but a catalyst for sustainable AI development.</p><p>&nbsp;</p><p>A big thank you to the AI Accelerator Institute, our incredible panelists, and everyone who joined the discussion! Let’s keep pushing the conversation forward on responsible AI innovation. 🚀💡</p><p>&nbsp;</p><p>🔔 Subscribe for more expert discussions on AI governance, ethics, and innovation!</p>]]></description>
  <pubDate>Mon, 10 Mar 2025 10:00:31 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="23348079" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/9a9975dd-ad72-49bd-82d9-a8614018328f/episode.mp3" />
  <itunes:title><![CDATA[Innovation vs. Regulation: Balancing AI Growth with Ethics | Generative AI Summit]]></itunes:title>
  <itunes:duration>24:19</itunes:duration>
  <itunes:summary><![CDATA[<p>How do we balance AI innovation with responsible regulation? 🤖⚖️</p><p>&nbsp;</p><p>Sanjay Puri, Chairman &amp; Founder of Knowledge Networks, joined an expert panel at the Generative AI Summit in Washington, D.C., hosted by the AI Accelerator Institute, to discuss "Innovation vs. Regulation – Ethically Balancing Rapid Development with a Safety-First Approach."</p><p>&nbsp;</p><p>In this engaging conversation, industry leaders Daniel Fenton, Zorina Alliata, and Zachary Hanif shared insights on:</p><p>✅ Why AI innovation and regulation must go hand in hand.</p><p>✅ How collaboration between policymakers, industry leaders, and AI practitioners is key to responsible AI.</p><p>✅ Why ethical compliance is not a roadblock but a catalyst for sustainable AI development.</p><p>&nbsp;</p><p>A big thank you to the AI Accelerator Institute, our incredible panelists, and everyone who joined the discussion! Let’s keep pushing the conversation forward on responsible AI innovation. 🚀💡</p><p>&nbsp;</p><p>🔔 Subscribe for more expert discussions on AI governance, ethics, and innovation!</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>How do we balance AI innovation with responsible regulation? 🤖⚖️</p><p>&nbsp;</p><p>Sanjay Puri, Chairman &amp; Founder of Knowledge Networks, joined an expert panel at the Generative AI Summit in Washington, D.C., hosted by the AI Accelerator Institute, to discuss "Innovation vs. Regulation – Ethically Balancing Rapid Development with a Safety-First Approach."</p><p>&nbsp;</p><p>In this engaging conversation, industry leaders Daniel Fenton, Zorina Alliata, and Zachary Hanif shared insights on:</p><p>✅ Why AI innovation and regulation must go hand in hand.</p><p>✅ How collaboration between policymakers, industry leaders, and AI practitioners is key to responsible AI.</p><p>✅ Why ethical compliance is not a roadblock but a catalyst for sustainable AI development.</p><p>&nbsp;</p><p>A big thank you to the AI Accelerator Institute, our incredible panelists, and everyone who joined the discussion! Let’s keep pushing the conversation forward on responsible AI innovation. 🚀💡</p><p>&nbsp;</p><p>🔔 Subscribe for more expert discussions on AI governance, ethics, and innovation!</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[How do we balance AI innovation with responsible regulation? 🤖⚖️ Sanjay Puri, Chairman & Founder of Knowledge Networks, joined an expert panel at the Generative AI Summit in Washington, D.C., hosted by the AI Accelerator Institute, to discuss "Inno...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[2d467b2f-dc32-4c54-bc1a-0b3e2dcd461c]]></guid>
  <title><![CDATA[Balancing AI Governance & Innovation: Lessons from the EU Ft. Lucilla Sioli | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p>🌍 <strong>How do you regulate AI without stifling innovation?</strong>&nbsp;</p><p><br></p><p>Many believe regulation slows down technological progress, but according to Lucilla Sioli, this is a false dilemma. The EU AI Act is designed to support both innovation and governance, ensuring that AI systems remain safe, reliable, and beneficial for all.&nbsp;</p><p><br></p><p>📢 <strong>Key Takeaways:</strong>&nbsp;</p><p>✅ Why regulation and innovation are not conflicting forces but complementary.&nbsp;</p><p>✅ How AI regulation creates trust, leading to broader adoption and investment.&nbsp;</p><p>✅ The role of AI sandboxes in allowing startups to test AI applications in a controlled environment.&nbsp;</p><p>✅ The AI Pact’s growing global interest—why even U.S. and Korean companies are voluntarily aligning with EU AI standards.&nbsp;</p><p>✅ What businesses can learn from the EU’s risk-based approach to AI regulation.&nbsp;</p><p>🔍 <strong>Lucilla explains how the EU’s structured, risk-based framework ensures AI development remains competitive while prioritizing safety.</strong>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/lucilla-sioli-b944392/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/lucilla-sioli-b944392/</a>&nbsp;</p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/dd53b4f4-c372-47a2-8cba-408c204f20f0/4d60b027da.jpg" />
  <pubDate>Fri, 07 Mar 2025 10:38:10 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="41260974" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/dd53b4f4-c372-47a2-8cba-408c204f20f0/episode.mp3" />
  <itunes:title><![CDATA[Balancing AI Governance & Innovation: Lessons from the EU Ft. Lucilla Sioli | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>42:58</itunes:duration>
  <itunes:summary><![CDATA[<p>🌍 <strong>How do you regulate AI without stifling innovation?</strong>&nbsp;</p><p><br></p><p>Many believe regulation slows down technological progress, but according to Lucilla Sioli, this is a false dilemma. The EU AI Act is designed to support both innovation and governance, ensuring that AI systems remain safe, reliable, and beneficial for all.&nbsp;</p><p><br></p><p>📢 <strong>Key Takeaways:</strong>&nbsp;</p><p>✅ Why regulation and innovation are not conflicting forces but complementary.&nbsp;</p><p>✅ How AI regulation creates trust, leading to broader adoption and investment.&nbsp;</p><p>✅ The role of AI sandboxes in allowing startups to test AI applications in a controlled environment.&nbsp;</p><p>✅ The AI Pact’s growing global interest—why even U.S. and Korean companies are voluntarily aligning with EU AI standards.&nbsp;</p><p>✅ What businesses can learn from the EU’s risk-based approach to AI regulation.&nbsp;</p><p>🔍 <strong>Lucilla explains how the EU’s structured, risk-based framework ensures AI development remains competitive while prioritizing safety.</strong>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/lucilla-sioli-b944392/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/lucilla-sioli-b944392/</a>&nbsp;</p><p><br></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>🌍 <strong>How do you regulate AI without stifling innovation?</strong>&nbsp;</p><p><br></p><p>Many believe regulation slows down technological progress, but according to Lucilla Sioli, this is a false dilemma. The EU AI Act is designed to support both innovation and governance, ensuring that AI systems remain safe, reliable, and beneficial for all.&nbsp;</p><p><br></p><p>📢 <strong>Key Takeaways:</strong>&nbsp;</p><p>✅ Why regulation and innovation are not conflicting forces but complementary.&nbsp;</p><p>✅ How AI regulation creates trust, leading to broader adoption and investment.&nbsp;</p><p>✅ The role of AI sandboxes in allowing startups to test AI applications in a controlled environment.&nbsp;</p><p>✅ The AI Pact’s growing global interest—why even U.S. and Korean companies are voluntarily aligning with EU AI standards.&nbsp;</p><p>✅ What businesses can learn from the EU’s risk-based approach to AI regulation.&nbsp;</p><p>🔍 <strong>Lucilla explains how the EU’s structured, risk-based framework ensures AI development remains competitive while prioritizing safety.</strong>&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/lucilla-sioli-b944392/" target="_blank" style="color: rgb(70, 120, 134);">https://www.linkedin.com/in/lucilla-sioli-b944392/</a>&nbsp;</p><p><br></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[🌍 How do you regulate AI without stifling innovation? Many believe regulation slows down technological progress, but according to Lucilla Sioli, this is a false dilemma. The EU AI Act is designed to support both innovation and governance, ensuring ...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[a291aa2d-23b9-4712-b339-c835ee1716a1]]></guid>
  <title><![CDATA[UNESCO’s AI Ethics Framework Ft. Prof. Emma Ruttkamp-Bloem | The RegulatingAI Podcast ]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">AI is shaping our future, but who ensures its ethical use? In this episode of Regulating AI, Sanjay Puri sits down with Prof. Emma Ruttkamp-Bloem, Chair of UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology. </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">🔹 Key insights: </span></p><p><span style="color: rgb(13, 13, 13);">~ The role of ethics in shaping AI policy and governance </span></p><p><span style="color: rgb(13, 13, 13);">~ How UNESCO’s AI recommendations are influencing global regulations </span></p><p><span style="color: rgb(13, 13, 13);">~ Balancing innovation with responsible AI development </span></p><p><span style="color: rgb(13, 13, 13);">~ The ethical dilemmas of AI in decision-making </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources Mentioned: </span></p><p><span style="color: rgb(13, 13, 13);">https://www.linkedin.com/in/emma-ruttkamp-bloem-19400248/ </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">⏱️ Timestamps: </span></p><p><span style="color: rgb(13, 13, 13);">00:00 -  Podcast Highlights </span></p><p><span style="color: rgb(13, 13, 13);">02:21 - What inspired your journey into AI ethics? </span></p><p><span style="color: rgb(13, 13, 13);">05:13 - What is the significance of UNESCO's AI ethics recommendation? </span></p><p><span style="color: rgb(13, 13, 13);">11:10 - What are the key ethical challenges in AI development? </span></p><p><span style="color: rgb(13, 13, 13);">16:01 - How can the Global South's voice be amplified in AI policy? </span></p><p><span style="color: rgb(13, 13, 13);">19:24 - What are the ethical concerns of AI in military use? </span></p><p><span style="color: rgb(13, 13, 13);">22:06 - Why is data sovereignty important, and what role do data centres play? </span></p><p><span style="color: rgb(13, 13, 13);">27:29 - What insights have you gained on AI governance in Africa? 
</span></p><p><span style="color: rgb(13, 13, 13);">33:10 - Can online education help address training and connectivity in Africa? </span></p><p><span style="color: rgb(13, 13, 13);">39:27 - What role should international organisations play in AI governance? </span></p><p><span style="color: rgb(13, 13, 13);">42:08 - Will Africa develop its AI regulations or rely on others like the EU? </span></p><p><span style="color: rgb(13, 13, 13);">45:56 - How do you see human-technology relations evolving with AI? </span></p><p><span style="color: rgb(13, 13, 13);">47:07 - (Lightning Round) </span></p><p><span style="color: rgb(13, 13, 13);">50:45 - What gives you hope for AI governance, and what are your main concerns? </span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/37749128-df60-4238-b02e-d7129a4dcc74/6b947b1c52.jpg" />
  <pubDate>Fri, 07 Mar 2025 02:42:19 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="51628452" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/37749128-df60-4238-b02e-d7129a4dcc74/episode.mp3" />
  <itunes:title><![CDATA[UNESCO’s AI Ethics Framework Ft. Prof. Emma Ruttkamp-Bloem | The RegulatingAI Podcast ]]></itunes:title>
  <itunes:duration>53:46</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">AI is shaping our future, but who ensures its ethical use? In this episode of Regulating AI, Sanjay Puri sits down with Prof. Emma Ruttkamp-Bloem, Chair of UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology. </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">🔹 Key insights: </span></p><p><span style="color: rgb(13, 13, 13);">~ The role of ethics in shaping AI policy and governance </span></p><p><span style="color: rgb(13, 13, 13);">~ How UNESCO’s AI recommendations are influencing global regulations </span></p><p><span style="color: rgb(13, 13, 13);">~ Balancing innovation with responsible AI development </span></p><p><span style="color: rgb(13, 13, 13);">~ The ethical dilemmas of AI in decision-making </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources Mentioned: </span></p><p><span style="color: rgb(13, 13, 13);">https://www.linkedin.com/in/emma-ruttkamp-bloem-19400248/ </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">⏱️ Timestamps: </span></p><p><span style="color: rgb(13, 13, 13);">00:00 -  Podcast Highlights </span></p><p><span style="color: rgb(13, 13, 13);">02:21 - What inspired your journey into AI ethics? </span></p><p><span style="color: rgb(13, 13, 13);">05:13 - What is the significance of UNESCO's AI ethics recommendation? </span></p><p><span style="color: rgb(13, 13, 13);">11:10 - What are the key ethical challenges in AI development? </span></p><p><span style="color: rgb(13, 13, 13);">16:01 - How can the Global South's voice be amplified in AI policy? </span></p><p><span style="color: rgb(13, 13, 13);">19:24 - What are the ethical concerns of AI in military use? </span></p><p><span style="color: rgb(13, 13, 13);">22:06 - Why is data sovereignty important, and what role do data centres play? </span></p><p><span style="color: rgb(13, 13, 13);">27:29 - What insights have you gained on AI governance in Africa? 
</span></p><p><span style="color: rgb(13, 13, 13);">33:10 - Can online education help address training and connectivity in Africa? </span></p><p><span style="color: rgb(13, 13, 13);">39:27 - What role should international organisations play in AI governance? </span></p><p><span style="color: rgb(13, 13, 13);">42:08 - Will Africa develop its AI regulations or rely on others like the EU? </span></p><p><span style="color: rgb(13, 13, 13);">45:56 - How do you see human-technology relations evolving with AI? </span></p><p><span style="color: rgb(13, 13, 13);">47:07 - (Lightning Round) </span></p><p><span style="color: rgb(13, 13, 13);">50:45 - What gives you hope for AI governance, and what are your main concerns? </span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">AI is shaping our future, but who ensures its ethical use? In this episode of Regulating AI, Sanjay Puri sits down with Prof. Emma Ruttkamp-Bloem, Chair of UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology. </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">🔹 Key insights: </span></p><p><span style="color: rgb(13, 13, 13);">~ The role of ethics in shaping AI policy and governance </span></p><p><span style="color: rgb(13, 13, 13);">~ How UNESCO’s AI recommendations are influencing global regulations </span></p><p><span style="color: rgb(13, 13, 13);">~ Balancing innovation with responsible AI development </span></p><p><span style="color: rgb(13, 13, 13);">~ The ethical dilemmas of AI in decision-making </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources Mentioned: </span></p><p><span style="color: rgb(13, 13, 13);">https://www.linkedin.com/in/emma-ruttkamp-bloem-19400248/ </span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">⏱️ Timestamps: </span></p><p><span style="color: rgb(13, 13, 13);">00:00 -  Podcast Highlights </span></p><p><span style="color: rgb(13, 13, 13);">02:21 - What inspired your journey into AI ethics? </span></p><p><span style="color: rgb(13, 13, 13);">05:13 - What is the significance of UNESCO's AI ethics recommendation? </span></p><p><span style="color: rgb(13, 13, 13);">11:10 - What are the key ethical challenges in AI development? </span></p><p><span style="color: rgb(13, 13, 13);">16:01 - How can the Global South's voice be amplified in AI policy? </span></p><p><span style="color: rgb(13, 13, 13);">19:24 - What are the ethical concerns of AI in military use? </span></p><p><span style="color: rgb(13, 13, 13);">22:06 - Why is data sovereignty important, and what role do data centres play? </span></p><p><span style="color: rgb(13, 13, 13);">27:29 - What insights have you gained on AI governance in Africa? 
</span></p><p><span style="color: rgb(13, 13, 13);">33:10 - Can online education help address training and connectivity in Africa? </span></p><p><span style="color: rgb(13, 13, 13);">39:27 - What role should international organisations play in AI governance? </span></p><p><span style="color: rgb(13, 13, 13);">42:08 - Will Africa develop its AI regulations or rely on others like the EU? </span></p><p><span style="color: rgb(13, 13, 13);">45:56 - How do you see human-technology relations evolving with AI? </span></p><p><span style="color: rgb(13, 13, 13);">47:07 - (Lightning Round) </span></p><p><span style="color: rgb(13, 13, 13);">50:45 - What gives you hope for AI governance, and what are your main concerns? </span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[AI is shaping our future, but who ensures its ethical use? In this episode of Regulating AI, Sanjay Puri sits down with Prof. Emma Ruttkamp-Bloem, Chair of UNESCO’s World Commission on the Ethics of Scientific Knowledge and Technology. 🔹 Key insigh...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[07ef08bd-6f01-4d63-b5c7-cf40510a9269]]></guid>
  <title><![CDATA[The Future of AI & Public Policy: A Deep Dive with Prof. Ramayya Krishnan | The RegulatingAI Podcast ]]></title>
  <description><![CDATA[<p class="ql-align-justify">In this episode of the <strong>RegulatingAI Podcast, </strong>we have<strong> Prof. Ramayya Krishnan</strong>, Dean of Heinz College at Carnegie Mellon University and a distinguished voice in AI policy and governance.&nbsp;</p><p><br></p><p class="ql-align-justify">With expertise in technology, public policy, and societal impact, Prof. Krishnan explores the delicate balance between <strong>AI innovation and regulation</strong>, the role of AI in <strong>shaping public policy</strong>, and the <strong>governance frameworks needed to ensure responsible AI deployment</strong>.&nbsp;</p><p><br></p><p>🎙️ <strong>Key Takeaways:</strong>&nbsp;</p><p>✅ <strong>Regulating AI Without Killing Innovation</strong> – How can policymakers create guardrails without slowing down progress?&nbsp;</p><p>✅ <strong>AI &amp; Public Trust</strong> – Why ethical AI frameworks are essential for societal acceptance.&nbsp;</p><p>✅ <strong>Bridging the AI Policy Gap</strong> – How global collaboration can create effective AI governance models.&nbsp;</p><p>✅ <strong>The Role of Universities in AI Governance</strong> – How academia contributes to responsible AI development.&nbsp;</p><p>🔔 <strong>Watch now to gain exclusive insights from one of AI policy’s leading experts!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/ramayya-krishnan-2b3a6812" target="_blank" style="color: rgb(0, 112, 192);">www.linkedin.com/in/ramayya-krishnan-2b3a6812</a><span style="color: rgb(0, 112, 192);">&nbsp;</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/dab018cc-388e-47be-9b13-17ef0bc0efb7/502a71464f.jpg" />
  <pubDate>Fri, 21 Feb 2025 11:43:22 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="55345363" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/dab018cc-388e-47be-9b13-17ef0bc0efb7/episode.mp3" />
  <itunes:title><![CDATA[The Future of AI & Public Policy: A Deep Dive with Prof. Ramayya Krishnan | The RegulatingAI Podcast ]]></itunes:title>
  <itunes:duration>57:39</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">In this episode of the <strong>RegulatingAI Podcast, </strong>we have<strong> Prof. Ramayya Krishnan</strong>, Dean of Heinz College at Carnegie Mellon University and a distinguished voice in AI policy and governance.&nbsp;</p><p><br></p><p class="ql-align-justify">With expertise in technology, public policy, and societal impact, Prof. Krishnan explores the delicate balance between <strong>AI innovation and regulation</strong>, the role of AI in <strong>shaping public policy</strong>, and the <strong>governance frameworks needed to ensure responsible AI deployment</strong>.&nbsp;</p><p><br></p><p>🎙️ <strong>Key Takeaways:</strong>&nbsp;</p><p>✅ <strong>Regulating AI Without Killing Innovation</strong> – How can policymakers create guardrails without slowing down progress?&nbsp;</p><p>✅ <strong>AI &amp; Public Trust</strong> – Why ethical AI frameworks are essential for societal acceptance.&nbsp;</p><p>✅ <strong>Bridging the AI Policy Gap</strong> – How global collaboration can create effective AI governance models.&nbsp;</p><p>✅ <strong>The Role of Universities in AI Governance</strong> – How academia contributes to responsible AI development.&nbsp;</p><p>🔔 <strong>Watch now to gain exclusive insights from one of AI policy’s leading experts!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/ramayya-krishnan-2b3a6812" target="_blank" style="color: rgb(0, 112, 192);">www.linkedin.com/in/ramayya-krishnan-2b3a6812</a><span style="color: rgb(0, 112, 192);">&nbsp;</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">In this episode of the <strong>RegulatingAI Podcast, </strong>we have<strong> Prof. Ramayya Krishnan</strong>, Dean of Heinz College at Carnegie Mellon University and a distinguished voice in AI policy and governance.&nbsp;</p><p><br></p><p class="ql-align-justify">With expertise in technology, public policy, and societal impact, Prof. Krishnan explores the delicate balance between <strong>AI innovation and regulation</strong>, the role of AI in <strong>shaping public policy</strong>, and the <strong>governance frameworks needed to ensure responsible AI deployment</strong>.&nbsp;</p><p><br></p><p>🎙️ <strong>Key Takeaways:</strong>&nbsp;</p><p>✅ <strong>Regulating AI Without Killing Innovation</strong> – How can policymakers create guardrails without slowing down progress?&nbsp;</p><p>✅ <strong>AI &amp; Public Trust</strong> – Why ethical AI frameworks are essential for societal acceptance.&nbsp;</p><p>✅ <strong>Bridging the AI Policy Gap</strong> – How global collaboration can create effective AI governance models.&nbsp;</p><p>✅ <strong>The Role of Universities in AI Governance</strong> – How academia contributes to responsible AI development.&nbsp;</p><p>🔔 <strong>Watch now to gain exclusive insights from one of AI policy’s leading experts!</strong>&nbsp;</p><p><br></p><p class="ql-align-justify"><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://www.linkedin.com/in/ramayya-krishnan-2b3a6812" target="_blank" style="color: rgb(0, 112, 192);">www.linkedin.com/in/ramayya-krishnan-2b3a6812</a><span style="color: rgb(0, 112, 192);">&nbsp;</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the RegulatingAI Podcast, we have Prof. Ramayya Krishnan, Dean of Heinz College at Carnegie Mellon University and a distinguished voice in AI policy and governance. With expertise in technology, public policy, and societal impact...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[6c7dceff-49fb-43a2-a664-8c8bf4954529]]></guid>
  <title><![CDATA[How AI’s Rapid Growth is Outpacing Regulation Ft. Brian Schmidt | The RegulatingAI Podcast ]]></title>
  <description><![CDATA[<p class="ql-align-justify">Listen to the latest episode of the<strong> RegulatingAI Podcast </strong>with<strong> Nobel Laureate Brian Schmidt </strong>as he discusses AI’s regulatory landscape and the lessons science can teach us about responsible innovation.&nbsp;</p><p><br></p><p class="ql-align-justify">In this episode, he shares his insights on:&nbsp;</p><p class="ql-align-justify">🔹 The current state of AI regulation—why it’s lagging behind innovation.&nbsp;</p><p class="ql-align-justify">🔹 The ethical dilemmas of self-regulation in tech.&nbsp;</p><p class="ql-align-justify">🔹 Lessons from physics on managing disruptive technologies.&nbsp;</p><p class="ql-align-justify">🔹 Australia’s regulatory stance on AI vs. global approaches.&nbsp;</p><p class="ql-align-justify">🔹 The balance between innovation, accountability, and legal oversight.&nbsp;</p><p>Brian highlights the urgent need for AI governance that is both adaptive and forward-thinking, ensuring that technology serves humanity rather than outpacing our ability to regulate it.&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://en.wikipedia.org/wiki/Brian_Schmidt" target="_blank" style="color: rgb(70, 120, 134);">https://en.wikipedia.org/wiki/Brian_Schmidt</a>&nbsp;</p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/083faccd-ee4f-49e1-9127-65bfe2ee35c4/5e47d64bc7.jpg" />
  <pubDate>Tue, 18 Feb 2025 13:49:39 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="54889787" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/083faccd-ee4f-49e1-9127-65bfe2ee35c4/episode.mp3" />
  <itunes:title><![CDATA[How AI’s Rapid Growth is Outpacing Regulation Ft. Brian Schmidt | The RegulatingAI Podcast ]]></itunes:title>
  <itunes:duration>57:10</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">Listen to our latest episode of the<strong> RegulatingAI Podcast </strong>with<strong> Nobel Laureate Brian Schmidt </strong>as he discusses the AI regulatory landscape and the lessons science can teach us about responsible innovation.&nbsp;</p><p><br></p><p class="ql-align-justify">In this episode, he gives his insights on:&nbsp;</p><p class="ql-align-justify">🔹 The current state of AI regulation—why it’s lagging behind innovation.&nbsp;</p><p class="ql-align-justify">🔹 The ethical dilemmas of self-regulation in tech.&nbsp;</p><p class="ql-align-justify">🔹 Lessons from physics on managing disruptive technologies.&nbsp;</p><p class="ql-align-justify">🔹 Australia’s regulatory stance on AI vs. global approaches.&nbsp;</p><p class="ql-align-justify">🔹 The balance between innovation, accountability, and legal oversight.&nbsp;</p><p>Brian highlights the urgent need for AI governance that is both adaptive and forward-thinking, ensuring that technology serves humanity rather than outpacing our ability to regulate it.&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://en.wikipedia.org/wiki/Brian_Schmidt" target="_blank" style="color: rgb(70, 120, 134);">https://en.wikipedia.org/wiki/Brian_Schmidt</a>&nbsp;</p><p><br></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">Listen to our latest episode of the<strong> RegulatingAI Podcast </strong>with<strong> Nobel Laureate Brian Schmidt </strong>as he discusses the AI regulatory landscape and the lessons science can teach us about responsible innovation.&nbsp;</p><p><br></p><p class="ql-align-justify">In this episode, he gives his insights on:&nbsp;</p><p class="ql-align-justify">🔹 The current state of AI regulation—why it’s lagging behind innovation.&nbsp;</p><p class="ql-align-justify">🔹 The ethical dilemmas of self-regulation in tech.&nbsp;</p><p class="ql-align-justify">🔹 Lessons from physics on managing disruptive technologies.&nbsp;</p><p class="ql-align-justify">🔹 Australia’s regulatory stance on AI vs. global approaches.&nbsp;</p><p class="ql-align-justify">🔹 The balance between innovation, accountability, and legal oversight.&nbsp;</p><p>Brian highlights the urgent need for AI governance that is both adaptive and forward-thinking, ensuring that technology serves humanity rather than outpacing our ability to regulate it.&nbsp;</p><p><br></p><p><strong>Resources Mentioned:</strong>&nbsp;</p><p class="ql-align-justify"><a href="https://en.wikipedia.org/wiki/Brian_Schmidt" target="_blank" style="color: rgb(70, 120, 134);">https://en.wikipedia.org/wiki/Brian_Schmidt</a>&nbsp;</p><p><br></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Listen to our latest episode of the RegulatingAI Podcast with Nobel Laureate Brian Schmidt as he discusses the AI regulatory landscape and the lessons science can teach us about responsible innovation. In this episode, he gives his insights on: 🔹...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[f3429961-f342-4295-8f0b-b251bc6029fa]]></guid>
  <title><![CDATA[Ethical Considerations in Gen AI & Data Science | Big Data Expo Global, Olympia, London]]></title>
  <description><![CDATA[<p>At Big Data Expo Global, industry leaders came together to tackle one of the most pressing topics in AI today—Ethical Considerations in Gen AI and Data Science: Navigating Complex Terrain.</p><p><br></p><p>Sanjay Puri, Founder &amp; Chairman of Knowledge Networks, joined an esteemed panel of experts to explore the challenges and responsibilities in building ethical AI systems. The discussion featured:</p><p><br></p><p>🔹 Shairil Yahya – Legal Compliance Technology &amp; Solution Director, Philips</p><p>🔹 Emily Yang – Head of Human-Centred AI and Innovation, Standard Chartered</p><p>🔹 Larry Orimoloye – Principal Architect AI/ML - Field CTO, Snowflake</p><p>🔹 Sanjay Puri – Founder and Chairman, Knowledge Networks Group</p><p>🔹 Chandrashekhar Kachole – Chief Technology Officer</p><p>🔹 Saber Fallah – Professor of Safe AI and Autonomy, University of Surrey</p><p><br></p><p>How can we navigate the complexities of ethical AI while fostering innovation? Share your thoughts in the comments!</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/62427727-fae8-44d7-996f-4d0b4ba79bc9/91c0271a97.jpg" />
  <pubDate>Fri, 14 Feb 2025 12:56:33 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="33003773" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/62427727-fae8-44d7-996f-4d0b4ba79bc9/episode.mp3" />
  <itunes:title><![CDATA[Ethical Considerations in Gen AI & Data Science | Big Data Expo Global, Olympia, London]]></itunes:title>
  <itunes:duration>34:22</itunes:duration>
  <itunes:summary><![CDATA[<p>At Big Data Expo Global, industry leaders came together to tackle one of the most pressing topics in AI today—Ethical Considerations in Gen AI and Data Science: Navigating Complex Terrain.</p><p><br></p><p>Sanjay Puri, Founder &amp; Chairman of Knowledge Networks, joined an esteemed panel of experts to explore the challenges and responsibilities in building ethical AI systems. The discussion featured:</p><p><br></p><p>🔹 Shairil Yahya – Legal Compliance Technology &amp; Solution Director, Philips</p><p>🔹 Emily Yang – Head of Human-Centred AI and Innovation, Standard Chartered</p><p>🔹 Larry Orimoloye – Principal Architect AI/ML - Field CTO, Snowflake</p><p>🔹 Sanjay Puri – Founder and Chairman, Knowledge Networks Group</p><p>🔹 Chandrashekhar Kachole – Chief Technology Officer</p><p>🔹 Saber Fallah – Professor of Safe AI and Autonomy, University of Surrey</p><p><br></p><p>How can we navigate the complexities of ethical AI while fostering innovation? Share your thoughts in the comments!</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>At Big Data Expo Global, industry leaders came together to tackle one of the most pressing topics in AI today—Ethical Considerations in Gen AI and Data Science: Navigating Complex Terrain.</p><p><br></p><p>Sanjay Puri, Founder &amp; Chairman of Knowledge Networks, joined an esteemed panel of experts to explore the challenges and responsibilities in building ethical AI systems. The discussion featured:</p><p><br></p><p>🔹 Shairil Yahya – Legal Compliance Technology &amp; Solution Director, Philips</p><p>🔹 Emily Yang – Head of Human-Centred AI and Innovation, Standard Chartered</p><p>🔹 Larry Orimoloye – Principal Architect AI/ML - Field CTO, Snowflake</p><p>🔹 Sanjay Puri – Founder and Chairman, Knowledge Networks Group</p><p>🔹 Chandrashekhar Kachole – Chief Technology Officer</p><p>🔹 Saber Fallah – Professor of Safe AI and Autonomy, University of Surrey</p><p><br></p><p>How can we navigate the complexities of ethical AI while fostering innovation? Share your thoughts in the comments!</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[At Big Data Expo Global, industry leaders came together to tackle one of the most pressing topics in AI today—Ethical Considerations in Gen AI and Data Science: Navigating Complex Terrain.Sanjay Puri, Founder & Chairman of Knowledge Networks, joine...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[3c345094-f770-4e71-9327-147a9b8e9ed7]]></guid>
  <title><![CDATA[The Big Debate: Ethics & Regulation – Navigating the Future of AI Governance]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">Watch the latest RegulatingAI Podcast from DeepFest 2025 featuring Mia Dand, Abir Habbal, Aadil Jaleel Choudhry, and Sanjay Puri (Moderator)! 🎙</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">In this insightful session, the panelists discuss the challenges and urgency of establishing responsible AI governance in a rapidly evolving landscape. Dive into a conversation on AI ethics, accountability, and regulation that highlights the need for thoughtful, global solutions.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">🔍 The Panelists Shared Their Thoughts On:</span></p><p><span style="color: rgb(13, 13, 13);">✅ The role of ethical guidelines in shaping AI innovation</span></p><p><span style="color: rgb(13, 13, 13);">✅ How transparency and accountability are critical for AI's future</span></p><p><span style="color: rgb(13, 13, 13);">✅ Addressing biases and maintaining fairness in AI models</span></p><p><span style="color: rgb(13, 13, 13);">✅ Navigating the complex relationship between regulation and innovation</span></p><p><span style="color: rgb(13, 13, 13);">✅ Global alignment for ethical AI practices across industries</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">A huge thank you to our moderator Sanjay Puri, all our esteemed panelists, and the DeepFest community for organising this essential conversation. The dialogue around AI regulation continues to grow, and these insights are helping shape a more ethical and accountable AI future.</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/0332c806-eb1f-4367-a024-c3cebff38c2a/974d289cc8.jpg" />
  <pubDate>Thu, 13 Feb 2025 14:10:00 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="37770597" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/0332c806-eb1f-4367-a024-c3cebff38c2a/episode.mp3" />
  <itunes:title><![CDATA[The Big Debate: Ethics & Regulation – Navigating the Future of AI Governance]]></itunes:title>
  <itunes:duration>39:20</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">Watch the latest RegulatingAI Podcast from DeepFest 2025 featuring Mia Dand, Abir Habbal, Aadil Jaleel Choudhry, and Sanjay Puri (Moderator)! 🎙</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">In this insightful session, the panelists discuss the challenges and urgency of establishing responsible AI governance in a rapidly evolving landscape. Dive into a conversation on AI ethics, accountability, and regulation that highlights the need for thoughtful, global solutions.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">🔍 The Panelists Shared Their Thoughts On:</span></p><p><span style="color: rgb(13, 13, 13);">✅ The role of ethical guidelines in shaping AI innovation</span></p><p><span style="color: rgb(13, 13, 13);">✅ How transparency and accountability are critical for AI's future</span></p><p><span style="color: rgb(13, 13, 13);">✅ Addressing biases and maintaining fairness in AI models</span></p><p><span style="color: rgb(13, 13, 13);">✅ Navigating the complex relationship between regulation and innovation</span></p><p><span style="color: rgb(13, 13, 13);">✅ Global alignment for ethical AI practices across industries</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">A huge thank you to our moderator Sanjay Puri, all our esteemed panelists, and the DeepFest community for organising this essential conversation. The dialogue around AI regulation continues to grow, and these insights are helping shape a more ethical and accountable AI future.</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">Watch the latest RegulatingAI Podcast from DeepFest 2025 featuring Mia Dand, Abir Habbal, Aadil Jaleel Choudhry, and Sanjay Puri (Moderator)! 🎙</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">In this insightful session, the panelists discuss the challenges and urgency of establishing responsible AI governance in a rapidly evolving landscape. Dive into a conversation on AI ethics, accountability, and regulation that highlights the need for thoughtful, global solutions.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">🔍 The Panelists Shared Their Thoughts On:</span></p><p><span style="color: rgb(13, 13, 13);">✅ The role of ethical guidelines in shaping AI innovation</span></p><p><span style="color: rgb(13, 13, 13);">✅ How transparency and accountability are critical for AI's future</span></p><p><span style="color: rgb(13, 13, 13);">✅ Addressing biases and maintaining fairness in AI models</span></p><p><span style="color: rgb(13, 13, 13);">✅ Navigating the complex relationship between regulation and innovation</span></p><p><span style="color: rgb(13, 13, 13);">✅ Global alignment for ethical AI practices across industries</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">A huge thank you to our moderator Sanjay Puri, all our esteemed panelists, and the DeepFest community for organising this essential conversation. The dialogue around AI regulation continues to grow, and these insights are helping shape a more ethical and accountable AI future.</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Watch the latest RegulatingAI Podcast from DeepFest 2025 featuring Mia Dand, Abir Habbal, Aadil Jaleel Choudhry, and Sanjay Puri (Moderator)! 🎙In this insightful session, the panelists discuss the challenges and urgency of establishing responsible ...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[aefc0984-5f32-4403-af7d-a24bcd9fcc6c]]></guid>
  <title><![CDATA[Panel Discussion: Exploring Government AI Readiness: Challenges, Strategies & Global Perspectives | LEAP 2025 ]]></title>
  <description><![CDATA[<p>Governments worldwide are navigating the complexities of AI adoption, from policy development to ethical considerations. In this insightful panel discussion, Margarete Schramboeck (Former Minister of Economy of Austria, Board Member &amp; Advisor, Aramco Digital), Lama Arabiat (Director of AI &amp; Advanced Technologies, Ministry of Digital Economy and Entrepreneurship, Jordan), and Abdullah AlThawad (Takamol) share their expertise on how nations can create AI frameworks that align with national priorities while fostering innovation, data governance, and international collaboration.&nbsp;</p><p>&nbsp;</p><p>Moderated by Sanjay Puri, Founder &amp; Chairman, Knowledge Networks, this conversation highlights the key challenges and opportunities in government AI readiness.&nbsp;</p><p>&nbsp;</p><p>🔹 What are the biggest hurdles in AI policy implementation?&nbsp;</p><p>🔹 How can governments balance innovation with responsible AI governance?&nbsp;</p><p>🔹 What role does international cooperation play in AI adoption?&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/28daf8c4-7759-4d8d-a5db-6dd838abbee6/94e4d1c85f.jpg" />
  <pubDate>Thu, 13 Feb 2025 13:52:00 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="38644132" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/28daf8c4-7759-4d8d-a5db-6dd838abbee6/episode.mp3" />
  <itunes:title><![CDATA[Panel Discussion: Exploring Government AI Readiness: Challenges, Strategies & Global Perspectives | LEAP 2025 ]]></itunes:title>
  <itunes:duration>40:15</itunes:duration>
  <itunes:summary><![CDATA[<p>Governments worldwide are navigating the complexities of AI adoption, from policy development to ethical considerations. In this insightful panel discussion, Margarete Schramboeck (Former Minister of Economy of Austria, Board Member &amp; Advisor, Aramco Digital), Lama Arabiat (Director of AI &amp; Advanced Technologies, Ministry of Digital Economy and Entrepreneurship, Jordan), and Abdullah AlThawad (Takamol) share their expertise on how nations can create AI frameworks that align with national priorities while fostering innovation, data governance, and international collaboration.&nbsp;</p><p>&nbsp;</p><p>Moderated by Sanjay Puri, Founder &amp; Chairman, Knowledge Networks, this conversation highlights the key challenges and opportunities in government AI readiness.&nbsp;</p><p>&nbsp;</p><p>🔹 What are the biggest hurdles in AI policy implementation?&nbsp;</p><p>🔹 How can governments balance innovation with responsible AI governance?&nbsp;</p><p>🔹 What role does international cooperation play in AI adoption?&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Governments worldwide are navigating the complexities of AI adoption, from policy development to ethical considerations. In this insightful panel discussion, Margarete Schramboeck (Former Minister of Economy of Austria, Board Member &amp; Advisor, Aramco Digital), Lama Arabiat (Director of AI &amp; Advanced Technologies, Ministry of Digital Economy and Entrepreneurship, Jordan), and Abdullah AlThawad (Takamol) share their expertise on how nations can create AI frameworks that align with national priorities while fostering innovation, data governance, and international collaboration.&nbsp;</p><p>&nbsp;</p><p>Moderated by Sanjay Puri, Founder &amp; Chairman, Knowledge Networks, this conversation highlights the key challenges and opportunities in government AI readiness.&nbsp;</p><p>&nbsp;</p><p>🔹 What are the biggest hurdles in AI policy implementation?&nbsp;</p><p>🔹 How can governments balance innovation with responsible AI governance?&nbsp;</p><p>🔹 What role does international cooperation play in AI adoption?&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Governments worldwide are navigating the complexities of AI adoption, from policy development to ethical considerations. In this insightful panel discussion, Margarete Schramboeck (Former Minister of Economy of Austria, Board Member & Advisor, Aram...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[be91b33d-7725-45de-822d-5f17ce50a788]]></guid>
  <title><![CDATA[Ethics, AI, & Medicine: The Future of Healthcare with Dr. Dominique Monlezun ]]></title>
  <description><![CDATA[<p class="ql-align-justify">In this episode of <strong>RegulatingAI Podcast</strong>, <strong>Dr. Dominique J. Monlezun</strong>—the world’s first <strong>triple-doctorate-trained physician, data scientist, and AI ethicist</strong>—shares his extraordinary journey from a small farming town to shaping global AI healthcare policies.&nbsp;</p><p class="ql-align-justify">With a background spanning <strong>cardio-oncology, public health, and AI ethics</strong>, Dr. Monlezun brings a <strong>unique, multidisciplinary approach</strong> to healthcare innovation. He discusses <strong>how AI can bridge health disparities</strong>, <strong>why AI literacy is essential</strong>, and <strong>the ethical challenges of AI-driven medicine</strong>.</p><p class="ql-align-justify">&nbsp;</p><p>🎙️ <strong>Key Takeaways</strong>&nbsp;</p><p>✅ <strong>AI Isn’t Replacing Doctors – Doctors Who Use AI Are</strong> – Why upskilling in AI is crucial for healthcare professionals.&nbsp;</p><p>✅ <strong>Bridging the AI Adoption Gap</strong> – The divide between healthcare systems adopting AI and those falling behind.&nbsp;</p><p>✅ <strong>AI Ethics &amp; Patient Trust</strong> – The role of AI in healthcare and ensuring responsible governance.&nbsp;</p><p>✅ <strong>Global Collaboration in AI</strong> – Why AI development must be inclusive, ethical, and internationally cooperative.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/19b3f6ab-7d39-4629-be77-519a12920a0f/250682e9fc.jpg" />
  <pubDate>Tue, 11 Feb 2025 15:29:14 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="53385552" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/19b3f6ab-7d39-4629-be77-519a12920a0f/episode.mp3" />
  <itunes:title><![CDATA[Ethics, AI, & Medicine: The Future of Healthcare with Dr. Dominique Monlezun ]]></itunes:title>
  <itunes:duration>55:36</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">In this episode of <strong>RegulatingAI Podcast</strong>, <strong>Dr. Dominique J. Monlezun</strong>—the world’s first <strong>triple-doctorate-trained physician, data scientist, and AI ethicist</strong>—shares his extraordinary journey from a small farming town to shaping global AI healthcare policies.&nbsp;</p><p class="ql-align-justify">With a background spanning <strong>cardio-oncology, public health, and AI ethics</strong>, Dr. Monlezun brings a <strong>unique, multidisciplinary approach</strong> to healthcare innovation. He discusses <strong>how AI can bridge health disparities</strong>, <strong>why AI literacy is essential</strong>, and <strong>the ethical challenges of AI-driven medicine</strong>.</p><p class="ql-align-justify">&nbsp;</p><p>🎙️ <strong>Key Takeaways</strong>&nbsp;</p><p>✅ <strong>AI Isn’t Replacing Doctors – Doctors Who Use AI Are</strong> – Why upskilling in AI is crucial for healthcare professionals.&nbsp;</p><p>✅ <strong>Bridging the AI Adoption Gap</strong> – The divide between healthcare systems adopting AI and those falling behind.&nbsp;</p><p>✅ <strong>AI Ethics &amp; Patient Trust</strong> – The role of AI in healthcare and ensuring responsible governance.&nbsp;</p><p>✅ <strong>Global Collaboration in AI</strong> – Why AI development must be inclusive, ethical, and internationally cooperative.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">In this episode of <strong>RegulatingAI Podcast</strong>, <strong>Dr. Dominique J. Monlezun</strong>—the world’s first <strong>triple-doctorate-trained physician, data scientist, and AI ethicist</strong>—shares his extraordinary journey from a small farming town to shaping global AI healthcare policies.&nbsp;</p><p class="ql-align-justify">With a background spanning <strong>cardio-oncology, public health, and AI ethics</strong>, Dr. Monlezun brings a <strong>unique, multidisciplinary approach</strong> to healthcare innovation. He discusses <strong>how AI can bridge health disparities</strong>, <strong>why AI literacy is essential</strong>, and <strong>the ethical challenges of AI-driven medicine</strong>.</p><p class="ql-align-justify">&nbsp;</p><p>🎙️ <strong>Key Takeaways</strong>&nbsp;</p><p>✅ <strong>AI Isn’t Replacing Doctors – Doctors Who Use AI Are</strong> – Why upskilling in AI is crucial for healthcare professionals.&nbsp;</p><p>✅ <strong>Bridging the AI Adoption Gap</strong> – The divide between healthcare systems adopting AI and those falling behind.&nbsp;</p><p>✅ <strong>AI Ethics &amp; Patient Trust</strong> – The role of AI in healthcare and ensuring responsible governance.&nbsp;</p><p>✅ <strong>Global Collaboration in AI</strong> – Why AI development must be inclusive, ethical, and internationally cooperative.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of RegulatingAI Podcast, Dr. Dominique J. Monlezun—the world’s first triple-doctorate-trained physician, data scientist, and AI ethicist—shares his extraordinary journey from a small farming town to shaping global AI healthcare poli...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[5bcf17b9-160b-4947-9cac-e51c0682c0f1]]></guid>
  <title><![CDATA[AI Compliance, Privacy & Bias – Can We Fix It? | Ft. Shachar Schnapp, PVML ]]></title>
  <description><![CDATA[<p>Watch the latest RegulatingAI Podcast at <strong>AI Big Data Expo, London</strong> featuring <strong>Shachar Schnapp, Co-Founder &amp; CEO at PVML</strong>! 🎙&nbsp;</p><p>He discusses how AI can navigate global compliance challenges, mitigate bias, and enhance data privacy with cutting-edge techniques. Don't miss this deep dive into the evolving AI landscape!&nbsp;</p><p><br></p><p>🔍 <strong>Key Topics:</strong>&nbsp;</p><p>✅ The role of <strong>differential privacy</strong> in AI compliance&nbsp;</p><p>✅ How <strong>retrieval-augmented generation (RAG)</strong> minimizes AI hallucinations&nbsp;</p><p>✅ The <strong>bias problem</strong> in AI models—can it ever be solved?&nbsp;</p><p>✅ The <strong>limitations of synthetic data</strong> in analysis and decision-making&nbsp;</p><p>✅ The <strong>rise of open-source AI models</strong> and their regulatory challenges&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/3afb9a66-1ddd-479e-84a8-c46940fa0450/edaeaf05fd.jpg" />
  <pubDate>Tue, 11 Feb 2025 07:19:15 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="15318666" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/3afb9a66-1ddd-479e-84a8-c46940fa0450/episode.mp3" />
  <itunes:title><![CDATA[AI Compliance, Privacy & Bias – Can We Fix It? | Ft. Shachar Schnapp, PVML ]]></itunes:title>
  <itunes:duration>15:57</itunes:duration>
  <itunes:summary><![CDATA[<p>Watch the latest RegulatingAI Podcast at <strong>AI Big Data Expo, London</strong> featuring <strong>Shachar Schnapp, Co-Founder &amp; CEO at PVML</strong>! 🎙&nbsp;</p><p>He discusses how AI can navigate global compliance challenges, mitigate bias, and enhance data privacy with cutting-edge techniques. Don't miss this deep dive into the evolving AI landscape!&nbsp;</p><p><br></p><p>🔍 <strong>Key Topics:</strong>&nbsp;</p><p>✅ The role of <strong>differential privacy</strong> in AI compliance&nbsp;</p><p>✅ How <strong>retrieval-augmented generation (RAG)</strong> minimizes AI hallucinations&nbsp;</p><p>✅ The <strong>bias problem</strong> in AI models—can it ever be solved?&nbsp;</p><p>✅ The <strong>limitations of synthetic data</strong> in analysis and decision-making&nbsp;</p><p>✅ The <strong>rise of open-source AI models</strong> and their regulatory challenges&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Watch the latest RegulatingAI Podcast at <strong>AI Big Data Expo, London</strong> featuring <strong>Shachar Schnapp, Co-Founder &amp; CEO at PVML</strong>! 🎙&nbsp;</p><p>He discusses how AI can navigate global compliance challenges, mitigate bias, and enhance data privacy with cutting-edge techniques. Don't miss this deep dive into the evolving AI landscape!&nbsp;</p><p><br></p><p>🔍 <strong>Key Topics:</strong>&nbsp;</p><p>✅ The role of <strong>differential privacy</strong> in AI compliance&nbsp;</p><p>✅ How <strong>retrieval-augmented generation (RAG)</strong> minimizes AI hallucinations&nbsp;</p><p>✅ The <strong>bias problem</strong> in AI models—can it ever be solved?&nbsp;</p><p>✅ The <strong>limitations of synthetic data</strong> in analysis and decision-making&nbsp;</p><p>✅ The <strong>rise of open-source AI models</strong> and their regulatory challenges&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Watch the latest RegulatingAI Podcast at AI Big Data Expo, London featuring Shachar Schnapp, Co-Founder & CEO at PVML! 🎙 He discusses how AI can navigate global compliance challenges, mitigate bias, and enhance data privacy with cutting-edge techniq...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[fc2aac94-3aa7-448d-a934-fcfddfd8c5ca]]></guid>
  <title><![CDATA[AI in Healthcare: Innovations, Challenges, and the Path to Commercialization Ft. Dr. Ahmed Serag ]]></title>
  <description><![CDATA[<p><strong>Listen to our latest episode of the RegulatingAI Podcast </strong>with<strong> Dr. Ahmed Serag, Professor, Founder and Director, AI Innovation Lab at Cornell University,</strong> as he discusses the future of AI in medicine.&nbsp;</p><p>Dr. Serag shares insights on:&nbsp;</p><p>✅ The challenges of using AI in healthcare—only 20% of global medical data is AI-ready.&nbsp;</p><p>✅ The role of synthetic data in medical AI—how it can protect privacy while accelerating research.&nbsp;</p><p>✅ Why AI is reshaping drug discovery and clinical trials, cutting timelines from years to months.&nbsp;</p><p>✅ The need for global standards in medical data privacy and AI-driven diagnostics.&nbsp;</p><p>✅ How AI is assisting, not replacing, doctors—and why physicians who embrace AI will lead the future.&nbsp;</p><p><br></p><p>Dr. Serag highlights how AI is revolutionizing medicine, from radiology to digital twins, and the critical role of collaboration between researchers, policymakers, and clinicians.&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/9c5aea9b-362e-4b4c-862b-54366958e753/facb6e2297.jpg" />
  <pubDate>Mon, 10 Feb 2025 14:53:11 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="19270888" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/9c5aea9b-362e-4b4c-862b-54366958e753/episode.mp3" />
  <itunes:title><![CDATA[AI in Healthcare: Innovations, Challenges, and the Path to Commercialization Ft. Dr. Ahmed Serag ]]></itunes:title>
  <itunes:duration>20:04</itunes:duration>
  <itunes:summary><![CDATA[<p><strong>Listen to our latest episode of the RegulatingAI Podcast </strong>with<strong> Dr. Ahmed Serag, Professor, Founder and Director, AI Innovation Lab at Cornell University,</strong> as he discusses the future of AI in medicine.&nbsp;</p><p>Dr. Serag shares insights on:&nbsp;</p><p>✅ The challenges of using AI in healthcare—only 20% of global medical data is AI-ready.&nbsp;</p><p>✅ The role of synthetic data in medical AI—how it can protect privacy while accelerating research.&nbsp;</p><p>✅ Why AI is reshaping drug discovery and clinical trials, cutting timelines from years to months.&nbsp;</p><p>✅ The need for global standards in medical data privacy and AI-driven diagnostics.&nbsp;</p><p>✅ How AI is assisting, not replacing, doctors—and why physicians who embrace AI will lead the future.&nbsp;</p><p><br></p><p>Dr. Serag highlights how AI is revolutionizing medicine, from radiology to digital twins, and the critical role of collaboration between researchers, policymakers, and clinicians.&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><strong>Listen to our latest episode of the RegulatingAI Podcast </strong>with<strong> Dr. Ahmed Serag, Professor, Founder and Director, AI Innovation Lab at Cornell University,</strong> as he discusses the future of AI in medicine.&nbsp;</p><p>Dr. Serag shares insights on:&nbsp;</p><p>✅ The challenges of using AI in healthcare—only 20% of global medical data is AI-ready.&nbsp;</p><p>✅ The role of synthetic data in medical AI—how it can protect privacy while accelerating research.&nbsp;</p><p>✅ Why AI is reshaping drug discovery and clinical trials, cutting timelines from years to months.&nbsp;</p><p>✅ The need for global standards in medical data privacy and AI-driven diagnostics.&nbsp;</p><p>✅ How AI is assisting, not replacing, doctors—and why physicians who embrace AI will lead the future.&nbsp;</p><p><br></p><p>Dr. Serag highlights how AI is revolutionizing medicine, from radiology to digital twins, and the critical role of collaboration between researchers, policymakers, and clinicians.&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Listen our latest Podcast on the RegulatingAI Podcast with Dr. Ahmed Serag, Professor, Founder and Director, AI Innovation Lab at Cornell University, as he discusses the future of AI in medicine. Dr. Serag shares insights on: ✅ The challenges of us...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[38f6dbba-6d14-48fa-9a0f-67cb730a9a51]]></guid>
  <title><![CDATA[Impact of AI and Digital Transformation in HealthTech and Healthcare Ft. Amr Metwally ]]></title>
  <description><![CDATA[<p>Listen to our latest episode of the <strong>RegulatingAI Podcast </strong>with<strong> Amr Metwally, Director of Innovation </strong>at<strong> Hamad Medical Corporation, </strong>as he discusses AI’s transformative role in healthcare.</p><p>&nbsp;</p><p>He shared insights on:&nbsp;</p><p>✅ How AI is reshaping medical diagnostics, from radiology to oncology.&nbsp;</p><p>✅ The role of AI in expanding healthcare access in underserved regions.&nbsp;</p><p>✅ The ethical and regulatory challenges of patient data privacy.&nbsp;</p><p>✅ Qatar’s approach to AI governance and healthcare innovation.&nbsp;</p><p>✅ Why AI won’t replace doctors—but doctors using AI will outperform those who don’t.&nbsp;</p><p><br></p><p>Amr highlights how AI is not just an add-on but a fundamental shift in how healthcare is delivered, urging leaders to embrace innovation while ensuring ethical safeguards.&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/e18eeced-fb3f-4a7d-911d-a44f2bc9270f/a62d45a0f1.jpg" />
  <pubDate>Mon, 10 Feb 2025 14:35:59 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="20560710" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/e18eeced-fb3f-4a7d-911d-a44f2bc9270f/episode.mp3" />
  <itunes:title><![CDATA[Impact of AI and Digital Transformation in HealthTech and Healthcare Ft. Amr Metwally ]]></itunes:title>
  <itunes:duration>21:25</itunes:duration>
  <itunes:summary><![CDATA[<p>Listen to our latest episode of the <strong>RegulatingAI Podcast </strong>with<strong> Amr Metwally, Director of Innovation </strong>at<strong> Hamad Medical Corporation, </strong>as he discusses AI’s transformative role in healthcare.</p><p>&nbsp;</p><p>He shared insights on:&nbsp;</p><p>✅ How AI is reshaping medical diagnostics, from radiology to oncology.&nbsp;</p><p>✅ The role of AI in expanding healthcare access in underserved regions.&nbsp;</p><p>✅ The ethical and regulatory challenges of patient data privacy.&nbsp;</p><p>✅ Qatar’s approach to AI governance and healthcare innovation.&nbsp;</p><p>✅ Why AI won’t replace doctors—but doctors using AI will outperform those who don’t.&nbsp;</p><p><br></p><p>Amr highlights how AI is not just an add-on but a fundamental shift in how healthcare is delivered, urging leaders to embrace innovation while ensuring ethical safeguards.&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Listen to our latest episode of the <strong>RegulatingAI Podcast </strong>with<strong> Amr Metwally, Director of Innovation </strong>at<strong> Hamad Medical Corporation, </strong>as he discusses AI’s transformative role in healthcare.</p><p>&nbsp;</p><p>He shared insights on:&nbsp;</p><p>✅ How AI is reshaping medical diagnostics, from radiology to oncology.&nbsp;</p><p>✅ The role of AI in expanding healthcare access in underserved regions.&nbsp;</p><p>✅ The ethical and regulatory challenges of patient data privacy.&nbsp;</p><p>✅ Qatar’s approach to AI governance and healthcare innovation.&nbsp;</p><p>✅ Why AI won’t replace doctors—but doctors using AI will outperform those who don’t.&nbsp;</p><p><br></p><p>Amr highlights how AI is not just an add-on but a fundamental shift in how healthcare is delivered, urging leaders to embrace innovation while ensuring ethical safeguards.&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Listen to our latest episode of the RegulatingAI Podcast with Amr Metwally, Director of Innovation at Hamad Medical Corporation, as he discusses AI’s transformative role in healthcare. He shared insights on: ✅ How AI is reshaping medical diagnostics, fr...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[8bf0e4e6-ec13-4bbb-bb64-95cce9c0d1d9]]></guid>
  <title><![CDATA[Preparing for Q-Day: Quantum Resilience and the AI-Quantum Revolution Ft. Areiel Wolanow ]]></title>
  <description><![CDATA[<p><strong>Listen to our latest episode of the RegulatingAI Podcast with Areiel Wolanow, Founder &amp; Managing Director </strong>of<strong> Finserv Experts, </strong>as he breaks down AI adoption, business transformation, and quantum computing.&nbsp;</p><p>He shared insights on:&nbsp;</p><p>✅ Why 90% of companies investing in AI won’t see ROI—and how to be in the winning 10%.&nbsp;</p><p>✅ The critical need for new business and operating models to fully leverage AI.&nbsp;</p><p>✅ How organizations can ensure AI governance, compliance, and accountability.&nbsp;</p><p>✅ Whether companies should hire a Chief AI Officer and what the role should actually entail.&nbsp;</p><p>✅ The intersection of AI and quantum computing—how it can supercharge machine learning and data security.&nbsp;</p><p>Areiel highlights why AI-driven transformation isn’t just about tech—it’s about strategic leadership, ethical frameworks, and a deep understanding of business impact.&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/07b8e237-4e1f-4653-9d85-4325686efac5/765ebb277d.jpg" />
  <pubDate>Mon, 10 Feb 2025 14:33:21 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="13546101" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/07b8e237-4e1f-4653-9d85-4325686efac5/episode.mp3" />
  <itunes:title><![CDATA[Preparing for Q-Day: Quantum Resilience and the AI-Quantum Revolution Ft. Areiel Wolanow ]]></itunes:title>
  <itunes:duration>14:06</itunes:duration>
  <itunes:summary><![CDATA[<p><strong>Listen to our latest episode of the RegulatingAI Podcast with Areiel Wolanow, Founder &amp; Managing Director </strong>of<strong> Finserv Experts, </strong>as he breaks down AI adoption, business transformation, and quantum computing.&nbsp;</p><p>He shared insights on:&nbsp;</p><p>✅ Why 90% of companies investing in AI won’t see ROI—and how to be in the winning 10%.&nbsp;</p><p>✅ The critical need for new business and operating models to fully leverage AI.&nbsp;</p><p>✅ How organizations can ensure AI governance, compliance, and accountability.&nbsp;</p><p>✅ Whether companies should hire a Chief AI Officer and what the role should actually entail.&nbsp;</p><p>✅ The intersection of AI and quantum computing—how it can supercharge machine learning and data security.&nbsp;</p><p>Areiel highlights why AI-driven transformation isn’t just about tech—it’s about strategic leadership, ethical frameworks, and a deep understanding of business impact.&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><strong>Listen to our latest episode of the RegulatingAI Podcast with Areiel Wolanow, Founder &amp; Managing Director </strong>of<strong> Finserv Experts, </strong>as he breaks down AI adoption, business transformation, and quantum computing.&nbsp;</p><p>He shared insights on:&nbsp;</p><p>✅ Why 90% of companies investing in AI won’t see ROI—and how to be in the winning 10%.&nbsp;</p><p>✅ The critical need for new business and operating models to fully leverage AI.&nbsp;</p><p>✅ How organizations can ensure AI governance, compliance, and accountability.&nbsp;</p><p>✅ Whether companies should hire a Chief AI Officer and what the role should actually entail.&nbsp;</p><p>✅ The intersection of AI and quantum computing—how it can supercharge machine learning and data security.&nbsp;</p><p>Areiel highlights why AI-driven transformation isn’t just about tech—it’s about strategic leadership, ethical frameworks, and a deep understanding of business impact.&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Listen to our latest episode of the RegulatingAI Podcast with Areiel Wolanow, Founder & Managing Director of Finserv Experts, as he breaks down AI adoption, business transformation, and quantum computing. He shared insights on: ✅ Why 90% of companies in...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[cf5d16df-cdf4-490b-9797-33818e478de5]]></guid>
  <title><![CDATA[Higher Education and Skills | Ft. Dr Mandy Crawford-Lee FRSA]]></title>
  <description><![CDATA[<p>Welcome to the <em>Regulating AI</em> podcast, coming to you live from the <strong>Big Data &amp; TechX4 Conference in London (Feb 6, 2025)</strong>!</p><p>In this episode, we sit down with <strong>Dr. Mandy Crawford-Lee</strong>, CEO of the <strong>University Vocational Awards Council (UVAC)</strong>, to discuss how AI is reshaping education, workforce development, and policy frameworks.</p><h3>🔍 <strong>Key Topics Covered:</strong></h3><p>✅ How AI is transforming education through personalization</p><p>✅ The role of AI in workforce evolution &amp; reskilling initiatives</p><p>✅ Policy &amp; regulation challenges in AI adoption</p><p>✅ The digital divide—who benefits and who gets left behind?</p><p>✅ The future of AI in higher education &amp; beyond</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/c74a407c-8019-4595-9154-854da4948b2d/b7dc79a38d.jpg" />
  <pubDate>Sat, 08 Feb 2025 01:59:30 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="16325947" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/c74a407c-8019-4595-9154-854da4948b2d/episode.mp3" />
  <itunes:title><![CDATA[Higher Education and Skills | Ft. Dr Mandy Crawford-Lee FRSA]]></itunes:title>
  <itunes:duration>17:00</itunes:duration>
  <itunes:summary><![CDATA[<p>Welcome to the <em>Regulating AI</em> podcast, coming to you live from the <strong>Big Data &amp; TechX4 Conference in London (Feb 6, 2025)</strong>!</p><p>In this episode, we sit down with <strong>Dr. Mandy Crawford-Lee</strong>, CEO of the <strong>University Vocational Awards Council (UVAC)</strong>, to discuss how AI is reshaping education, workforce development, and policy frameworks.</p><h3>🔍 <strong>Key Topics Covered:</strong></h3><p>✅ How AI is transforming education through personalization</p><p>✅ The role of AI in workforce evolution &amp; reskilling initiatives</p><p>✅ Policy &amp; regulation challenges in AI adoption</p><p>✅ The digital divide—who benefits and who gets left behind?</p><p>✅ The future of AI in higher education &amp; beyond</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Welcome to the <em>Regulating AI</em> podcast, coming to you live from the <strong>Big Data &amp; TechX4 Conference in London (Feb 6, 2025)</strong>!</p><p>In this episode, we sit down with <strong>Dr. Mandy Crawford-Lee</strong>, CEO of the <strong>University Vocational Awards Council (UVAC)</strong>, to discuss how AI is reshaping education, workforce development, and policy frameworks.</p><h3>🔍 <strong>Key Topics Covered:</strong></h3><p>✅ How AI is transforming education through personalization</p><p>✅ The role of AI in workforce evolution &amp; reskilling initiatives</p><p>✅ Policy &amp; regulation challenges in AI adoption</p><p>✅ The digital divide—who benefits and who gets left behind?</p><p>✅ The future of AI in higher education &amp; beyond</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Welcome to the Regulating AI podcast, coming to you live from the Big Data & TechX4 Conference in London (Feb 6, 2025)!In this episode, we sit down with Dr. Mandy Crawford-Lee, CEO of the University Vocational Awards Council (UVAC), to discuss how ...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[380574df-bf9f-4f35-a525-d1ca2793d110]]></guid>
  <title><![CDATA[Adoption of AI Across the Economy | Ft. Tim Cook, Founder of AIConfident]]></title>
  <description><![CDATA[<p>In this episode, <strong>Tim Cook</strong>, Founder of <strong>AIConfident</strong>, explores the <strong>Adoption of AI Across the Economy</strong>. He discusses how AI is revolutionizing various industries and what lies ahead for its integration into the global economy. Tune in to this insightful discussion on AI’s growing impact and the future of innovation.&nbsp;</p><p><br></p><p>🎙️ <strong>Key Takeaways:</strong>&nbsp;</p><p>✅ <strong>AI Integration at Scale</strong> – How businesses are leveraging AI to drive efficiency, innovation, and competitive advantage.&nbsp;</p><p>✅ <strong>Bridging the AI Adoption Gap</strong> – Addressing challenges in AI implementation across sectors and strategies for overcoming them.&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/ebea2873-952c-43b6-8eef-242fa1974fe6/03629ff370.jpg" />
  <pubDate>Sat, 08 Feb 2025 01:57:04 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="16311319" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/ebea2873-952c-43b6-8eef-242fa1974fe6/episode.mp3" />
  <itunes:title><![CDATA[Adoption of AI Across the Economy | Ft. Tim Cook, Founder of AIConfident]]></itunes:title>
  <itunes:duration>16:59</itunes:duration>
  <itunes:summary><![CDATA[<p>In this episode, <strong>Tim Cook</strong>, Founder of <strong>AIConfident</strong>, explores the <strong>Adoption of AI Across the Economy</strong>. He discusses how AI is revolutionizing various industries and what lies ahead for its integration into the global economy. Tune in to this insightful discussion on AI’s growing impact and the future of innovation.&nbsp;</p><p><br></p><p>🎙️ <strong>Key Takeaways:</strong>&nbsp;</p><p>✅ <strong>AI Integration at Scale</strong> – How businesses are leveraging AI to drive efficiency, innovation, and competitive advantage.&nbsp;</p><p>✅ <strong>Bridging the AI Adoption Gap</strong> – Addressing challenges in AI implementation across sectors and strategies for overcoming them.&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this episode, <strong>Tim Cook</strong>, Founder of <strong>AIConfident</strong>, explores the <strong>Adoption of AI Across the Economy</strong>. He discusses how AI is revolutionizing various industries and what lies ahead for its integration into the global economy. Tune in to this insightful discussion on AI’s growing impact and the future of innovation.&nbsp;</p><p><br></p><p>🎙️ <strong>Key Takeaways:</strong>&nbsp;</p><p>✅ <strong>AI Integration at Scale</strong> – How businesses are leveraging AI to drive efficiency, innovation, and competitive advantage.&nbsp;</p><p>✅ <strong>Bridging the AI Adoption Gap</strong> – Addressing challenges in AI implementation across sectors and strategies for overcoming them.&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode, Tim Cook, Founder of AIConfident, explores the Adoption of AI Across the Economy. He discusses how AI is revolutionizing various industries and what lies ahead for its integration into the global economy. Tune in to this insightful disc...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[a7be1452-b0c9-4b2b-b835-799e967d6376]]></guid>
  <title><![CDATA[The Future of AI & Global Power: Anja Manuel on Deep Seek R1 & US-China Relations ]]></title>
  <description><![CDATA[<p>🚨 China’s Deep Seek R1 AI model is raising big questions about global security and AI dominance. 🚨&nbsp;</p><p><br></p><p>In this eye-opening conversation on the RegulatingAI Podcast, <strong>Anja Manuel</strong>—foreign policy expert, advisor, and former diplomat—joins <strong>Sanjay Puri</strong> to discuss:&nbsp;</p><p><br></p><ul><li>How does <strong>Deep Seek R1 compare to U.S. AI models?</strong>&nbsp;</li><li>Did <strong>China bypass U.S. export controls</strong> to access restricted chips?&nbsp;</li><li>Are we heading toward a <strong>global AI divide</strong> with separate U.S.-China AI ecosystems?&nbsp;</li><li>What <strong>national security risks</strong> does AI pose, from cyber warfare to AI-powered weapons?&nbsp;</li></ul><p><strong>AI isn’t just about innovation—it’s about power.</strong> Listen to the latest episode to stay informed on the future of AI governance and global competition.&nbsp;</p><p><br></p><p>👉 Subscribe to <strong>RegulatingAI</strong> for cutting-edge discussions on AI policy and strategy!&nbsp;</p><p><br></p><p>&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/d99044f5-1a13-4806-9292-3e7a491abb13/95c19e6761.jpg" />
  <pubDate>Thu, 06 Feb 2025 15:13:16 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="17018924" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/d99044f5-1a13-4806-9292-3e7a491abb13/episode.mp3" />
  <itunes:title><![CDATA[The Future of AI & Global Power: Anja Manuel on Deep Seek R1 & US-China Relations ]]></itunes:title>
  <itunes:duration>17:43</itunes:duration>
  <itunes:summary><![CDATA[<p>🚨 China’s Deep Seek R1 AI model is raising big questions about global security and AI dominance. 🚨&nbsp;</p><p><br></p><p>In this eye-opening conversation on the RegulatingAI Podcast, <strong>Anja Manuel</strong>—foreign policy expert, advisor, and former diplomat—joins <strong>Sanjay Puri</strong> to discuss:&nbsp;</p><p><br></p><ul><li>How does <strong>Deep Seek R1 compare to U.S. AI models?</strong>&nbsp;</li><li>Did <strong>China bypass U.S. export controls</strong> to access restricted chips?&nbsp;</li><li>Are we heading toward a <strong>global AI divide</strong> with separate U.S.-China AI ecosystems?&nbsp;</li><li>What <strong>national security risks</strong> does AI pose, from cyber warfare to AI-powered weapons?&nbsp;</li></ul><p><strong>AI isn’t just about innovation—it’s about power.</strong> Listen to the latest episode to stay informed on the future of AI governance and global competition.&nbsp;</p><p><br></p><p>👉 Subscribe to <strong>RegulatingAI</strong> for cutting-edge discussions on AI policy and strategy!&nbsp;</p><p><br></p><p>&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>🚨 China’s Deep Seek R1 AI model is raising big questions about global security and AI dominance. 🚨&nbsp;</p><p><br></p><p>In this eye-opening conversation on the RegulatingAI Podcast, <strong>Anja Manuel</strong>—foreign policy expert, advisor, and former diplomat—joins <strong>Sanjay Puri</strong> to discuss:&nbsp;</p><p><br></p><ul><li>How does <strong>Deep Seek R1 compare to U.S. AI models?</strong>&nbsp;</li><li>Did <strong>China bypass U.S. export controls</strong> to access restricted chips?&nbsp;</li><li>Are we heading toward a <strong>global AI divide</strong> with separate U.S.-China AI ecosystems?&nbsp;</li><li>What <strong>national security risks</strong> does AI pose, from cyber warfare to AI-powered weapons?&nbsp;</li></ul><p><strong>AI isn’t just about innovation—it’s about power.</strong> Listen to the latest episode to stay informed on the future of AI governance and global competition.&nbsp;</p><p><br></p><p>👉 Subscribe to <strong>RegulatingAI</strong> for cutting-edge discussions on AI policy and strategy!&nbsp;</p><p><br></p><p>&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[🚨 China’s Deep Seek R1 AI model is raising big questions about global security and AI dominance. 🚨 In this eye-opening conversation on the RegulatingAI Podcast, Anja Manuel—foreign policy expert, advisor, and former diplomat—joins Sanjay Puri to di...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[398ee11d-09b9-4062-acc7-4b47dcc66370]]></guid>
  <title><![CDATA[Role of AI Skills in Future of Work with Isa Mutlib | The RegulatingAI Podcast]]></title>
  <description><![CDATA[<p>Listen to our latest episode of the RegulatingAI Podcast with Isa Mutlib, Founder of Portland AI, as she talks about the Role of AI Skills in the Future of Work.</p><p><br></p><p>She shared her insights on:</p><p><br></p><p>✅ How AI is disrupting the global workforce and what it means for policymakers, businesses, and employees.</p><p>✅ The speed of digital transformation and why it's different from previous technological revolutions.</p><p>✅ AI’s impact on developing countries—can AI be a great equalizer in global innovation?</p><p>✅ The need for open-source AI vs. concerns around security and control.</p><p>✅ Practical advice for professionals worried about AI replacing their jobs—how to upskill and stay ahead.</p><p><br></p><p>Isa highlights how AI presents both challenges and opportunities, urging leaders and individuals to embrace change rather than resist it.</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/486627d7-b954-4350-91c2-52b3e8ddbff5/b79581d92e.jpg" />
  <pubDate>Thu, 06 Feb 2025 14:06:00 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="12328168" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/486627d7-b954-4350-91c2-52b3e8ddbff5/episode.mp3" />
  <itunes:title><![CDATA[Role of AI Skills in Future of Work with Isa Mutlib | The RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>12:50</itunes:duration>
  <itunes:summary><![CDATA[<p>Listen to our latest episode of the RegulatingAI Podcast with Isa Mutlib, Founder of Portland AI, as she talks about the Role of AI Skills in the Future of Work.</p><p><br></p><p>She shared her insights on:</p><p><br></p><p>✅ How AI is disrupting the global workforce and what it means for policymakers, businesses, and employees.</p><p>✅ The speed of digital transformation and why it's different from previous technological revolutions.</p><p>✅ AI’s impact on developing countries—can AI be a great equalizer in global innovation?</p><p>✅ The need for open-source AI vs. concerns around security and control.</p><p>✅ Practical advice for professionals worried about AI replacing their jobs—how to upskill and stay ahead.</p><p><br></p><p>Isa highlights how AI presents both challenges and opportunities, urging leaders and individuals to embrace change rather than resist it.</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Listen to our latest episode of the RegulatingAI Podcast with Isa Mutlib, Founder of Portland AI, as she talks about the Role of AI Skills in the Future of Work.</p><p><br></p><p>She shared her insights on:</p><p><br></p><p>✅ How AI is disrupting the global workforce and what it means for policymakers, businesses, and employees.</p><p>✅ The speed of digital transformation and why it's different from previous technological revolutions.</p><p>✅ AI’s impact on developing countries—can AI be a great equalizer in global innovation?</p><p>✅ The need for open-source AI vs. concerns around security and control.</p><p>✅ Practical advice for professionals worried about AI replacing their jobs—how to upskill and stay ahead.</p><p><br></p><p>Isa highlights how AI presents both challenges and opportunities, urging leaders and individuals to embrace change rather than resist it.</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Listen to our latest episode of the RegulatingAI Podcast with Isa Mutlib, Founder of Portland AI, as she talks about the Role of AI Skills in the Future of Work. She shared her insights on: ✅ How AI is disrupting the global workforce and what it means for policymakers, b...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[322e5201-15d1-4be6-b1bc-7159db485f56]]></guid>
  <title><![CDATA[AI in Education: Augmented Debate & AI Pluralism Ft. Prof Anand Rao ]]></title>
  <description><![CDATA[<p>Join <strong>Sanjay Puri</strong>, Founder &amp; Chairman of <strong>Knowledge Networks</strong>, as he talks about the future of AI in education with Prof. Anand Rao at <strong>AI Big Data Global, Olympia, London</strong>.&nbsp;</p><p>&nbsp;</p><p>In this episode of <strong><em>The RegulatingAI Podcast</em></strong>, we explore innovative approaches to preparing students for an AI-driven world through debate-based learning and AI pluralism.&nbsp;</p><p>Our guest, <strong>Prof Anand Rao</strong>, <em>Professor of Communication &amp; Digital Studies and Director, Center for AI and the Liberal Arts, University of Mary Washington</em>, shares insights on:&nbsp;</p><p>🔹 <strong>Augmented Debate-Centered Instruction</strong> – A transformative model that uses debate to develop essential skills like critical thinking, collaboration, and research.&nbsp;</p><p>🔹 <strong>AI Pluralism</strong> – A novel approach promoting diversity in AI agents to improve transparency, contestability, and bias mitigation.&nbsp;</p><p><br></p><p>Discover how integrating these strategies can enhance education, foster critical thinking, and address key challenges in AI development.&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/ecc419a8-b91e-4bcc-92be-85fbf4471526/9bba78c914.jpg" />
  <pubDate>Thu, 06 Feb 2025 13:44:15 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="17553075" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/ecc419a8-b91e-4bcc-92be-85fbf4471526/episode.mp3" />
  <itunes:title><![CDATA[AI in Education: Augmented Debate & AI Pluralism Ft. Prof Anand Rao ]]></itunes:title>
  <itunes:duration>18:17</itunes:duration>
  <itunes:summary><![CDATA[<p>Join <strong>Sanjay Puri</strong>, Founder &amp; Chairman of <strong>Knowledge Networks</strong>, as he talks about the future of AI in education with Prof. Anand Rao at <strong>AI Big Data Global, Olympia, London</strong>.&nbsp;</p><p>&nbsp;</p><p>In this episode of <strong><em>The RegulatingAI Podcast</em></strong>, we explore innovative approaches to preparing students for an AI-driven world through debate-based learning and AI pluralism.&nbsp;</p><p>Our guest, <strong>Prof Anand Rao</strong>, <em>Professor of Communication &amp; Digital Studies and Director, Center for AI and the Liberal Arts, University of Mary Washington</em>, shares insights on:&nbsp;</p><p>🔹 <strong>Augmented Debate-Centered Instruction</strong> – A transformative model that uses debate to develop essential skills like critical thinking, collaboration, and research.&nbsp;</p><p>🔹 <strong>AI Pluralism</strong> – A novel approach promoting diversity in AI agents to improve transparency, contestability, and bias mitigation.&nbsp;</p><p><br></p><p>Discover how integrating these strategies can enhance education, foster critical thinking, and address key challenges in AI development.&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Join <strong>Sanjay Puri</strong>, Founder &amp; Chairman of <strong>Knowledge Networks</strong>, as he talks about the future of AI in education with Prof. Anand Rao at <strong>AI Big Data Global, Olympia, London</strong>.&nbsp;</p><p>&nbsp;</p><p>In this episode of <strong><em>The RegulatingAI Podcast</em></strong>, we explore innovative approaches to preparing students for an AI-driven world through debate-based learning and AI pluralism.&nbsp;</p><p>Our guest, <strong>Prof Anand Rao</strong>, <em>Professor of Communication &amp; Digital Studies and Director, Center for AI and the Liberal Arts, University of Mary Washington</em>, shares insights on:&nbsp;</p><p>🔹 <strong>Augmented Debate-Centered Instruction</strong> – A transformative model that uses debate to develop essential skills like critical thinking, collaboration, and research.&nbsp;</p><p>🔹 <strong>AI Pluralism</strong> – A novel approach promoting diversity in AI agents to improve transparency, contestability, and bias mitigation.&nbsp;</p><p><br></p><p>Discover how integrating these strategies can enhance education, foster critical thinking, and address key challenges in AI development.&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Join Sanjay Puri, Founder & Chairman of Knowledge Networks, as he talks about the future of AI in education with Prof. Anand Rao at AI Big Data Global, Olympia, London. In this episode of The RegulatingAI Podcast, we explore innovative approaches to ...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[b0777dc4-4d50-4f16-92bc-bbf3afa06702]]></guid>
  <title><![CDATA[Future of AI in Israel: Innovation, Regulation & Ethical Challenges Ft. Ron Gafni]]></title>
  <description><![CDATA[<p>In this episode of the Regulating AI Podcast, recorded LIVE at the <strong>AI Big Data Global Expo 2025</strong>, our guest <strong>Ron Gafni, Co-founder &amp; Chairman of G-Foresight, </strong>talks about the AI revolution in management tools and how AI is evolving in Israel—from its rapid advancements to the crucial need for regulation and governance.&nbsp;</p><p><br></p><p>Ron shares insights on:&nbsp;</p><p>✅ The AI revolution and its impact on management and decision-making&nbsp;</p><p>✅ Israel’s AI leadership in defense, agriculture, and national security&nbsp;</p><p>✅ The role of AI regulations, privacy, and ethical considerations&nbsp;</p><p>✅ How companies should educate their boards and executives on AI adoption&nbsp;</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/9ca15c50-1afe-4e32-8939-561a93a0ae1d/e5ce243541.jpg" />
  <pubDate>Thu, 06 Feb 2025 13:33:25 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="10995296" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/9ca15c50-1afe-4e32-8939-561a93a0ae1d/episode.mp3" />
  <itunes:title><![CDATA[Future of AI in Israel: Innovation, Regulation & Ethical Challenges Ft. Ron Gafni]]></itunes:title>
  <itunes:duration>11:27</itunes:duration>
  <itunes:summary><![CDATA[<p>In this episode of the Regulating AI Podcast, recorded LIVE at the <strong>AI Big Data Global Expo 2025</strong>, our guest <strong>Ron Gafni, Co-founder &amp; Chairman, G-Foresight</strong>, discusses the AI revolution in management tools and how AI is evolving in Israel—from its rapid advancements to the crucial need for regulations and governance.&nbsp;</p><p><br></p><p>Ron shares insights on:&nbsp;</p><p>✅ The AI revolution and its impact on management and decision-making&nbsp;</p><p>✅ Israel’s AI leadership in defense, agriculture, and national security&nbsp;</p><p>✅ The role of AI regulations, privacy, and ethical considerations&nbsp;</p><p>✅ How companies should educate their boards and executives on AI adoption&nbsp;</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>In this episode of the Regulating AI Podcast, recorded LIVE at the <strong>AI Big Data Global Expo 2025</strong>, our guest <strong>Ron Gafni, Co-founder &amp; Chairman, G-Foresight</strong>, discusses the AI revolution in management tools and how AI is evolving in Israel—from its rapid advancements to the crucial need for regulations and governance.&nbsp;</p><p><br></p><p>Ron shares insights on:&nbsp;</p><p>✅ The AI revolution and its impact on management and decision-making&nbsp;</p><p>✅ Israel’s AI leadership in defense, agriculture, and national security&nbsp;</p><p>✅ The role of AI regulations, privacy, and ethical considerations&nbsp;</p><p>✅ How companies should educate their boards and executives on AI adoption&nbsp;</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the Regulating AI Podcast, recorded LIVE at the AI Big Data Global Expo 2025, our guest Ron Gafni, Co-founder & Chairman, G-Foresight, discusses the AI revolution in management tools and how AI is evolving in Israel—from its ...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[b3cef13f-1a02-4f10-954b-83580d94f63d]]></guid>
  <title><![CDATA[Exploring AI Ethics with Francesca Rossi: Insights from IBM's Global Leader | RegulatingAI Podcast]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">Join us in this insightful episode of the RegulatingAI Podcast as we sit down with Francesca Rossi, IBM Fellow and Global Leader for AI Ethics. Based at IBM's T.J. Watson Research Lab in New York, Francesca shares her expertise on cutting-edge AI topics, including constraint reasoning, multi-agent systems, neuro-symbolic AI, and value alignment. With over 220 published works and leadership roles in renowned AI organizations like AAAI and EurAI, Francesca provides a thought-provoking perspective on ethical AI, governance, and the future of artificial intelligence. Don't miss this fascinating conversation!</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources:</span></p><p><span style="color: rgb(13, 13, 13);">https://www.linkedin.com/in/francesca-rossi-34b8b95/</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">About Regulating AI:</span></p><p><span style="color: rgb(13, 13, 13);">RegulatingAI is a dedicated non-profit organization designed for experts, mentors, and users of artificial intelligence (AI) with a keen interest in exploring the intersection of AI and regulation. We aim to unite individuals with diverse expertise and backgrounds, fostering collaboration to collectively advance the understanding and implementation of AI regulations.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">About your host</span></p><p><span style="color: rgb(13, 13, 13);">Sanjay Puri is a recognized authority on US-India relations. He serves as the Chairman of the US-India Political Action Committee (USINPAC), a national, bipartisan political action committee representing Indian-Americans.  He is also the founder of the Alliance for US India Business (AUSIB), an organization dedicated to strengthening economic ties between the US and India.  
He is also a successful technology entrepreneur, mentor, and investor.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">#AIRegulation #regulatingaipodcast #innovationinAIRegulation</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Streaming On: </span></p><p><span style="color: rgb(13, 13, 13);">Apple Podcasts: https://podcasts.apple.com/us/podcast/regulating-ai-innovate-responsibly/id1714410167</span></p><p><span style="color: rgb(13, 13, 13);">Spotify: https://open.spotify.com/show/3ZkXYPINugnegkORcBCrYo?si=a7ad672e8e194bea</span></p><p><span style="color: rgb(13, 13, 13);">YouTube: https://www.youtube.com/@The_Regulating_AI_Podcast</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Join our fastest-growing AI Community:</span></p><p><span style="color: rgb(13, 13, 13);">Instagram: https://www.instagram.com/regulating_ai/</span></p><p><span style="color: rgb(13, 13, 13);">Twitter: https://twitter.com/RegulatingAI</span></p><p><span style="color: rgb(13, 13, 13);">LinkedIn: https://www.linkedin.com/company/regulating-ai</span></p><p><span style="color: rgb(13, 13, 13);">Facebook: https://www.facebook.com/RegulatingAI</span></p><p><span style="color: rgb(13, 13, 13);">Read Our Blogs, News &amp; Updates: https://regulatingai.org/</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Join the Conversation:</span></p><p><span style="color: rgb(13, 13, 13);">Leave your thoughts and questions in the comments below. We'd love to hear from you!</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/72a06f86-ec00-496b-9079-537fba5d22d2/ae2a7deb58.jpg" />
  <pubDate>Thu, 30 Jan 2025 09:35:47 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="31405497" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/72a06f86-ec00-496b-9079-537fba5d22d2/episode.mp3" />
  <itunes:title><![CDATA[Exploring AI Ethics with Francesca Rossi: Insights from IBM's Global Leader | RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>32:42</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">Join us in this insightful episode of the RegulatingAI Podcast as we sit down with Francesca Rossi, IBM Fellow and Global Leader for AI Ethics. Based at IBM's T.J. Watson Research Lab in New York, Francesca shares her expertise on cutting-edge AI topics, including constraint reasoning, multi-agent systems, neuro-symbolic AI, and value alignment. With over 220 published works and leadership roles in renowned AI organizations like AAAI and EurAI, Francesca provides a thought-provoking perspective on ethical AI, governance, and the future of artificial intelligence. Don't miss this fascinating conversation!</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources:</span></p><p><span style="color: rgb(13, 13, 13);">https://www.linkedin.com/in/francesca-rossi-34b8b95/</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">About Regulating AI:</span></p><p><span style="color: rgb(13, 13, 13);">RegulatingAI is a dedicated non-profit organization designed for experts, mentors, and users of artificial intelligence (AI) with a keen interest in exploring the intersection of AI and regulation. We aim to unite individuals with diverse expertise and backgrounds, fostering collaboration to collectively advance the understanding and implementation of AI regulations.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">About your host</span></p><p><span style="color: rgb(13, 13, 13);">Sanjay Puri is a recognized authority on US-India relations. He serves as the Chairman of the US-India Political Action Committee (USINPAC), a national, bipartisan political action committee representing Indian-Americans.  He is also the founder of the Alliance for US India Business (AUSIB), an organization dedicated to strengthening economic ties between the US and India.  
He is also a successful technology entrepreneur, mentor, and investor.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">#AIRegulation #regulatingaipodcast #innovationinAIRegulation</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Streaming On: </span></p><p><span style="color: rgb(13, 13, 13);">Apple Podcasts: https://podcasts.apple.com/us/podcast/regulating-ai-innovate-responsibly/id1714410167</span></p><p><span style="color: rgb(13, 13, 13);">Spotify: https://open.spotify.com/show/3ZkXYPINugnegkORcBCrYo?si=a7ad672e8e194bea</span></p><p><span style="color: rgb(13, 13, 13);">YouTube: https://www.youtube.com/@The_Regulating_AI_Podcast</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Join our fastest-growing AI Community:</span></p><p><span style="color: rgb(13, 13, 13);">Instagram: https://www.instagram.com/regulating_ai/</span></p><p><span style="color: rgb(13, 13, 13);">Twitter: https://twitter.com/RegulatingAI</span></p><p><span style="color: rgb(13, 13, 13);">LinkedIn: https://www.linkedin.com/company/regulating-ai</span></p><p><span style="color: rgb(13, 13, 13);">Facebook: https://www.facebook.com/RegulatingAI</span></p><p><span style="color: rgb(13, 13, 13);">Read Our Blogs, News &amp; Updates: https://regulatingai.org/</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Join the Conversation:</span></p><p><span style="color: rgb(13, 13, 13);">Leave your thoughts and questions in the comments below. We'd love to hear from you!</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">Join us in this insightful episode of the RegulatingAI Podcast as we sit down with Francesca Rossi, IBM Fellow and Global Leader for AI Ethics. Based at IBM's T.J. Watson Research Lab in New York, Francesca shares her expertise on cutting-edge AI topics, including constraint reasoning, multi-agent systems, neuro-symbolic AI, and value alignment. With over 220 published works and leadership roles in renowned AI organizations like AAAI and EurAI, Francesca provides a thought-provoking perspective on ethical AI, governance, and the future of artificial intelligence. Don't miss this fascinating conversation!</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources:</span></p><p><span style="color: rgb(13, 13, 13);">https://www.linkedin.com/in/francesca-rossi-34b8b95/</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">About Regulating AI:</span></p><p><span style="color: rgb(13, 13, 13);">RegulatingAI is a dedicated non-profit organization designed for experts, mentors, and users of artificial intelligence (AI) with a keen interest in exploring the intersection of AI and regulation. We aim to unite individuals with diverse expertise and backgrounds, fostering collaboration to collectively advance the understanding and implementation of AI regulations.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">About your host</span></p><p><span style="color: rgb(13, 13, 13);">Sanjay Puri is a recognized authority on US-India relations. He serves as the Chairman of the US-India Political Action Committee (USINPAC), a national, bipartisan political action committee representing Indian-Americans.  He is also the founder of the Alliance for US India Business (AUSIB), an organization dedicated to strengthening economic ties between the US and India.  
He is also a successful technology entrepreneur, mentor, and investor.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">#AIRegulation #regulatingaipodcast #innovationinAIRegulation</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Streaming On: </span></p><p><span style="color: rgb(13, 13, 13);">Apple Podcasts: https://podcasts.apple.com/us/podcast/regulating-ai-innovate-responsibly/id1714410167</span></p><p><span style="color: rgb(13, 13, 13);">Spotify: https://open.spotify.com/show/3ZkXYPINugnegkORcBCrYo?si=a7ad672e8e194bea</span></p><p><span style="color: rgb(13, 13, 13);">YouTube: https://www.youtube.com/@The_Regulating_AI_Podcast</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Join our fastest-growing AI Community:</span></p><p><span style="color: rgb(13, 13, 13);">Instagram: https://www.instagram.com/regulating_ai/</span></p><p><span style="color: rgb(13, 13, 13);">Twitter: https://twitter.com/RegulatingAI</span></p><p><span style="color: rgb(13, 13, 13);">LinkedIn: https://www.linkedin.com/company/regulating-ai</span></p><p><span style="color: rgb(13, 13, 13);">Facebook: https://www.facebook.com/RegulatingAI</span></p><p><span style="color: rgb(13, 13, 13);">Read Our Blogs, News &amp; Updates: https://regulatingai.org/</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Join the Conversation:</span></p><p><span style="color: rgb(13, 13, 13);">Leave your thoughts and questions in the comments below. We'd love to hear from you!</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Join us in this insightful episode of the RegulatingAI Podcast as we sit down with Francesca Rossi, IBM Fellow and Global Leader for AI Ethics. Based at IBM's T.J. Watson Research Lab in New York, Francesca shares her expertise on cutting-edge AI t...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>60</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[f638758e-27c3-42e9-bb94-7fe19f288105]]></guid>
  <title><![CDATA[Tom Wheeler on AI's Future and Regulation Challenges | The RegulatingAI Podcast]]></title>
  <description><![CDATA[<p class="ql-align-justify">In this episode of the RegulatingAI Podcast, former FCC Chairman Tom Wheeler unpacks the complexities of AI regulation and governance. Drawing from his vast experience in telecommunications, Wheeler emphasizes the critical need for balanced oversight that fosters innovation without compromising fairness or safety.</p><p class="ql-align-justify"><strong><em>He shares his thoughts on:</em></strong></p><ul><li class="ql-align-justify">The evolving landscape of AI governance and its societal impacts</li><li class="ql-align-justify">Why establishing both technical and behavioural standards is essential for effective AI oversight</li><li class="ql-align-justify">The importance of a multi-stakeholder approach to navigate AI's challenges and opportunities</li></ul><p><br></p><p>Resources:</p><p>https://www.brookings.edu/people/tom-wheeler/</p><p>https://www.amazon.com/dp/B0C4FZ1QT4?ref_=cm_sw_r_cp_ud_dp_4VWY6H0X6YKWMRDBPSSD</p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/0b12fa08-a552-465f-bf18-b2a010001c1b/8fe109341b.jpg" />
  <pubDate>Thu, 23 Jan 2025 13:14:24 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="49510235" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/0b12fa08-a552-465f-bf18-b2a010001c1b/episode.mp3" />
  <itunes:title><![CDATA[Tom Wheeler on AI's Future and Regulation Challenges | The RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>51:34</itunes:duration>
  <itunes:summary><![CDATA[<p class="ql-align-justify">In this episode of the RegulatingAI Podcast, former FCC Chairman Tom Wheeler unpacks the complexities of AI regulation and governance. Drawing from his vast experience in telecommunications, Wheeler emphasizes the critical need for balanced oversight that fosters innovation without compromising fairness or safety.</p><p class="ql-align-justify"><strong><em>He shares his thoughts on:</em></strong></p><ul><li class="ql-align-justify">The evolving landscape of AI governance and its societal impacts</li><li class="ql-align-justify">Why establishing both technical and behavioural standards is essential for effective AI oversight</li><li class="ql-align-justify">The importance of a multi-stakeholder approach to navigate AI's challenges and opportunities</li></ul><p><br></p><p>Resources:</p><p>https://www.brookings.edu/people/tom-wheeler/</p><p>https://www.amazon.com/dp/B0C4FZ1QT4?ref_=cm_sw_r_cp_ud_dp_4VWY6H0X6YKWMRDBPSSD</p><p><br></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p class="ql-align-justify">In this episode of the RegulatingAI Podcast, former FCC Chairman Tom Wheeler unpacks the complexities of AI regulation and governance. Drawing from his vast experience in telecommunications, Wheeler emphasizes the critical need for balanced oversight that fosters innovation without compromising fairness or safety.</p><p class="ql-align-justify"><strong><em>He shares his thoughts on:</em></strong></p><ul><li class="ql-align-justify">The evolving landscape of AI governance and its societal impacts</li><li class="ql-align-justify">Why establishing both technical and behavioural standards is essential for effective AI oversight</li><li class="ql-align-justify">The importance of a multi-stakeholder approach to navigate AI's challenges and opportunities</li></ul><p><br></p><p>Resources:</p><p>https://www.brookings.edu/people/tom-wheeler/</p><p>https://www.amazon.com/dp/B0C4FZ1QT4?ref_=cm_sw_r_cp_ud_dp_4VWY6H0X6YKWMRDBPSSD</p><p><br></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the RegulatingAI Podcast, former FCC Chairman Tom Wheeler unpacks the complexities of AI regulation and governance. Drawing from his vast experience in telecommunications, Wheeler emphasizes the critical need for balanced oversight that fost...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>59</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[03966e15-7405-48d7-97dc-f6b16a0086ab]]></guid>
  <title><![CDATA[Patrik Gayer on Open-Source AI and Global Policy | The RegulatingAI Podcast]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode of the RegulatingAI Podcast, Patrik Gayer, Head of Global Affairs at Silo AI, discusses the challenges and opportunities in regulating artificial intelligence. With his expertise in AI policy, Patrik provides a deep dive into creating fair, practical legislation that fosters innovation while addressing global concerns.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources:</span></p><p><span style="color: rgb(13, 13, 13);">https://hir.harvard.edu/the-eus-chance-to-lead-forging-a-global-regulatory-framework-for-artificial-intelligence-amidst-exponential-progress/</span></p><p><span style="color: rgb(13, 13, 13);">https://www.linkedin.com/posts/harvard-ksr_volume-xxiii-activity-7147390411131482113-x16t?utm_source=share&amp;utm_medium=member_desktop</span></p><p><span style="color: rgb(13, 13, 13);">https://www.linkedin.com/in/patrikgayer/</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/a04dd676-4753-4084-af7a-4b847bc290cc/4c77091503.jpg" />
  <pubDate>Wed, 22 Jan 2025 13:38:00 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="53365072" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/a04dd676-4753-4084-af7a-4b847bc290cc/episode.mp3" />
  <itunes:title><![CDATA[Patrik Gayer on Open-Source AI and Global Policy | The RegulatingAI Podcast]]></itunes:title>
  <itunes:duration>55:35</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode of the RegulatingAI Podcast, Patrik Gayer, Head of Global Affairs at Silo AI, discusses the challenges and opportunities in regulating artificial intelligence. With his expertise in AI policy, Patrik provides a deep dive into creating fair, practical legislation that fosters innovation while addressing global concerns.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources:</span></p><p><span style="color: rgb(13, 13, 13);">https://hir.harvard.edu/the-eus-chance-to-lead-forging-a-global-regulatory-framework-for-artificial-intelligence-amidst-exponential-progress/</span></p><p><span style="color: rgb(13, 13, 13);">https://www.linkedin.com/posts/harvard-ksr_volume-xxiii-activity-7147390411131482113-x16t?utm_source=share&amp;utm_medium=member_desktop</span></p><p><span style="color: rgb(13, 13, 13);">https://www.linkedin.com/in/patrikgayer/</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode of the RegulatingAI Podcast, Patrik Gayer, Head of Global Affairs at Silo AI, discusses the challenges and opportunities in regulating artificial intelligence. With his expertise in AI policy, Patrik provides a deep dive into creating fair, practical legislation that fosters innovation while addressing global concerns.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources:</span></p><p><span style="color: rgb(13, 13, 13);">https://hir.harvard.edu/the-eus-chance-to-lead-forging-a-global-regulatory-framework-for-artificial-intelligence-amidst-exponential-progress/</span></p><p><span style="color: rgb(13, 13, 13);">https://www.linkedin.com/posts/harvard-ksr_volume-xxiii-activity-7147390411131482113-x16t?utm_source=share&amp;utm_medium=member_desktop</span></p><p><span style="color: rgb(13, 13, 13);">https://www.linkedin.com/in/patrikgayer/</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the RegulatingAI Podcast, Patrik Gayer, Head of Global Affairs at Silo AI, discusses the challenges and opportunities in regulating artificial intelligence. With his expertise in AI policy, Patrik provides a deep dive into creating fair, pra...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>58</itunes:episode>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[a7b54e22-6667-42bb-bfc1-ff60a34f4a5b]]></guid>
  <title><![CDATA[Balancing Innovation & Safety: The Future of AI Regulation in America with Congressman Scott Franklin]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">In this episode of the RegulatingAI Podcast, we’re joined by Congressman Scott Franklin from Florida’s 18th Congressional District, a member of the House AI Task Force, and a strong advocate for responsible AI regulation. Drawing on his unique background in the Navy, insurance, and agriculture, Rep. Franklin provides valuable insights into Congress’s role in the ever-evolving world of AI governance.</span></p><p><br></p><p><strong>Resources:</strong></p><p><a href="https://franklin.house.gov/about" target="_blank">https://franklin.house.gov/about</a></p><p><a href="https://en.wikipedia.org/wiki/Scott_Franklin_(politician)" target="_blank">https://en.wikipedia.org/wiki/Scott_Franklin_(politician)</a></p><p><a href="https://x.com/repfranklin" target="_blank">https://x.com/repfranklin</a></p><p><a href="https://www.linkedin.com/in/cscottfranklin/" target="_blank">https://www.linkedin.com/in/cscottfranklin/</a></p><p><a href="https://www.congress.gov/member/c-franklin/F000472" target="_blank">https://www.congress.gov/member/c-franklin/F000472</a></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/26f86f8b-22d4-45d9-af43-d90e444accfc/d0f836f272.jpg" />
  <pubDate>Tue, 07 Jan 2025 12:56:00 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="32202963" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/26f86f8b-22d4-45d9-af43-d90e444accfc/episode.mp3" />
  <itunes:title><![CDATA[Balancing Innovation & Safety: The Future of AI Regulation in America with Congressman Scott Franklin]]></itunes:title>
  <itunes:duration>33:32</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">In this episode of the RegulatingAI Podcast, we’re joined by Congressman Scott Franklin from Florida’s 18th Congressional District, a member of the House AI Task Force, and a strong advocate for responsible AI regulation. Drawing on his unique background in the Navy, insurance, and agriculture, Rep. Franklin provides valuable insights into Congress’s role in the ever-evolving world of AI governance.</span></p><p><br></p><p><strong>Resources:</strong></p><p><a href="https://franklin.house.gov/about" target="_blank">https://franklin.house.gov/about</a></p><p><a href="https://en.wikipedia.org/wiki/Scott_Franklin_(politician)" target="_blank">https://en.wikipedia.org/wiki/Scott_Franklin_(politician)</a></p><p><a href="https://x.com/repfranklin" target="_blank">https://x.com/repfranklin</a></p><p><a href="https://www.linkedin.com/in/cscottfranklin/" target="_blank">https://www.linkedin.com/in/cscottfranklin/</a></p><p><a href="https://www.congress.gov/member/c-franklin/F000472" target="_blank">https://www.congress.gov/member/c-franklin/F000472</a></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">In this episode of the RegulatingAI Podcast, we’re joined by Congressman Scott Franklin from Florida’s 18th Congressional District, a member of the House AI Task Force, and a strong advocate for responsible AI regulation. Drawing on his unique background in the Navy, insurance, and agriculture, Rep. Franklin provides valuable insights into Congress’s role in the ever-evolving world of AI governance.</span></p><p><br></p><p><strong>Resources:</strong></p><p><a href="https://franklin.house.gov/about" target="_blank">https://franklin.house.gov/about</a></p><p><a href="https://en.wikipedia.org/wiki/Scott_Franklin_(politician)" target="_blank">https://en.wikipedia.org/wiki/Scott_Franklin_(politician)</a></p><p><a href="https://x.com/repfranklin" target="_blank">https://x.com/repfranklin</a></p><p><a href="https://www.linkedin.com/in/cscottfranklin/" target="_blank">https://www.linkedin.com/in/cscottfranklin/</a></p><p><a href="https://www.congress.gov/member/c-franklin/F000472" target="_blank">https://www.congress.gov/member/c-franklin/F000472</a></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the RegulatingAI Podcast, we’re joined by Congressman Scott Franklin from Florida’s 18th Congressional District, a member of the House AI Task Force, and a strong advocate for responsible AI regulation. Drawing on his unique backgrou...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>57</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[2d457ee2-2120-4469-8d5e-006a400ce968]]></guid>
  <title><![CDATA[AI & Green Technology for Progress and Development | Roundtable Discussion Ft. Club de Madrid]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">Join us for an insightful discussion on the intersection of AI and Green Technology as drivers of global progress and sustainable development. This roundtable features highlights from the Imperial Springs International Forum 2024, hosted by Club de Madrid, where over 130 leaders from 40+ countries gathered to explore the future of international cooperation and multilateralism.</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/deb1fccf-780c-4c16-9fe6-6cb0a0731230/3c94ec6af3.jpg" />
  <pubDate>Mon, 23 Dec 2024 02:23:49 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="3813085" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/deb1fccf-780c-4c16-9fe6-6cb0a0731230/episode.mp3" />
  <itunes:title><![CDATA[AI & Green Technology for Progress and Development | Roundtable Discussion Ft. Club de Madrid]]></itunes:title>
  <itunes:duration>3:58</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">Join us for an insightful discussion on the intersection of AI and Green Technology as drivers of global progress and sustainable development. This roundtable features highlights from the Imperial Springs International Forum 2024, hosted by Club de Madrid, where over 130 leaders from 40+ countries gathered to explore the future of international cooperation and multilateralism.</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">Join us for an insightful discussion on the intersection of AI and Green Technology as drivers of global progress and sustainable development. This roundtable features highlights from the Imperial Springs International Forum 2024, hosted by Club de Madrid, where over 130 leaders from 40+ countries gathered to explore the future of international cooperation and multilateralism.</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Join us for an insightful discussion on the intersection of AI and Green Technology as drivers of global progress and sustainable development. This roundtable features highlights from the Imperial Springs International Forum 2024, hosted by Club de...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[e11380eb-2a10-44c5-916e-a9b59d2ed55f]]></guid>
  <title><![CDATA[The Fight for Fairness and Transparency in AI Systems with Faiza Patel, Senior Director of the Liberty and National Security Program at the Brennan Center for Justice]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">Artificial Intelligence has immense potential, but it also carries risks — particularly when it comes to civil liberties. In this episode, I speak with </span><a href="https://www.linkedin.com/in/faiza-patel-5a042816/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Faiza Patel</a><span style="background-color: transparent;">, Senior Director of the Liberty and National Security Program at the </span><a href="https://www.linkedin.com/company/brennan-center-for-justice/posts/?feedView=all" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Brennan Center for Justice</a><span style="background-color: transparent;"> at </span><a href="https://www.linkedin.com/school/new-york-university-school-of-law/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">NYU Law</a><span style="background-color: transparent;">. Together, we explore how AI can be regulated to ensure fairness, accountability and civil rights, especially in the context of national security and law enforcement.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:53) AI in national security, law enforcement and immigration contexts.</span></p><p><span style="background-color: transparent;">(05:00) The dangers of AI in government decisions, from immigration to surveillance.</span></p><p><span style="background-color: transparent;">(09:09) Long-standing issues with AI, including biased training data in facial recognition.</span></p><p><span style="background-color: transparent;">(12:55) The complexities of regulating AI-generated media, such as deepfakes, while protecting free speech.</span></p><p><span style="background-color: transparent;">(17:00) The need for transparency in AI systems and the importance of scrutinizing 
outputs.</span></p><p><span style="background-color: transparent;">(20:25) How marginalized communities are disproportionately affected by AI.</span></p><p><span style="background-color: transparent;">(23:30) Companies developing AI must embed civil rights principles into their products.</span></p><p><span style="background-color: transparent;">(26:45) Creating unbiased AI systems is a challenge, but necessary to avoid harm.</span></p><p><span style="background-color: transparent;">(29:58) The need for a dedicated regulatory body to oversee AI, especially in national security.</span></p><p><span style="background-color: transparent;">(34:00) AI’s potential impact on jobs and why policymakers need to prepare for labor disruption.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/faiza-patel-5a042816/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Faiza Patel</a> -</p><p>https://www.linkedin.com/in/faiza-patel-5a042816/</p><p><br></p><p><a href="https://www.brennancenter.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Brennan Center for Justice</a> -</p><p>https://www.brennancenter.org/</p><p><br></p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> -</p><p>https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/</p><p><br></p><p><a href="https://www.whitehouse.gov/ostp/ai-bill-of-rights/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Bill of Rights</a> 
-</p><p>https://www.whitehouse.gov/ostp/ai-bill-of-rights/</p><p><br></p><p><a href="https://www.brennancenter.org/experts/faiza-patel" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Brennan Center - Faiza Patel</a> -</p><p>https://www.brennancenter.org/experts/faiza-patel</p><p><br></p><p><a href="https://www.brennancenter.org/our-work/analysis-opinion/national-security-carve-outs-undermine-ai-regulations" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National Security Carve-Outs Undermine AI Regulations</a> -</p><p>https://www.brennancenter.org/our-work/analysis-opinion/national-security-carve-outs-undermine-ai-regulations</p><p><br></p><p><a href="https://www.brennancenter.org/our-work/analysis-opinion/senate-ai-hearings-highlight-increased-need-regulation" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senate AI Hearings Highlight Increased Need for Regulation</a> -</p><p>https://www.brennancenter.org/our-work/analysis-opinion/senate-ai-hearings-highlight-increased-need-regulation</p><p><br></p><p><a href="https://www.brennancenter.org/our-work/analysis-opinion/perils-and-promise-ai-regulation" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">The Perils and Promise of AI Regulation</a> -</p><p>https://www.brennancenter.org/our-work/analysis-opinion/perils-and-promise-ai-regulation</p><p><br></p><p><a href="https://www.brennancenter.org/our-work/analysis-opinion/advances-ai-increase-risks-government-social-media-monitoring" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Advances in AI Increase Risks of Government Social Media Monitoring</a><span style="background-color: transparent;">&nbsp;- </span></p><p>https://www.brennancenter.org/our-work/analysis-opinion/advances-ai-increase-risks-government-social-media-monitoring</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: 
transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/1255113d-e7b9-49a6-8597-ae267c738dc4/f95e344874.jpg" />
  <pubDate>Tue, 17 Dec 2024 08:40:48 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="37730095" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/1255113d-e7b9-49a6-8597-ae267c738dc4/episode.mp3" />
  <itunes:title><![CDATA[The Fight for Fairness and Transparency in AI Systems with Faiza Patel, Senior Director of the Liberty and National Security Program at the Brennan Center for Justice]]></itunes:title>
  <itunes:duration>39:18</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">Artificial Intelligence has immense potential, but it also carries risks — particularly when it comes to civil liberties. In this episode, I speak with </span><a href="https://www.linkedin.com/in/faiza-patel-5a042816/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Faiza Patel</a><span style="background-color: transparent;">, Senior Director of the Liberty and National Security Program at the </span><a href="https://www.linkedin.com/company/brennan-center-for-justice/posts/?feedView=all" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Brennan Center for Justice</a><span style="background-color: transparent;"> at </span><a href="https://www.linkedin.com/school/new-york-university-school-of-law/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">NYU Law</a><span style="background-color: transparent;">. Together, we explore how AI can be regulated to ensure fairness, accountability and civil rights, especially in the context of national security and law enforcement.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:53) AI in national security, law enforcement and immigration contexts.</span></p><p><span style="background-color: transparent;">(05:00) The dangers of AI in government decisions, from immigration to surveillance.</span></p><p><span style="background-color: transparent;">(09:09) Long-standing issues with AI, including biased training data in facial recognition.</span></p><p><span style="background-color: transparent;">(12:55) The complexities of regulating AI-generated media, such as deepfakes, while protecting free speech.</span></p><p><span style="background-color: transparent;">(17:00) The need for transparency in AI systems and the importance of scrutinizing 
outputs.</span></p><p><span style="background-color: transparent;">(20:25) How marginalized communities are disproportionately affected by AI.</span></p><p><span style="background-color: transparent;">(23:30) Companies developing AI must embed civil rights principles into their products.</span></p><p><span style="background-color: transparent;">(26:45) Creating unbiased AI systems is a challenge, but necessary to avoid harm.</span></p><p><span style="background-color: transparent;">(29:58) The need for a dedicated regulatory body to oversee AI, especially in national security.</span></p><p><span style="background-color: transparent;">(34:00) AI’s potential impact on jobs and why policymakers need to prepare for labor disruption.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/faiza-patel-5a042816/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Faiza Patel</a> -</p><p>https://www.linkedin.com/in/faiza-patel-5a042816/</p><p><br></p><p><a href="https://www.brennancenter.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Brennan Center for Justice</a> -</p><p>https://www.brennancenter.org/</p><p><br></p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> -</p><p>https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/</p><p><br></p><p><a href="https://www.whitehouse.gov/ostp/ai-bill-of-rights/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Bill of Rights</a> 
-</p><p>https://www.whitehouse.gov/ostp/ai-bill-of-rights/</p><p><br></p><p><a href="https://www.brennancenter.org/experts/faiza-patel" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Brennan Center - Faiza Patel</a> -</p><p>https://www.brennancenter.org/experts/faiza-patel</p><p><br></p><p><a href="https://www.brennancenter.org/our-work/analysis-opinion/national-security-carve-outs-undermine-ai-regulations" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National Security Carve-Outs Undermine AI Regulations</a> -</p><p>https://www.brennancenter.org/our-work/analysis-opinion/national-security-carve-outs-undermine-ai-regulations</p><p><br></p><p><a href="https://www.brennancenter.org/our-work/analysis-opinion/senate-ai-hearings-highlight-increased-need-regulation" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senate AI Hearings Highlight Increased Need for Regulation</a> -</p><p>https://www.brennancenter.org/our-work/analysis-opinion/senate-ai-hearings-highlight-increased-need-regulation</p><p><br></p><p><a href="https://www.brennancenter.org/our-work/analysis-opinion/perils-and-promise-ai-regulation" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">The Perils and Promise of AI Regulation</a> -</p><p>https://www.brennancenter.org/our-work/analysis-opinion/perils-and-promise-ai-regulation</p><p><br></p><p><a href="https://www.brennancenter.org/our-work/analysis-opinion/advances-ai-increase-risks-government-social-media-monitoring" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Advances in AI Increase Risks of Government Social Media Monitoring</a><span style="background-color: transparent;">&nbsp;- </span></p><p>https://www.brennancenter.org/our-work/analysis-opinion/advances-ai-increase-risks-government-social-media-monitoring</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: 
transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">Artificial Intelligence has immense potential, but it also carries risks — particularly when it comes to civil liberties. In this episode, I speak with </span><a href="https://www.linkedin.com/in/faiza-patel-5a042816/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Faiza Patel</a><span style="background-color: transparent;">, Senior Director of the Liberty and National Security Program at the </span><a href="https://www.linkedin.com/company/brennan-center-for-justice/posts/?feedView=all" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Brennan Center for Justice</a><span style="background-color: transparent;"> at </span><a href="https://www.linkedin.com/school/new-york-university-school-of-law/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">NYU Law</a><span style="background-color: transparent;">. Together, we explore how AI can be regulated to ensure fairness, accountability and civil rights, especially in the context of national security and law enforcement.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:53) AI in national security, law enforcement and immigration contexts.</span></p><p><span style="background-color: transparent;">(05:00) The dangers of AI in government decisions, from immigration to surveillance.</span></p><p><span style="background-color: transparent;">(09:09) Long-standing issues with AI, including biased training data in facial recognition.</span></p><p><span style="background-color: transparent;">(12:55) The complexities of regulating AI-generated media, such as deepfakes, while protecting free speech.</span></p><p><span style="background-color: transparent;">(17:00) The need for transparency in AI systems and the importance of scrutinizing 
outputs.</span></p><p><span style="background-color: transparent;">(20:25) How marginalized communities are disproportionately affected by AI.</span></p><p><span style="background-color: transparent;">(23:30) Companies developing AI must embed civil rights principles into their products.</span></p><p><span style="background-color: transparent;">(26:45) Creating unbiased AI systems is a challenge, but necessary to avoid harm.</span></p><p><span style="background-color: transparent;">(29:58) The need for a dedicated regulatory body to oversee AI, especially in national security.</span></p><p><span style="background-color: transparent;">(34:00) AI’s potential impact on jobs and why policymakers need to prepare for labor disruption.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/faiza-patel-5a042816/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Faiza Patel</a> -</p><p>https://www.linkedin.com/in/faiza-patel-5a042816/</p><p><br></p><p><a href="https://www.brennancenter.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Brennan Center for Justice</a> -</p><p>https://www.brennancenter.org/</p><p><br></p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> -</p><p>https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/</p><p><br></p><p><a href="https://www.whitehouse.gov/ostp/ai-bill-of-rights/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Bill of Rights</a> 
-</p><p>https://www.whitehouse.gov/ostp/ai-bill-of-rights/</p><p><br></p><p><a href="https://www.brennancenter.org/experts/faiza-patel" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Brennan Center - Faiza Patel</a> -</p><p>https://www.brennancenter.org/experts/faiza-patel</p><p><br></p><p><a href="https://www.brennancenter.org/our-work/analysis-opinion/national-security-carve-outs-undermine-ai-regulations" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National Security Carve-Outs Undermine AI Regulations</a> -</p><p>https://www.brennancenter.org/our-work/analysis-opinion/national-security-carve-outs-undermine-ai-regulations</p><p><br></p><p><a href="https://www.brennancenter.org/our-work/analysis-opinion/senate-ai-hearings-highlight-increased-need-regulation" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senate AI Hearings Highlight Increased Need for Regulation</a> -</p><p>https://www.brennancenter.org/our-work/analysis-opinion/senate-ai-hearings-highlight-increased-need-regulation</p><p><br></p><p><a href="https://www.brennancenter.org/our-work/analysis-opinion/perils-and-promise-ai-regulation" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">The Perils and Promise of AI Regulation</a> -</p><p>https://www.brennancenter.org/our-work/analysis-opinion/perils-and-promise-ai-regulation</p><p><br></p><p><a href="https://www.brennancenter.org/our-work/analysis-opinion/advances-ai-increase-risks-government-social-media-monitoring" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Advances in AI Increase Risks of Government Social Media Monitoring</a><span style="background-color: transparent;">&nbsp;- </span></p><p>https://www.brennancenter.org/our-work/analysis-opinion/advances-ai-increase-risks-government-social-media-monitoring</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: 
transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Artificial Intelligence has immense potential, but it also carries risks — particularly when it comes to civil liberties. In this episode, I speak with Faiza Patel, Senior Director of the Liberty and National Security Program at the Brennan Center ...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>56</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[cb0ba402-88d7-4a20-9925-40bbce38315f]]></guid>
  <title><![CDATA[AI and Society: Balancing Innovation, Governance, and Democracy in a Rapidly Changing World]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode of the RegulatingAI podcast, Sanjay Puri hosts an insightful discussion with Mr. Boris Tadić, former President of Serbia, to explore the profound implications of artificial intelligence (AI) on governance, society, and global relations at Imperial Springs International Forum 2024, Madrid, Spain. From its potential to revolutionise education and development to concerns about its effects on democracy and societal values, this conversation delves deep into the opportunities and challenges AI presents.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources:</span></p><p>https://x.com/boristadic58</p><p>https://clubmadrid.org/who/members/tadic-boris/</p><p>https://en.wikipedia.org/wiki/Boris_Tadi%C4%87</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/64b1daa0-fbcd-464a-868b-7bdd9b3fd1a5/3780e4710e.jpg" />
  <pubDate>Thu, 12 Dec 2024 13:06:00 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="26889866" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/64b1daa0-fbcd-464a-868b-7bdd9b3fd1a5/episode.mp3" />
  <itunes:title><![CDATA[AI and Society: Balancing Innovation, Governance, and Democracy in a Rapidly Changing World]]></itunes:title>
  <itunes:duration>28:00</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode of the RegulatingAI podcast, Sanjay Puri hosts an insightful discussion with Mr. Boris Tadić, former President of Serbia, to explore the profound implications of artificial intelligence (AI) on governance, society, and global relations at Imperial Springs International Forum 2024, Madrid, Spain. From its potential to revolutionise education and development to concerns about its effects on democracy and societal values, this conversation delves deep into the opportunities and challenges AI presents.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources:</span></p><p>https://x.com/boristadic58</p><p>https://clubmadrid.org/who/members/tadic-boris/</p><p>https://en.wikipedia.org/wiki/Boris_Tadi%C4%87</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode of the RegulatingAI podcast, Sanjay Puri hosts an insightful discussion with Mr. Boris Tadić, former President of Serbia, to explore the profound implications of artificial intelligence (AI) on governance, society, and global relations at Imperial Springs International Forum 2024, Madrid, Spain. From its potential to revolutionise education and development to concerns about its effects on democracy and societal values, this conversation delves deep into the opportunities and challenges AI presents.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources:</span></p><p>https://x.com/boristadic58</p><p>https://clubmadrid.org/who/members/tadic-boris/</p><p>https://en.wikipedia.org/wiki/Boris_Tadi%C4%87</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode of the RegulatingAI podcast, Sanjay Puri hosts an insightful discussion with Mr. Boris Tadić, former President of Serbia, to explore the profound implications of artificial intelligence (AI) on governance, society, and global relati...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[590cc41d-a332-468b-99c5-bced11ed817a]]></guid>
  <title><![CDATA[Shaping Tunisia’s Future: Technology, Education, and AI Governance in the Arab World]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode, Tunisia’s Former Prime Minister, Mehdi Jomaa, shares his vision for the country’s potential to emerge as a leading technology hub in the Arab world and the Global South. With its strategic location bridging Africa, Europe, and the Middle East, Tunisia is positioned to become a key player in the global technological revolution, particularly in artificial intelligence.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources:</span></p><p>https://www.linkedin.com/in/mehdi-jomaa-60a8333b/</p><p>https://x.com/Mehdi_Jomaa</p><p>https://www.facebook.com/M.mehdi.jomaa</p><p>https://clubmadrid.org/who/members/mehdi-jomaa/</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/7db65d05-1e46-4e3d-aea4-c299241007aa/55bf204e1a.jpg" />
  <pubDate>Wed, 11 Dec 2024 10:39:00 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="25336311" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/7db65d05-1e46-4e3d-aea4-c299241007aa/episode.mp3" />
  <itunes:title><![CDATA[Shaping Tunisia’s Future: Technology, Education, and AI Governance in the Arab World]]></itunes:title>
  <itunes:duration>26:23</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode, Tunisia’s Former Prime Minister, Mehdi Jomaa, shares his vision for the country’s potential to emerge as a leading technology hub in the Arab world and the Global South. With its strategic location bridging Africa, Europe, and the Middle East, Tunisia is positioned to become a key player in the global technological revolution, particularly in artificial intelligence.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources:</span></p><p>https://www.linkedin.com/in/mehdi-jomaa-60a8333b/</p><p>https://x.com/Mehdi_Jomaa</p><p>https://www.facebook.com/M.mehdi.jomaa</p><p>https://clubmadrid.org/who/members/mehdi-jomaa/</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode, Tunisia’s Former Prime Minister, Mehdi Jomaa, shares his vision for the country’s potential to emerge as a leading technology hub in the Arab world and the Global South. With its strategic location bridging Africa, Europe, and the Middle East, Tunisia is positioned to become a key player in the global technological revolution, particularly in artificial intelligence.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Resources:</span></p><p>https://www.linkedin.com/in/mehdi-jomaa-60a8333b/</p><p>https://x.com/Mehdi_Jomaa</p><p>https://www.facebook.com/M.mehdi.jomaa</p><p>https://clubmadrid.org/who/members/mehdi-jomaa/</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this Episode, Tunisia’s Former Prime Minister, Mehdi Jomaa, shares his vision for the country’s potential to emerge as a leading technology hub in the Arab world and the Global South. With its strategic location bridging Africa, Europe, and the ...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[21fb08be-5421-4052-addc-c318b0bf99ad]]></guid>
  <title><![CDATA[The Future of Open-Source AI and Its Global Implications with Professor S. Alex Yang, Professor of Management Science and Operations, London Business School]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">The rapid rise of AI brings both extraordinary potential and profound risks, demanding urgent global collaboration to ensure its safe development. In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/songayang/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor S. Alex Yang</a><span style="background-color: transparent;">, Professor of Management Science and Operations at the </span><a href="https://www.linkedin.com/school/london-business-school/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">London Business School</a><span style="background-color: transparent;">, to explore the complexities of regulating AI, the challenges of international collaboration, and the potential existential risks posed by AI development. With his extensive experience in AI and risk management, Professor Yang provides unique insights into the future of AI governance.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:12) Professor Yang’s early AI experiences and his value chain research.</span></p><p><span style="background-color: transparent;">(06:57) The biggest risks from AI, including existential risk and job displacement.</span></p><p><span style="background-color: transparent;">(11:42) The debate on AI nationalism and the preservation of cultural heritage.</span></p><p><span style="background-color: transparent;">(16:28) How China’s chip-making capacity could reshape AI competition.</span></p><p><span style="background-color: transparent;">(21:13) Open-source versus closed-source AI models and the risks involved.</span></p><p><span style="background-color: transparent;">(25:58) Why monitoring monopolies in AI is crucial for innovation.</span></p><p><span style="background-color: transparent;">(30:44) How 
content creators can benefit from AI and how copyright law is evolving.</span></p><p><span style="background-color: transparent;">(35:29) The importance of fair use standards for AI-generated content.</span></p><p><span style="background-color: transparent;">(40:14) Data aggregation and its future role in AI development.</span></p><p><span style="background-color: transparent;">(45:00) Professor Yang’s final thoughts on the need for agile, principle-based AI regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/songayang/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor S. Alex Yang</a> -</p><p>https://www.linkedin.com/in/songayang/</p><p><br></p><p><a href="https://www.linkedin.com/school/london-business-school/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">London Business School</a> | LinkedIn -</p><p>https://www.linkedin.com/school/london-business-school/</p><p><br></p><p><a href="https://www.london.edu/?utm_source=google&amp;utm_medium=ppc&amp;utm_campaign=MC_BRBRAND_ppc_google&amp;sc_camp=760e17bef14a4b399386ef32e55393a8&amp;gad_source=1&amp;gclid=Cj0KCQjwo8S3BhDeARIsAFRmkON1oXbsOVjQ73dCIwrvngSGSF0PBYwWGVKRtCdil8ptF2vmAzcW7lEaAvCxEALw_wcB&amp;gclsrc=aw.ds" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">London Business School</a> | Website -</p><p>https://www.london.edu/?utm_source=google&amp;utm_medium=ppc&amp;utm_campaign=MC_BRBRAND_ppc_google&amp;sc_camp=760e17bef14a4b399386ef32e55393a8&amp;gad_source=1&amp;gclid=Cj0KCQjwo8S3BhDeARIsAFRmkON1oXbsOVjQ73dCIwrvngSGSF0PBYwWGVKRtCdil8ptF2vmAzcW7lEaAvCxEALw_wcB&amp;gclsrc=aw.ds</p><p><br></p><p><a href="https://worldcoin.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">WorldCoin</a> - </p><p>https://worldcoin.org/</p><p><br></p><p><a 
href="https://www.project-syndicate.org/commentary/european-union-ai-act-could-impede-innovation-by-s-alex-yang-and-angela-huyue-zhang-2024-02" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">The Case for Regulating Generative AI Through Common Law</a> -</p><p>https://www.project-syndicate.org/commentary/european-union-ai-act-could-impede-innovation-by-s-alex-yang-and-angela-huyue-zhang-2024-02</p><p><br></p><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Generative AI and Copyright: A Dynamic Perspective</a> -</p><p>https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/8ee0e95e-e323-48f5-bfb3-c5190dcba2be/ced4373ea3.jpg" />
  <pubDate>Tue, 03 Dec 2024 04:08:59 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="46383104" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/8ee0e95e-e323-48f5-bfb3-c5190dcba2be/episode.mp3" />
  <itunes:title><![CDATA[The Future of Open-Source AI and Its Global Implications with Professor S. Alex Yang, Professor of Management Science and Operations, London Business School]]></itunes:title>
  <itunes:duration>48:18</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">The rapid rise of AI brings both extraordinary potential and profound risks, demanding urgent global collaboration to ensure its safe development. In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/songayang/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor S. Alex Yang</a><span style="background-color: transparent;">, Professor of Management Science and Operations at the </span><a href="https://www.linkedin.com/school/london-business-school/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">London Business School</a><span style="background-color: transparent;">, to explore the complexities of regulating AI, the challenges of international collaboration, and the potential existential risks posed by AI development. With his extensive experience in AI and risk management, Professor Yang provides unique insights into the future of AI governance.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:12) Professor Yang’s early AI experiences and his value chain research.</span></p><p><span style="background-color: transparent;">(06:57) The biggest risks from AI, including existential risk and job displacement.</span></p><p><span style="background-color: transparent;">(11:42) The debate on AI nationalism and the preservation of cultural heritage.</span></p><p><span style="background-color: transparent;">(16:28) How China’s chip-making capacity could reshape AI competition.</span></p><p><span style="background-color: transparent;">(21:13) Open-source versus closed-source AI models and the risks involved.</span></p><p><span style="background-color: transparent;">(25:58) Why monitoring monopolies in AI is crucial for innovation.</span></p><p><span style="background-color: transparent;">(30:44) How 
content creators can benefit from AI and how copyright law is evolving.</span></p><p><span style="background-color: transparent;">(35:29) The importance of fair use standards for AI-generated content.</span></p><p><span style="background-color: transparent;">(40:14) Data aggregation and its future role in AI development.</span></p><p><span style="background-color: transparent;">(45:00) Professor Yang’s final thoughts on the need for agile, principle-based AI regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/songayang/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor S. Alex Yang</a> -</p><p>https://www.linkedin.com/in/songayang/</p><p><br></p><p><a href="https://www.linkedin.com/school/london-business-school/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">London Business School</a> | LinkedIn -</p><p>https://www.linkedin.com/school/london-business-school/</p><p><br></p><p><a href="https://www.london.edu/?utm_source=google&amp;utm_medium=ppc&amp;utm_campaign=MC_BRBRAND_ppc_google&amp;sc_camp=760e17bef14a4b399386ef32e55393a8&amp;gad_source=1&amp;gclid=Cj0KCQjwo8S3BhDeARIsAFRmkON1oXbsOVjQ73dCIwrvngSGSF0PBYwWGVKRtCdil8ptF2vmAzcW7lEaAvCxEALw_wcB&amp;gclsrc=aw.ds" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">London Business School</a> | Website -</p><p>https://www.london.edu/?utm_source=google&amp;utm_medium=ppc&amp;utm_campaign=MC_BRBRAND_ppc_google&amp;sc_camp=760e17bef14a4b399386ef32e55393a8&amp;gad_source=1&amp;gclid=Cj0KCQjwo8S3BhDeARIsAFRmkON1oXbsOVjQ73dCIwrvngSGSF0PBYwWGVKRtCdil8ptF2vmAzcW7lEaAvCxEALw_wcB&amp;gclsrc=aw.ds</p><p><br></p><p><a href="https://worldcoin.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">WorldCoin</a> - </p><p>https://worldcoin.org/</p><p><br></p><p><a 
href="https://www.project-syndicate.org/commentary/european-union-ai-act-could-impede-innovation-by-s-alex-yang-and-angela-huyue-zhang-2024-02" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">The Case for Regulating Generative AI Through Common Law</a> -</p><p>https://www.project-syndicate.org/commentary/european-union-ai-act-could-impede-innovation-by-s-alex-yang-and-angela-huyue-zhang-2024-02</p><p><br></p><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Generative AI and Copyright: A Dynamic Perspective</a> -</p><p>https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">The rapid rise of AI brings both extraordinary potential and profound risks, demanding urgent global collaboration to ensure its safe development. In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/songayang/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor S. Alex Yang</a><span style="background-color: transparent;">, Professor of Management Science and Operations at the </span><a href="https://www.linkedin.com/school/london-business-school/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">London Business School</a><span style="background-color: transparent;">, to explore the complexities of regulating AI, the challenges of international collaboration, and the potential existential risks posed by AI development. With his extensive experience in AI and risk management, Professor Yang provides unique insights into the future of AI governance.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:12) Professor Yang’s early AI experiences and his value chain research.</span></p><p><span style="background-color: transparent;">(06:57) The biggest risks from AI, including existential risk and job displacement.</span></p><p><span style="background-color: transparent;">(11:42) The debate on AI nationalism and the preservation of cultural heritage.</span></p><p><span style="background-color: transparent;">(16:28) How China’s chip-making capacity could reshape AI competition.</span></p><p><span style="background-color: transparent;">(21:13) Open-source versus closed-source AI models and the risks involved.</span></p><p><span style="background-color: transparent;">(25:58) Why monitoring monopolies in AI is crucial for innovation.</span></p><p><span style="background-color: transparent;">(30:44) How 
content creators can benefit from AI and how copyright law is evolving.</span></p><p><span style="background-color: transparent;">(35:29) The importance of fair use standards for AI-generated content.</span></p><p><span style="background-color: transparent;">(40:14) Data aggregation and its future role in AI development.</span></p><p><span style="background-color: transparent;">(45:00) Professor Yang’s final thoughts on the need for agile, principle-based AI regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/songayang/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor S. Alex Yang</a> -</p><p>https://www.linkedin.com/in/songayang/</p><p><br></p><p><a href="https://www.linkedin.com/school/london-business-school/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">London Business School</a> | LinkedIn -</p><p>https://www.linkedin.com/school/london-business-school/</p><p><br></p><p><a href="https://www.london.edu/?utm_source=google&amp;utm_medium=ppc&amp;utm_campaign=MC_BRBRAND_ppc_google&amp;sc_camp=760e17bef14a4b399386ef32e55393a8&amp;gad_source=1&amp;gclid=Cj0KCQjwo8S3BhDeARIsAFRmkON1oXbsOVjQ73dCIwrvngSGSF0PBYwWGVKRtCdil8ptF2vmAzcW7lEaAvCxEALw_wcB&amp;gclsrc=aw.ds" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">London Business School</a> | Website -</p><p>https://www.london.edu/?utm_source=google&amp;utm_medium=ppc&amp;utm_campaign=MC_BRBRAND_ppc_google&amp;sc_camp=760e17bef14a4b399386ef32e55393a8&amp;gad_source=1&amp;gclid=Cj0KCQjwo8S3BhDeARIsAFRmkON1oXbsOVjQ73dCIwrvngSGSF0PBYwWGVKRtCdil8ptF2vmAzcW7lEaAvCxEALw_wcB&amp;gclsrc=aw.ds</p><p><br></p><p><a href="https://worldcoin.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">WorldCoin</a> - </p><p>https://worldcoin.org/</p><p><br></p><p><a 
href="https://www.project-syndicate.org/commentary/european-union-ai-act-could-impede-innovation-by-s-alex-yang-and-angela-huyue-zhang-2024-02" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">The Case for Regulating Generative AI Through Common Law</a> -</p><p>https://www.project-syndicate.org/commentary/european-union-ai-act-could-impede-innovation-by-s-alex-yang-and-angela-huyue-zhang-2024-02</p><p><br></p><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Generative AI and Copyright: A Dynamic Perspective</a> -</p><p>https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[The rapid rise of AI brings both extraordinary potential and profound risks, demanding urgent global collaboration to ensure its safe development. In this episode, I’m joined by Professor S. Alex Yang, Professor of Management Science and Operations...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>55</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[79e0024b-9ff4-41da-a49b-0544c3df08ac]]></guid>
  <title><![CDATA[Overcoming the Cultural Clash Between AI Innovation and Data Privacy with Norman Sadeh, Professor of Computer Science, Co-Founder and Co-Director, Privacy Engineering Program, Carnegie Mellon University]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">AI presents endless opportunities, but its implications for privacy and governance are multifaceted. On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/normansadeh/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Norman Sadeh</a><span style="background-color: transparent;">, a Computer Science Professor at </span><a href="https://www.linkedin.com/school/carnegie-mellon-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a><span style="background-color: transparent;">, and Co-Founder and Co-Director of the Privacy Engineering Program. With years of experience in AI and privacy, he offers valuable insights into the complexities of AI governance, the evolving landscape of data privacy and why a multidisciplinary approach is vital for creating effective and ethical AI policies.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:09) How Professor Sadeh’s work in AI and privacy began.</span></p><p><span style="background-color: transparent;">(05:30) The role privacy engineers play in AI governance.</span></p><p><span style="background-color: transparent;">(08:45) Why AI governance must integrate with existing company structures.</span></p><p><span style="background-color: transparent;">(12:10) The challenges of data ownership and consent in AI applications.</span></p><p><span style="background-color: transparent;">(15:20) Privacy implications of foundational models in AI.</span></p><p><span style="background-color: transparent;">(18:30) The limitations of current regulations like GDPR in addressing AI concerns.</span></p><p><span style="background-color: transparent;">(22:00) How user expectations shape the principles of AI governance.</span></p>
<p><span style="background-color: transparent;">(26:15) The growing debate around the need for specialized AI regulations.</span></p><p><span style="background-color: transparent;">(30:40) The role of transparency in AI governance for building trust.</span></p><p><span style="background-color: transparent;">(35:50) The potential impact of open-source AI models on security and privacy.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/normansadeh/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Norman Sadeh</a> -</p><p>https://www.linkedin.com/in/normansadeh/</p><p><br></p><p><a href="https://www.linkedin.com/school/carnegie-mellon-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a> | LinkedIn -</p><p>https://www.linkedin.com/school/carnegie-mellon-university/</p><p><br></p><p><a href="https://www.cmu.edu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a> | Website -</p><p>https://www.cmu.edu/</p><p><br></p><p><a href="https://artificialintelligenceact.eu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - </p><p>https://artificialintelligenceact.eu/</p><p><br></p><p><a href="https://gdpr-info.eu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">General Data Protection Regulation (GDPR)</a> -</p><p>https://gdpr-info.eu/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show.
And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/5aff5bef-9b6b-4aee-b015-8399777d05e1/41891f994c.jpg" />
  <pubDate>Tue, 19 Nov 2024 01:10:00 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="39820724" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/5aff5bef-9b6b-4aee-b015-8399777d05e1/episode.mp3" />
  <itunes:title><![CDATA[Overcoming the Cultural Clash Between AI Innovation and Data Privacy with Norman Sadeh, Professor of Computer Science, Co-Founder and Co-Director, Privacy Engineering Program, Carnegie Mellon University]]></itunes:title>
  <itunes:duration>41:28</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">AI presents endless opportunities, but its implications for privacy and governance are multifaceted. On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/normansadeh/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Norman Sadeh</a><span style="background-color: transparent;">, a Computer Science Professor at </span><a href="https://www.linkedin.com/school/carnegie-mellon-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a><span style="background-color: transparent;">, and Co-Founder and Co-Director of the Privacy Engineering Program. With years of experience in AI and privacy, he offers valuable insights into the complexities of AI governance, the evolving landscape of data privacy and why a multidisciplinary approach is vital for creating effective and ethical AI policies.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:09) How Professor Sadeh’s work in AI and privacy began.</span></p><p><span style="background-color: transparent;">(05:30) The role privacy engineers play in AI governance.</span></p><p><span style="background-color: transparent;">(08:45) Why AI governance must integrate with existing company structures.</span></p><p><span style="background-color: transparent;">(12:10) The challenges of data ownership and consent in AI applications.</span></p><p><span style="background-color: transparent;">(15:20) Privacy implications of foundational models in AI.</span></p><p><span style="background-color: transparent;">(18:30) The limitations of current regulations like GDPR in addressing AI concerns.</span></p><p><span style="background-color: transparent;">(22:00) How user expectations shape the principles of AI governance.</span></p>
<p><span style="background-color: transparent;">(26:15) The growing debate around the need for specialized AI regulations.</span></p><p><span style="background-color: transparent;">(30:40) The role of transparency in AI governance for building trust.</span></p><p><span style="background-color: transparent;">(35:50) The potential impact of open-source AI models on security and privacy.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/normansadeh/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Norman Sadeh</a> -</p><p>https://www.linkedin.com/in/normansadeh/</p><p><br></p><p><a href="https://www.linkedin.com/school/carnegie-mellon-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a> | LinkedIn -</p><p>https://www.linkedin.com/school/carnegie-mellon-university/</p><p><br></p><p><a href="https://www.cmu.edu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a> | Website -</p><p>https://www.cmu.edu/</p><p><br></p><p><a href="https://artificialintelligenceact.eu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - </p><p>https://artificialintelligenceact.eu/</p><p><br></p><p><a href="https://gdpr-info.eu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">General Data Protection Regulation (GDPR)</a> -</p><p>https://gdpr-info.eu/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show.
And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">AI presents endless opportunities, but its implications for privacy and governance are multifaceted. On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/normansadeh/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Norman Sadeh</a><span style="background-color: transparent;">, a Computer Science Professor at </span><a href="https://www.linkedin.com/school/carnegie-mellon-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a><span style="background-color: transparent;">, and Co-Founder and Co-Director of the Privacy Engineering Program. With years of experience in AI and privacy, he offers valuable insights into the complexities of AI governance, the evolving landscape of data privacy and why a multidisciplinary approach is vital for creating effective and ethical AI policies.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:09) How Professor Sadeh’s work in AI and privacy began.</span></p><p><span style="background-color: transparent;">(05:30) The role privacy engineers play in AI governance.</span></p><p><span style="background-color: transparent;">(08:45) Why AI governance must integrate with existing company structures.</span></p><p><span style="background-color: transparent;">(12:10) The challenges of data ownership and consent in AI applications.</span></p><p><span style="background-color: transparent;">(15:20) Privacy implications of foundational models in AI.</span></p><p><span style="background-color: transparent;">(18:30) The limitations of current regulations like GDPR in addressing AI concerns.</span></p><p><span style="background-color: transparent;">(22:00) How user expectations shape the principles of AI governance.</span></p>
<p><span style="background-color: transparent;">(26:15) The growing debate around the need for specialized AI regulations.</span></p><p><span style="background-color: transparent;">(30:40) The role of transparency in AI governance for building trust.</span></p><p><span style="background-color: transparent;">(35:50) The potential impact of open-source AI models on security and privacy.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/normansadeh/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Norman Sadeh</a> -</p><p>https://www.linkedin.com/in/normansadeh/</p><p><br></p><p><a href="https://www.linkedin.com/school/carnegie-mellon-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a> | LinkedIn -</p><p>https://www.linkedin.com/school/carnegie-mellon-university/</p><p><br></p><p><a href="https://www.cmu.edu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a> | Website -</p><p>https://www.cmu.edu/</p><p><br></p><p><a href="https://artificialintelligenceact.eu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - </p><p>https://artificialintelligenceact.eu/</p><p><br></p><p><a href="https://gdpr-info.eu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">General Data Protection Regulation (GDPR)</a> -</p><p>https://gdpr-info.eu/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show.
And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[AI presents endless opportunities, but its implications for privacy and governance are multifaceted. On this episode, I’m joined by Professor Norman Sadeh, a Computer Science Professor at Carnegie Mellon University, and Co-Founder and Co-Director o...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>54</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[0846bdfe-c115-4840-b665-096f4fad52fc]]></guid>
  <title><![CDATA[Championing Diversity, AI Skills, and Youth Empowerment: Reshaping Education and the Future of Work]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this inspiring episode, we explore how AI is not only transforming industries but also reshaping education and the future of work. Learn how diversity, AI skills, and youth empowerment are critical in building an ethical, AI-driven world.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our guest, Elena Sinel, FRSA and Founder of Teens in AI, shares her mission to champion diversity and equip young people with the skills they need to thrive in the AI era. She discusses the importance of empowering youth to lead the way in creating ethical AI solutions for a better future.</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/799e8d42-5fa4-4c11-a941-c831a3e7ca68/9269f01af6.jpg" />
  <pubDate>Thu, 07 Nov 2024 07:36:29 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="27121415" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/799e8d42-5fa4-4c11-a941-c831a3e7ca68/episode.mp3" />
  <itunes:title><![CDATA[Championing Diversity, AI Skills, and Youth Empowerment: Reshaping Education and the Future of Work]]></itunes:title>
  <itunes:duration>28:15</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this inspiring episode, we explore how AI is not only transforming industries but also reshaping education and the future of work. Learn how diversity, AI skills, and youth empowerment are critical in building an ethical, AI-driven world.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our guest, Elena Sinel, FRSA and Founder of Teens in AI, shares her mission to champion diversity and equip young people with the skills they need to thrive in the AI era. She discusses the importance of empowering youth to lead the way in creating ethical AI solutions for a better future.</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this inspiring episode, we explore how AI is not only transforming industries but also reshaping education and the future of work. Learn how diversity, AI skills, and youth empowerment are critical in building an ethical, AI-driven world.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our guest, Elena Sinel, FRSA and Founder of Teens in AI, shares her mission to champion diversity and equip young people with the skills they need to thrive in the AI era. She discusses the importance of empowering youth to lead the way in creating ethical AI solutions for a better future.</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this inspiring episode, we explore how AI is not only transforming industries but also reshaping education and the future of work. Learn how diversity, AI skills, and youth empowerment are critical in building an ethical, AI-driven world.Our gue...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[765dd1ad-8dd6-459d-bf6b-918b7ddd7513]]></guid>
  <title><![CDATA[Democratizing AI: The Role of Governments and Ethical Insights in Shaping Policy]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this thought-provoking episode, we explore the crucial role governments play in democratizing AI, ensuring its benefits reach all sectors of society. We discuss the ethical and governance challenges involved in shaping AI policy, as well as the philosophical underpinnings that drive this evolving landscape.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our distinguished guest, Ted Lechterman, Holder of the UNESCO Chair in AI Ethics &amp; Governance at IE University, provides critical perspectives on how governments can lead the way in creating inclusive, ethical AI policies that align with democratic values.</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/b66b977c-c954-4422-bf7e-f15d122feec8/f1c02c25d6.jpg" />
  <pubDate>Thu, 07 Nov 2024 07:27:37 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="17198646" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/b66b977c-c954-4422-bf7e-f15d122feec8/episode.mp3" />
  <itunes:title><![CDATA[Democratizing AI: The Role of Governments and Ethical Insights in Shaping Policy]]></itunes:title>
  <itunes:duration>17:54</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this thought-provoking episode, we explore the crucial role governments play in democratizing AI, ensuring its benefits reach all sectors of society. We discuss the ethical and governance challenges involved in shaping AI policy, as well as the philosophical underpinnings that drive this evolving landscape.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our distinguished guest, Ted Lechterman, Holder of the UNESCO Chair in AI Ethics &amp; Governance at IE University, provides critical perspectives on how governments can lead the way in creating inclusive, ethical AI policies that align with democratic values.</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this thought-provoking episode, we explore the crucial role governments play in democratizing AI, ensuring its benefits reach all sectors of society. We discuss the ethical and governance challenges involved in shaping AI policy, as well as the philosophical underpinnings that drive this evolving landscape.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our distinguished guest, Ted Lechterman, Holder of the UNESCO Chair in AI Ethics &amp; Governance at IE University, provides critical perspectives on how governments can lead the way in creating inclusive, ethical AI policies that align with democratic values.</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this thought-provoking episode, we explore the crucial role governments play in democratizing AI, ensuring its benefits reach all sectors of society. We discuss the ethical and governance challenges involved in shaping AI policy, as well as the ...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[50f71315-4d3e-424e-9084-023fde4289bd]]></guid>
  <title><![CDATA[AI Compliance Challenges: Navigating the European AI Act and Regulatory Frameworks]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode, we dive into the complexities of AI compliance and the challenges organizations face in navigating the evolving regulatory landscape, especially with the European AI Act. Learn how businesses can stay compliant while driving innovation in AI development.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our guest, Sean Musch, Founder and CEO of AI &amp; Partners, shares his expertise on the European AI Act and other regulatory frameworks shaping the future of AI. Discover practical strategies for navigating compliance while fostering responsible AI practices.</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/cba5f5d3-55ca-4fe5-8cd6-b02c5e6cd8d9/b017d3f06c.jpg" />
  <pubDate>Thu, 07 Nov 2024 07:24:56 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="20116001" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/cba5f5d3-55ca-4fe5-8cd6-b02c5e6cd8d9/episode.mp3" />
  <itunes:title><![CDATA[AI Compliance Challenges: Navigating the European AI Act and Regulatory Frameworks]]></itunes:title>
  <itunes:duration>20:57</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode, we dive into the complexities of AI compliance and the challenges organizations face in navigating the evolving regulatory landscape, especially with the European AI Act. Learn how businesses can stay compliant while driving innovation in AI development.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our guest, Sean Musch, Founder and CEO of AI &amp; Partners, shares his expertise on the European AI Act and other regulatory frameworks shaping the future of AI. Discover practical strategies for navigating compliance while fostering responsible AI practices.</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode, we dive into the complexities of AI compliance and the challenges organizations face in navigating the evolving regulatory landscape, especially with the European AI Act. Learn how businesses can stay compliant while driving innovation in AI development.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our guest, Sean Musch, Founder and CEO of AI &amp; Partners, shares his expertise on the European AI Act and other regulatory frameworks shaping the future of AI. Discover practical strategies for navigating compliance while fostering responsible AI practices.</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode, we dive into the complexities of AI compliance and the challenges organizations face in navigating the evolving regulatory landscape, especially with the European AI Act. Learn how businesses can stay compliant while driving innova...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[759d5ae0-f073-4c12-bf3f-b68fbc7b0078]]></guid>
  <title><![CDATA[Harnessing Geospatial Data: Crisis Response and AI Integration | RegulatingAI Podcast Ft Paul Uithol]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode, we explore how geospatial data is being leveraged to improve crisis response efforts through the integration of AI. Learn about the groundbreaking work of the Humanitarian OpenStreetMap Team in mapping vulnerable areas and using AI to support humanitarian missions in real-time.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our guest, Paul Uithol, Director of Humanitarian Data at the Humanitarian OpenStreetMap Team, shares his insights into how geospatial data and AI are transforming disaster management and crisis response. Discover the innovative strategies that enable faster, more accurate responses to humanitarian challenges.</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/50be068f-dd80-4030-934e-c38fa2236955/0e1885dc02.jpg" />
  <pubDate>Thu, 07 Nov 2024 07:22:37 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="17247965" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/50be068f-dd80-4030-934e-c38fa2236955/episode.mp3" />
  <itunes:title><![CDATA[Harnessing Geospatial Data: Crisis Response and AI Integration | RegulatingAI Podcast Ft Paul Uithol]]></itunes:title>
  <itunes:duration>17:57</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode, we explore how geospatial data is being leveraged to improve crisis response efforts through the integration of AI. Learn about the groundbreaking work of the Humanitarian OpenStreetMap Team in mapping vulnerable areas and using AI to support humanitarian missions in real-time.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our guest, Paul Uithol, Director of Humanitarian Data at the Humanitarian OpenStreetMap Team, shares his insights into how geospatial data and AI are transforming disaster management and crisis response. Discover the innovative strategies that enable faster, more accurate responses to humanitarian challenges.</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode, we explore how geospatial data is being leveraged to improve crisis response efforts through the integration of AI. Learn about the groundbreaking work of the Humanitarian OpenStreetMap Team in mapping vulnerable areas and using AI to support humanitarian missions in real-time.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our guest, Paul Uithol, Director of Humanitarian Data at the Humanitarian OpenStreetMap Team, shares his insights into how geospatial data and AI are transforming disaster management and crisis response. Discover the innovative strategies that enable faster, more accurate responses to humanitarian challenges.</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode, we explore how geospatial data is being leveraged to improve crisis response efforts through the integration of AI. Learn about the groundbreaking work of the Humanitarian OpenStreetMap Team in mapping vulnerable areas and using AI...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[074bb946-28ce-4668-adcd-a623a2281617]]></guid>
  <title><![CDATA[Global AI Regulation: Balancing Compliance, Innovation, and Supervision Across Diverse Laws]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode, we explore the complexities of global AI regulation and enforcement, focusing on how governments and organizations can balance the need for compliance while fostering innovation. We dive into the challenges of supervising AI across different legislative frameworks and how these regulations shape the future of AI technologies.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our featured guest, Huub Janssen, Manager on AI at the Ministry of Economic Affairs and the Dutch Authority for Digital Infrastructure, The Netherlands, shares his insights on navigating the regulatory landscape and driving responsible AI development.</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/6670838d-4ca4-44e4-80c3-01915d8ad827/3f478268f0.jpg" />
  <pubDate>Thu, 07 Nov 2024 07:19:57 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="27399358" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/6670838d-4ca4-44e4-80c3-01915d8ad827/episode.mp3" />
  <itunes:title><![CDATA[Global AI Regulation: Balancing Compliance, Innovation, and Supervision Across Diverse Laws]]></itunes:title>
  <itunes:duration>28:32</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode, we explore the complexities of global AI regulation and enforcement, focusing on how governments and organizations can balance the need for compliance while fostering innovation. We dive into the challenges of supervising AI across different legislative frameworks and how these regulations shape the future of AI technologies.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our featured guest, Huub Janssen, Manager on AI at the Ministry of Economic Affairs and the Dutch Authority for Digital Infrastructure, The Netherlands, shares his insights on navigating the regulatory landscape and driving responsible AI development.</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this episode, we explore the complexities of global AI regulation and enforcement, focusing on how governments and organizations can balance the need for compliance while fostering innovation. We dive into the challenges of supervising AI across different legislative frameworks and how these regulations shape the future of AI technologies.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our featured guest, Huub Janssen, Manager on AI at the Ministry of Economic Affairs and the Dutch Authority for Digital Infrastructure, The Netherlands, shares his insights on navigating the regulatory landscape and driving responsible AI development.</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode, we explore the complexities of global AI regulation and enforcement, focusing on how governments and organizations can balance the need for compliance while fostering innovation. We dive into the challenges of supervising AI across...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[574d74ce-881f-49a3-899e-ffa784a539ff]]></guid>
  <title><![CDATA[Bridging the Gap: Navigating AI Governance and Legal Innovation with Hadassah Drukarch]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this insightful episode, we explore the intersection of AI governance and legal innovation. Join us as we discuss the critical challenges and opportunities that arise as organizations strive to implement responsible AI practices in an ever-evolving regulatory landscape.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our esteemed guest, Hadassah Drukarch, Director of Policy and Delivery at the Responsible AI Institute, shares her expertise on how to navigate the complexities of AI governance, legal frameworks, and the importance of fostering ethical AI practices.</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/a2334204-6c90-4cbb-ac51-c13ba06c219a/ac323ce8d4.jpg" />
  <pubDate>Thu, 07 Nov 2024 07:17:39 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="17630816" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/a2334204-6c90-4cbb-ac51-c13ba06c219a/episode.mp3" />
  <itunes:title><![CDATA[Bridging the Gap: Navigating AI Governance and Legal Innovation with Hadassah Drukarch]]></itunes:title>
  <itunes:duration>18:21</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this insightful episode, we explore the intersection of AI governance and legal innovation. Join us as we discuss the critical challenges and opportunities that arise as organizations strive to implement responsible AI practices in an ever-evolving regulatory landscape.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our esteemed guest, Hadassah Drukarch, Director of Policy and Delivery at the Responsible AI Institute, shares her expertise on how to navigate the complexities of AI governance, legal frameworks, and the importance of fostering ethical AI practices.</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this insightful episode, we explore the intersection of AI governance and legal innovation. Join us as we discuss the critical challenges and opportunities that arise as organizations strive to implement responsible AI practices in an ever-evolving regulatory landscape.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our esteemed guest, Hadassah Drukarch, Director of Policy and Delivery at the Responsible AI Institute, shares her expertise on how to navigate the complexities of AI governance, legal frameworks, and the importance of fostering ethical AI practices.</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this insightful episode, we explore the intersection of AI governance and legal innovation. Join us as we discuss the critical challenges and opportunities that arise as organizations strive to implement responsible AI practices in an ever-evolv...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[f9dd7562-b2fd-4123-949a-767923553d57]]></guid>
  <title><![CDATA[How AI is Revolutionising Disaster Response: Bridging the Gap for Vulnerable Communities]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this compelling episode, we explore how artificial intelligence is transforming disaster response efforts, especially for vulnerable communities impacted by crises. Join us as we discuss innovative strategies that leverage AI to enhance humanitarian action and build more resilient systems.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our special guest, Katya Klinova, Head of AI and Data Insights for Social and Humanitarian Action at the United Nations Secretary-General's Innovation Lab, shares invaluable insights into the role of AI in disaster management and its potential to bridge critical gaps in support for those most in need.</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/5daf82e5-e9cf-473d-acb4-41b9f986126c/1eeb857d9a.jpg" />
  <pubDate>Thu, 07 Nov 2024 07:15:06 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="13617154" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/5daf82e5-e9cf-473d-acb4-41b9f986126c/episode.mp3" />
  <itunes:title><![CDATA[How AI is Revolutionising Disaster Response: Bridging the Gap for Vulnerable Communities]]></itunes:title>
  <itunes:duration>14:11</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this compelling episode, we explore how artificial intelligence is transforming disaster response efforts, especially for vulnerable communities impacted by crises. Join us as we discuss innovative strategies that leverage AI to enhance humanitarian action and build more resilient systems.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our special guest, Katya Klinova, Head of AI and Data Insights for Social and Humanitarian Action at the United Nations Secretary-General's Innovation Lab, shares invaluable insights into the role of AI in disaster management and its potential to bridge critical gaps in support for those most in need.</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">In this compelling episode, we explore how artificial intelligence is transforming disaster response efforts, especially for vulnerable communities impacted by crises. Join us as we discuss innovative strategies that leverage AI to enhance humanitarian action and build more resilient systems.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Our special guest, Katya Klinova, Head of AI and Data Insights for Social and Humanitarian Action at the United Nations Secretary-General's Innovation Lab, shares invaluable insights into the role of AI in disaster management and its potential to bridge critical gaps in support for those most in need.</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this compelling episode, we explore how artificial intelligence is transforming disaster response efforts, especially for vulnerable communities impacted by crises. Join us as we discuss innovative strategies that leverage AI to enhance humanita...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[1bfacabd-6df2-45fc-b857-56a3bc192703]]></guid>
  <title><![CDATA[Transform Your Organization with AI: Augment, Reskill, Improve HCI, & Hire Ethically]]></title>
  <description><![CDATA[<p><span style="color: rgb(13, 13, 13);">In the latest episode of the RegulatingAI Podcast at the World Summit AI on October 9, 2024, the discussion dives deep into the critical AI competencies driving organizational transformation. The episode explores how AI revolutionizes the workforce through augmentation, reskilling, and enhancing human-computer interaction, all while promoting ethical AI hiring practices.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Special guest Dr. Kevin J. Jones, Director at the IU Columbus Center for Teaching and Learning and Associate Professor of Management, shares insights on how leaders can leverage AI to enhance their organizations and stay ahead of the curve.</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/079a664f-ec41-43b4-b576-2bd0047de02b/0da87be436.jpg" />
  <pubDate>Thu, 07 Nov 2024 07:07:46 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="17874904" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/079a664f-ec41-43b4-b576-2bd0047de02b/episode.mp3" />
  <itunes:title><![CDATA[Transform Your Organization with AI: Augment, Reskill, Improve HCI, & Hire Ethically]]></itunes:title>
  <itunes:duration>18:37</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(13, 13, 13);">In the latest episode of the RegulatingAI Podcast at the World Summit AI on October 9, 2024, the discussion dives deep into the critical AI competencies driving organizational transformation. The episode explores how AI revolutionizes the workforce through augmentation, reskilling, and enhancing human-computer interaction, all while promoting ethical AI hiring practices.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Special guest Dr. Kevin J. Jones, Director at the IU Columbus Center for Teaching and Learning and Associate Professor of Management, shares insights on how leaders can leverage AI to enhance their organizations and stay ahead of the curve.</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(13, 13, 13);">In the latest episode of the RegulatingAI Podcast at the World Summit AI on October 9, 2024, the discussion dives deep into the critical AI competencies driving organizational transformation. The episode explores how AI revolutionizes the workforce through augmentation, reskilling, and enhancing human-computer interaction, all while promoting ethical AI hiring practices.</span></p><p><br></p><p><span style="color: rgb(13, 13, 13);">Special guest Dr. Kevin J. Jones, Director at the IU Columbus Center for Teaching and Learning and Associate Professor of Management, shares insights on how leaders can leverage AI to enhance their organizations and stay ahead of the curve.</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In the latest episode of the RegulatingAI Podcast at the World Summit AI on October 9, 2024, the discussion dives deep into the critical AI competencies driving organizational transformation. The episode explores how AI revolutionizes the workforce...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>bonus</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[5849d2ec-0930-4d84-8995-be291f517663]]></guid>
  <title><![CDATA[How AI Is Reshaping Industries and Society with Professor Ruslan Salakhutdinov, UPMC Professor of Computer Science at Carnegie Mellon University]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/russ-salakhutdinov-53a0b610/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ruslan Salakhutdinov</a><span style="background-color: transparent;">, UPMC Professor of Computer Science at </span><a href="https://www.linkedin.com/school/carnegie-mellon-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a><span style="background-color: transparent;">. Ruslan discusses the pressing need for AI regulation, its potential for societal transformation and the ethical considerations of its future development, including how to safeguard humanity while embracing innovation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:14) The need to regulate AI to prevent monopolization by large corporations.</span></p><p><span style="background-color: transparent;">(06:03) The dangers of AI-driven misinformation and its impact on public opinion.</span></p><p><span style="background-color: transparent;">(10:32) The risks AI poses in job displacement across multiple industries.</span></p><p><span style="background-color: transparent;">(14:22) How deepfake technology is evolving and its potential consequences.</span></p><p><span style="background-color: transparent;">(18:47) The challenge of balancing AI innovation with data privacy concerns.</span></p><p><span style="background-color: transparent;">(22:10) AI’s growing role in military applications and the need for careful oversight.</span></p><p><span style="background-color: transparent;">(26:05) How AI agents could autonomously interact and the risks involved.</span></p><p><span style="background-color: transparent;">(31:30) The potential for AI to surpass human performance in 
certain professions.</span></p><p><span style="background-color: transparent;">(37:14) Why international collaboration is critical for effective AI regulation.</span></p><p><span style="background-color: transparent;">(42:56) The ethical dilemmas surrounding AI’s influence in healthcare and decision-making.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/russ-salakhutdinov-53a0b610/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ruslan Salakhutdinov</a> - </p><p>https://www.linkedin.com/in/russ-salakhutdinov-53a0b610/</p><p><br></p><p><a href="https://openai.com/index/sora/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">OpenAI’s Sora Technology</a> - </p><p>https://openai.com/index/sora/</p><p><br></p><p><a href="https://www.linkedin.com/pulse/geoffrey-hinton-alan-francis/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Geoffrey Hinton and his contributions to AI</a> -</p><p>https://www.linkedin.com/pulse/geoffrey-hinton-alan-francis/</p><p><br></p><p><a href="https://www.cmu.edu" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a> -</p><p>https://www.cmu.edu</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/9bce3928-47e6-453d-8dd4-089a178f3143/efddb697e6.jpg" />
  <pubDate>Tue, 05 Nov 2024 01:30:00 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="48678115" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/9bce3928-47e6-453d-8dd4-089a178f3143/episode.mp3" />
  <itunes:title><![CDATA[How AI Is Reshaping Industries and Society with Professor Ruslan Salakhutdinov, UPMC Professor of Computer Science at Carnegie Mellon University]]></itunes:title>
  <itunes:duration>50:42</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/russ-salakhutdinov-53a0b610/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ruslan Salakhutdinov</a><span style="background-color: transparent;">, UPMC Professor of Computer Science at </span><a href="https://www.linkedin.com/school/carnegie-mellon-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a><span style="background-color: transparent;">. Ruslan discusses the pressing need for AI regulation, its potential for societal transformation and the ethical considerations of its future development, including how to safeguard humanity while embracing innovation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:14) The need to regulate AI to prevent monopolization by large corporations.</span></p><p><span style="background-color: transparent;">(06:03) The dangers of AI-driven misinformation and its impact on public opinion.</span></p><p><span style="background-color: transparent;">(10:32) The risks AI poses in job displacement across multiple industries.</span></p><p><span style="background-color: transparent;">(14:22) How deepfake technology is evolving and its potential consequences.</span></p><p><span style="background-color: transparent;">(18:47) The challenge of balancing AI innovation with data privacy concerns.</span></p><p><span style="background-color: transparent;">(22:10) AI’s growing role in military applications and the need for careful oversight.</span></p><p><span style="background-color: transparent;">(26:05) How AI agents could autonomously interact and the risks involved.</span></p><p><span style="background-color: transparent;">(31:30) The potential for AI to surpass human performance in 
certain professions.</span></p><p><span style="background-color: transparent;">(37:14) Why international collaboration is critical for effective AI regulation.</span></p><p><span style="background-color: transparent;">(42:56) The ethical dilemmas surrounding AI’s influence in healthcare and decision-making.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/russ-salakhutdinov-53a0b610/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ruslan Salakhutdinov</a> - </p><p>https://www.linkedin.com/in/russ-salakhutdinov-53a0b610/</p><p><br></p><p><a href="https://openai.com/index/sora/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">OpenAI’s Sora Technology</a> - </p><p>https://openai.com/index/sora/</p><p><br></p><p><a href="https://www.linkedin.com/pulse/geoffrey-hinton-alan-francis/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Geoffrey Hinton and his contributions to AI</a> -</p><p>https://www.linkedin.com/pulse/geoffrey-hinton-alan-francis/</p><p><br></p><p><a href="https://www.cmu.edu" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a> -</p><p>https://www.cmu.edu</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/russ-salakhutdinov-53a0b610/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ruslan Salakhutdinov</a><span style="background-color: transparent;">, UPMC Professor of Computer Science at </span><a href="https://www.linkedin.com/school/carnegie-mellon-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a><span style="background-color: transparent;">. Ruslan discusses the pressing need for AI regulation, its potential for societal transformation and the ethical considerations of its future development, including how to safeguard humanity while embracing innovation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:14) The need to regulate AI to prevent monopolization by large corporations.</span></p><p><span style="background-color: transparent;">(06:03) The dangers of AI-driven misinformation and its impact on public opinion.</span></p><p><span style="background-color: transparent;">(10:32) The risks AI poses in job displacement across multiple industries.</span></p><p><span style="background-color: transparent;">(14:22) How deepfake technology is evolving and its potential consequences.</span></p><p><span style="background-color: transparent;">(18:47) The challenge of balancing AI innovation with data privacy concerns.</span></p><p><span style="background-color: transparent;">(22:10) AI’s growing role in military applications and the need for careful oversight.</span></p><p><span style="background-color: transparent;">(26:05) How AI agents could autonomously interact and the risks involved.</span></p><p><span style="background-color: transparent;">(31:30) The potential for AI to surpass human performance in 
certain professions.</span></p><p><span style="background-color: transparent;">(37:14) Why international collaboration is critical for effective AI regulation.</span></p><p><span style="background-color: transparent;">(42:56) The ethical dilemmas surrounding AI’s influence in healthcare and decision-making.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/russ-salakhutdinov-53a0b610/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ruslan Salakhutdinov</a> - </p><p>https://www.linkedin.com/in/russ-salakhutdinov-53a0b610/</p><p><br></p><p><a href="https://openai.com/index/sora/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">OpenAI’s Sora Technology</a> - </p><p>https://openai.com/index/sora/</p><p><br></p><p><a href="https://www.linkedin.com/pulse/geoffrey-hinton-alan-francis/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Geoffrey Hinton and his contributions to AI</a> -</p><p>https://www.linkedin.com/pulse/geoffrey-hinton-alan-francis/</p><p><br></p><p><a href="https://www.cmu.edu" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a> -</p><p>https://www.cmu.edu</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I’m joined by Ruslan Salakhutdinov, UPMC Professor of Computer Science at Carnegie Mellon University. Ruslan discusses the pressing need for AI regulation, its potential for societal transformation and the ethical considerations of...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>53</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[b78e48b8-36fb-4799-a645-6c32ac9a5d41]]></guid>
  <title><![CDATA[Breaking Down the Senate AI Policy Roadmap with Senator Todd Young of the United States Senate]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">The race for AI leadership is not just about technology; it’s a battle of values and national security that will shape our future. In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/senator-todd-young/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senator Todd Young</a><span style="background-color: transparent;">, United States Senator (R-Ind.) at the </span><a href="https://www.linkedin.com/company/ussenate/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">United States Senate</a><span style="background-color: transparent;">. He shares insights into AI policy, national security and the steps needed to maintain US leadership in this critical field.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:54) The bipartisan effort behind the Senate AI Working Group.</span></p><p><span style="background-color: transparent;">(03:34) How existing laws adapt to an AI-enabled world.</span></p><p><span style="background-color: transparent;">(05:17) Identifying AI risks and regulatory barriers.</span></p><p><span style="background-color: transparent;">(07:41) The role of government expertise in AI-related areas.</span></p><p><span style="background-color: transparent;">(10:12) Understanding the significance of the $32 billion AI public investment.</span></p><p><span style="background-color: transparent;">(13:17) Applying AI innovations across various industries.</span></p><p><span style="background-color: transparent;">(15:27) The impact of China on AI competition and US strategy.</span></p><p><span style="background-color: transparent;">(17:44) Why semiconductors are vital to AI development.</span></p><p><span style="background-color: transparent;">(20:26) Balancing open-source and closed-source AI 
models.</span></p><p><span style="background-color: transparent;">(22:51) The need for global AI standards and harmonization.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/senator-todd-young/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senator Todd Young</a> - </p><p>https://www.linkedin.com/in/senator-todd-young/</p><p><br></p><p><a href="https://www.young.senate.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Todd Young</a> - </p><p>https://www.young.senate.gov/</p><p><br></p><p><a href="https://www.linkedin.com/company/ussenate/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">United States Senate</a> - </p><p>https://www.linkedin.com/company/ussenate/</p><p><br></p><p><a href="https://nairrpilot.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National AI Research Resource</a> - </p><p>https://nairrpilot.org/</p><p><br></p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2022/08/09/fact-sheet-chips-and-science-act-will-lower-costs-create-jobs-strengthen-supply-chains-and-counter-china/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">CHIPS and Science Act</a> - </p><p>https://www.whitehouse.gov/briefing-room/statements-releases/2022/08/09/fact-sheet-chips-and-science-act-will-lower-costs-create-jobs-strengthen-supply-chains-and-counter-china/</p><p><br></p><p><a href="https://www.young.senate.gov/wp-content/uploads/One_Pager_Roadmap.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senate AI Policy Roadmap</a> - </p><p>https://www.young.senate.gov/wp-content/uploads/One_Pager_Roadmap.pdf</p><p><br></p><p><a href="https://reports.nscai.gov/final-report/introduction" target="_blank" style="background-color: transparent; 
color: rgb(17, 85, 204);">National Security Commission on Artificial Intelligence</a> - </p><p>https://reports.nscai.gov/final-report/introduction</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/25a6f39c-a944-48a6-9703-7a9f5361738c/da9229a028.jpg" />
  <pubDate>Wed, 16 Oct 2024 08:21:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="26200270" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/25a6f39c-a944-48a6-9703-7a9f5361738c/episode.mp3" />
  <itunes:title><![CDATA[Breaking Down the Senate AI Policy Roadmap with Senator Todd Young of the United States Senate]]></itunes:title>
  <itunes:duration>27:17</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">The race for AI leadership is not just about technology; it’s a battle of values and national security that will shape our future. In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/senator-todd-young/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senator Todd Young</a><span style="background-color: transparent;">, United States Senator (R-Ind.) at the </span><a href="https://www.linkedin.com/company/ussenate/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">United States Senate</a><span style="background-color: transparent;">. He shares insights into AI policy, national security and the steps needed to maintain US leadership in this critical field.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:54) The bipartisan effort behind the Senate AI Working Group.</span></p><p><span style="background-color: transparent;">(03:34) How existing laws adapt to an AI-enabled world.</span></p><p><span style="background-color: transparent;">(05:17) Identifying AI risks and regulatory barriers.</span></p><p><span style="background-color: transparent;">(07:41) The role of government expertise in AI-related areas.</span></p><p><span style="background-color: transparent;">(10:12) Understanding the significance of the $32 billion AI public investment.</span></p><p><span style="background-color: transparent;">(13:17) Applying AI innovations across various industries.</span></p><p><span style="background-color: transparent;">(15:27) The impact of China on AI competition and US strategy.</span></p><p><span style="background-color: transparent;">(17:44) Why semiconductors are vital to AI development.</span></p><p><span style="background-color: transparent;">(20:26) Balancing open-source and closed-source AI 
models.</span></p><p><span style="background-color: transparent;">(22:51) The need for global AI standards and harmonization.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/senator-todd-young/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senator Todd Young</a> - </p><p>https://www.linkedin.com/in/senator-todd-young/</p><p><br></p><p><a href="https://www.young.senate.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Todd Young</a> - </p><p>https://www.young.senate.gov/</p><p><br></p><p><a href="https://www.linkedin.com/company/ussenate/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">United States Senate</a> - </p><p>https://www.linkedin.com/company/ussenate/</p><p><br></p><p><a href="https://nairrpilot.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National AI Research Resource</a> - </p><p>https://nairrpilot.org/</p><p><br></p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2022/08/09/fact-sheet-chips-and-science-act-will-lower-costs-create-jobs-strengthen-supply-chains-and-counter-china/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">CHIPS and Science Act</a> - </p><p>https://www.whitehouse.gov/briefing-room/statements-releases/2022/08/09/fact-sheet-chips-and-science-act-will-lower-costs-create-jobs-strengthen-supply-chains-and-counter-china/</p><p><br></p><p><a href="https://www.young.senate.gov/wp-content/uploads/One_Pager_Roadmap.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senate AI Policy Roadmap</a> - </p><p>https://www.young.senate.gov/wp-content/uploads/One_Pager_Roadmap.pdf</p><p><br></p><p><a href="https://reports.nscai.gov/final-report/introduction" target="_blank" style="background-color: transparent; 
color: rgb(17, 85, 204);">National Security Commission on Artificial Intelligence</a> - </p><p>https://reports.nscai.gov/final-report/introduction</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">The race for AI leadership is not just about technology; it’s a battle of values and national security that will shape our future. In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/senator-todd-young/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senator Todd Young</a><span style="background-color: transparent;">, United States Senator (R-Ind.) at the </span><a href="https://www.linkedin.com/company/ussenate/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">United States Senate</a><span style="background-color: transparent;">. He shares insights into AI policy, national security and the steps needed to maintain US leadership in this critical field.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:54) The bipartisan effort behind the Senate AI Working Group.</span></p><p><span style="background-color: transparent;">(03:34) How existing laws adapt to an AI-enabled world.</span></p><p><span style="background-color: transparent;">(05:17) Identifying AI risks and regulatory barriers.</span></p><p><span style="background-color: transparent;">(07:41) The role of government expertise in AI-related areas.</span></p><p><span style="background-color: transparent;">(10:12) Understanding the significance of the $32 billion AI public investment.</span></p><p><span style="background-color: transparent;">(13:17) Applying AI innovations across various industries.</span></p><p><span style="background-color: transparent;">(15:27) The impact of China on AI competition and US strategy.</span></p><p><span style="background-color: transparent;">(17:44) Why semiconductors are vital to AI development.</span></p><p><span style="background-color: transparent;">(20:26) Balancing open-source and closed-source AI 
models.</span></p><p><span style="background-color: transparent;">(22:51) The need for global AI standards and harmonization.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/senator-todd-young/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senator Todd Young</a> - </p><p>https://www.linkedin.com/in/senator-todd-young/</p><p><br></p><p><a href="https://www.young.senate.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Todd Young</a> - </p><p>https://www.young.senate.gov/</p><p><br></p><p><a href="https://www.linkedin.com/company/ussenate/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">United States Senate</a> - </p><p>https://www.linkedin.com/company/ussenate/</p><p><br></p><p><a href="https://nairrpilot.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National AI Research Resource</a> - </p><p>https://nairrpilot.org/</p><p><br></p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2022/08/09/fact-sheet-chips-and-science-act-will-lower-costs-create-jobs-strengthen-supply-chains-and-counter-china/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">CHIPS and Science Act</a> - </p><p>https://www.whitehouse.gov/briefing-room/statements-releases/2022/08/09/fact-sheet-chips-and-science-act-will-lower-costs-create-jobs-strengthen-supply-chains-and-counter-china/</p><p><br></p><p><a href="https://www.young.senate.gov/wp-content/uploads/One_Pager_Roadmap.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senate AI Policy Roadmap</a> - </p><p>https://www.young.senate.gov/wp-content/uploads/One_Pager_Roadmap.pdf</p><p><br></p><p><a href="https://reports.nscai.gov/final-report/introduction" target="_blank" style="background-color: transparent; 
color: rgb(17, 85, 204);">National Security Commission on Artificial Intelligence</a> - </p><p>https://reports.nscai.gov/final-report/introduction</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[The race for AI leadership is not just about technology; it’s a battle of values and national security that will shape our future. In this episode, I’m joined by Senator Todd Young, United States Senator (R-Ind.) at the United States Senate. He sha...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>52</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[ecb5e575-3ee6-4bd8-bfd9-63f10d70f0dc]]></guid>
  <title><![CDATA[AI's Role in Accelerating Drug Development and Clinical Trials with Raphael Townshend, PhD, Founder and CEO of Atomic AI]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">AI and RNA are revolutionizing drug discovery, promising a future where life-saving medications are developed faster and at lower costs.</span></p><p><br></p><p><span style="background-color: transparent;">In this episode, </span><a href="https://www.linkedin.com/in/raphael-townshend-9154962a/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Raphael Townshend</a><span style="background-color: transparent;">, PhD, Founder and CEO of </span><a href="https://www.linkedin.com/company/atomic-ai-rna/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Atomic AI</a><span style="background-color: transparent;">, sits down with me to discuss the intersection of AI and RNA in drug development. We explore how AI technologies can reduce the cost and time required for clinical trials and target previously incurable diseases.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:15) Raphael's background in AI and biology, and founding of Atomic AI.</span></p><p><span style="background-color: transparent;">(05:59) Reducing time and failure rate in drug discovery with AI.</span></p><p><span style="background-color: transparent;">(07:16) AlphaFold's breakthrough in understanding molecular shapes using AI.</span></p><p><span style="background-color: transparent;">(09:23) Ensuring transparency and accountability in AI-driven drug discovery.</span></p><p><span style="background-color: transparent;">(12:22) Navigating intellectual property concerns in healthcare AI.</span></p><p><span style="background-color: transparent;">(15:34) Integrating AI with wet lab testing for accurate drug discovery results.</span></p><p><span style="background-color: transparent;">(17:31) Balancing intellectual property and open research in biotech.</span></p><p><span 
style="background-color: transparent;">(20:02) Addressing data privacy and security in AI algorithms.</span></p><p><span style="background-color: transparent;">(22:30) Educating users and healthcare professionals about AI in drug discovery.</span></p><p><span style="background-color: transparent;">(24:48) Collaborating with global regulators for AI-driven drug discovery innovations.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/raphael-townshend-9154962a/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Raphael Townshend</a> - </p><p>https://www.linkedin.com/in/raphael-townshend-9154962a/</p><p><br></p><p><a href="https://www.linkedin.com/company/atomic-ai-rna/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Atomic AI</a> | LinkedIn - </p><p>https://www.linkedin.com/company/atomic-ai-rna/</p><p><br></p><p><a href="https://deepmind.google/technologies/alphafold/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AlphaFold</a> - </p><p>https://deepmind.google/technologies/alphafold/</p><p><br></p><p><a href="https://atomic.ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Atomic AI Homepage</a> - </p><p>https://atomic.ai/</p><p><br></p><p><a href="https://www.biospace.com/atomic-ai-creates-first-large-language-model-using-chemical-mapping-data-to-optimize-rna-therapeutic-development" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">ATOM-1 Large Language Model</a> - </p><p>https://www.biospace.com/atomic-ai-creates-first-large-language-model-using-chemical-mapping-data-to-optimize-rna-therapeutic-development</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. 
If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/178e33a2-7971-4194-9855-fb8af6f57ac3/8e23802a61.jpg" />
  <pubDate>Tue, 01 Oct 2024 08:42:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="30247369" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/178e33a2-7971-4194-9855-fb8af6f57ac3/episode.mp3" />
  <itunes:title><![CDATA[AI's Role in Accelerating Drug Development and Clinical Trials with Raphael Townshend, PhD, Founder and CEO of Atomic AI]]></itunes:title>
  <itunes:duration>31:30</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">AI and RNA are revolutionizing drug discovery, promising a future where life-saving medications are developed faster and at lower costs.</span></p><p><br></p><p><span style="background-color: transparent;">In this episode, </span><a href="https://www.linkedin.com/in/raphael-townshend-9154962a/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Raphael Townshend</a><span style="background-color: transparent;">, PhD, Founder and CEO of </span><a href="https://www.linkedin.com/company/atomic-ai-rna/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Atomic AI</a><span style="background-color: transparent;">, sits down with me to discuss the intersection of AI and RNA in drug development. We explore how AI technologies can reduce the cost and time required for clinical trials and target previously incurable diseases.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:15) Raphael's background in AI and biology, and founding of Atomic AI.</span></p><p><span style="background-color: transparent;">(05:59) Reducing time and failure rate in drug discovery with AI.</span></p><p><span style="background-color: transparent;">(07:16) AlphaFold's breakthrough in understanding molecular shapes using AI.</span></p><p><span style="background-color: transparent;">(09:23) Ensuring transparency and accountability in AI-driven drug discovery.</span></p><p><span style="background-color: transparent;">(12:22) Navigating intellectual property concerns in healthcare AI.</span></p><p><span style="background-color: transparent;">(15:34) Integrating AI with wet lab testing for accurate drug discovery results.</span></p><p><span style="background-color: transparent;">(17:31) Balancing intellectual property and open research in biotech.</span></p><p><span 
style="background-color: transparent;">(20:02) Addressing data privacy and security in AI algorithms.</span></p><p><span style="background-color: transparent;">(22:30) Educating users and healthcare professionals about AI in drug discovery.</span></p><p><span style="background-color: transparent;">(24:48) Collaborating with global regulators for AI-driven drug discovery innovations.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/raphael-townshend-9154962a/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Raphael Townshend</a> - </p><p>https://www.linkedin.com/in/raphael-townshend-9154962a/</p><p><br></p><p><a href="https://www.linkedin.com/company/atomic-ai-rna/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Atomic AI</a> | LinkedIn - </p><p>https://www.linkedin.com/company/atomic-ai-rna/</p><p><br></p><p><a href="https://deepmind.google/technologies/alphafold/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AlphaFold</a> - </p><p>https://deepmind.google/technologies/alphafold/</p><p><br></p><p><a href="https://atomic.ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Atomic AI Homepage</a> - </p><p>https://atomic.ai/</p><p><br></p><p><a href="https://www.biospace.com/atomic-ai-creates-first-large-language-model-using-chemical-mapping-data-to-optimize-rna-therapeutic-development" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">ATOM-1 Large Language Model</a> - </p><p>https://www.biospace.com/atomic-ai-creates-first-large-language-model-using-chemical-mapping-data-to-optimize-rna-therapeutic-development</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. 
If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">AI and RNA are revolutionizing drug discovery, promising a future where life-saving medications are developed faster and at lower costs.</span></p><p><br></p><p><span style="background-color: transparent;">In this episode, </span><a href="https://www.linkedin.com/in/raphael-townshend-9154962a/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Raphael Townshend</a><span style="background-color: transparent;">, PhD, Founder and CEO of </span><a href="https://www.linkedin.com/company/atomic-ai-rna/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Atomic AI</a><span style="background-color: transparent;">, sits down with me to discuss the intersection of AI and RNA in drug development. We explore how AI technologies can reduce the cost and time required for clinical trials and target previously incurable diseases.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:15) Raphael's background in AI and biology, and founding of Atomic AI.</span></p><p><span style="background-color: transparent;">(05:59) Reducing time and failure rate in drug discovery with AI.</span></p><p><span style="background-color: transparent;">(07:16) AlphaFold's breakthrough in understanding molecular shapes using AI.</span></p><p><span style="background-color: transparent;">(09:23) Ensuring transparency and accountability in AI-driven drug discovery.</span></p><p><span style="background-color: transparent;">(12:22) Navigating intellectual property concerns in healthcare AI.</span></p><p><span style="background-color: transparent;">(15:34) Integrating AI with wet lab testing for accurate drug discovery results.</span></p><p><span style="background-color: transparent;">(17:31) Balancing intellectual property and open research in biotech.</span></p><p><span 
style="background-color: transparent;">(20:02) Addressing data privacy and security in AI algorithms.</span></p><p><span style="background-color: transparent;">(22:30) Educating users and healthcare professionals about AI in drug discovery.</span></p><p><span style="background-color: transparent;">(24:48) Collaborating with global regulators for AI-driven drug discovery innovations.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/raphael-townshend-9154962a/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Raphael Townshend</a> - </p><p>https://www.linkedin.com/in/raphael-townshend-9154962a/</p><p><br></p><p><a href="https://www.linkedin.com/company/atomic-ai-rna/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Atomic AI</a> | LinkedIn - </p><p>https://www.linkedin.com/company/atomic-ai-rna/</p><p><br></p><p><a href="https://deepmind.google/technologies/alphafold/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AlphaFold</a> - </p><p>https://deepmind.google/technologies/alphafold/</p><p><br></p><p><a href="https://atomic.ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Atomic AI Homepage</a> - </p><p>https://atomic.ai/</p><p><br></p><p><a href="https://www.biospace.com/atomic-ai-creates-first-large-language-model-using-chemical-mapping-data-to-optimize-rna-therapeutic-development" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">ATOM-1 Large Language Model</a> - </p><p>https://www.biospace.com/atomic-ai-creates-first-large-language-model-using-chemical-mapping-data-to-optimize-rna-therapeutic-development</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. 
If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[AI and RNA are revolutionizing drug discovery, promising a future where life-saving medications are developed faster and at lower costs.In this episode, Raphael Townshend, PhD, Founder and CEO of Atomic AI, sits down with me to discuss the intersec...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>51</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[c086a8ad-f407-4e2b-a989-0f81192d17ed]]></guid>
  <title><![CDATA[Addressing Bias in AI To Build Trust in Technology with Dr. Rashawn Ray, Vice President of the American Institutes for Research (AIR) and Executive Director of AIR Equity Initiative, Professor of Sociology at the University of Maryland, and Senior Fellow at The Brookings Institution]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/sociologistray/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Rashawn Ray</a><span style="background-color: transparent;">, Vice President at the American Institutes for Research (AIR) and Executive Director of </span><a href="https://www.linkedin.com/showcase/air-equity-initiative/about/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AIR Equity Initiative</a><span style="background-color: transparent;">, Professor of Sociology at the </span><a href="https://www.linkedin.com/school/university-of-maryland/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">University of Maryland</a><span style="background-color: transparent;"> and Senior Fellow at </span><a href="https://www.linkedin.com/company/the-brookings-institution/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">The Brookings Institution</a><span style="background-color: transparent;">. Dr. 
Ray’s innovative work lies at the powerful intersection of policing, technology and social equity, where he explores how AI can be designed and implemented to enhance fairness, reduce inequality and ultimately be a force for positive change in both local communities and the broader world.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:00) Regulating AI without stifling innovation is crucial.</span></p><p><span style="background-color: transparent;">(07:06) How virtual reality enhances police training by addressing implicit bias.</span></p><p><span style="background-color: transparent;">(12:22) The impact of diverse teams on equitable AI development.</span></p><p><span style="background-color: transparent;">(19:36) Overcoming challenges in implementing VR training in smaller law enforcement agencies.</span></p><p><span style="background-color: transparent;">(25:50) Tech companies collaborating on socially impactful AI projects is vital.</span></p><p><span style="background-color: transparent;">(31:55) Community involvement is critical in shaping AI and VR technologies.</span></p><p><span style="background-color: transparent;">(36:21) The role of DEI initiatives in improving AI’s fairness and effectiveness.</span></p><p><span style="background-color: transparent;">(42:09) The future of AI legislation and its potential to democratize technology.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/sociologistray/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. 
Rashawn Ray</a> - </p><p>https://www.linkedin.com/in/sociologistray/</p><p><br></p><p><a href="https://www.air.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AIR | Website</a> - https://www.air.org/</p><p><br></p><p><a href="https://www.linkedin.com/showcase/air-equity-initiative/about/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AIR Equity Initiative</a> | LinkedIn - </p><p>https://www.linkedin.com/showcase/air-equity-initiative/about/</p><p><br></p><p><a href="https://www.air.org/air-equity-initiative-bridge-more-equitable-world" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AIR Equity Initiative</a> Website - </p><p>https://www.air.org/air-equity-initiative-bridge-more-equitable-world</p><p><br></p><p><a href="https://socy.umd.edu/centers/lab-applied-social-science-research-%28lassr%29" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Lab for Applied Social Science Research</a> - </p><p>https://socy.umd.edu/centers/lab-applied-social-science-research-%28lassr%29</p><p><br></p><p><a href="https://www.brookings.edu" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Brookings Institution</a> - </p><p>https://www.brookings.edu</p><p><br></p><p><a href="https://www.air.org/experts/person/rashawn-ray" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Rashawn Ray - AIR</a><span style="background-color: transparent; color: rgb(17, 85, 204);">&nbsp;- </span></p><p>https://www.air.org/experts/person/rashawn-ray</p><p><br></p><p><a href="https://www.rashawnray.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. 
Rashawn Ray | Website</a> - </p><p>https://www.rashawnray.com/</p><p><br></p><p><a href="https://uncmap.org/publication/chat-wp/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">“Extracting Protest Events from Newspaper Articles with ChatGPT” (working paper)</a> - https://uncmap.org/publication/chat-wp/</p><p><br></p><p><a href="https://www.brookings.edu/articles/5-questions-policymakers-should-ask-about-facial-recognition-law-enforcement-and-algorithmic-bias/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">“5 questions policymakers should ask about facial recognition, law enforcement and algorithmic bias”</a> - https://www.brookings.edu/articles/5-questions-policymakers-should-ask-about-facial-recognition-law-enforcement-and-algorithmic-bias/</p><p><br></p><p><a href="https://www.brookings.edu/articles/examining-equity-in-transportation-safety-enforcement/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">“Examining equity in transportation safety enforcement”</a> - </p><p>https://www.brookings.edu/articles/examining-equity-in-transportation-safety-enforcement/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/e9f28571-a8a6-4d7c-a76b-b687812fde4c/bdaf2d84d7.jpg" />
  <pubDate>Mon, 23 Sep 2024 06:59:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="44300410" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/e9f28571-a8a6-4d7c-a76b-b687812fde4c/episode.mp3" />
  <itunes:title><![CDATA[Addressing Bias in AI To Build Trust in Technology with Dr. Rashawn Ray, Vice President of the American Institutes for Research (AIR) and Executive Director of AIR Equity Initiative, Professor of Sociology at the University of Maryland, and Senior Fellow at The Brookings Institution]]></itunes:title>
  <itunes:duration>46:08</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/sociologistray/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Rashawn Ray</a><span style="background-color: transparent;">, Vice President at the American Institutes for Research (AIR) and Executive Director of </span><a href="https://www.linkedin.com/showcase/air-equity-initiative/about/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AIR Equity Initiative</a><span style="background-color: transparent;">, Professor of Sociology at the </span><a href="https://www.linkedin.com/school/university-of-maryland/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">University of Maryland</a><span style="background-color: transparent;"> and Senior Fellow at </span><a href="https://www.linkedin.com/company/the-brookings-institution/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">The Brookings Institution</a><span style="background-color: transparent;">. Dr. 
Ray’s innovative work lies at the powerful intersection of policing, technology and social equity, where he explores how AI can be designed and implemented to enhance fairness, reduce inequality and ultimately be a force for positive change in both local communities and the broader world.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:00) Regulating AI without stifling innovation is crucial.</span></p><p><span style="background-color: transparent;">(07:06) How virtual reality enhances police training by addressing implicit bias.</span></p><p><span style="background-color: transparent;">(12:22) The impact of diverse teams on equitable AI development.</span></p><p><span style="background-color: transparent;">(19:36) Overcoming challenges in implementing VR training in smaller law enforcement agencies.</span></p><p><span style="background-color: transparent;">(25:50) Tech companies collaborating on socially impactful AI projects is vital.</span></p><p><span style="background-color: transparent;">(31:55) Community involvement is critical in shaping AI and VR technologies.</span></p><p><span style="background-color: transparent;">(36:21) The role of DEI initiatives in improving AI’s fairness and effectiveness.</span></p><p><span style="background-color: transparent;">(42:09) The future of AI legislation and its potential to democratize technology.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/sociologistray/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. 
Rashawn Ray</a> - </p><p>https://www.linkedin.com/in/sociologistray/</p><p><br></p><p><a href="https://www.air.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AIR | Website</a> - https://www.air.org/</p><p><br></p><p><a href="https://www.linkedin.com/showcase/air-equity-initiative/about/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AIR Equity Initiative</a> | LinkedIn - </p><p>https://www.linkedin.com/showcase/air-equity-initiative/about/</p><p><br></p><p><a href="https://www.air.org/air-equity-initiative-bridge-more-equitable-world" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AIR Equity Initiative</a> Website - </p><p>https://www.air.org/air-equity-initiative-bridge-more-equitable-world</p><p><br></p><p><a href="https://socy.umd.edu/centers/lab-applied-social-science-research-%28lassr%29" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Lab for Applied Social Science Research</a> - </p><p>https://socy.umd.edu/centers/lab-applied-social-science-research-%28lassr%29</p><p><br></p><p><a href="https://www.brookings.edu" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Brookings Institution</a> - </p><p>https://www.brookings.edu</p><p><br></p><p><a href="https://www.air.org/experts/person/rashawn-ray" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Rashawn Ray - AIR</a><span style="background-color: transparent; color: rgb(17, 85, 204);">&nbsp;- </span></p><p>https://www.air.org/experts/person/rashawn-ray</p><p><br></p><p><a href="https://www.rashawnray.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. 
Rashawn Ray | Website</a> - </p><p>https://www.rashawnray.com/</p><p><br></p><p><a href="https://uncmap.org/publication/chat-wp/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">“Extracting Protest Events from Newspaper Articles with ChatGPT” (working paper)</a> - https://uncmap.org/publication/chat-wp/</p><p><br></p><p><a href="https://www.brookings.edu/articles/5-questions-policymakers-should-ask-about-facial-recognition-law-enforcement-and-algorithmic-bias/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">“5 questions policymakers should ask about facial recognition, law enforcement and algorithmic bias”</a> - https://www.brookings.edu/articles/5-questions-policymakers-should-ask-about-facial-recognition-law-enforcement-and-algorithmic-bias/</p><p><br></p><p><a href="https://www.brookings.edu/articles/examining-equity-in-transportation-safety-enforcement/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">“Examining equity in transportation safety enforcement”</a> - </p><p>https://www.brookings.edu/articles/examining-equity-in-transportation-safety-enforcement/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/sociologistray/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Rashawn Ray</a><span style="background-color: transparent;">, Vice President at the American Institutes for Research (AIR) and Executive Director of </span><a href="https://www.linkedin.com/showcase/air-equity-initiative/about/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AIR Equity Initiative</a><span style="background-color: transparent;">, Professor of Sociology at the </span><a href="https://www.linkedin.com/school/university-of-maryland/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">University of Maryland</a><span style="background-color: transparent;"> and Senior Fellow at </span><a href="https://www.linkedin.com/company/the-brookings-institution/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">The Brookings Institution</a><span style="background-color: transparent;">. Dr. 
Ray’s innovative work lies at the powerful intersection of policing, technology and social equity, where he explores how AI can be designed and implemented to enhance fairness, reduce inequality and ultimately be a force for positive change in both local communities and the broader world.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:00) Regulating AI without stifling innovation is crucial.</span></p><p><span style="background-color: transparent;">(07:06) How virtual reality enhances police training by addressing implicit bias.</span></p><p><span style="background-color: transparent;">(12:22) The impact of diverse teams on equitable AI development.</span></p><p><span style="background-color: transparent;">(19:36) Overcoming challenges in implementing VR training in smaller law enforcement agencies.</span></p><p><span style="background-color: transparent;">(25:50) Tech companies collaborating on socially impactful AI projects is vital.</span></p><p><span style="background-color: transparent;">(31:55) Community involvement is critical in shaping AI and VR technologies.</span></p><p><span style="background-color: transparent;">(36:21) The role of DEI initiatives in improving AI’s fairness and effectiveness.</span></p><p><span style="background-color: transparent;">(42:09) The future of AI legislation and its potential to democratize technology.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/sociologistray/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. 
Rashawn Ray</a> - </p><p>https://www.linkedin.com/in/sociologistray/</p><p><br></p><p><a href="https://www.air.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AIR | Website</a> - https://www.air.org/</p><p><br></p><p><a href="https://www.linkedin.com/showcase/air-equity-initiative/about/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AIR Equity Initiative</a> | LinkedIn - </p><p>https://www.linkedin.com/showcase/air-equity-initiative/about/</p><p><br></p><p><a href="https://www.air.org/air-equity-initiative-bridge-more-equitable-world" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AIR Equity Initiative</a> Website - </p><p>https://www.air.org/air-equity-initiative-bridge-more-equitable-world</p><p><br></p><p><a href="https://socy.umd.edu/centers/lab-applied-social-science-research-%28lassr%29" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Lab for Applied Social Science Research</a> - </p><p>https://socy.umd.edu/centers/lab-applied-social-science-research-%28lassr%29</p><p><br></p><p><a href="https://www.brookings.edu" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Brookings Institution</a> - </p><p>https://www.brookings.edu</p><p><br></p><p><a href="https://www.air.org/experts/person/rashawn-ray" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Rashawn Ray - AIR</a><span style="background-color: transparent; color: rgb(17, 85, 204);">&nbsp;- </span></p><p>https://www.air.org/experts/person/rashawn-ray</p><p><br></p><p><a href="https://www.rashawnray.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. 
Rashawn Ray | Website</a> - </p><p>https://www.rashawnray.com/</p><p><br></p><p><a href="https://uncmap.org/publication/chat-wp/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">“Extracting Protest Events from Newspaper Articles with ChatGPT” (working paper)</a> - https://uncmap.org/publication/chat-wp/</p><p><br></p><p><a href="https://www.brookings.edu/articles/5-questions-policymakers-should-ask-about-facial-recognition-law-enforcement-and-algorithmic-bias/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">“5 questions policymakers should ask about facial recognition, law enforcement and algorithmic bias”</a> - https://www.brookings.edu/articles/5-questions-policymakers-should-ask-about-facial-recognition-law-enforcement-and-algorithmic-bias/</p><p><br></p><p><a href="https://www.brookings.edu/articles/examining-equity-in-transportation-safety-enforcement/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">“Examining equity in transportation safety enforcement”</a> - </p><p>https://www.brookings.edu/articles/examining-equity-in-transportation-safety-enforcement/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode, I’m joined by Dr. Rashawn Ray, Vice President at the American Institutes for Research (AIR) and Executive Director of AIR Equity Initiative, Professor of Sociology at the University of Maryland and Senior Fellow at The Brookings In...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>50</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[3b8d7dd8-5bb8-4c21-a2df-ac6903c1fdd8]]></guid>
  <title><![CDATA[Regulating AI Innovation for National Security and Healthcare with Mike Rounds, US Senator for South Dakota, Co-Chair of the Senate AI Caucus and Member of the Bipartisan Senate AI Working Group]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by Senator Mike Rounds, US Senator for South Dakota and Co-Chair of the Senate AI Caucus, to discuss how the US can regulate AI responsibly while fostering innovation. With his extensive experience in both state and federal government, Senator Rounds shares his insights into the Bipartisan Senate AI Working Group and its roadmap for AI policy.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:23) The Bipartisan Senate AI Working Group aims to balance AI regulation and innovation.</span></p><p><span style="background-color: transparent;">(05:07) Why intellectual property protections are essential in AI development.</span></p><p><span style="background-color: transparent;">(07:27) National security implications of AI in weapons systems and defense.</span></p><p><span style="background-color: transparent;">(09:19) The potential of AI to revolutionize healthcare through faster drug approvals.</span></p><p><span style="background-color: transparent;">(10:55) How AI can aid in detecting and combating biological threats.</span></p><p><span style="background-color: transparent;">(15:00) The importance of workforce training to mitigate AI-driven job displacement.</span></p><p><span style="background-color: transparent;">(19:05) The role of community colleges in preparing the workforce for an AI-driven future.</span></p><p><span style="background-color: transparent;">(24:00) Insights from international collaboration on AI regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.rounds.senate.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senator Mike Rounds Homepage</a> - </p><p>https://www.rounds.senate.gov/</p><p><br></p><p><a 
href="https://www.rounds.senate.gov/newsroom/press-releases/rounds-introduces-artificial-intelligence-policy-package" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">GUIDE AI Initiative</a> - </p><p>https://www.rounds.senate.gov/newsroom/press-releases/rounds-introduces-artificial-intelligence-policy-package</p><p><br></p><p><a href="https://www.linkedin.com/company/medshield-llc" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Medshield</a> - </p><p>https://www.linkedin.com/company/medshield-llc</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/90762c9e-68d7-40b1-a4b2-1eb401a7b7e4/79d33b641d.jpg" />
  <pubDate>Wed, 18 Sep 2024 08:13:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="27719134" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/90762c9e-68d7-40b1-a4b2-1eb401a7b7e4/episode.mp3" />
  <itunes:title><![CDATA[Regulating AI Innovation for National Security and Healthcare with Mike Rounds, US Senator for South Dakota, Co-Chair of the Senate AI Caucus and Member of the Bipartisan Senate AI Working Group]]></itunes:title>
  <itunes:duration>28:52</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by Senator Mike Rounds, US Senator for South Dakota and Co-Chair of the Senate AI Caucus, to discuss how the US can regulate AI responsibly while fostering innovation. With his extensive experience in both state and federal government, Senator Rounds shares his insights into the Bipartisan Senate AI Working Group and its roadmap for AI policy.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:23) The Bipartisan Senate AI Working Group aims to balance AI regulation and innovation.</span></p><p><span style="background-color: transparent;">(05:07) Why intellectual property protections are essential in AI development.</span></p><p><span style="background-color: transparent;">(07:27) National security implications of AI in weapons systems and defense.</span></p><p><span style="background-color: transparent;">(09:19) The potential of AI to revolutionize healthcare through faster drug approvals.</span></p><p><span style="background-color: transparent;">(10:55) How AI can aid in detecting and combating biological threats.</span></p><p><span style="background-color: transparent;">(15:00) The importance of workforce training to mitigate AI-driven job displacement.</span></p><p><span style="background-color: transparent;">(19:05) The role of community colleges in preparing the workforce for an AI-driven future.</span></p><p><span style="background-color: transparent;">(24:00) Insights from international collaboration on AI regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.rounds.senate.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senator Mike Rounds Homepage</a> - </p><p>https://www.rounds.senate.gov/</p><p><br></p><p><a 
href="https://www.rounds.senate.gov/newsroom/press-releases/rounds-introduces-artificial-intelligence-policy-package" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">GUIDE AI Initiative</a> - </p><p>https://www.rounds.senate.gov/newsroom/press-releases/rounds-introduces-artificial-intelligence-policy-package</p><p><br></p><p><a href="https://www.linkedin.com/company/medshield-llc" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Medshield</a> - </p><p>https://www.linkedin.com/company/medshield-llc</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by Senator Mike Rounds, US Senator for South Dakota and Co-Chair of the Senate AI Caucus, to discuss how the US can regulate AI responsibly while fostering innovation. With his extensive experience in both state and federal government, Senator Rounds shares his insights into the Bipartisan Senate AI Working Group and its roadmap for AI policy.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:23) The Bipartisan Senate AI Working Group aims to balance AI regulation and innovation.</span></p><p><span style="background-color: transparent;">(05:07) Why intellectual property protections are essential in AI development.</span></p><p><span style="background-color: transparent;">(07:27) National security implications of AI in weapons systems and defense.</span></p><p><span style="background-color: transparent;">(09:19) The potential of AI to revolutionize healthcare through faster drug approvals.</span></p><p><span style="background-color: transparent;">(10:55) How AI can aid in detecting and combating biological threats.</span></p><p><span style="background-color: transparent;">(15:00) The importance of workforce training to mitigate AI-driven job displacement.</span></p><p><span style="background-color: transparent;">(19:05) The role of community colleges in preparing the workforce for an AI-driven future.</span></p><p><span style="background-color: transparent;">(24:00) Insights from international collaboration on AI regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.rounds.senate.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senator Mike Rounds Homepage</a> - </p><p>https://www.rounds.senate.gov/</p><p><br></p><p><a 
href="https://www.rounds.senate.gov/newsroom/press-releases/rounds-introduces-artificial-intelligence-policy-package" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">GUIDE AI Initiative</a> - </p><p>https://www.rounds.senate.gov/newsroom/press-releases/rounds-introduces-artificial-intelligence-policy-package</p><p><br></p><p><a href="https://www.linkedin.com/company/medshield-llc" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Medshield</a> - </p><p>https://www.linkedin.com/company/medshield-llc</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I’m joined by Senator Mike Rounds, US Senator for South Dakota and Co-Chair of the Senate AI Caucus, to discuss how the US can regulate AI responsibly while fostering innovation. With his extensive experience in both state and fede...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>49</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[53a9e457-7eaf-47e1-8de8-a835b35b834f]]></guid>
  <title><![CDATA[Protecting Consumer Rights in the Age of AI with Attorney General Charity Rae Clark and Representative Monique Priestley]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/charityrclark/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Charity Rae Clark</a><span style="background-color: transparent;">, Vermont Attorney General, and </span><a href="https://www.linkedin.com/in/mepriestley/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Monique Priestley</a><span style="background-color: transparent;">, Vermont State Representative. They have been instrumental in shaping </span><a href="https://www.linkedin.com/company/state-of-vermont/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Vermont</a><span style="background-color: transparent;">’s legislative approach to data privacy and AI. We dive into the challenges of regulating AI to keep citizens safe, the importance of data minimization and the broader implications for society.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:10) “Free” apps and websites take payment with your data.</span></p><p><span style="background-color: transparent;">(08:15) The Data Privacy Act includes stringent provisions to protect children online.</span></p><p><span style="background-color: transparent;">(10:05) Protecting consumer privacy and reducing security risks.</span></p><p><span style="background-color: transparent;">(15:29) Vermont’s legislative journey includes educating lawmakers.</span></p><p><span style="background-color: transparent;">(18:45) Innovation and regulation must be balanced for future AI development.</span></p><p><span style="background-color: transparent;">(23:50) Collaboration and education can overcome intense pressure from lobbyists.</span></p><p><span style="background-color: transparent;">(30:02) AI’s potential to exacerbate 
discrimination demands regulation.</span></p><p><span style="background-color: transparent;">(36:15) Deepfakes present a growing threat.</span></p><p><span style="background-color: transparent;">(42:40) Consumer trust could be lost due to premature releases of AI products.</span></p><p><span style="background-color: transparent;">(50:10) The necessity of a strong foundation in data privacy.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/charityrclark/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Charity Rae Clark</a> -</p><p>https://www.linkedin.com/in/charityrclark/</p><p><br></p><p><a href="https://www.linkedin.com/in/mepriestley/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Monique Priestley</a> -</p><p>https://www.linkedin.com/in/mepriestley/</p><p><br></p><p><a href="https://www.linkedin.com/company/state-of-vermont/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Vermont</a> -</p><p>https://www.linkedin.com/company/state-of-vermont/</p><p><br></p><p><a href="https://www.amazon.com/Age-Surveillance-Capitalism-Future-Frontier/dp/1610395697" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">“The Age of Surveillance Capitalism” by Shoshana Zuboff</a> -</p><p>https://www.amazon.com/Age-Surveillance-Capitalism-Future-Frontier/dp/1610395697</p><p><br></p><p><a href="https://www.amazon.com/Why-Privacy-Matters-Neil-Richards/dp/0190940553" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">“Why Privacy Matters” by Neil Richards</a> -</p><p>https://www.amazon.com/Why-Privacy-Matters-Neil-Richards/dp/0190940553</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/4fea3c8a-fb18-4d9e-acfa-37ba5a0df554/e1d46933f4.jpg" />
  <pubDate>Wed, 28 Aug 2024 16:45:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="60729135" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/4fea3c8a-fb18-4d9e-acfa-37ba5a0df554/episode.mp3" />
  <itunes:title><![CDATA[Protecting Consumer Rights in the Age of AI with Attorney General Charity Rae Clark and Representative Monique Priestley]]></itunes:title>
  <itunes:duration>1:03:15</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/charityrclark/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Charity Rae Clark</a><span style="background-color: transparent;">, Vermont Attorney General, and </span><a href="https://www.linkedin.com/in/mepriestley/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Monique Priestley</a><span style="background-color: transparent;">, Vermont State Representative. They have been instrumental in shaping </span><a href="https://www.linkedin.com/company/state-of-vermont/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Vermont</a><span style="background-color: transparent;">’s legislative approach to data privacy and AI. We dive into the challenges of regulating AI to keep citizens safe, the importance of data minimization and the broader implications for society.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:10) “Free” apps and websites take payment with your data.</span></p><p><span style="background-color: transparent;">(08:15) The Data Privacy Act includes stringent provisions to protect children online.</span></p><p><span style="background-color: transparent;">(10:05) Protecting consumer privacy and reducing security risks.</span></p><p><span style="background-color: transparent;">(15:29) Vermont’s legislative journey includes educating lawmakers.</span></p><p><span style="background-color: transparent;">(18:45) Innovation and regulation must be balanced for future AI development.</span></p><p><span style="background-color: transparent;">(23:50) Collaboration and education can overcome intense pressure from lobbyists.</span></p><p><span style="background-color: transparent;">(30:02) AI’s potential to 
exacerbate discrimination demands regulation.</span></p><p><span style="background-color: transparent;">(36:15) Deepfakes present a growing threat.</span></p><p><span style="background-color: transparent;">(42:40) Consumer trust could be lost due to premature releases of AI products.</span></p><p><span style="background-color: transparent;">(50:10) The necessity of a strong foundation in data privacy.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/charityrclark/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Charity Rae Clark</a> -</p><p>https://www.linkedin.com/in/charityrclark/</p><p><br></p><p><a href="https://www.linkedin.com/in/mepriestley/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Monique Priestley</a> -</p><p>https://www.linkedin.com/in/mepriestley/</p><p><br></p><p><a href="https://www.linkedin.com/company/state-of-vermont/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Vermont</a> -</p><p>https://www.linkedin.com/company/state-of-vermont/</p><p><br></p><p><a href="https://www.amazon.com/Age-Surveillance-Capitalism-Future-Frontier/dp/1610395697" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">“The Age of Surveillance Capitalism” by Shoshana Zuboff</a> -</p><p>https://www.amazon.com/Age-Surveillance-Capitalism-Future-Frontier/dp/1610395697</p><p><br></p><p><a href="https://www.amazon.com/Why-Privacy-Matters-Neil-Richards/dp/0190940553" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">“Why Privacy Matters” by Neil Richards</a> -</p><p>https://www.amazon.com/Why-Privacy-Matters-Neil-Richards/dp/0190940553</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/charityrclark/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Charity Rae Clark</a><span style="background-color: transparent;">, Vermont Attorney General, and </span><a href="https://www.linkedin.com/in/mepriestley/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Monique Priestley</a><span style="background-color: transparent;">, Vermont State Representative. They have been instrumental in shaping </span><a href="https://www.linkedin.com/company/state-of-vermont/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Vermont</a><span style="background-color: transparent;">’s legislative approach to data privacy and AI. We dive into the challenges of regulating AI to keep citizens safe, the importance of data minimization and the broader implications for society.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:10) “Free” apps and websites take payment with your data.</span></p><p><span style="background-color: transparent;">(08:15) The Data Privacy Act includes stringent provisions to protect children online.</span></p><p><span style="background-color: transparent;">(10:05) Protecting consumer privacy and reducing security risks.</span></p><p><span style="background-color: transparent;">(15:29) Vermont’s legislative journey includes educating lawmakers.</span></p><p><span style="background-color: transparent;">(18:45) Innovation and regulation must be balanced for future AI development.</span></p><p><span style="background-color: transparent;">(23:50) Collaboration and education can overcome intense pressure from lobbyists.</span></p><p><span style="background-color: transparent;">(30:02) AI’s potential to 
exacerbate discrimination demands regulation.</span></p><p><span style="background-color: transparent;">(36:15) Deepfakes present a growing threat.</span></p><p><span style="background-color: transparent;">(42:40) Consumer trust could be lost due to premature releases of AI products.</span></p><p><span style="background-color: transparent;">(50:10) The necessity of a strong foundation in data privacy.&nbsp;</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/charityrclark/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Charity Rae Clark</a> -</p><p>https://www.linkedin.com/in/charityrclark/</p><p><br></p><p><a href="https://www.linkedin.com/in/mepriestley/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Monique Priestley</a> -</p><p>https://www.linkedin.com/in/mepriestley/</p><p><br></p><p><a href="https://www.linkedin.com/company/state-of-vermont/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Vermont</a> -</p><p>https://www.linkedin.com/company/state-of-vermont/</p><p><br></p><p><a href="https://www.amazon.com/Age-Surveillance-Capitalism-Future-Frontier/dp/1610395697" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);"><strong>“</strong>The Age of Surveillance Capitalism” by Shoshana Zuboff</a> -</p><p>https://www.amazon.com/Age-Surveillance-Capitalism-Future-Frontier/dp/1610395697</p><p><br></p><p><span style="background-color: transparent; color: rgb(17, 85, 204);">“</span><a href="https://www.amazon.com/Why-Privacy-Matters-Neil-Richards/dp/0190940553" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Why Privacy Matters” by Neil Richards</a> -</p><p>https://www.amazon.com/Why-Privacy-Matters-Neil-Richards/dp/0190940553</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span 
style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode, I’m joined by Charity Rae Clark, Vermont Attorney General, and Monique Priestley, Vermont State Representative. They have been instrumental in shaping Vermont’s legislative approach to data privacy and AI. We dive into the challeng...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>48</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[0f745a64-31f3-47e8-81c4-320169337854]]></guid>
  <title><![CDATA[Protecting Creative Rights in the AI Era with Keith Kupferschmid, Chief Executive Officer of Copyright Alliance]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">Dive into the tangled web of AI and copyright law with </span><a href="https://www.linkedin.com/in/keith-kupferschmid-723b19a/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Keith Kupferschmid</a><span style="background-color: transparent;">, CEO of the </span><a href="https://www.linkedin.com/company/copyright-alliance/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Copyright Alliance</a><span style="background-color: transparent;">, as he reveals how AI companies navigate legal responsibilities and examines what creators can do to safeguard their intellectual property in an AI-driven world.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:00) The Copyright Alliance represents over 15,000 organizations and 2 million individual creators.</span></p><p><span style="background-color: transparent;">(05:12) Two potential copyright infringement settings: during the ingestion process and the output stage.</span></p><p><span style="background-color: transparent;">(06:00) There have been 17 or 18 AI copyright cases filed recently.</span></p><p><span style="background-color: transparent;">(08:00) Fair Use in AI is not categorical and is decided on a case-by-case basis.</span></p><p><span style="background-color: transparent;">(13:32) AI companies often shift liability to prompters, but both can be held liable under existing laws.</span></p><p><span style="background-color: transparent;">(15:00) Creators should clearly state their licensing preferences on their works to protect themselves.</span></p><p><span style="background-color: transparent;">(17:50) Current copyright laws are flexible enough to adapt to AI without needing new legislation.</span></p><p><span style="background-color: transparent;">(20:00) Market-based 
solutions, such as licensing, are crucial for addressing AI copyright issues.</span></p><p><span style="background-color: transparent;">(27:34) Education and public awareness are vital for understanding copyright issues related to AI.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/keith-kupferschmid-723b19a/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Keith Kupferschmid</a> - </p><p>https://www.linkedin.com/in/keith-kupferschmid-723b19a/</p><p><br></p><p><a href="https://copyrightalliance.org" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Copyright Alliance</a> - </p><p>https://copyrightalliance.org</p><p><br></p><p><a href="https://www.copyright.gov" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. Copyright Office</a> - </p><p>https://www.copyright.gov</p><p><br></p><p><a href="https://www.gettyimages.com" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Getty Images Licensing</a> - </p><p>https://www.gettyimages.com</p><p><br></p><p><a href="https://www.nar.realtor" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National Association of Realtors</a> - </p><p>https://www.nar.realtor</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/3928ff0b-01a0-45ed-a76a-edc396db4e30/43bd698c9e.jpg" />
  <pubDate>Thu, 22 Aug 2024 06:41:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="32331316" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/3928ff0b-01a0-45ed-a76a-edc396db4e30/episode.mp3" />
  <itunes:title><![CDATA[Protecting Creative Rights in the AI Era with Keith Kupferschmid, Chief Executive Officer of Copyright Alliance]]></itunes:title>
  <itunes:duration>33:40</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">Dive into the tangled web of AI and copyright law with </span><a href="https://www.linkedin.com/in/keith-kupferschmid-723b19a/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Keith Kupferschmid</a><span style="background-color: transparent;">, CEO of the </span><a href="https://www.linkedin.com/company/copyright-alliance/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Copyright Alliance</a><span style="background-color: transparent;">, as he reveals how AI companies navigate legal responsibilities and examines what creators can do to safeguard their intellectual property in an AI-driven world.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:00) The Copyright Alliance represents over 15,000 organizations and 2 million individual creators.</span></p><p><span style="background-color: transparent;">(05:12) Two potential copyright infringement settings: during the ingestion process and the output stage.</span></p><p><span style="background-color: transparent;">(06:00) There have been 17 or 18 AI copyright cases filed recently.</span></p><p><span style="background-color: transparent;">(08:00) Fair Use in AI is not categorical and is decided on a case-by-case basis.</span></p><p><span style="background-color: transparent;">(13:32) AI companies often shift liability to prompters, but both can be held liable under existing laws.</span></p><p><span style="background-color: transparent;">(15:00) Creators should clearly state their licensing preferences on their works to protect themselves.</span></p><p><span style="background-color: transparent;">(17:50) Current copyright laws are flexible enough to adapt to AI without needing new legislation.</span></p><p><span style="background-color: transparent;">(20:00) Market-based 
solutions, such as licensing, are crucial for addressing AI copyright issues.</span></p><p><span style="background-color: transparent;">(27:34) Education and public awareness are vital for understanding copyright issues related to AI.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/keith-kupferschmid-723b19a/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Keith Kupferschmid</a> - </p><p>https://www.linkedin.com/in/keith-kupferschmid-723b19a/</p><p><br></p><p><a href="https://copyrightalliance.org" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Copyright Alliance</a> - </p><p>https://copyrightalliance.org</p><p><br></p><p><a href="https://www.copyright.gov" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. Copyright Office</a> - </p><p>https://www.copyright.gov</p><p><br></p><p><a href="https://www.gettyimages.com" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Getty Images Licensing</a> - </p><p>https://www.gettyimages.com</p><p><br></p><p><a href="https://www.nar.realtor" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National Association of Realtors</a> - </p><p>https://www.nar.realtor</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">Dive into the tangled web of AI and copyright law with </span><a href="https://www.linkedin.com/in/keith-kupferschmid-723b19a/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Keith Kupferschmid</a><span style="background-color: transparent;">, CEO of the </span><a href="https://www.linkedin.com/company/copyright-alliance/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Copyright Alliance</a><span style="background-color: transparent;">, as he reveals how AI companies navigate legal responsibilities and examines what creators can do to safeguard their intellectual property in an AI-driven world.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:00) The Copyright Alliance represents over 15,000 organizations and 2 million individual creators.</span></p><p><span style="background-color: transparent;">(05:12) Two potential copyright infringement settings: during the ingestion process and the output stage.</span></p><p><span style="background-color: transparent;">(06:00) There have been 17 or 18 AI copyright cases filed recently.</span></p><p><span style="background-color: transparent;">(08:00) Fair Use in AI is not categorical and is decided on a case-by-case basis.</span></p><p><span style="background-color: transparent;">(13:32) AI companies often shift liability to prompters, but both can be held liable under existing laws.</span></p><p><span style="background-color: transparent;">(15:00) Creators should clearly state their licensing preferences on their works to protect themselves.</span></p><p><span style="background-color: transparent;">(17:50) Current copyright laws are flexible enough to adapt to AI without needing new legislation.</span></p><p><span style="background-color: transparent;">(20:00) Market-based 
solutions, such as licensing, are crucial for addressing AI copyright issues.</span></p><p><span style="background-color: transparent;">(27:34) Education and public awareness are vital for understanding copyright issues related to AI.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/keith-kupferschmid-723b19a/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Keith Kupferschmid</a> - </p><p>https://www.linkedin.com/in/keith-kupferschmid-723b19a/</p><p><br></p><p><a href="https://copyrightalliance.org" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Copyright Alliance</a> - </p><p>https://copyrightalliance.org</p><p><br></p><p><a href="https://www.copyright.gov" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. Copyright Office</a> - </p><p>https://www.copyright.gov</p><p><br></p><p><a href="https://www.gettyimages.com" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Getty Images Licensing</a> - </p><p>https://www.gettyimages.com</p><p><br></p><p><a href="https://www.nar.realtor" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National Association of Realtors</a> - </p><p>https://www.nar.realtor</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Dive into the tangled web of AI and copyright law with Keith Kupferschmid, CEO of the Copyright Alliance, as he reveals how AI companies navigate legal responsibilities and examines what creators can do to safeguard their intellectual property in a...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>47</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[4534cbce-06cc-469a-a19b-4acd903757ac]]></guid>
  <title><![CDATA[AI Development and Cultural Values with Maria Luciana Axente]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">The future of AI lies at the intersection of technology and ethics. How do we navigate this complex landscape?&nbsp; Today, I’m joined by </span><a href="https://www.linkedin.com/in/mariaaxente/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Maria Luciana Axente</a><span style="background-color: transparent;">, Head of Public Policy and Ethics at </span><a href="https://www.linkedin.com/company/pwc-uk/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">PwC UK</a><span style="background-color: transparent;"> and Intellectual Forum Senior Research Associate at </span><a href="https://www.linkedin.com/company/jesus-college-cambridge/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Jesus College Cambridge</a><span style="background-color: transparent;">, who offers key insights into the ethical implications of AI.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(03:56) The importance of integrating ethical principles into AI.</span></p><p><span style="background-color: transparent;">(08:22) Preserving humanity in the age of AI.</span></p><p><span style="background-color: transparent;">(12:19) Embedding value alignment in AI systems.</span></p><p><span style="background-color: transparent;">(15:59) Fairness and voluntary commitments in AI.</span></p><p><span style="background-color: transparent;">(21:01) Participatory AI and including diverse voices.</span></p><p><span style="background-color: transparent;">(24:05) Cultural value systems shaping AI policies.</span></p><p><span style="background-color: transparent;">(26:25) The importance of reflecting on AI’s impact before implementation.</span></p><p><span style="background-color: transparent;">(27:48) Learning from other industries to govern AI 
better.</span></p><p><span style="background-color: transparent;">(28:59) AI as a socio-technical system, not just technology.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/mariaaxente/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Maria Luciana Axente</a> - </p><p>https://www.linkedin.com/in/mariaaxente/</p><p><br></p><p><a href="https://www.linkedin.com/company/pwc-uk/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">PwC UK</a> - </p><p>https://www.linkedin.com/company/pwc-uk/</p><p><br></p><p><a href="https://www.linkedin.com/company/jesus-college-cambridge/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Jesus College Cambridge</a> - </p><p>https://www.linkedin.com/company/jesus-college-cambridge/</p><p><br></p><p><a href="https://www.pwc.co.uk/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">PwC homepage</a> - </p><p>https://www.pwc.co.uk/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/a9d5dfab-9470-429e-96dc-1b27c7f960ff/d4df5f41ba.jpg" />
  <pubDate>Thu, 08 Aug 2024 02:26:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="32761813" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/a9d5dfab-9470-429e-96dc-1b27c7f960ff/episode.mp3" />
  <itunes:title><![CDATA[AI Development and Cultural Values with Maria Luciana Axente]]></itunes:title>
  <itunes:duration>34:07</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">The future of AI lies at the intersection of technology and ethics. How do we navigate this complex landscape?&nbsp; Today, I’m joined by </span><a href="https://www.linkedin.com/in/mariaaxente/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Maria Luciana Axente</a><span style="background-color: transparent;">, Head of Public Policy and Ethics at </span><a href="https://www.linkedin.com/company/pwc-uk/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">PwC UK</a><span style="background-color: transparent;"> and Intellectual Forum Senior Research Associate at </span><a href="https://www.linkedin.com/company/jesus-college-cambridge/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Jesus College Cambridge</a><span style="background-color: transparent;">, who offers key insights into the ethical implications of AI.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(03:56) The importance of integrating ethical principles into AI.</span></p><p><span style="background-color: transparent;">(08:22) Preserving humanity in the age of AI.</span></p><p><span style="background-color: transparent;">(12:19) Embedding value alignment in AI systems.</span></p><p><span style="background-color: transparent;">(15:59) Fairness and voluntary commitments in AI.</span></p><p><span style="background-color: transparent;">(21:01) Participatory AI and including diverse voices.</span></p><p><span style="background-color: transparent;">(24:05) Cultural value systems shaping AI policies.</span></p><p><span style="background-color: transparent;">(26:25) The importance of reflecting on AI’s impact before implementation.</span></p><p><span style="background-color: transparent;">(27:48) Learning from other industries to govern 
AI better.</span></p><p><span style="background-color: transparent;">(28:59) AI as a socio-technical system, not just technology.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/mariaaxente/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Maria Luciana Axente</a> - </p><p>https://www.linkedin.com/in/mariaaxente/</p><p><br></p><p><a href="https://www.linkedin.com/company/pwc-uk/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">PwC UK</a> - </p><p>https://www.linkedin.com/company/pwc-uk/</p><p><br></p><p><a href="https://www.linkedin.com/company/jesus-college-cambridge/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Jesus College Cambridge</a> - </p><p>https://www.linkedin.com/company/jesus-college-cambridge/</p><p><br></p><p><a href="https://www.pwc.co.uk/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">PwC homepage</a> - </p><p>https://www.pwc.co.uk/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">The future of AI lies at the intersection of technology and ethics. How do we navigate this complex landscape?&nbsp; Today, I’m joined by </span><a href="https://www.linkedin.com/in/mariaaxente/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Maria Luciana Axente</a><span style="background-color: transparent;">, Head of Public Policy and Ethics at </span><a href="https://www.linkedin.com/company/pwc-uk/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">PwC UK</a><span style="background-color: transparent;"> and Intellectual Forum Senior Research Associate at </span><a href="https://www.linkedin.com/company/jesus-college-cambridge/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Jesus College Cambridge</a><span style="background-color: transparent;">, who offers key insights into the ethical implications of AI.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(03:56) The importance of integrating ethical principles into AI.</span></p><p><span style="background-color: transparent;">(08:22) Preserving humanity in the age of AI.</span></p><p><span style="background-color: transparent;">(12:19) Embedding value alignment in AI systems.</span></p><p><span style="background-color: transparent;">(15:59) Fairness and voluntary commitments in AI.</span></p><p><span style="background-color: transparent;">(21:01) Participatory AI and including diverse voices.</span></p><p><span style="background-color: transparent;">(24:05) Cultural value systems shaping AI policies.</span></p><p><span style="background-color: transparent;">(26:25) The importance of reflecting on AI’s impact before implementation.</span></p><p><span style="background-color: transparent;">(27:48) Learning from other industries to govern 
AI better.</span></p><p><span style="background-color: transparent;">(28:59) AI as a socio-technical system, not just technology.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/mariaaxente/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Maria Luciana Axente</a> - </p><p>https://www.linkedin.com/in/mariaaxente/</p><p><br></p><p><a href="https://www.linkedin.com/company/pwc-uk/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">PwC UK</a> - </p><p>https://www.linkedin.com/company/pwc-uk/</p><p><br></p><p><a href="https://www.linkedin.com/company/jesus-college-cambridge/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Jesus College Cambridge</a> - </p><p>https://www.linkedin.com/company/jesus-college-cambridge/</p><p><br></p><p><a href="https://www.pwc.co.uk/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">PwC homepage</a> - </p><p>https://www.pwc.co.uk/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[The future of AI lies at the intersection of technology and ethics. How do we navigate this complex landscape?  Today, I’m joined by Maria Luciana Axente, Head of Public Policy and Ethics at PwC UK and Intellectual Forum Senior Research Associate a...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>46</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[f1f75f9f-dd38-495f-8ddb-902822cb8c52]]></guid>
  <title><![CDATA[Empowering Diverse Creators in the AI Era with Lianne Baron]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">Can AI spark new creative revolutions? On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/liannebaron/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Lianne Baron</a><span style="background-color: transparent;">, Strategic Partner Manager for Creative Partnerships at </span><a href="https://www.meta.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Meta.</a><span style="background-color: transparent;"> Lianne unveils how AI is not just a tool but a transformative force in the creative landscape, emphasizing the irreplaceable value of human imagination. We explore the rapid pace of innovation, the challenges of embracing new tech, and the exciting future of idea generation and delivery.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(03:50) Embrace AI's changes; it challenges traditional methods.</span></p><p><span style="background-color: transparent;">(05:13) AI speeds up the journey from imagination to delivery.</span></p><p><span style="background-color: transparent;">(07:15) The move to cinematic quality sparks excitement and fear.</span></p><p><span style="background-color: transparent;">(08:30) Education is key in democratizing AI for all.</span></p><p><span style="background-color: transparent;">(15:00) Risk of bias without diverse voices in AI development.</span></p><p><span style="background-color: transparent;">(17:15) Ideas, not skills, are the new currency in AI.</span></p><p><span style="background-color: transparent;">(26:16) Imagination and human experience are irreplaceable by AI.</span></p><p><span style="background-color: transparent;">(29:11) AI can democratize storytelling, sharing diverse narratives.</span></p><p><span style="background-color: transparent;">(33:00) AI 
breaks down barriers, fostering new creative opportunities.</span></p><p><span style="background-color: transparent;">(36:20) Understanding authenticity is crucial in an AI-driven world.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/liannebaron/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Lianne Baron</a> - </p><p>https://www.linkedin.com/in/liannebaron/</p><p><br></p><p><a href="https://www.meta.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Meta</a> - </p><p>https://www.meta.com/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/8c9e28db-baf8-4512-ac62-6e2a37a7517b/31fa3486b6.jpg" />
  <pubDate>Fri, 02 Aug 2024 20:26:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="33486972" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/8c9e28db-baf8-4512-ac62-6e2a37a7517b/episode.mp3" />
  <itunes:title><![CDATA[Empowering Diverse Creators in the AI Era with Lianne Baron]]></itunes:title>
  <itunes:duration>34:52</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">Can AI spark new creative revolutions? On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/liannebaron/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Lianne Baron</a><span style="background-color: transparent;">, Strategic Partner Manager for Creative Partnerships at </span><a href="https://www.meta.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Meta.</a><span style="background-color: transparent;"> Lianne unveils how AI is not just a tool but a transformative force in the creative landscape, emphasizing the irreplaceable value of human imagination. We explore the rapid pace of innovation, the challenges of embracing new tech, and the exciting future of idea generation and delivery.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(03:50) Embrace AI's changes; it challenges traditional methods.</span></p><p><span style="background-color: transparent;">(05:13) AI speeds up the journey from imagination to delivery.</span></p><p><span style="background-color: transparent;">(07:15) The move to cinematic quality sparks excitement and fear.</span></p><p><span style="background-color: transparent;">(08:30) Education is key in democratizing AI for all.</span></p><p><span style="background-color: transparent;">(15:00) Risk of bias without diverse voices in AI development.</span></p><p><span style="background-color: transparent;">(17:15) Ideas, not skills, are the new currency in AI.</span></p><p><span style="background-color: transparent;">(26:16) Imagination and human experience are irreplaceable by AI.</span></p><p><span style="background-color: transparent;">(29:11) AI can democratize storytelling, sharing diverse narratives.</span></p><p><span style="background-color: transparent;">(33:00) AI 
breaks down barriers, fostering new creative opportunities.</span></p><p><span style="background-color: transparent;">(36:20) Understanding authenticity is crucial in an AI-driven world.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/liannebaron/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Lianne Baron</a> - </p><p>https://www.linkedin.com/in/liannebaron/</p><p><br></p><p><a href="https://www.meta.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Meta</a> - </p><p>https://www.meta.com/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">Can AI spark new creative revolutions? On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/liannebaron/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Lianne Baron</a><span style="background-color: transparent;">, Strategic Partner Manager for Creative Partnerships at </span><a href="https://www.meta.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Meta.</a><span style="background-color: transparent;"> Lianne unveils how AI is not just a tool but a transformative force in the creative landscape, emphasizing the irreplaceable value of human imagination. We explore the rapid pace of innovation, the challenges of embracing new tech, and the exciting future of idea generation and delivery.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(03:50) Embrace AI's changes; it challenges traditional methods.</span></p><p><span style="background-color: transparent;">(05:13) AI speeds up the journey from imagination to delivery.</span></p><p><span style="background-color: transparent;">(07:15) The move to cinematic quality sparks excitement and fear.</span></p><p><span style="background-color: transparent;">(08:30) Education is key in democratizing AI for all.</span></p><p><span style="background-color: transparent;">(15:00) Risk of bias without diverse voices in AI development.</span></p><p><span style="background-color: transparent;">(17:15) Ideas, not skills, are the new currency in AI.</span></p><p><span style="background-color: transparent;">(26:16) Imagination and human experience are irreplaceable by AI.</span></p><p><span style="background-color: transparent;">(29:11) AI can democratize storytelling, sharing diverse narratives.</span></p><p><span style="background-color: transparent;">(33:00) AI 
breaks down barriers, fostering new creative opportunities.</span></p><p><span style="background-color: transparent;">(36:20) Understanding authenticity is crucial in an AI-driven world.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/liannebaron/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Lianne Baron</a> - </p><p>https://www.linkedin.com/in/liannebaron/</p><p><br></p><p><a href="https://www.meta.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Meta</a> - </p><p>https://www.meta.com/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Can AI spark new creative revolutions? On this episode, I’m joined by Lianne Baron, Strategic Partner Manager for Creative Partnerships at Meta. Lianne unveils how AI is not just a tool but a transformative force in the creative landscape, emphasiz...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>45</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[059526d8-dc95-4e36-bb68-bac02f35c256]]></guid>
  <title><![CDATA[Balancing Innovation and Regulation in AI with Zico Kolter ]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">The potential of AI is transforming industries, but how do we regulate this rapidly evolving technology without stifling innovation?</span></p><p><br></p><p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/zico-kolter-560382a4/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Zico Kolter</a><span style="background-color: transparent;">, Professor and Director of the Machine Learning Department at </span><a href="https://www.linkedin.com/school/carnegie-mellon-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a><span style="background-color: transparent;"> and Chief Expert at </span><a href="https://www.linkedin.com/company/boschusa/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bosch USA</a><span style="background-color: transparent;">, who shares his insights on AI regulation and its challenges.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:41) AI innovation outpaces legislation.&nbsp;</span></p><p><span style="background-color: transparent;">(04:00) Regulating technology vs. 
its usage is crucial.&nbsp;</span></p><p><span style="background-color: transparent;">(06:36) AI is advancing faster than ever.&nbsp;</span></p><p><span style="background-color: transparent;">(11:14) Companies must prevent AI misuse.&nbsp;</span></p><p><span style="background-color: transparent;">(15:30) Bias-free algorithms are not feasible.&nbsp;</span></p><p><span style="background-color: transparent;">(21:34) Human interaction in AI decisions is essential.&nbsp;</span></p><p><span style="background-color: transparent;">(27:49) The competitive environment benefits AI development.&nbsp;</span></p><p><span style="background-color: transparent;">(32:26) Perfectly accepted regulations indicate mistakes.&nbsp;</span></p><p><span style="background-color: transparent;">(37:52) Regulations should adapt to technological changes.&nbsp;</span></p><p><span style="background-color: transparent;">(42:49) AI developers aim to benefit people.</span></p><p><span style="background-color: transparent;">(45:16) Human-in-the-loop AI is crucial for reliability.&nbsp;</span></p><p><span style="background-color: transparent;">(46:30) Addressing gaps in AI systems is critical.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://zicokolter.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Zico Kolter</a> - https://www.linkedin.com/in/zico-kolter-560382a4/</p><p><a href="https://www.linkedin.com/school/carnegie-mellon-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a> - https://www.linkedin.com/school/carnegie-mellon-university/</p><p><a href="https://www.linkedin.com/company/boschusa/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bosch USA</a> - https://www.linkedin.com/company/boschusa/</p><p><a 
href="https://ec.europa.eu/digital-strategy/our-policies/eu-regulatory-framework-artificial-intelligence_en" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://ec.europa.eu/digital-strategy/our-policies/eu-regulatory-framework-artificial-intelligence_en</p><p><a href="https://www.openai.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">OpenAI</a> - https://www.openai.com/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/142c849f-a717-4bce-b421-e489c3c86cf9/3bd0117b04.jpg" />
  <pubDate>Fri, 19 Jul 2024 20:21:21 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="47044314" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/142c849f-a717-4bce-b421-e489c3c86cf9/episode.mp3" />
  <itunes:title><![CDATA[Balancing Innovation and Regulation in AI with Zico Kolter ]]></itunes:title>
  <itunes:duration>49:00</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">The potential of AI is transforming industries, but how do we regulate this rapidly evolving technology without stifling innovation?</span></p><p><br></p><p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/zico-kolter-560382a4/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Zico Kolter</a><span style="background-color: transparent;">, Professor and Director of the Machine Learning Department at </span><a href="https://www.linkedin.com/school/carnegie-mellon-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a><span style="background-color: transparent;"> and Chief Expert at </span><a href="https://www.linkedin.com/company/boschusa/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bosch USA</a><span style="background-color: transparent;">, who shares his insights on AI regulation and its challenges.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:41) AI innovation outpaces legislation.&nbsp;</span></p><p><span style="background-color: transparent;">(04:00) Regulating technology vs. 
its usage is crucial.&nbsp;</span></p><p><span style="background-color: transparent;">(06:36) AI is advancing faster than ever.&nbsp;</span></p><p><span style="background-color: transparent;">(11:14) Companies must prevent AI misuse.&nbsp;</span></p><p><span style="background-color: transparent;">(15:30) Bias-free algorithms are not feasible.&nbsp;</span></p><p><span style="background-color: transparent;">(21:34) Human interaction in AI decisions is essential.&nbsp;</span></p><p><span style="background-color: transparent;">(27:49) The competitive environment benefits AI development.&nbsp;</span></p><p><span style="background-color: transparent;">(32:26) Perfectly accepted regulations indicate mistakes.&nbsp;</span></p><p><span style="background-color: transparent;">(37:52) Regulations should adapt to technological changes.&nbsp;</span></p><p><span style="background-color: transparent;">(42:49) AI developers aim to benefit people.</span></p><p><span style="background-color: transparent;">(45:16) Human-in-the-loop AI is crucial for reliability.&nbsp;</span></p><p><span style="background-color: transparent;">(46:30) Addressing gaps in AI systems is critical.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://zicokolter.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Zico Kolter</a> - https://www.linkedin.com/in/zico-kolter-560382a4/</p><p><a href="https://www.linkedin.com/school/carnegie-mellon-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a> - https://www.linkedin.com/school/carnegie-mellon-university/</p><p><a href="https://www.linkedin.com/company/boschusa/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bosch USA</a> - https://www.linkedin.com/company/boschusa/</p><p><a 
href="https://ec.europa.eu/digital-strategy/our-policies/eu-regulatory-framework-artificial-intelligence_en" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://ec.europa.eu/digital-strategy/our-policies/eu-regulatory-framework-artificial-intelligence_en</p><p><a href="https://www.openai.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">OpenAI</a> - https://www.openai.com/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">The potential of AI is transforming industries, but how do we regulate this rapidly evolving technology without stifling innovation?</span></p><p><br></p><p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/zico-kolter-560382a4/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Zico Kolter</a><span style="background-color: transparent;">, Professor and Director of the Machine Learning Department at </span><a href="https://www.linkedin.com/school/carnegie-mellon-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a><span style="background-color: transparent;"> and Chief Expert at </span><a href="https://www.linkedin.com/company/boschusa/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bosch USA</a><span style="background-color: transparent;">, who shares his insights on AI regulation and its challenges.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:41) AI innovation outpaces legislation.&nbsp;</span></p><p><span style="background-color: transparent;">(04:00) Regulating technology vs. 
its usage is crucial.&nbsp;</span></p><p><span style="background-color: transparent;">(06:36) AI is advancing faster than ever.&nbsp;</span></p><p><span style="background-color: transparent;">(11:14) Companies must prevent AI misuse.&nbsp;</span></p><p><span style="background-color: transparent;">(15:30) Bias-free algorithms are not feasible.&nbsp;</span></p><p><span style="background-color: transparent;">(21:34) Human interaction in AI decisions is essential.&nbsp;</span></p><p><span style="background-color: transparent;">(27:49) The competitive environment benefits AI development.&nbsp;</span></p><p><span style="background-color: transparent;">(32:26) Perfectly accepted regulations indicate mistakes.&nbsp;</span></p><p><span style="background-color: transparent;">(37:52) Regulations should adapt to technological changes.&nbsp;</span></p><p><span style="background-color: transparent;">(42:49) AI developers aim to benefit people.</span></p><p><span style="background-color: transparent;">(45:16) Human-in-the-loop AI is crucial for reliability.&nbsp;</span></p><p><span style="background-color: transparent;">(46:30) Addressing gaps in AI systems is critical.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://zicokolter.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Zico Kolter</a> - https://www.linkedin.com/in/zico-kolter-560382a4/</p><p><a href="https://www.linkedin.com/school/carnegie-mellon-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carnegie Mellon University</a> - https://www.linkedin.com/school/carnegie-mellon-university/</p><p><a href="https://www.linkedin.com/company/boschusa/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bosch USA</a> - https://www.linkedin.com/company/boschusa/</p><p><a 
href="https://ec.europa.eu/digital-strategy/our-policies/eu-regulatory-framework-artificial-intelligence_en" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://ec.europa.eu/digital-strategy/our-policies/eu-regulatory-framework-artificial-intelligence_en</p><p><a href="https://www.openai.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">OpenAI</a> - https://www.openai.com/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[AI is transforming industries, but how do we regulate this rapidly evolving technology without stifling innovation? On this episode, I’m joined by Professor Zico Kolter, Director of the Machine Learning Department at C...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>44</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[adde4ef4-68ea-4131-b4c6-1b50e183ffd8]]></guid>
  <title><![CDATA[Harnessing Evolutionary Principles To Guide AI Development with Professor Paul Rainey]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by Professor Paul Rainey to discuss the evolutionary principles applicable to AI development and the potential risks of self-replicating AI systems. Paul is Director of the Department of Microbial Population Biology at the </span><a href="https://www.linkedin.com/company/max-planck-institute-for-evolutionary-biology/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Max Planck Institute for Evolutionary Biology</a><span style="background-color: transparent;"> in Plön; Professor at ESPCI in Paris; Fellow of the Royal Society of New Zealand; a Member of </span><a href="https://www.linkedin.com/company/european-molecular-biology-organization/?originalSubdomain=de" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EMBO</a><span style="background-color: transparent;"> &amp; European Academy of Microbiology; and Honorary Professor at Christian Albrechts University in Kiel.&nbsp;</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:04) Evolutionary transitions form higher-level structures.</span></p><p><span style="background-color: transparent;">(00:06) Eukaryotic cells parallel future AI-human interactions.</span></p><p><span style="background-color: transparent;">(00:08) Major evolutionary transitions inform AI-human interactions.</span></p><p><span style="background-color: transparent;">(00:11) Algorithms can evolve with variation, replication and heredity.</span></p><p><span style="background-color: transparent;">(00:13) Natural selection drives complexity.</span></p><p><span style="background-color: transparent;">(00:18) AI adapts to selective pressures unpredictably.</span></p><p><span style="background-color: transparent;">(00:21) Humans risk losing autonomy to AI.</span></p><p><span 
style="background-color: transparent;">(00:25) Societal engagement is needed before developing self-replicating AIs.</span></p><p><span style="background-color: transparent;">(00:30) The challenge of controlling self-replicating systems.</span></p><p><span style="background-color: transparent;">(00:33) Interdisciplinary collaboration is crucial for AI challenges.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.evolbio.mpg.de" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Max Planck Institute for Evolutionary Biology</a></p><p><a href="https://www.mpg.de/14142806/evolutionary-biology-rainey" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Paul Rainey - Max Planck Institute</a></p><p><a href="https://www.mpg.de/21167084/MPR_2023_3" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Max Planck Research Magazine - Issue 3/2023</a></p><p><a href="https://royalsocietypublishing.org/doi/full/10.1098/rstb.2021.0408" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Paul Rainey’s article in Philosophical Transactions of the Royal Society B</a></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/57799274-7502-4481-87d3-aa076e8a4211/c0f36ac32e.jpg" />
  <pubDate>Tue, 16 Jul 2024 09:00:26 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="33665441" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/57799274-7502-4481-87d3-aa076e8a4211/episode.mp3" />
  <itunes:title><![CDATA[Harnessing Evolutionary Principles To Guide AI Development with Professor Paul Rainey]]></itunes:title>
  <itunes:duration>35:04</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by Professor Paul Rainey to discuss the evolutionary principles applicable to AI development and the potential risks of self-replicating AI systems. Paul is Director of the Department of Microbial Population Biology at the </span><a href="https://www.linkedin.com/company/max-planck-institute-for-evolutionary-biology/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Max Planck Institute for Evolutionary Biology</a><span style="background-color: transparent;"> in Plön; Professor at ESPCI in Paris; Fellow of the Royal Society of New Zealand; a Member of </span><a href="https://www.linkedin.com/company/european-molecular-biology-organization/?originalSubdomain=de" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EMBO</a><span style="background-color: transparent;"> &amp; European Academy of Microbiology; and Honorary Professor at Christian Albrechts University in Kiel.&nbsp;</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:04) Evolutionary transitions form higher-level structures.</span></p><p><span style="background-color: transparent;">(00:06) Eukaryotic cells parallel future AI-human interactions.</span></p><p><span style="background-color: transparent;">(00:08) Major evolutionary transitions inform AI-human interactions.</span></p><p><span style="background-color: transparent;">(00:11) Algorithms can evolve with variation, replication and heredity.</span></p><p><span style="background-color: transparent;">(00:13) Natural selection drives complexity.</span></p><p><span style="background-color: transparent;">(00:18) AI adapts to selective pressures unpredictably.</span></p><p><span style="background-color: transparent;">(00:21) Humans risk losing autonomy to AI.</span></p><p><span 
style="background-color: transparent;">(00:25) Societal engagement is needed before developing self-replicating AIs.</span></p><p><span style="background-color: transparent;">(00:30) The challenge of controlling self-replicating systems.</span></p><p><span style="background-color: transparent;">(00:33) Interdisciplinary collaboration is crucial for AI challenges.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.evolbio.mpg.de" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Max Planck Institute for Evolutionary Biology</a></p><p><a href="https://www.mpg.de/14142806/evolutionary-biology-rainey" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Paul Rainey - Max Planck Institute</a></p><p><a href="https://www.mpg.de/21167084/MPR_2023_3" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Max Planck Research Magazine - Issue 3/2023</a></p><p><a href="https://royalsocietypublishing.org/doi/full/10.1098/rstb.2021.0408" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Paul Rainey’s article in Philosophical Transactions of the Royal Society B</a></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by Professor Paul Rainey to discuss the evolutionary principles applicable to AI development and the potential risks of self-replicating AI systems. Paul is Director of the Department of Microbial Population Biology at the </span><a href="https://www.linkedin.com/company/max-planck-institute-for-evolutionary-biology/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Max Planck Institute for Evolutionary Biology</a><span style="background-color: transparent;"> in Plön; Professor at ESPCI in Paris; Fellow of the Royal Society of New Zealand; a Member of </span><a href="https://www.linkedin.com/company/european-molecular-biology-organization/?originalSubdomain=de" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EMBO</a><span style="background-color: transparent;"> &amp; European Academy of Microbiology; and Honorary Professor at Christian Albrechts University in Kiel.&nbsp;</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:04) Evolutionary transitions form higher-level structures.</span></p><p><span style="background-color: transparent;">(00:06) Eukaryotic cells parallel future AI-human interactions.</span></p><p><span style="background-color: transparent;">(00:08) Major evolutionary transitions inform AI-human interactions.</span></p><p><span style="background-color: transparent;">(00:11) Algorithms can evolve with variation, replication and heredity.</span></p><p><span style="background-color: transparent;">(00:13) Natural selection drives complexity.</span></p><p><span style="background-color: transparent;">(00:18) AI adapts to selective pressures unpredictably.</span></p><p><span style="background-color: transparent;">(00:21) Humans risk losing autonomy to AI.</span></p><p><span 
style="background-color: transparent;">(00:25) Societal engagement is needed before developing self-replicating AIs.</span></p><p><span style="background-color: transparent;">(00:30) The challenge of controlling self-replicating systems.</span></p><p><span style="background-color: transparent;">(00:33) Interdisciplinary collaboration is crucial for AI challenges.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.evolbio.mpg.de" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Max Planck Institute for Evolutionary Biology</a></p><p><a href="https://www.mpg.de/14142806/evolutionary-biology-rainey" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Paul Rainey - Max Planck Institute</a></p><p><a href="https://www.mpg.de/21167084/MPR_2023_3" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Max Planck Research Magazine - Issue 3/2023</a></p><p><a href="https://royalsocietypublishing.org/doi/full/10.1098/rstb.2021.0408" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Paul Rainey’s article in Philosophical Transactions of the Royal Society B</a></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I’m joined by Professor Paul Rainey to discuss the evolutionary principles applicable to AI development and the potential risks of self-replicating AI systems. Paul is Director of the Department of Microbial Population Biology at t...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>43</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[9d9cf53c-ca36-4b02-b3bc-5f88e66ae02a]]></guid>
  <title><![CDATA[Understanding China’s AI Policy and Tech Growth with Jaap van Etten]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/jaapvanetten/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Jaap van Etten</a><span style="background-color: transparent;">, CEO and Co-Founder of </span><a href="https://www.linkedin.com/company/datenna/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Datenna</a><span style="background-color: transparent;">, the leading provider of techno-economic intelligence in China. Jaap’s unique background as a diplomat turned entrepreneur provides invaluable insights into the intersection of AI, innovation and policy.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:30) Transitioning from diplomat to tech entrepreneur.</span></p><p><span style="background-color: transparent;">(05:23) Key differences in AI approaches between China, Europe and the US.</span></p><p><span style="background-color: transparent;">(07:20) The Chinese entrepreneurial mindset and its impact on innovation.</span></p><p><span style="background-color: transparent;">(10:03) China’s strategy in AI and the importance of being a technological leader.</span></p><p><span style="background-color: transparent;">(17:05) Challenges and misconceptions about China’s technological capabilities.</span></p><p><span style="background-color: transparent;">(23:17) Recommendations for AI regulation and international cooperation.</span></p><p><span style="background-color: transparent;">(30:19) Jaap’s perspective on the future of AI legislation.</span></p><p><span style="background-color: transparent;">(35:12) The role of AI in policymaking and decision-making.</span></p><p><span style="background-color: transparent;">(40:54) Policymakers need scenario planning and foresight exercises to keep up with 
rapid technological advancements.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/jaapvanetten/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Jaap van Etten</a> - https://www.linkedin.com/in/jaapvanetten/</p><p><a href="https://www.linkedin.com/company/datenna/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Datenna</a> - https://www.linkedin.com/company/datenna/</p><p><a href="https://www.nytimes.com/2006/05/15/technology/15fraud.htm" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://www.nytimes.com/2006/05/15/technology/15fraud.htm</a></p><p><a href="http://www.china.org.cn/english/scitech/168482.htm" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">http://www.china.org.cn/english/scitech/168482.htm</a><span style="background-color: transparent;">&nbsp;</span></p><p><a href="https://en.wikipedia.org/wiki/Hanxin" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://en.wikipedia.org/wiki/Hanxin</a><span style="background-color: transparent;">&nbsp;</span></p><p><a href="https://www.linkedin.com/pulse/china-marching-forward-artificial-intelligence-jaap-van-etten/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://www.linkedin.com/pulse/china-marching-forward-artificial-intelligence-jaap-van-etten/</a><span style="background-color: transparent;">&nbsp;</span></p><p><a href="https://github.com/Kkevsterrr/geneva" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://github.com/Kkevsterrr/geneva</a><span style="background-color: transparent;">&nbsp;</span></p><p><a href="https://geneva.cs.umd.edu" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://geneva.cs.umd.edu</a><span style="background-color: 
transparent;">&nbsp;</span></p><p><a href="https://www.grc.com/sn/sn-779.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://www.grc.com/sn/sn-779.pdf</a></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/82cf44fd-4b56-469f-9c59-d13d4061c05a/b4c193be47.jpg" />
  <pubDate>Fri, 12 Jul 2024 12:01:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="45706845" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/82cf44fd-4b56-469f-9c59-d13d4061c05a/episode.mp3" />
  <itunes:title><![CDATA[Understanding China’s AI Policy and Tech Growth with Jaap van Etten]]></itunes:title>
  <itunes:duration>47:36</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/jaapvanetten/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Jaap van Etten</a><span style="background-color: transparent;">, CEO and Co-Founder of </span><a href="https://www.linkedin.com/company/datenna/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Datenna</a><span style="background-color: transparent;">, the leading provider of techno-economic intelligence in China. Jaap’s unique background as a diplomat turned entrepreneur provides invaluable insights into the intersection of AI, innovation and policy.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:30) Transitioning from diplomat to tech entrepreneur.</span></p><p><span style="background-color: transparent;">(05:23) Key differences in AI approaches between China, Europe and the US.</span></p><p><span style="background-color: transparent;">(07:20) The Chinese entrepreneurial mindset and its impact on innovation.</span></p><p><span style="background-color: transparent;">(10:03) China’s strategy in AI and the importance of being a technological leader.</span></p><p><span style="background-color: transparent;">(17:05) Challenges and misconceptions about China’s technological capabilities.</span></p><p><span style="background-color: transparent;">(23:17) Recommendations for AI regulation and international cooperation.</span></p><p><span style="background-color: transparent;">(30:19) Jaap’s perspective on the future of AI legislation.</span></p><p><span style="background-color: transparent;">(35:12) The role of AI in policymaking and decision-making.</span></p><p><span style="background-color: transparent;">(40:54) Policymakers need scenario planning and foresight exercises to keep up with 
rapid technological advancements.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/jaapvanetten/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Jaap van Etten</a> - https://www.linkedin.com/in/jaapvanetten/</p><p><a href="https://www.linkedin.com/company/datenna/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Datenna</a> - https://www.linkedin.com/company/datenna/</p><p><a href="https://www.nytimes.com/2006/05/15/technology/15fraud.htm" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://www.nytimes.com/2006/05/15/technology/15fraud.htm</a></p><p><a href="http://www.china.org.cn/english/scitech/168482.htm" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">http://www.china.org.cn/english/scitech/168482.htm</a><span style="background-color: transparent;">&nbsp;</span></p><p><a href="https://en.wikipedia.org/wiki/Hanxin" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://en.wikipedia.org/wiki/Hanxin</a><span style="background-color: transparent;">&nbsp;</span></p><p><a href="https://www.linkedin.com/pulse/china-marching-forward-artificial-intelligence-jaap-van-etten/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://www.linkedin.com/pulse/china-marching-forward-artificial-intelligence-jaap-van-etten/</a><span style="background-color: transparent;">&nbsp;</span></p><p><a href="https://github.com/Kkevsterrr/geneva" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://github.com/Kkevsterrr/geneva</a><span style="background-color: transparent;">&nbsp;</span></p><p><a href="https://geneva.cs.umd.edu" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://geneva.cs.umd.edu</a><span style="background-color: 
transparent;">&nbsp;</span></p><p><a href="https://www.grc.com/sn/sn-779.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://www.grc.com/sn/sn-779.pdf</a></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/jaapvanetten/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Jaap van Etten</a><span style="background-color: transparent;">, CEO and Co-Founder of </span><a href="https://www.linkedin.com/company/datenna/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Datenna</a><span style="background-color: transparent;">, the leading provider of techno-economic intelligence in China. Jaap’s unique background as a diplomat turned entrepreneur provides invaluable insights into the intersection of AI, innovation and policy.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:30) Transitioning from diplomat to tech entrepreneur.</span></p><p><span style="background-color: transparent;">(05:23) Key differences in AI approaches between China, Europe and the US.</span></p><p><span style="background-color: transparent;">(07:20) The Chinese entrepreneurial mindset and its impact on innovation.</span></p><p><span style="background-color: transparent;">(10:03) China’s strategy in AI and the importance of being a technological leader.</span></p><p><span style="background-color: transparent;">(17:05) Challenges and misconceptions about China’s technological capabilities.</span></p><p><span style="background-color: transparent;">(23:17) Recommendations for AI regulation and international cooperation.</span></p><p><span style="background-color: transparent;">(30:19) Jaap’s perspective on the future of AI legislation.</span></p><p><span style="background-color: transparent;">(35:12) The role of AI in policymaking and decision-making.</span></p><p><span style="background-color: transparent;">(40:54) Policymakers need scenario planning and foresight exercises to keep up 
with rapid technological advancements.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/jaapvanetten/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Jaap van Etten</a> - https://www.linkedin.com/in/jaapvanetten/</p><p><a href="https://www.linkedin.com/company/datenna/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Datenna</a> - https://www.linkedin.com/company/datenna/</p><p><a href="https://www.nytimes.com/2006/05/15/technology/15fraud.htm" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://www.nytimes.com/2006/05/15/technology/15fraud.htm</a></p><p><a href="http://www.china.org.cn/english/scitech/168482.htm" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">http://www.china.org.cn/english/scitech/168482.htm</a><span style="background-color: transparent;">&nbsp;</span></p><p><a href="https://en.wikipedia.org/wiki/Hanxin" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://en.wikipedia.org/wiki/Hanxin</a><span style="background-color: transparent;">&nbsp;</span></p><p><a href="https://www.linkedin.com/pulse/china-marching-forward-artificial-intelligence-jaap-van-etten/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://www.linkedin.com/pulse/china-marching-forward-artificial-intelligence-jaap-van-etten/</a><span style="background-color: transparent;">&nbsp;</span></p><p><a href="https://github.com/Kkevsterrr/geneva" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://github.com/Kkevsterrr/geneva</a><span style="background-color: transparent;">&nbsp;</span></p><p><a href="https://geneva.cs.umd.edu" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://geneva.cs.umd.edu</a><span 
style="background-color: transparent;">&nbsp;</span></p><p><a href="https://www.grc.com/sn/sn-779.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://www.grc.com/sn/sn-779.pdf</a></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode, I’m joined by Jaap van Etten, CEO and Co-Founder of Datenna, the leading provider of techno-economic intelligence in China. Jaap’s unique background as a diplomat turned entrepreneur provides invaluable insights into the intersecti...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>42</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[094af998-1d8b-49cd-9fce-8c966a32cdd6]]></guid>
  <title><![CDATA[Understanding Robot Learning and Its Societal Impact with Dr. Abhinav Valada]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/avalada/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Abhinav Valada</a><span style="background-color: transparent;">, Professor and Director of the Robot Learning Lab at the </span><a href="https://www.linkedin.com/company/university-of-freiburg/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">University of Freiburg</a><span style="background-color: transparent;">, to explore the future of robotics and the essential regulations needed for their integration into society.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:00) The potential economic impact of AI.&nbsp;</span></p><p><span style="background-color: transparent;">(03:37) The distinction between perceived and actual AI capabilities.&nbsp;</span></p><p><span style="background-color: transparent;">(04:24) Challenges in training robots with real-world data.&nbsp;</span></p><p><span style="background-color: transparent;">(08:51) Limitations of current AI reasoning capabilities.&nbsp;</span></p><p><span style="background-color: transparent;">(13:16) The importance of conveying robot intent for collaboration.&nbsp;</span></p><p><span style="background-color: transparent;">(17:33) The need for specific guidelines for robotic systems.&nbsp;</span></p><p><span style="background-color: transparent;">(21:00) Mandating AI ethics courses in Germany.&nbsp;</span></p><p><span style="background-color: transparent;">(25:10) Collaborative robots and workforce implications.&nbsp;</span></p><p><span style="background-color: transparent;">(30:00) Privacy issues in human-robot interaction.</span></p><p><span style="background-color: transparent;">(35:02) The importance of pilot programs for autonomous 
vehicles.&nbsp;</span></p><p><span style="background-color: transparent;">(39:00) International collaboration in AI legislation.&nbsp;</span></p><p><span style="background-color: transparent;">(40:38) Inclusion of diverse voices in robotics research.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/avalada/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Abhinav Valada</a> - https://www.linkedin.com/in/avalada/</p><p><a href="https://www.linkedin.com/company/university-of-freiburg/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">University of Freiburg</a> - https://www.linkedin.com/company/university-of-freiburg/</p><p><a href="https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence</p><p><a href="https://www.researchgate.net/lab/Robot-Learning-Lab-Abhinav-Valada" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Robot Learning Lab, University of Freiburg</a> - https://www.researchgate.net/lab/Robot-Learning-Lab-Abhinav-Valada</p><p> </p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/e9b1882e-a7b3-4812-83de-93e20b1eb386/58ad564898.jpg" />
  <pubDate>Tue, 02 Jul 2024 21:27:29 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="39498061" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/e9b1882e-a7b3-4812-83de-93e20b1eb386/episode.mp3" />
  <itunes:title><![CDATA[Understanding Robot Learning and Its Societal Impact with Dr. Abhinav Valada]]></itunes:title>
  <itunes:duration>41:08</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/avalada/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Abhinav Valada</a><span style="background-color: transparent;">, Professor and Director of the Robot Learning Lab at the </span><a href="https://www.linkedin.com/company/university-of-freiburg/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">University of Freiburg</a><span style="background-color: transparent;">, to explore the future of robotics and the essential regulations needed for their integration into society.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:00) The potential economic impact of AI.&nbsp;</span></p><p><span style="background-color: transparent;">(03:37) The distinction between perceived and actual AI capabilities.&nbsp;</span></p><p><span style="background-color: transparent;">(04:24) Challenges in training robots with real-world data.&nbsp;</span></p><p><span style="background-color: transparent;">(08:51) Limitations of current AI reasoning capabilities.&nbsp;</span></p><p><span style="background-color: transparent;">(13:16) The importance of conveying robot intent for collaboration.&nbsp;</span></p><p><span style="background-color: transparent;">(17:33) The need for specific guidelines for robotic systems.&nbsp;</span></p><p><span style="background-color: transparent;">(21:00) Mandating AI ethics courses in Germany.&nbsp;</span></p><p><span style="background-color: transparent;">(25:10) Collaborative robots and workforce implications.&nbsp;</span></p><p><span style="background-color: transparent;">(30:00) Privacy issues in human-robot interaction.</span></p><p><span style="background-color: transparent;">(35:02) The importance of pilot programs for 
autonomous vehicles.&nbsp;</span></p><p><span style="background-color: transparent;">(39:00) International collaboration in AI legislation.&nbsp;</span></p><p><span style="background-color: transparent;">(40:38) Inclusion of diverse voices in robotics research.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/avalada/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Abhinav Valada</a> - https://www.linkedin.com/in/avalada/</p><p><a href="https://www.linkedin.com/company/university-of-freiburg/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">University of Freiburg</a> - https://www.linkedin.com/company/university-of-freiburg/</p><p><a href="https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence</p><p><a href="https://www.researchgate.net/lab/Robot-Learning-Lab-Abhinav-Valada" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Robot Learning Lab, University of Freiburg</a> - https://www.researchgate.net/lab/Robot-Learning-Lab-Abhinav-Valada</p><p> </p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/avalada/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Abhinav Valada</a><span style="background-color: transparent;">, Professor and Director of the Robot Learning Lab at the </span><a href="https://www.linkedin.com/company/university-of-freiburg/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">University of Freiburg</a><span style="background-color: transparent;">, to explore the future of robotics and the essential regulations needed for their integration into society.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:00) The potential economic impact of AI.&nbsp;</span></p><p><span style="background-color: transparent;">(03:37) The distinction between perceived and actual AI capabilities.&nbsp;</span></p><p><span style="background-color: transparent;">(04:24) Challenges in training robots with real-world data.&nbsp;</span></p><p><span style="background-color: transparent;">(08:51) Limitations of current AI reasoning capabilities.&nbsp;</span></p><p><span style="background-color: transparent;">(13:16) The importance of conveying robot intent for collaboration.&nbsp;</span></p><p><span style="background-color: transparent;">(17:33) The need for specific guidelines for robotic systems.&nbsp;</span></p><p><span style="background-color: transparent;">(21:00) Mandating AI ethics courses in Germany.&nbsp;</span></p><p><span style="background-color: transparent;">(25:10) Collaborative robots and workforce implications.&nbsp;</span></p><p><span style="background-color: transparent;">(30:00) Privacy issues in human-robot interaction.</span></p><p><span style="background-color: transparent;">(35:02) The importance of pilot programs for 
autonomous vehicles.&nbsp;</span></p><p><span style="background-color: transparent;">(39:00) International collaboration in AI legislation.&nbsp;</span></p><p><span style="background-color: transparent;">(40:38) Inclusion of diverse voices in robotics research.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/avalada/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Abhinav Valada</a> - https://www.linkedin.com/in/avalada/</p><p><a href="https://www.linkedin.com/company/university-of-freiburg/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">University of Freiburg</a> - https://www.linkedin.com/company/university-of-freiburg/</p><p><a href="https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence</p><p><a href="https://www.researchgate.net/lab/Robot-Learning-Lab-Abhinav-Valada" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Robot Learning Lab, University of Freiburg</a> - https://www.researchgate.net/lab/Robot-Learning-Lab-Abhinav-Valada</p><p> </p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I’m joined by Dr. Abhinav Valada, Professor and Director of the Robot Learning Lab at the University of Freiburg, to explore the future of robotics and the essential regulations needed for their integration into society.Key Takeawa...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>41</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[f1fb310b-74d4-4c0b-b893-15e7ab0bbae5]]></guid>
  <title><![CDATA[AI's Impact on Healthcare and Legislation with Congressman Buddy Carter]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">Striking a balance between artificial intelligence innovation and regulation is crucial for leveraging its benefits while safeguarding against risks. On this episode, I’m joined by Congressman </span><a href="https://www.linkedin.com/in/buddycarterga/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Buddy Carter</a><span style="background-color: transparent;">, U.S. Representative for Georgia's 1st District, to explore the complexities of AI regulation and its impact on healthcare and other sectors.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:48) President Biden's Executive Order on AI aims to set new standards.</span></p><p><span style="background-color: transparent;">(04:34) AI's potential in healthcare, including telehealth and drug development.</span></p><p><span style="background-color: transparent;">(05:47) Legal implications for doctors not using available AI technologies.</span></p><p><span style="background-color: transparent;">(07:55) AI could speed up the drug development process.</span></p><p><span style="background-color: transparent;">(10:52) The need for constantly updated AI standards.</span></p><p><span style="background-color: transparent;">(11:56) Debate on creating a separate regulatory body for AI.</span></p><p><span style="background-color: transparent;">(14:03) Importance of including diverse voices in AI regulation.</span></p><p><span style="background-color: transparent;">(16:57) Federal preemption of state and local AI laws to avoid regulatory patchwork.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/buddycarterga/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Buddy Carter</a> - 
https://www.linkedin.com/in/buddycarterga/</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden's Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><a href="https://www.eff.org/issues/cda230" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Section 230 of the Communications Decency Act</a> - https://www.eff.org/issues/cda230</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/ff547e69-68cd-4089-bedb-826e6a7b4ff2/a5ceee0ce2.jpg" />
  <pubDate>Mon, 01 Jul 2024 10:09:39 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="19514597" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/ff547e69-68cd-4089-bedb-826e6a7b4ff2/episode.mp3" />
  <itunes:title><![CDATA[AI's Impact on Healthcare and Legislation with Congressman Buddy Carter]]></itunes:title>
  <itunes:duration>20:19</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">Striking a balance between artificial intelligence innovation and regulation is crucial for leveraging its benefits while safeguarding against risks. On this episode, I’m joined by Congressman </span><a href="https://www.linkedin.com/in/buddycarterga/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Buddy Carter</a><span style="background-color: transparent;">, U.S. Representative for Georgia's 1st District, to explore the complexities of AI regulation and its impact on healthcare and other sectors.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:48) President Biden's Executive Order on AI aims to set new standards.</span></p><p><span style="background-color: transparent;">(04:34) AI's potential in healthcare, including telehealth and drug development.</span></p><p><span style="background-color: transparent;">(05:47) Legal implications for doctors not using available AI technologies.</span></p><p><span style="background-color: transparent;">(07:55) AI could speed up the drug development process.</span></p><p><span style="background-color: transparent;">(10:52) The need for constantly updated AI standards.</span></p><p><span style="background-color: transparent;">(11:56) Debate on creating a separate regulatory body for AI.</span></p><p><span style="background-color: transparent;">(14:03) Importance of including diverse voices in AI regulation.</span></p><p><span style="background-color: transparent;">(16:57) Federal preemption of state and local AI laws to avoid regulatory patchwork.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/buddycarterga/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Buddy Carter</a> - 
https://www.linkedin.com/in/buddycarterga/</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden's Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><a href="https://www.eff.org/issues/cda230" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Section 230 of the Communications Decency Act</a> - https://www.eff.org/issues/cda230</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">Striking a balance between artificial intelligence innovation and regulation is crucial for leveraging its benefits while safeguarding against risks. On this episode, I’m joined by Congressman </span><a href="https://www.linkedin.com/in/buddycarterga/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Buddy Carter</a><span style="background-color: transparent;">, U.S. Representative for Georgia's 1st District, to explore the complexities of AI regulation and its impact on healthcare and other sectors.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:48) President Biden's Executive Order on AI aims to set new standards.</span></p><p><span style="background-color: transparent;">(04:34) AI's potential in healthcare, including telehealth and drug development.</span></p><p><span style="background-color: transparent;">(05:47) Legal implications for doctors not using available AI technologies.</span></p><p><span style="background-color: transparent;">(07:55) AI could speed up the drug development process.</span></p><p><span style="background-color: transparent;">(10:52) The need for constantly updated AI standards.</span></p><p><span style="background-color: transparent;">(11:56) Debate on creating a separate regulatory body for AI.</span></p><p><span style="background-color: transparent;">(14:03) Importance of including diverse voices in AI regulation.</span></p><p><span style="background-color: transparent;">(16:57) Federal preemption of state and local AI laws to avoid regulatory patchwork.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/buddycarterga/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Buddy Carter</a> - 
https://www.linkedin.com/in/buddycarterga/</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden's Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><a href="https://www.eff.org/issues/cda230" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Section 230 of the Communications Decency Act</a> - https://www.eff.org/issues/cda230</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Striking a balance between artificial intelligence innovation and regulation is crucial for leveraging its benefits while safeguarding against risks. On this episode, I’m joined by Congressman Buddy Carter, U.S. Representative for Georgia's 1st Dis...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>40</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[254ece41-7a45-43b8-88b5-8516da1b0130]]></guid>
  <title><![CDATA[Shaping AI Policy To Safeguard Our Technological Future with Daniel Colson]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I am joined by </span><a href="https://www.linkedin.com/in/danieljcolson/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Daniel Colson</a><span style="background-color: transparent;">, Executive Director of the </span><a href="https://www.linkedin.com/company/aipolicyinstitute/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Policy Institute</a><span style="background-color: transparent;">, to consider some pressing issues. Daniel shares his insights into the risks, opportunities and future directions of AI policy.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:15) Daniel analyzes President Biden's recent executive order on AI.</span></p><p><span style="background-color: transparent;">(04:13) Differentiating risks in AI technologies and their applications.</span></p><p><span style="background-color: transparent;">(08:52) Concerns about the open-sourcing of AI models and abuse potential.</span></p><p><span style="background-color: transparent;">(16:45) The importance of inclusive discussions in AI policymaking.</span></p><p><span style="background-color: transparent;">(19:25) Challenges and risks of regulatory capture in the AI sector.</span></p><p><span style="background-color: transparent;">(26:45) Balancing innovation with regulation.</span></p><p><span style="background-color: transparent;">(33:14) The potential for AI to transform employment and the economy.</span></p><p><span style="background-color: transparent;">(37:52) How AI's rapid evolution challenges our role as the dominant thinkers and prompts careful deliberation on its impact.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a 
href="https://www.linkedin.com/in/danieljcolson/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Daniel Colson</a> - https://www.linkedin.com/in/danieljcolson/</p><p><a href="https://www.linkedin.com/company/aipolicyinstitute/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Policy Institute</a> - https://www.linkedin.com/company/aipolicyinstitute/</p><p><a href="https://www.theaipi.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Policy Institute | Website</a> - https://www.theaipi.org/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/1a13b482-5ec2-4f2b-9ac7-a61be14998fd/3ac9ef62f3.jpg" />
  <pubDate>Mon, 01 Jul 2024 09:59:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="38549740" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/1a13b482-5ec2-4f2b-9ac7-a61be14998fd/episode.mp3" />
  <itunes:title><![CDATA[Shaping AI Policy To Safeguard Our Technological Future with Daniel Colson]]></itunes:title>
  <itunes:duration>40:09</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I am joined by </span><a href="https://www.linkedin.com/in/danieljcolson/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Daniel Colson</a><span style="background-color: transparent;">, Executive Director of the </span><a href="https://www.linkedin.com/company/aipolicyinstitute/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Policy Institute</a><span style="background-color: transparent;">, to consider some pressing issues. Daniel shares his insights into the risks, opportunities and future directions of AI policy.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:15) Daniel analyzes President Biden's recent executive order on AI.</span></p><p><span style="background-color: transparent;">(04:13) Differentiating risks in AI technologies and their applications.</span></p><p><span style="background-color: transparent;">(08:52) Concerns about the open-sourcing of AI models and abuse potential.</span></p><p><span style="background-color: transparent;">(16:45) The importance of inclusive discussions in AI policymaking.</span></p><p><span style="background-color: transparent;">(19:25) Challenges and risks of regulatory capture in the AI sector.</span></p><p><span style="background-color: transparent;">(26:45) Balancing innovation with regulation.</span></p><p><span style="background-color: transparent;">(33:14) The potential for AI to transform employment and the economy.</span></p><p><span style="background-color: transparent;">(37:52) How AI's rapid evolution challenges our role as the dominant thinkers and prompts careful deliberation on its impact.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a 
href="https://www.linkedin.com/in/danieljcolson/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Daniel Colson</a> - https://www.linkedin.com/in/danieljcolson/</p><p><a href="https://www.linkedin.com/company/aipolicyinstitute/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Policy Institute</a> - https://www.linkedin.com/company/aipolicyinstitute/</p><p><a href="https://www.theaipi.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Policy Institute | Website</a> - https://www.theaipi.org/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I am joined by </span><a href="https://www.linkedin.com/in/danieljcolson/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Daniel Colson</a><span style="background-color: transparent;">, Executive Director of the </span><a href="https://www.linkedin.com/company/aipolicyinstitute/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Policy Institute</a><span style="background-color: transparent;">, to consider some pressing issues. Daniel shares his insights into the risks, opportunities and future directions of AI policy.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:15) Daniel analyzes President Biden's recent executive order on AI.</span></p><p><span style="background-color: transparent;">(04:13) Differentiating risks in AI technologies and their applications.</span></p><p><span style="background-color: transparent;">(08:52) Concerns about the open-sourcing of AI models and abuse potential.</span></p><p><span style="background-color: transparent;">(16:45) The importance of inclusive discussions in AI policymaking.</span></p><p><span style="background-color: transparent;">(19:25) Challenges and risks of regulatory capture in the AI sector.</span></p><p><span style="background-color: transparent;">(26:45) Balancing innovation with regulation.</span></p><p><span style="background-color: transparent;">(33:14) The potential for AI to transform employment and the economy.</span></p><p><span style="background-color: transparent;">(37:52) How AI's rapid evolution challenges our role as the dominant thinkers and prompts careful deliberation on its impact.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a 
href="https://www.linkedin.com/in/danieljcolson/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Daniel Colson</a> - https://www.linkedin.com/in/danieljcolson/</p><p><a href="https://www.linkedin.com/company/aipolicyinstitute/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Policy Institute</a> - https://www.linkedin.com/company/aipolicyinstitute/</p><p><a href="https://www.theaipi.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Policy Institute | Website</a> - https://www.theaipi.org/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I am joined by Daniel Colson, Executive Director of the AI Policy Institute, to consider some pressing issues. Daniel shares his insights into the risks, opportunities and future directions of AI policy.Key Takeaways:(02:15) Daniel...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>39</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[a1299da0-170b-42ca-aa07-e64d5e7eb028]]></guid>
  <title><![CDATA[Balancing AI Innovation and Equitable Health Benefits with Professor Effy Vayena]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode of Regulating AI, I sit down with </span><a href="https://www.linkedin.com/in/effy-vayena-467b1353/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Effy Vayena</a><span style="background-color: transparent;">, Chair of Bioethics and Associate Vice President of Digital Transformation and Governance of the </span><a href="https://www.linkedin.com/school/eth-zurich/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Swiss Federal Institute of Technology (ETH)</a><span style="background-color: transparent;"> and Co-Director of Stavros Niarchos Foundation Bioethics Academy. Together we delve deep into the world of AI, its ethical challenges, and how thoughtful regulation can ensure equitable benefits.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(03:45) The importance of developing and using technology in ways that meet ethical standards.</span></p><p><span style="background-color: transparent;">(10:31) The necessity of agile regulation and continuous dialogue with tech developers.</span></p><p><span style="background-color: transparent;">(13:19) The concept of regulatory sandboxes for testing policies in a controlled environment.&nbsp;</span></p><p><span style="background-color: transparent;">(17:07) Balancing AI innovation with patient privacy and data security.</span></p><p><span style="background-color: transparent;">(24:14) Strategies to ensure AI benefits reach marginalized communities and promote health equity.</span></p><p><span style="background-color: transparent;">(35:10) Considering the global impact of AI and the digital divide.</span></p><p><span style="background-color: transparent;">(41:06) Including and educating the public in AI regulatory processes.</span></p><p><span 
style="background-color: transparent;">(44:04) The importance of international collaboration in AI regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/effy-vayena-467b1353/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Effy Vayena</a> - https://www.linkedin.com/in/effy-vayena-467b1353/</p><p><a href="https://www.linkedin.com/school/eth-zurich/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Swiss Federal Institute of Technology (ETH)</a> - https://www.linkedin.com/school/eth-zurich/</p><p><a href="https://ethz.ch/en.html" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">ETH Zurich</a> - https://ethz.ch/en.html</p><p><a href="https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">European Union’s AI Act</a> - https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai</p><p><a href="https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. FDA guidelines on AI in medical devices</a> - https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/b5e491e7-faba-4265-8f1b-f6aed2f0a80e/5e0de8764e.jpg" />
  <pubDate>Mon, 10 Jun 2024 13:14:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="43790502" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/b5e491e7-faba-4265-8f1b-f6aed2f0a80e/episode.mp3" />
  <itunes:title><![CDATA[Balancing AI Innovation and Equitable Health Benefits with Professor Effy Vayena]]></itunes:title>
  <itunes:duration>45:36</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode of Regulating AI, I sit down with </span><a href="https://www.linkedin.com/in/effy-vayena-467b1353/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Effy Vayena</a><span style="background-color: transparent;">, Chair of Bioethics and Associate Vice President of Digital Transformation and Governance of the </span><a href="https://www.linkedin.com/school/eth-zurich/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Swiss Federal Institute of Technology (ETH)</a><span style="background-color: transparent;"> and Co-Director of Stavros Niarchos Foundation Bioethics Academy. Together we delve deep into the world of AI, its ethical challenges, and how thoughtful regulation can ensure equitable benefits.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(03:45) The importance of developing and using technology in ways that meet ethical standards.</span></p><p><span style="background-color: transparent;">(10:31) The necessity of agile regulation and continuous dialogue with tech developers.</span></p><p><span style="background-color: transparent;">(13:19) The concept of regulatory sandboxes for testing policies in a controlled environment.&nbsp;</span></p><p><span style="background-color: transparent;">(17:07) Balancing AI innovation with patient privacy and data security.</span></p><p><span style="background-color: transparent;">(24:14) Strategies to ensure AI benefits reach marginalized communities and promote health equity.</span></p><p><span style="background-color: transparent;">(35:10) Considering the global impact of AI and the digital divide.</span></p><p><span style="background-color: transparent;">(41:06) Including and educating the public in AI regulatory processes.</span></p><p><span 
style="background-color: transparent;">(44:04) The importance of international collaboration in AI regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/effy-vayena-467b1353/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Effy Vayena</a> - https://www.linkedin.com/in/effy-vayena-467b1353/</p><p><a href="https://www.linkedin.com/school/eth-zurich/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Swiss Federal Institute of Technology (ETH)</a> - https://www.linkedin.com/school/eth-zurich/</p><p><a href="https://ethz.ch/en.html" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">ETH Zurich</a> - https://ethz.ch/en.html</p><p><a href="https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">European Union’s AI Act</a> - https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai</p><p><a href="https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. FDA guidelines on AI in medical devices</a> - https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode of Regulating AI, I sit down with </span><a href="https://www.linkedin.com/in/effy-vayena-467b1353/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Effy Vayena</a><span style="background-color: transparent;">, Chair of Bioethics and Associate Vice President of Digital Transformation and Governance of the </span><a href="https://www.linkedin.com/school/eth-zurich/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Swiss Federal Institute of Technology (ETH)</a><span style="background-color: transparent;"> and Co-Director of Stavros Niarchos Foundation Bioethics Academy. Together we delve deep into the world of AI, its ethical challenges, and how thoughtful regulation can ensure equitable benefits.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(03:45) The importance of developing and using technology in ways that meet ethical standards.</span></p><p><span style="background-color: transparent;">(10:31) The necessity of agile regulation and continuous dialogue with tech developers.</span></p><p><span style="background-color: transparent;">(13:19) The concept of regulatory sandboxes for testing policies in a controlled environment.&nbsp;</span></p><p><span style="background-color: transparent;">(17:07) Balancing AI innovation with patient privacy and data security.</span></p><p><span style="background-color: transparent;">(24:14) Strategies to ensure AI benefits reach marginalized communities and promote health equity.</span></p><p><span style="background-color: transparent;">(35:10) Considering the global impact of AI and the digital divide.</span></p><p><span style="background-color: transparent;">(41:06) Including and educating the public in AI regulatory processes.</span></p><p><span 
style="background-color: transparent;">(44:04) The importance of international collaboration in AI regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/effy-vayena-467b1353/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Professor Effy Vayena</a> - https://www.linkedin.com/in/effy-vayena-467b1353/</p><p><a href="https://www.linkedin.com/school/eth-zurich/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Swiss Federal Institute of Technology (ETH)</a> - https://www.linkedin.com/school/eth-zurich/</p><p><a href="https://ethz.ch/en.html" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">ETH Zurich</a> - https://ethz.ch/en.html</p><p><a href="https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">European Union’s AI Act</a> - https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai</p><p><a href="https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. FDA guidelines on AI in medical devices</a> - https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode of Regulating AI, I sit down with Professor Effy Vayena, Chair of Bioethics and Associate Vice President of Digital Transformation and Governance of the Swiss Federal Institute of Technology (ETH) and Co-Director of Stavros Niarchos...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>38</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[dd2a087d-ca39-4e59-9806-fbde97e65282]]></guid>
  <title><![CDATA[Ensuring AI Safety and Reliability in Healthcare with Dr. Brennan Spiegel of Cedars-Sinai]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">The integration of AI into healthcare is not only transforming the way we diagnose, treat and manage patient care but is also redefining the roles of doctors. Join me as I sit down with </span><a href="https://www.linkedin.com/in/brennan-spiegel-md-mshs-2938a4142/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Brennan Spiegel</a><span style="background-color: transparent;"> to explore how AI is revolutionizing the medical field. Brennan is a Professor of Medicine and Public Health; George and Dorothy Gourrich Chair in Digital Health Ethics; Director of Health Services Research; Director, Graduate Program in Health Delivery Science; </span><a href="https://www.linkedin.com/company/cedars-sinai-medical-center/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Cedars-Sinai</a><span style="background-color: transparent;"> Site Director, Clinical and Translational Science Institute; and Editor-in-Chief, Journal of Medical Extended Reality.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(03:00) Balancing AI benefits with concerns about algorithmic bias and fairness.</span></p><p><span style="background-color: transparent;">(05:47) Evaluating AI for implicit bias in mental health applications.</span></p><p><span style="background-color: transparent;">(08:03) The need for standardized guidance and rigorous oversight in AI applications.</span></p><p><span style="background-color: transparent;">(10:03) Ensuring data transmitted between AI providers and health systems is HIPAA compliant.</span></p><p><span style="background-color: transparent;">(16:42) The evolving role of doctors in the context of AI integration.</span></p><p><span style="background-color: transparent;">(21:22) The importance of traditional knowledge alongside 
AI in medical practice.</span></p><p><span style="background-color: transparent;">(24:44) International collaboration and standardized approaches to AI in healthcare.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/brennan-spiegel-md-mshs-2938a4142/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Brennan Spiegel</a> - https://www.linkedin.com/in/brennan-spiegel-md-mshs-2938a4142/</p><p><a href="https://www.linkedin.com/company/cedars-sinai-medical-center/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Cedars-Sinai</a> - https://www.linkedin.com/company/cedars-sinai-medical-center/</p><p><a href="https://x.com/BrennanSpiegel" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Brennan Spiegel on X</a> - https://x.com/BrennanSpiegel</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/cfd1c5fd-eeb7-425b-aed5-e67b9a59ecae/ce78960370.jpg" />
  <pubDate>Mon, 03 Jun 2024 08:15:30 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="26498695" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/cfd1c5fd-eeb7-425b-aed5-e67b9a59ecae/episode.mp3" />
  <itunes:title><![CDATA[Ensuring AI Safety and Reliability in Healthcare with Dr. Brennan Spiegel of Cedars-Sinai]]></itunes:title>
  <itunes:duration>27:36</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">The integration of AI into healthcare is not only transforming the way we diagnose, treat and manage patient care but is also redefining the roles of doctors. Join me as I sit down with </span><a href="https://www.linkedin.com/in/brennan-spiegel-md-mshs-2938a4142/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Brennan Spiegel</a><span style="background-color: transparent;"> to explore how AI is revolutionizing the medical field. Brennan is a Professor of Medicine and Public Health; George and Dorothy Gourrich Chair in Digital Health Ethics; Director of Health Services Research; Director, Graduate Program in Health Delivery Science; </span><a href="https://www.linkedin.com/company/cedars-sinai-medical-center/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Cedars-Sinai</a><span style="background-color: transparent;"> Site Director, Clinical and Translational Science Institute; and Editor-in-Chief, Journal of Medical Extended Reality.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(03:00) Balancing AI benefits with concerns about algorithmic bias and fairness.</span></p><p><span style="background-color: transparent;">(05:47) Evaluating AI for implicit bias in mental health applications.</span></p><p><span style="background-color: transparent;">(08:03) The need for standardized guidance and rigorous oversight in AI applications.</span></p><p><span style="background-color: transparent;">(10:03) Ensuring data transmitted between AI providers and health systems is HIPAA compliant.</span></p><p><span style="background-color: transparent;">(16:42) The evolving role of doctors in the context of AI integration.</span></p><p><span style="background-color: transparent;">(21:22) The importance of traditional knowledge 
alongside AI in medical practice.</span></p><p><span style="background-color: transparent;">(24:44) International collaboration and standardized approaches to AI in healthcare.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/brennan-spiegel-md-mshs-2938a4142/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Brennan Spiegel</a> - https://www.linkedin.com/in/brennan-spiegel-md-mshs-2938a4142/</p><p><a href="https://www.linkedin.com/company/cedars-sinai-medical-center/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Cedars-Sinai</a> - https://www.linkedin.com/company/cedars-sinai-medical-center/</p><p><a href="https://x.com/BrennanSpiegel" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Brennan Spiegel on X</a> - https://x.com/BrennanSpiegel</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">The integration of AI into healthcare is not only transforming the way we diagnose, treat and manage patient care but is also redefining the roles of doctors. Join me as I sit down with </span><a href="https://www.linkedin.com/in/brennan-spiegel-md-mshs-2938a4142/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Brennan Spiegel</a><span style="background-color: transparent;"> to explore how AI is revolutionizing the medical field. Brennan is a Professor of Medicine and Public Health; George and Dorothy Gourrich Chair in Digital Health Ethics; Director of Health Services Research; Director, Graduate Program in Health Delivery Science; </span><a href="https://www.linkedin.com/company/cedars-sinai-medical-center/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Cedars-Sinai</a><span style="background-color: transparent;"> Site Director, Clinical and Translational Science Institute; and Editor-in-Chief, Journal of Medical Extended Reality.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(03:00) Balancing AI benefits with concerns about algorithmic bias and fairness.</span></p><p><span style="background-color: transparent;">(05:47) Evaluating AI for implicit bias in mental health applications.</span></p><p><span style="background-color: transparent;">(08:03) The need for standardized guidance and rigorous oversight in AI applications.</span></p><p><span style="background-color: transparent;">(10:03) Ensuring data transmitted between AI providers and health systems is HIPAA compliant.</span></p><p><span style="background-color: transparent;">(16:42) The evolving role of doctors in the context of AI integration.</span></p><p><span style="background-color: transparent;">(21:22) The importance of traditional knowledge 
alongside AI in medical practice.</span></p><p><span style="background-color: transparent;">(24:44) International collaboration and standardized approaches to AI in healthcare.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/brennan-spiegel-md-mshs-2938a4142/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Brennan Spiegel</a> - https://www.linkedin.com/in/brennan-spiegel-md-mshs-2938a4142/</p><p><a href="https://www.linkedin.com/company/cedars-sinai-medical-center/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Cedars-Sinai</a> - https://www.linkedin.com/company/cedars-sinai-medical-center/</p><p><a href="https://x.com/BrennanSpiegel" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Brennan Spiegel on X</a> - https://x.com/BrennanSpiegel</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[The integration of AI into healthcare is not only transforming the way we diagnose, treat and manage patient care but is also redefining the roles of doctors. Join me as I sit down with Dr. Brennan Spiegel to explore how AI is revolutionizing the m...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>37</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[beb38b15-3dc9-479c-a473-818d8e5eb7f6]]></guid>
  <title><![CDATA[Understanding the Legal and Ethical Implications of AI in Healthcare with Carmel Shachar]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">In this episode, I welcome </span><a href="https://www.linkedin.com/in/carmel-shachar-7b3a8525/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carmel Shachar</a><span style="background-color: transparent;">, Faculty Director of the Health Law and Policy Clinic and Assistant Clinical Professor of Law at </span><a href="https://www.linkedin.com/company/harvardchlpi/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Harvard Law School Center for Health Law and Policy Innovation</a><span style="background-color: transparent;">. We delve into how AI is shaping the future of healthcare, its profound impacts and the vital importance of thoughtful regulation. The interplay between AI and healthcare is increasingly critical, pushing the boundaries of medicine while challenging our regulatory frameworks.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:00) AI’s challenges in balancing patient data needs.</span></p><p><span style="background-color: transparent;">(03:09) The revolutionary potential of AI in healthcare innovation.</span></p><p><span style="background-color: transparent;">(04:30) How AI is driving precision and personalized medicine.</span></p><p><span style="background-color: transparent;">(06:19) The urgent need for healthcare system evolution.</span></p><p><span style="background-color: transparent;">(09:00) Potential negative impacts of poorly implemented AI.</span></p><p><span style="background-color: transparent;">(12:00) The unique challenges posed by AI as a medical device.</span></p><p><span style="background-color: transparent;">(15:10) Minimizing regulatory handoffs to enhance AI efficacy.</span></p><p><span style="background-color: transparent;">(18:00) How AI can reduce healthcare 
disparities.</span></p><p><span style="background-color: transparent;">(20:00) Ethical considerations and biases in AI deployment.</span></p><p><span style="background-color: transparent;">(25:00) AI’s growing impact on healthcare operations and management.</span></p><p><span style="background-color: transparent;">(30:00) Enhancing patient-physician communication with AI tools.</span></p><p><span style="background-color: transparent;">(39:00) Future directions in AI and healthcare policy.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/carmel-shachar-7b3a8525/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carmel Shachar</a> - https://www.linkedin.com/in/carmel-shachar-7b3a8525/</p><p><a href="https://www.linkedin.com/company/harvardchlpi/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Harvard Law School Center for Health Law and Policy Innovation</a> - https://www.linkedin.com/company/harvardchlpi/</p><p><a href="https://hls.harvard.edu/faculty/carmel-shachar/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carmel Shachar's Faculty Profile at Harvard Law School</a> - https://hls.harvard.edu/faculty/carmel-shachar/</p><p><a href="https://petrieflom.law.harvard.edu/research/precision-medicine-artificial-intelligence-and-law" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Precision Medicine, Artificial Intelligence and the Law Project</a> - https://petrieflom.law.harvard.edu/research/precision-medicine-artificial-intelligence-and-law</p><p><a href="https://blog.petrieflom.law.harvard.edu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Petrie-Flom Center Blog</a> - https://blog.petrieflom.law.harvard.edu/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: 
transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/a0606c4e-656b-45bc-9dc0-d6626528e76d/155c904451.jpg" />
  <pubDate>Tue, 28 May 2024 12:56:28 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="40451844" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/a0606c4e-656b-45bc-9dc0-d6626528e76d/episode.mp3" />
  <itunes:title><![CDATA[Understanding the Legal and Ethical Implications of AI in Healthcare with Carmel Shachar]]></itunes:title>
  <itunes:duration>42:08</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">In this episode, I welcome </span><a href="https://www.linkedin.com/in/carmel-shachar-7b3a8525/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carmel Shachar</a><span style="background-color: transparent;">, Faculty Director of the Health Law and Policy Clinic and Assistant Clinical Professor of Law at </span><a href="https://www.linkedin.com/company/harvardchlpi/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Harvard Law School Center for Health Law and Policy Innovation</a><span style="background-color: transparent;">. We delve into how AI is shaping the future of healthcare, its profound impacts and the vital importance of thoughtful regulation. The interplay between AI and healthcare is increasingly critical, pushing the boundaries of medicine while challenging our regulatory frameworks.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:00) AI’s challenges in balancing patient data needs.</span></p><p><span style="background-color: transparent;">(03:09) The revolutionary potential of AI in healthcare innovation.</span></p><p><span style="background-color: transparent;">(04:30) How AI is driving precision and personalized medicine.</span></p><p><span style="background-color: transparent;">(06:19) The urgent need for healthcare system evolution.</span></p><p><span style="background-color: transparent;">(09:00) Potential negative impacts of poorly implemented AI.</span></p><p><span style="background-color: transparent;">(12:00) The unique challenges posed by AI as a medical device.</span></p><p><span style="background-color: transparent;">(15:10) Minimizing regulatory handoffs to enhance AI efficacy.</span></p><p><span style="background-color: transparent;">(18:00) How AI can reduce healthcare 
disparities.</span></p><p><span style="background-color: transparent;">(20:00) Ethical considerations and biases in AI deployment.</span></p><p><span style="background-color: transparent;">(25:00) AI’s growing impact on healthcare operations and management.</span></p><p><span style="background-color: transparent;">(30:00) Enhancing patient-physician communication with AI tools.</span></p><p><span style="background-color: transparent;">(39:00) Future directions in AI and healthcare policy.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/carmel-shachar-7b3a8525/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carmel Shachar</a> - https://www.linkedin.com/in/carmel-shachar-7b3a8525/</p><p><a href="https://www.linkedin.com/company/harvardchlpi/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Harvard Law School Center for Health Law and Policy Innovation</a> - https://www.linkedin.com/company/harvardchlpi/</p><p><a href="https://hls.harvard.edu/faculty/carmel-shachar/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carmel Shachar's Faculty Profile at Harvard Law School</a> - https://hls.harvard.edu/faculty/carmel-shachar/</p><p><a href="https://petrieflom.law.harvard.edu/research/precision-medicine-artificial-intelligence-and-law" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Precision Medicine, Artificial Intelligence and the Law Project</a> - https://petrieflom.law.harvard.edu/research/precision-medicine-artificial-intelligence-and-law</p><p><a href="https://blog.petrieflom.law.harvard.edu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Petrie-Flom Center Blog</a> - https://blog.petrieflom.law.harvard.edu/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: 
transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">In this episode, I welcome </span><a href="https://www.linkedin.com/in/carmel-shachar-7b3a8525/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carmel Shachar</a><span style="background-color: transparent;">, Faculty Director of the Health Law and Policy Clinic and Assistant Clinical Professor of Law at </span><a href="https://www.linkedin.com/company/harvardchlpi/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Harvard Law School Center for Health Law and Policy Innovation</a><span style="background-color: transparent;">. We delve into how AI is shaping the future of healthcare, its profound impacts and the vital importance of thoughtful regulation. The interplay between AI and healthcare is increasingly critical, pushing the boundaries of medicine while challenging our regulatory frameworks.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:00) AI’s challenges in balancing patient data needs.</span></p><p><span style="background-color: transparent;">(03:09) The revolutionary potential of AI in healthcare innovation.</span></p><p><span style="background-color: transparent;">(04:30) How AI is driving precision and personalized medicine.</span></p><p><span style="background-color: transparent;">(06:19) The urgent need for healthcare system evolution.</span></p><p><span style="background-color: transparent;">(09:00) Potential negative impacts of poorly implemented AI.</span></p><p><span style="background-color: transparent;">(12:00) The unique challenges posed by AI as a medical device.</span></p><p><span style="background-color: transparent;">(15:10) Minimizing regulatory handoffs to enhance AI efficacy.</span></p><p><span style="background-color: transparent;">(18:00) How AI can reduce healthcare 
disparities.</span></p><p><span style="background-color: transparent;">(20:00) Ethical considerations and biases in AI deployment.</span></p><p><span style="background-color: transparent;">(25:00) AI’s growing impact on healthcare operations and management.</span></p><p><span style="background-color: transparent;">(30:00) Enhancing patient-physician communication with AI tools.</span></p><p><span style="background-color: transparent;">(39:00) Future directions in AI and healthcare policy.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/carmel-shachar-7b3a8525/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carmel Shachar</a> - https://www.linkedin.com/in/carmel-shachar-7b3a8525/</p><p><a href="https://www.linkedin.com/company/harvardchlpi/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Harvard Law School Center for Health Law and Policy Innovation</a> - https://www.linkedin.com/company/harvardchlpi/</p><p><a href="https://hls.harvard.edu/faculty/carmel-shachar/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Carmel Shachar's Faculty Profile at Harvard Law School</a> - https://hls.harvard.edu/faculty/carmel-shachar/</p><p><a href="https://petrieflom.law.harvard.edu/research/precision-medicine-artificial-intelligence-and-law" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Precision Medicine, Artificial Intelligence and the Law Project</a> - https://petrieflom.law.harvard.edu/research/precision-medicine-artificial-intelligence-and-law</p><p><a href="https://blog.petrieflom.law.harvard.edu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Petrie-Flom Center Blog</a> - https://blog.petrieflom.law.harvard.edu/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: 
transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode, I welcome Carmel Shachar, Faculty Director of the Health Law and Policy Clinic and Assistant Clinical Professor of Law at Harvard Law School Center for Health Law and Policy Innovation. We delve into how AI is shaping the future of...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>36</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[cba21052-71f2-47ce-883d-54c17f564543]]></guid>
  <title><![CDATA[The Importance of Diverse Perspectives in Shaping AI Policies with Ari Kaplan ]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I welcome </span><a href="https://www.linkedin.com/in/arikaplan/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ari Kaplan</a><span style="background-color: transparent;">, Head Evangelist of </span><a href="https://www.linkedin.com/company/databricks/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Databricks</a><span style="background-color: transparent;">, a leading data and AI company. We discuss the intricacies of AI regulation, how different regions, like the US and EU, are addressing AI’s rapid development, and the importance of industry perspectives in shaping effective legislation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(04:42) Insights on the rapid advancements in AI technology and legislative responses.</span></p><p><span style="background-color: transparent;">(10:32) The role of tech leaders in shaping AI policy and bridging knowledge gaps.</span></p><p><span style="background-color: transparent;">(13:57) Open-source versus closed-source AI — Ari Kaplan advocates for transparency.</span></p><p><span style="background-color: transparent;">(16:56) Ethical concerns in AI across different countries.</span></p><p><span style="background-color: transparent;">(21:21) The necessity for both industry-specific and overarching AI regulations.</span></p><p><span style="background-color: transparent;">(25:09) Automation’s potential to improve efficiency also raises employment risk.</span></p><p><span style="background-color: transparent;">(29:17) A balanced, educational approach in the age of AI is crucial.</span></p><p><span style="background-color: transparent;">(32:45) Risks associated with generative AI and the importance of intellectual property rights.</span></p><p><br></p><p><strong 
style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/arikaplan/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ari Kaplan</a> - https://www.linkedin.com/in/arikaplan/</p><p><a href="https://www.linkedin.com/company/databricks/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Databricks</a> - https://www.linkedin.com/company/databricks/</p><p><a href="https://www.databricks.com/blog/unity-catalog-governance-value-levers" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Unity Catalog Governance Value Levers</a> - https://www.databricks.com/blog/unity-catalog-governance-value-levers</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act Information</a> - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/2f624dc1-62c7-40dd-b472-45f4f9e0774e/e24aec1f23.jpg" />
  <pubDate>Fri, 24 May 2024 10:20:35 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="39198803" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/2f624dc1-62c7-40dd-b472-45f4f9e0774e/episode.mp3" />
  <itunes:title><![CDATA[The Importance of Diverse Perspectives in Shaping AI Policies with Ari Kaplan ]]></itunes:title>
  <itunes:duration>40:49</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I welcome </span><a href="https://www.linkedin.com/in/arikaplan/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ari Kaplan</a><span style="background-color: transparent;">, Head Evangelist of </span><a href="https://www.linkedin.com/company/databricks/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Databricks</a><span style="background-color: transparent;">, a leading data and AI company. We discuss the intricacies of AI regulation, how different regions, like the US and EU, are addressing AI’s rapid development, and the importance of industry perspectives in shaping effective legislation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(04:42) Insights on the rapid advancements in AI technology and legislative responses.</span></p><p><span style="background-color: transparent;">(10:32) The role of tech leaders in shaping AI policy and bridging knowledge gaps.</span></p><p><span style="background-color: transparent;">(13:57) Open-source versus closed-source AI — Ari Kaplan advocates for transparency.</span></p><p><span style="background-color: transparent;">(16:56) Ethical concerns in AI across different countries.</span></p><p><span style="background-color: transparent;">(21:21) The necessity for both industry-specific and overarching AI regulations.</span></p><p><span style="background-color: transparent;">(25:09) Automation’s potential to improve efficiency also raises employment risk.</span></p><p><span style="background-color: transparent;">(29:17) A balanced, educational approach in the age of AI is crucial.</span></p><p><span style="background-color: transparent;">(32:45) Risks associated with generative AI and the importance of intellectual property rights.</span></p><p><br></p><p><strong 
style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/arikaplan/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ari Kaplan</a> - https://www.linkedin.com/in/arikaplan/</p><p><a href="https://www.linkedin.com/company/databricks/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Databricks</a> - https://www.linkedin.com/company/databricks/</p><p><a href="https://www.databricks.com/blog/unity-catalog-governance-value-levers" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Unity Catalog Governance Value Levers</a> - https://www.databricks.com/blog/unity-catalog-governance-value-levers</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act Information</a> - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I welcome </span><a href="https://www.linkedin.com/in/arikaplan/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ari Kaplan</a><span style="background-color: transparent;">, Head Evangelist of </span><a href="https://www.linkedin.com/company/databricks/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Databricks</a><span style="background-color: transparent;">, a leading data and AI company. We discuss the intricacies of AI regulation, how different regions, like the US and EU, are addressing AI’s rapid development, and the importance of industry perspectives in shaping effective legislation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(04:42) Insights on the rapid advancements in AI technology and legislative responses.</span></p><p><span style="background-color: transparent;">(10:32) The role of tech leaders in shaping AI policy and bridging knowledge gaps.</span></p><p><span style="background-color: transparent;">(13:57) Open-source versus closed-source AI — Ari Kaplan advocates for transparency.</span></p><p><span style="background-color: transparent;">(16:56) Ethical concerns in AI across different countries.</span></p><p><span style="background-color: transparent;">(21:21) The necessity for both industry-specific and overarching AI regulations.</span></p><p><span style="background-color: transparent;">(25:09) Automation’s potential to improve efficiency also raises employment risk.</span></p><p><span style="background-color: transparent;">(29:17) A balanced, educational approach in the age of AI is crucial.</span></p><p><span style="background-color: transparent;">(32:45) Risks associated with generative AI and the importance of intellectual property rights.</span></p><p><br></p><p><strong 
style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/arikaplan/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ari Kaplan</a> - https://www.linkedin.com/in/arikaplan/</p><p><a href="https://www.linkedin.com/company/databricks/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Databricks</a> - https://www.linkedin.com/company/databricks/</p><p><a href="https://www.databricks.com/blog/unity-catalog-governance-value-levers" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Unity Catalog Governance Value Levers</a> - https://www.databricks.com/blog/unity-catalog-governance-value-levers</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act Information</a> - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I welcome Ari Kaplan, Head Evangelist of Databricks, a leading data and AI company. We discuss the intricacies of AI regulation, how different regions, like the US and EU, are addressing AI’s rapid development, and the importance o...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>35</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[a57ae6f1-351c-4103-b136-dee1681acf61]]></guid>
  <title><![CDATA[AI and Regulatory Frameworks in Telecommunications with Nicolas Kourtellis]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">In this episode, I welcome </span><a href="https://www.linkedin.com/in/nicolas-kourtellis-3a154511/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nicolas Kourtellis</a><span style="background-color: transparent;">, Co-Director of Telefónica Research and Head of Systems AI Lab at </span><a href="https://www.linkedin.com/company/telefonica-innovacion-digital/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Telefónica Innovación Digital</a><span style="background-color: transparent; color: rgb(68, 71, 70);">, a company of the </span><a href="https://www.linkedin.com/company/telefonica/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Telefonica Group</a><span style="background-color: transparent;">. Nicolas shares his expert insights on the pivotal role of AI in revolutionizing telecommunications, the challenges of AI regulation and the innovative strides Telefónica is making toward sustainable and ethical AI deployment.&nbsp;</span></p><p><br></p><p><span style="background-color: transparent;">Imagine a world where every device you own not only connects seamlessly but also intelligently adapts to your needs. 
This isn’t just a vision for the future; it’s the reality AI is creating today in telecommunications.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:00) AI research focuses and applications in telecommunications.</span></p><p><span style="background-color: transparent;">(03:24) AI’s role in optimizing network systems and enhancing user privacy is critical.</span></p><p><span style="background-color: transparent;">(06:00) How Telefónica uses AI to improve customer service through AI chatbots.</span></p><p><span style="background-color: transparent;">(12:03) The ethical considerations and sustainability of AI models.</span></p><p><span style="background-color: transparent;">(16:08) Democratizing AI to make it accessible and beneficial for all users.</span></p><p><span style="background-color: transparent;">(18:09) Designing AI systems with privacy and security from the start.</span></p><p><span style="background-color: transparent;">(27:00) The challenges and opportunities AI presents for the workforce.</span></p><p><span style="background-color: transparent;">(30:25) The potential of 6G and its reliance on AI technologies.</span></p><p><span style="background-color: transparent;">(32:16) The integral role of AI in future technological advancements and network optimizations.</span></p><p><span style="background-color: transparent;">(39:35) The societal impacts of AI in telecommunications.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/nicolas-kourtellis-3a154511/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nicolas Kourtellis</a> - https://www.linkedin.com/in/nicolas-kourtellis-3a154511/</p><p><a href="https://www.linkedin.com/company/telefonica-innovacion-digital/" target="_blank" style="background-color: 
transparent; color: rgb(17, 85, 204);">Telefónica Innovación Digital</a> - https://www.linkedin.com/company/telefonica-innovacion-digital/</p><p><a href="https://www.linkedin.com/company/telefonica/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Telefonica Group</a> - https://www.linkedin.com/company/telefonica/</p><p>You can find all of Nicolas’ publications on his Google Scholar page: <a href="http://scholar.google.com/citations?user=Q5oWwiQAAAAJ" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">http://scholar.google.com/citations?user=Q5oWwiQAAAAJ</a><span style="background-color: transparent;">&nbsp;</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/5a7612a6-6c39-4c3f-ac3c-a58484670836/659cb49b3a.jpg" />
  <pubDate>Wed, 22 May 2024 12:44:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="38339438" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/5a7612a6-6c39-4c3f-ac3c-a58484670836/episode.mp3" />
  <itunes:title><![CDATA[AI and Regulatory Frameworks in Telecommunications with Nicolas Kourtellis]]></itunes:title>
  <itunes:duration>39:56</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">In this episode, I welcome </span><a href="https://www.linkedin.com/in/nicolas-kourtellis-3a154511/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nicolas Kourtellis</a><span style="background-color: transparent;">, Co-Director of Telefónica Research and Head of Systems AI Lab at </span><a href="https://www.linkedin.com/company/telefonica-innovacion-digital/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Telefónica Innovación Digital</a><span style="background-color: transparent; color: rgb(68, 71, 70);">, a company of the </span><a href="https://www.linkedin.com/company/telefonica/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Telefonica Group</a><span style="background-color: transparent;">. Nicolas shares his expert insights on the pivotal role of AI in revolutionizing telecommunications, the challenges of AI regulation and the innovative strides Telefónica is making toward sustainable and ethical AI deployment.&nbsp;</span></p><p><br></p><p><span style="background-color: transparent;">Imagine a world where every device you own not only connects seamlessly but also intelligently adapts to your needs. 
This isn’t just a vision for the future; it’s the reality AI is creating today in telecommunications.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:00) AI research focuses and applications in telecommunications.</span></p><p><span style="background-color: transparent;">(03:24) AI’s role in optimizing network systems and enhancing user privacy is critical.</span></p><p><span style="background-color: transparent;">(06:00) How Telefónica uses AI to improve customer service through AI chatbots.</span></p><p><span style="background-color: transparent;">(12:03) The ethical considerations and sustainability of AI models.</span></p><p><span style="background-color: transparent;">(16:08) Democratizing AI to make it accessible and beneficial for all users.</span></p><p><span style="background-color: transparent;">(18:09) Designing AI systems with privacy and security from the start.</span></p><p><span style="background-color: transparent;">(27:00) The challenges and opportunities AI presents for the workforce.</span></p><p><span style="background-color: transparent;">(30:25) The potential of 6G and its reliance on AI technologies.</span></p><p><span style="background-color: transparent;">(32:16) The integral role of AI in future technological advancements and network optimizations.</span></p><p><span style="background-color: transparent;">(39:35) The societal impacts of AI in telecommunications.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/nicolas-kourtellis-3a154511/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nicolas Kourtellis</a> - https://www.linkedin.com/in/nicolas-kourtellis-3a154511/</p><p><a href="https://www.linkedin.com/company/telefonica-innovacion-digital/" target="_blank" style="background-color: 
transparent; color: rgb(17, 85, 204);">Telefónica Innovación Digital</a> - https://www.linkedin.com/company/telefonica-innovacion-digital/</p><p><a href="https://www.linkedin.com/company/telefonica/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Telefonica Group</a> - https://www.linkedin.com/company/telefonica/</p><p>You can find all of Nicolas’ publications on his Google Scholar page: <a href="http://scholar.google.com/citations?user=Q5oWwiQAAAAJ" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">http://scholar.google.com/citations?user=Q5oWwiQAAAAJ</a><span style="background-color: transparent;">&nbsp;</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">In this episode, I welcome </span><a href="https://www.linkedin.com/in/nicolas-kourtellis-3a154511/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nicolas Kourtellis</a><span style="background-color: transparent;">, Co-Director of Telefónica Research and Head of Systems AI Lab at </span><a href="https://www.linkedin.com/company/telefonica-innovacion-digital/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Telefónica Innovación Digital</a><span style="background-color: transparent; color: rgb(68, 71, 70);">, a company of the </span><a href="https://www.linkedin.com/company/telefonica/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Telefonica Group</a><span style="background-color: transparent;">. Nicolas shares his expert insights on the pivotal role of AI in revolutionizing telecommunications, the challenges of AI regulation and the innovative strides Telefónica is making toward sustainable and ethical AI deployment.&nbsp;</span></p><p><br></p><p><span style="background-color: transparent;">Imagine a world where every device you own not only connects seamlessly but also intelligently adapts to your needs. 
This isn’t just a vision for the future; it’s the reality AI is creating today in telecommunications.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:00) AI research focuses and applications in telecommunications.</span></p><p><span style="background-color: transparent;">(03:24) AI’s role in optimizing network systems and enhancing user privacy is critical.</span></p><p><span style="background-color: transparent;">(06:00) How Telefónica uses AI to improve customer service through AI chatbots.</span></p><p><span style="background-color: transparent;">(12:03) The ethical considerations and sustainability of AI models.</span></p><p><span style="background-color: transparent;">(16:08) Democratizing AI to make it accessible and beneficial for all users.</span></p><p><span style="background-color: transparent;">(18:09) Designing AI systems with privacy and security from the start.</span></p><p><span style="background-color: transparent;">(27:00) The challenges and opportunities AI presents for the workforce.</span></p><p><span style="background-color: transparent;">(30:25) The potential of 6G and its reliance on AI technologies.</span></p><p><span style="background-color: transparent;">(32:16) The integral role of AI in future technological advancements and network optimizations.</span></p><p><span style="background-color: transparent;">(39:35) The societal impacts of AI in telecommunications.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/nicolas-kourtellis-3a154511/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nicolas Kourtellis</a> - https://www.linkedin.com/in/nicolas-kourtellis-3a154511/</p><p><a href="https://www.linkedin.com/company/telefonica-innovacion-digital/" target="_blank" style="background-color: 
transparent; color: rgb(17, 85, 204);">Telefónica Innovación Digital</a> - https://www.linkedin.com/company/telefonica-innovacion-digital/</p><p><a href="https://www.linkedin.com/company/telefonica/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Telefonica Group</a> - https://www.linkedin.com/company/telefonica/</p><p>You can find all of Nicolas’ publications on his Google Scholar page: <a href="http://scholar.google.com/citations?user=Q5oWwiQAAAAJ" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">http://scholar.google.com/citations?user=Q5oWwiQAAAAJ</a><span style="background-color: transparent;">&nbsp;</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode, I welcome Nicolas Kourtellis, Co-Director of Telefónica Research and Head of Systems AI Lab at Telefónica Innovación Digital, a company of the Telefonica Group. Nicolas shares his expert insights on the pivotal role of AI in revolu...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>34</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[36c1ebd5-cc6e-4bc1-9262-66d68d9c950b]]></guid>
  <title><![CDATA[Supporting Vulnerable Populations With AI-Driven Initiatives with Dr. Irina Mirkina of UNICEF]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode of the Regulating AI Podcast, I'm joined by </span><a href="https://www.linkedin.com/in/irinamirkina/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Irina Mirkina</a><span style="background-color: transparent;">, Innovation Manager and AI Lead at </span><a href="https://www.linkedin.com/company/unicef/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">UNICEF</a><span style="background-color: transparent;">'s Office of Innovation. An AI strategist, speaker, and expert for the European Commission, Dr. Mirkina brings a wealth of experience from academia, the private sector, and now, the humanitarian sector. Today’s discussion focuses on AI for social good.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(03:31) The role of international organizations like UNICEF in shaping global AI regulations.</span></p><p><span style="background-color: transparent;">(07:06) Challenges of democratizing AI across different regions to overcome the digital divide.</span></p><p><span style="background-color: transparent;">(10:28) The importance of developing AI systems that cater to local contexts.</span></p><p><span style="background-color: transparent;">(13:23) The transformative potential and limitations of AI in personalized education.</span></p><p><span style="background-color: transparent;">(16:37) Engaging vulnerable populations directly in AI policy discussions.</span></p><p><span style="background-color: transparent;">(20:47) UNICEF's use of AI in addressing humanitarian challenges.</span></p><p><span style="background-color: transparent;">(25:10) The role of civil society in AI regulation and policymaking.</span></p><p><span style="background-color: transparent;">(33:50) AI's risks and limitations, 
including issues of open-source management and societal impact.</span></p><p><span style="background-color: transparent;">(38:57) The critical need for international collaboration and standardization in AI regulations.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/irinamirkina/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Irina Mirkina</a> - https://www.linkedin.com/in/irinamirkina/</p><p><a href="https://www.unicef.org/innovation/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">UNICEF Office of Innovation</a> - https://www.unicef.org/innovation/</p><p><a href="https://www.unicef.org/globalinsight/media/2356/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Policy Guidance on AI for Children by UNICEF</a> - https://www.unicef.org/globalinsight/media/2356/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/6c21b951-34d2-47a8-820e-5f5db0086c6c/eca0c6c7c8.jpg" />
  <pubDate>Fri, 17 May 2024 06:48:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="43006829" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/6c21b951-34d2-47a8-820e-5f5db0086c6c/episode.mp3" />
  <itunes:title><![CDATA[Supporting Vulnerable Populations With AI-Driven Initiatives with Dr. Irina Mirkina of UNICEF]]></itunes:title>
  <itunes:duration>44:47</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode of the Regulating AI Podcast, I'm joined by </span><a href="https://www.linkedin.com/in/irinamirkina/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Irina Mirkina</a><span style="background-color: transparent;">, Innovation Manager and AI Lead at </span><a href="https://www.linkedin.com/company/unicef/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">UNICEF</a><span style="background-color: transparent;">'s Office of Innovation. An AI strategist, speaker, and expert for the European Commission, Dr. Mirkina brings a wealth of experience from academia, the private sector, and now, the humanitarian sector. Today’s discussion focuses on AI for social good.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(03:31) The role of international organizations like UNICEF in shaping global AI regulations.</span></p><p><span style="background-color: transparent;">(07:06) Challenges of democratizing AI across different regions to overcome the digital divide.</span></p><p><span style="background-color: transparent;">(10:28) The importance of developing AI systems that cater to local contexts.</span></p><p><span style="background-color: transparent;">(13:23) The transformative potential and limitations of AI in personalized education.</span></p><p><span style="background-color: transparent;">(16:37) Engaging vulnerable populations directly in AI policy discussions.</span></p><p><span style="background-color: transparent;">(20:47) UNICEF's use of AI in addressing humanitarian challenges.</span></p><p><span style="background-color: transparent;">(25:10) The role of civil society in AI regulation and policymaking.</span></p><p><span style="background-color: transparent;">(33:50) AI's risks and limitations, 
including issues of open-source management and societal impact.</span></p><p><span style="background-color: transparent;">(38:57) The critical need for international collaboration and standardization in AI regulations.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/irinamirkina/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Irina Mirkina</a> - https://www.linkedin.com/in/irinamirkina/</p><p><a href="https://www.unicef.org/innovation/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">UNICEF Office of Innovation</a> - https://www.unicef.org/innovation/</p><p><a href="https://www.unicef.org/globalinsight/media/2356/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Policy Guidance on AI for Children by UNICEF</a> - https://www.unicef.org/globalinsight/media/2356/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode of the Regulating AI Podcast, I'm joined by </span><a href="https://www.linkedin.com/in/irinamirkina/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Irina Mirkina</a><span style="background-color: transparent;">, Innovation Manager and AI Lead at </span><a href="https://www.linkedin.com/company/unicef/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">UNICEF</a><span style="background-color: transparent;">'s Office of Innovation. An AI strategist, speaker, and expert for the European Commission, Dr. Mirkina brings a wealth of experience from academia, the private sector, and now, the humanitarian sector. Today’s discussion focuses on AI for social good.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(03:31) The role of international organizations like UNICEF in shaping global AI regulations.</span></p><p><span style="background-color: transparent;">(07:06) Challenges of democratizing AI across different regions to overcome the digital divide.</span></p><p><span style="background-color: transparent;">(10:28) The importance of developing AI systems that cater to local contexts.</span></p><p><span style="background-color: transparent;">(13:23) The transformative potential and limitations of AI in personalized education.</span></p><p><span style="background-color: transparent;">(16:37) Engaging vulnerable populations directly in AI policy discussions.</span></p><p><span style="background-color: transparent;">(20:47) UNICEF's use of AI in addressing humanitarian challenges.</span></p><p><span style="background-color: transparent;">(25:10) The role of civil society in AI regulation and policymaking.</span></p><p><span style="background-color: transparent;">(33:50) AI's risks and limitations, 
including issues of open-source management and societal impact.</span></p><p><span style="background-color: transparent;">(38:57) The critical need for international collaboration and standardization in AI regulations.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/irinamirkina/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Irina Mirkina</a> - https://www.linkedin.com/in/irinamirkina/</p><p><a href="https://www.unicef.org/innovation/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">UNICEF Office of Innovation</a> - https://www.unicef.org/innovation/</p><p><a href="https://www.unicef.org/globalinsight/media/2356/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Policy Guidance on AI for Children by UNICEF</a> - https://www.unicef.org/globalinsight/media/2356/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode of the Regulating AI Podcast, I'm joined by Dr. Irina Mirkina, Innovation Manager and AI Lead at UNICEF's Office of Innovation. An AI strategist, speaker, and expert for the European Commission, Dr. Mirkina brings a wealth of experi...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>33</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[9db0b1c6-08fc-4e79-b478-b56c298d0306]]></guid>
  <title><![CDATA[Understanding the Role of Government and Big Tech in China’s AI Landscape with Angela Zhang]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by Professor </span><a href="http://www.angelazhang.net" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Angela Zhang</a><span style="background-color: transparent;">, Associate Professor of Law at the </span><a href="https://www.linkedin.com/school/universityofhongkong/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">University of Hong Kong</a><span style="background-color: transparent;"> and Director of the Philip K. H. Wong Center for Chinese Law. We delve into the complexities of AI regulation in China, exploring how the government’s strategies impact both the global market and internal policies.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:14) The introduction of China’s approach to AI regulation.</span></p><p><span style="background-color: transparent;">(06:40) Discussion on the volatile nature of Chinese regulatory processes.</span></p><p><span style="background-color: transparent;">(10:26) How China’s AI strategy impacts international relations and global standards.</span></p><p><span style="background-color: transparent;">(13:32) Angela explains the strategic use of law as an enabler in China’s AI development.</span></p><p><span style="background-color: transparent;">(18:53) High-level talks between the US and China on AI risk have not led to substantive actions.</span></p><p><span style="background-color: transparent;">(22:04) The US’s short-term gains from AI chip restrictions on China may lead to long-term disadvantages as China becomes self-sufficient and less cooperative.</span></p><p><span style="background-color: transparent;">(24:13) Unintended consequences of the Chinese regulatory system.</span></p><p><span style="background-color: transparent;">(29:19) Angela advocates 
for a slower development of AI technology to better assess and manage risks before they become unmanageable.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><span style="background-color: transparent;">Professor </span><a href="http://www.angelazhang.net" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Angela Zhang</a> - http://www.angelazhang.net</p><p><a href="https://global.oup.com/academic/product/high-wire-9780197682258" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">High Wire by Angela Zhang</a> - https://global.oup.com/academic/product/high-wire-9780197682258</p><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Article: The Promise and Perils of China’s Regulation</a> - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676</p><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Research: Generative AI and Copyright: A Dynamic Perspective</a> - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233</p><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Research: The Promise and Perils of China's Regulation of Artificial Intelligence</a> - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676</p><p><a href="https://www.angelazhang.net/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Angela Zhang’s Website</a> - https://www.angelazhang.net/</p><p><a href="https://www.youtube.com/watch?v=u6OPSit6k6s" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">High Wire Book Trailer</a> - https://www.youtube.com/watch?v=u6OPSit6k6s</p><p><a 
href="https://www.amazon.com/High-Wire-Regulates-Governs-Economy/dp/0197682251/ref=sr_1_1?crid=2A7D070KIAGT&amp;keywords=high+wire+angela+zhang&amp;qid=1706441967&amp;sprefix=high+wire+angela+zha,aps,333&amp;sr=8-1" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Purchase High Wire by Angela Zhang</a> - https://www.amazon.com/High-Wire-Regulates-Governs-Economy/dp/0197682251/ref=sr_1_1?crid=2A7D070KIAGT&amp;keywords=high+wire+angela+zhang&amp;qid=1706441967&amp;sprefix=high+wire+angela+zha,aps,333&amp;sr=8-1</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/640138b0-c385-403c-af08-3726bd6486eb/9fc40ee37b.jpg" />
  <pubDate>Fri, 03 May 2024 11:31:20 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="35387811" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/640138b0-c385-403c-af08-3726bd6486eb/episode.mp3" />
  <itunes:title><![CDATA[Understanding the Role of Government and Big Tech in China’s AI Landscape with Angela Zhang]]></itunes:title>
  <itunes:duration>36:51</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by Professor </span><a href="http://www.angelazhang.net" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Angela Zhang</a><span style="background-color: transparent;">, Associate Professor of Law at the </span><a href="https://www.linkedin.com/school/universityofhongkong/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">University of Hong Kong</a><span style="background-color: transparent;"> and Director of the Philip K. H. Wong Center for Chinese Law. We delve into the complexities of AI regulation in China, exploring how the government’s strategies impact both the global market and internal policies.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:14) The introduction of China’s approach to AI regulation.</span></p><p><span style="background-color: transparent;">(06:40) Discussion on the volatile nature of Chinese regulatory processes.</span></p><p><span style="background-color: transparent;">(10:26) How China’s AI strategy impacts international relations and global standards.</span></p><p><span style="background-color: transparent;">(13:32) Angela explains the strategic use of law as an enabler in China’s AI development.</span></p><p><span style="background-color: transparent;">(18:53) High-level talks between the US and China on AI risk have not led to substantive actions.</span></p><p><span style="background-color: transparent;">(22:04) The US’s short-term gains from AI chip restrictions on China may lead to long-term disadvantages as China becomes self-sufficient and less cooperative.</span></p><p><span style="background-color: transparent;">(24:13) Unintended consequences of the Chinese regulatory system.</span></p><p><span style="background-color: transparent;">(29:19) Angela 
advocates for a slower development of AI technology to better assess and manage risks before they become unmanageable.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><span style="background-color: transparent;">Professor </span><a href="http://www.angelazhang.net" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Angela Zhang</a> - http://www.angelazhang.net</p><p><a href="https://global.oup.com/academic/product/high-wire-9780197682258" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">High Wire by Angela Zhang</a> - https://global.oup.com/academic/product/high-wire-9780197682258</p><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Article: The Promise and Perils of China’s Regulation</a> - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676</p><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Research: Generative AI and Copyright: A Dynamic Perspective</a> - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233</p><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Research: The Promise and Perils of China's Regulation of Artificial Intelligence</a> - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676</p><p><a href="https://www.angelazhang.net/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Angela Zhang’s Website</a> - https://www.angelazhang.net/</p><p><a href="https://www.youtube.com/watch?v=u6OPSit6k6s" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">High Wire Book Trailer</a> - https://www.youtube.com/watch?v=u6OPSit6k6s</p><p><a 
href="https://www.amazon.com/High-Wire-Regulates-Governs-Economy/dp/0197682251/ref=sr_1_1?crid=2A7D070KIAGT&amp;keywords=high+wire+angela+zhang&amp;qid=1706441967&amp;sprefix=high+wire+angela+zha,aps,333&amp;sr=8-1" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Purchase High Wire by Angela Zhang</a> - https://www.amazon.com/High-Wire-Regulates-Governs-Economy/dp/0197682251/ref=sr_1_1?crid=2A7D070KIAGT&amp;keywords=high+wire+angela+zhang&amp;qid=1706441967&amp;sprefix=high+wire+angela+zha,aps,333&amp;sr=8-1</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by Professor </span><a href="http://www.angelazhang.net" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Angela Zhang</a><span style="background-color: transparent;">, Associate Professor of Law at the </span><a href="https://www.linkedin.com/school/universityofhongkong/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">University of Hong Kong</a><span style="background-color: transparent;"> and Director of the Philip K. H. Wong Center for Chinese Law. We delve into the complexities of AI regulation in China, exploring how the government’s strategies impact both the global market and internal policies.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:14) The introduction of China’s approach to AI regulation.</span></p><p><span style="background-color: transparent;">(06:40) Discussion on the volatile nature of Chinese regulatory processes.</span></p><p><span style="background-color: transparent;">(10:26) How China’s AI strategy impacts international relations and global standards.</span></p><p><span style="background-color: transparent;">(13:32) Angela explains the strategic use of law as an enabler in China’s AI development.</span></p><p><span style="background-color: transparent;">(18:53) High-level talks between the US and China on AI risk have not led to substantive actions.</span></p><p><span style="background-color: transparent;">(22:04) The US’s short-term gains from AI chip restrictions on China may lead to long-term disadvantages as China becomes self-sufficient and less cooperative.</span></p><p><span style="background-color: transparent;">(24:13) Unintended consequences of the Chinese regulatory system.</span></p><p><span style="background-color: transparent;">(29:19) Angela 
advocates for a slower development of AI technology to better assess and manage risks before they become unmanageable.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><span style="background-color: transparent;">Professor </span><a href="http://www.angelazhang.net" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Angela Zhang</a> - http://www.angelazhang.net</p><p><a href="https://global.oup.com/academic/product/high-wire-9780197682258" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">High Wire by Angela Zhang</a> - https://global.oup.com/academic/product/high-wire-9780197682258</p><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Article: The Promise and Perils of China’s Regulation</a> - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676</p><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Research: Generative AI and Copyright: A Dynamic Perspective</a> - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233</p><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Research: The Promise and Perils of China's Regulation of Artificial Intelligence</a> - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676</p><p><a href="https://www.angelazhang.net/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Angela Zhang’s Website</a> - https://www.angelazhang.net/</p><p><a href="https://www.youtube.com/watch?v=u6OPSit6k6s" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">High Wire Book Trailer</a> - https://www.youtube.com/watch?v=u6OPSit6k6s</p><p><a 
href="https://www.amazon.com/High-Wire-Regulates-Governs-Economy/dp/0197682251/ref=sr_1_1?crid=2A7D070KIAGT&amp;keywords=high+wire+angela+zhang&amp;qid=1706441967&amp;sprefix=high+wire+angela+zha,aps,333&amp;sr=8-1" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Purchase High Wire by Angela Zhang</a> - https://www.amazon.com/High-Wire-Regulates-Governs-Economy/dp/0197682251/ref=sr_1_1?crid=2A7D070KIAGT&amp;keywords=high+wire+angela+zhang&amp;qid=1706441967&amp;sprefix=high+wire+angela+zha,aps,333&amp;sr=8-1</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I’m joined by Professor Angela Zhang, Associate Professor of Law at the University of Hong Kong and Director of the Philip K. H. Wong Center for Chinese Law. We delve into the complexities of AI regulation in China, exploring how t...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>32</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[897f038e-e602-4035-9a81-11daf74a87e6]]></guid>
  <title><![CDATA[Advocating for Stronger AI Regulations To Safeguard Civil Liberties with Congressman Joseph Morelle]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I am thrilled to sit down with </span><a href="https://www.linkedin.com/in/joe-morelle-8246099/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Congressman Joseph Morelle</a><span style="background-color: transparent;">, who represents New York's 25th Congressional District and serves on the House Appropriations Committee. As an influential voice in the dialogue on artificial intelligence, Congressman Morelle shares his deep insights into AI's potential and challenges, particularly concerning legislation and societal impacts.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:13) Congressman Morelle's extensive experience in AI legislation and its implications.</span></p><p><span style="background-color: transparent;">(04:27) Deep fakes and their growing threat to privacy and integrity.</span></p><p><span style="background-color: transparent;">(07:13) Introducing federal legislation against non-consensual deep fakes.</span></p><p><span style="background-color: transparent;">(14:00) Urgent need for social media platforms to enforce their guidelines rigorously.</span></p><p><span style="background-color: transparent;">(19:46) The No AI Fraud Act and protecting individual likeness in AI use.</span></p><p><span style="background-color: transparent;">(23:06) The importance of adaptable and 'living' statutes in technology regulation.</span></p><p><span style="background-color: transparent;">(32:59) The critical role of continuous education and skill adaptation in the AI era.</span></p><p><span style="background-color: transparent;">(37:47) Exploring the use of AI in Congress to ensure unbiased, culturally appropriate policymaking and data privacy.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources 
Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/joe-morelle-8246099/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Congressman Joseph Morelle</a> - https://www.linkedin.com/in/joe-morelle-8246099/</p><p><a href="https://www.congress.gov/bill/118th-congress/house-bill/6943/text?s=1&amp;r=9" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">No AI Fraud Act</a> - https://www.congress.gov/bill/118th-congress/house-bill/6943/text?s=1&amp;r=9</p><p><a href="https://www.congress.gov/bill/118th-congress/house-bill/3106" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Preventing Deep Fakes of Intimate Images Act</a> - https://www.congress.gov/bill/118th-congress/house-bill/3106</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/e1e83438-f040-46af-8e89-8b37edc8c28e/c8c818707a.jpg" />
  <pubDate>Tue, 30 Apr 2024 11:43:28 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="38694741" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/e1e83438-f040-46af-8e89-8b37edc8c28e/episode.mp3" />
  <itunes:title><![CDATA[Advocating for Stronger AI Regulations To Safeguard Civil Liberties with Congressman Joseph Morelle]]></itunes:title>
  <itunes:duration>40:18</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I am thrilled to sit down with </span><a href="https://www.linkedin.com/in/joe-morelle-8246099/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Congressman Joseph Morelle</a><span style="background-color: transparent;">, who represents New York's 25th Congressional District and serves on the House Appropriations Committee. As an influential voice in the dialogue on artificial intelligence, Congressman Morelle shares his deep insights into AI's potential and challenges, particularly concerning legislation and societal impacts.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:13) Congressman Morelle's extensive experience in AI legislation and its implications.</span></p><p><span style="background-color: transparent;">(04:27) Deep fakes and their growing threat to privacy and integrity.</span></p><p><span style="background-color: transparent;">(07:13) Introducing federal legislation against non-consensual deep fakes.</span></p><p><span style="background-color: transparent;">(14:00) Urgent need for social media platforms to enforce their guidelines rigorously.</span></p><p><span style="background-color: transparent;">(19:46) The No AI Fraud Act and protecting individual likeness in AI use.</span></p><p><span style="background-color: transparent;">(23:06) The importance of adaptable and 'living' statutes in technology regulation.</span></p><p><span style="background-color: transparent;">(32:59) The critical role of continuous education and skill adaptation in the AI era.</span></p><p><span style="background-color: transparent;">(37:47) Exploring the use of AI in Congress to ensure unbiased, culturally appropriate policymaking and data privacy.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources 
Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/joe-morelle-8246099/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Congressman Joseph Morelle</a> - https://www.linkedin.com/in/joe-morelle-8246099/</p><p><a href="https://www.congress.gov/bill/118th-congress/house-bill/6943/text?s=1&amp;r=9" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">No AI Fraud Act</a> - https://www.congress.gov/bill/118th-congress/house-bill/6943/text?s=1&amp;r=9</p><p><a href="https://www.congress.gov/bill/118th-congress/house-bill/3106" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Preventing Deep Fakes of Intimate Images Act</a> - https://www.congress.gov/bill/118th-congress/house-bill/3106</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I am thrilled to sit down with </span><a href="https://www.linkedin.com/in/joe-morelle-8246099/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Congressman Joseph Morelle</a><span style="background-color: transparent;">, who represents New York's 25th Congressional District and serves on the House Appropriations Committee. As an influential voice in the dialogue on artificial intelligence, Congressman Morelle shares his deep insights into AI's potential and challenges, particularly concerning legislation and societal impacts.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:13) Congressman Morelle's extensive experience in AI legislation and its implications.</span></p><p><span style="background-color: transparent;">(04:27) Deep fakes and their growing threat to privacy and integrity.</span></p><p><span style="background-color: transparent;">(07:13) Introducing federal legislation against non-consensual deep fakes.</span></p><p><span style="background-color: transparent;">(14:00) Urgent need for social media platforms to enforce their guidelines rigorously.</span></p><p><span style="background-color: transparent;">(19:46) The No AI Fraud Act and protecting individual likeness in AI use.</span></p><p><span style="background-color: transparent;">(23:06) The importance of adaptable and 'living' statutes in technology regulation.</span></p><p><span style="background-color: transparent;">(32:59) The critical role of continuous education and skill adaptation in the AI era.</span></p><p><span style="background-color: transparent;">(37:47) Exploring the use of AI in Congress to ensure unbiased, culturally appropriate policymaking and data privacy.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources 
Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/joe-morelle-8246099/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Congressman Joseph Morelle</a> - https://www.linkedin.com/in/joe-morelle-8246099/</p><p><a href="https://www.congress.gov/bill/118th-congress/house-bill/6943/text?s=1&amp;r=9" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">No AI Fraud Act</a> - https://www.congress.gov/bill/118th-congress/house-bill/6943/text?s=1&amp;r=9</p><p><a href="https://www.congress.gov/bill/118th-congress/house-bill/3106" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Preventing Deep Fakes of Intimate Images Act</a> - https://www.congress.gov/bill/118th-congress/house-bill/3106</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I am thrilled to sit down with Congressman Joseph Morelle, who represents New York's 25th Congressional District and serves on the House Appropriations Committee. As an influential voice in the dialogue on artificial intelligence, ...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>31</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[359e2a61-9816-47a7-a541-6342099ccad5]]></guid>
  <title><![CDATA[Empowering Innovators for a Brighter AI Tomorrow with Dr. Sethuraman Panchanathan]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I welcome </span><a href="https://www.linkedin.com/in/drpanch/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Sethuraman Panchanathan</a><span style="background-color: transparent;">, Director of the </span><a href="https://www.linkedin.com/company/national-science-foundation/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. National Science Foundation</a><span style="background-color: transparent;"> and a professor at </span><a href="https://www.linkedin.com/school/arizona-state-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Arizona State University</a><span style="background-color: transparent;">. Sethuraman shares personal insights on the transformative power of artificial intelligence and the importance of democratizing this technology to be sure it benefits humanity as a whole.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:21) AI’s pivotal role in enhancing speech-language services.</span></p><p><span style="background-color: transparent;">(01:28) Introduction to Sethuraman’s visionary leadership at NSF.</span></p><p><span style="background-color: transparent;">(02:36) NSF’s significant AI investment totaled over $820 million.</span></p><p><span style="background-color: transparent;">(06:19) The shift toward interdisciplinary AI research at NSF.</span></p><p><span style="background-color: transparent;">(10:26) NSF’s initiative of launching 25 AI institutes for innovation.</span></p><p><span style="background-color: transparent;">(18:26) Emphasis on AI democratization through education and training.</span></p><p><span style="background-color: transparent;">(25:11) The NSF ExpandAI program boosts AI in minority-serving 
institutions.</span></p><p><span style="background-color: transparent;">(30:21) Focus on ethical AI development to build public trust.</span></p><p><span style="background-color: transparent;">(40:10) AI’s transformative applications in healthcare, agriculture and more.</span></p><p><span style="background-color: transparent;">(42:45) The importance of ethical guardrails in AI’s development.</span></p><p><span style="background-color: transparent;">(43:08) Advancing AI through international collaborations.</span></p><p><span style="background-color: transparent;">(44:53) Lessons from a career in AI and advice for the next generation.</span></p><p><span style="background-color: transparent;">(50:19) Motivating young researchers and entrepreneurs in AI.</span></p><p><span style="background-color: transparent;">(52:24) Advocating for AI innovation and accessibility for everyone.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/drpanch/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Sethuraman Panchanathan</a> -</p><p>https://www.linkedin.com/in/drpanch/</p><p><a href="https://www.linkedin.com/company/national-science-foundation/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. National Science Foundation</a> | LinkedIn -</p><p>https://www.linkedin.com/company/national-science-foundation/</p><p><a href="https://www.nsf.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. 
National Science Foundation</a> | Website -</p><p>https://www.nsf.gov/</p><p><a href="https://www.linkedin.com/school/arizona-state-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Arizona State University</a> -</p><p>https://www.linkedin.com/school/arizona-state-university/</p><p><a href="https://new.nsf.gov/funding/opportunities/expanding-ai-innovation-through-capacity-building" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">ExpandAI Program</a> -</p><p>https://new.nsf.gov/funding/opportunities/expanding-ai-innovation-through-capacity-building</p><p><a href="https://www.nsf.gov/staff/staff_bio.jsp?lan=spanchan" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Sethuraman Panchanathan’s NSF Profile</a> -</p><p>https://www.nsf.gov/staff/staff_bio.jsp?lan=spanchan</p><p><a href="https://new.nsf.gov/funding/initiatives/regional-innovation-engines" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">NSF Regional Innovation Engines</a> -</p><p>https://new.nsf.gov/funding/initiatives/regional-innovation-engines</p><p><a href="https://new.nsf.gov/focus-areas/artificial-intelligence/nairr" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National AI Research Resource (NAIRR)</a> -</p><p>https://new.nsf.gov/focus-areas/artificial-intelligence/nairr</p><p><a href="https://new.nsf.gov/focus-areas/artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">NSF Focus on Artificial Intelligence</a> -</p><p>https://new.nsf.gov/focus-areas/artificial-intelligence</p><p><a href="https://new.nsf.gov/funding/opportunities/national-artificial-intelligence-research" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">NSF AI Research Funding</a> -</p><p>https://new.nsf.gov/funding/opportunities/national-artificial-intelligence-research</p><p><a 
href="https://new.nsf.gov/funding/initiatives/broadening-participation/granted" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">GRANTED Initiative for Broadening Participation in STEM</a> -</p><p>https://new.nsf.gov/funding/initiatives/broadening-participation/granted</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/dc03798e-e1ec-44e7-a43d-e822cbc5cbe6/5b57a660d9.jpg" />
  <pubDate>Wed, 24 Apr 2024 10:00:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="52540057" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/dc03798e-e1ec-44e7-a43d-e822cbc5cbe6/episode.mp3" />
  <itunes:title><![CDATA[Empowering Innovators for a Brighter AI Tomorrow with Dr. Sethuraman Panchanathan]]></itunes:title>
  <itunes:duration>54:43</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I welcome </span><a href="https://www.linkedin.com/in/drpanch/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Sethuraman Panchanathan</a><span style="background-color: transparent;">, Director of the </span><a href="https://www.linkedin.com/company/national-science-foundation/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. National Science Foundation</a><span style="background-color: transparent;"> and a professor at </span><a href="https://www.linkedin.com/school/arizona-state-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Arizona State University</a><span style="background-color: transparent;">. Sethuraman shares personal insights on the transformative power of artificial intelligence and the importance of democratizing this technology to be sure it benefits humanity as a whole.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:21) AI’s pivotal role in enhancing speech-language services.</span></p><p><span style="background-color: transparent;">(01:28) Introduction to Sethuraman’s visionary leadership at NSF.</span></p><p><span style="background-color: transparent;">(02:36) NSF’s significant AI investment totaled over $820 million.</span></p><p><span style="background-color: transparent;">(06:19) The shift toward interdisciplinary AI research at NSF.</span></p><p><span style="background-color: transparent;">(10:26) NSF’s initiative of launching 25 AI institutes for innovation.</span></p><p><span style="background-color: transparent;">(18:26) Emphasis on AI democratization through education and training.</span></p><p><span style="background-color: transparent;">(25:11) The NSF ExpandAI program boosts AI in minority-serving 
institutions.</span></p><p><span style="background-color: transparent;">(30:21) Focus on ethical AI development to build public trust.</span></p><p><span style="background-color: transparent;">(40:10) AI’s transformative applications in healthcare, agriculture and more.</span></p><p><span style="background-color: transparent;">(42:45) The importance of ethical guardrails in AI’s development.</span></p><p><span style="background-color: transparent;">(43:08) Advancing AI through international collaborations.</span></p><p><span style="background-color: transparent;">(44:53) Lessons from a career in AI and advice for the next generation.</span></p><p><span style="background-color: transparent;">(50:19) Motivating young researchers and entrepreneurs in AI.</span></p><p><span style="background-color: transparent;">(52:24) Advocating for AI innovation and accessibility for everyone.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/drpanch/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Sethuraman Panchanathan</a> -</p><p>https://www.linkedin.com/in/drpanch/</p><p><a href="https://www.linkedin.com/company/national-science-foundation/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. National Science Foundation</a> | LinkedIn -</p><p>https://www.linkedin.com/company/national-science-foundation/</p><p><a href="https://www.nsf.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. 
National Science Foundation</a> | Website -</p><p>https://www.nsf.gov/</p><p><a href="https://www.linkedin.com/school/arizona-state-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Arizona State University</a> -</p><p>https://www.linkedin.com/school/arizona-state-university/</p><p><a href="https://new.nsf.gov/funding/opportunities/expanding-ai-innovation-through-capacity-building" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">ExpandAI Program</a> -</p><p>https://new.nsf.gov/funding/opportunities/expanding-ai-innovation-through-capacity-building</p><p><a href="https://www.nsf.gov/staff/staff_bio.jsp?lan=spanchan" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Sethuraman Panchanathan’s NSF Profile</a> -</p><p>https://www.nsf.gov/staff/staff_bio.jsp?lan=spanchan</p><p><a href="https://new.nsf.gov/funding/initiatives/regional-innovation-engines" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">NSF Regional Innovation Engines</a> -</p><p>https://new.nsf.gov/funding/initiatives/regional-innovation-engines</p><p><a href="https://new.nsf.gov/focus-areas/artificial-intelligence/nairr" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National AI Research Resource (NAIRR)</a> -</p><p>https://new.nsf.gov/focus-areas/artificial-intelligence/nairr</p><p><a href="https://new.nsf.gov/focus-areas/artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">NSF Focus on Artificial Intelligence</a> -</p><p>https://new.nsf.gov/focus-areas/artificial-intelligence</p><p><a href="https://new.nsf.gov/funding/opportunities/national-artificial-intelligence-research" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">NSF AI Research Funding</a> -</p><p>https://new.nsf.gov/funding/opportunities/national-artificial-intelligence-research</p><p><a 
href="https://new.nsf.gov/funding/initiatives/broadening-participation/granted" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">GRANTED Initiative for Broadening Participation in STEM</a> -</p><p>https://new.nsf.gov/funding/initiatives/broadening-participation/granted</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I welcome </span><a href="https://www.linkedin.com/in/drpanch/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Sethuraman Panchanathan</a><span style="background-color: transparent;">, Director of the </span><a href="https://www.linkedin.com/company/national-science-foundation/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. National Science Foundation</a><span style="background-color: transparent;"> and a professor at </span><a href="https://www.linkedin.com/school/arizona-state-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Arizona State University</a><span style="background-color: transparent;">. Sethuraman shares personal insights on the transformative power of artificial intelligence and the importance of democratizing this technology to be sure it benefits humanity as a whole.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:21) AI’s pivotal role in enhancing speech-language services.</span></p><p><span style="background-color: transparent;">(01:28) Introduction to Sethuraman’s visionary leadership at NSF.</span></p><p><span style="background-color: transparent;">(02:36) NSF’s significant AI investment totaled over $820 million.</span></p><p><span style="background-color: transparent;">(06:19) The shift toward interdisciplinary AI research at NSF.</span></p><p><span style="background-color: transparent;">(10:26) NSF’s initiative of launching 25 AI institutes for innovation.</span></p><p><span style="background-color: transparent;">(18:26) Emphasis on AI democratization through education and training.</span></p><p><span style="background-color: transparent;">(25:11) The NSF ExpandAI program boosts AI in minority-serving 
institutions.</span></p><p><span style="background-color: transparent;">(30:21) Focus on ethical AI development to build public trust.</span></p><p><span style="background-color: transparent;">(40:10) AI’s transformative applications in healthcare, agriculture and more.</span></p><p><span style="background-color: transparent;">(42:45) The importance of ethical guardrails in AI’s development.</span></p><p><span style="background-color: transparent;">(43:08) Advancing AI through international collaborations.</span></p><p><span style="background-color: transparent;">(44:53) Lessons from a career in AI and advice for the next generation.</span></p><p><span style="background-color: transparent;">(50:19) Motivating young researchers and entrepreneurs in AI.</span></p><p><span style="background-color: transparent;">(52:24) Advocating for AI innovation and accessibility for everyone.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/drpanch/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Sethuraman Panchanathan</a> -</p><p>https://www.linkedin.com/in/drpanch/</p><p><a href="https://www.linkedin.com/company/national-science-foundation/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. National Science Foundation</a> | LinkedIn -</p><p>https://www.linkedin.com/company/national-science-foundation/</p><p><a href="https://www.nsf.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. 
National Science Foundation</a> | Website -</p><p>https://www.nsf.gov/</p><p><a href="https://www.linkedin.com/school/arizona-state-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Arizona State University</a> -</p><p>https://www.linkedin.com/school/arizona-state-university/</p><p><a href="https://new.nsf.gov/funding/opportunities/expanding-ai-innovation-through-capacity-building" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">ExpandAI Program</a> -</p><p>https://new.nsf.gov/funding/opportunities/expanding-ai-innovation-through-capacity-building</p><p><a href="https://www.nsf.gov/staff/staff_bio.jsp?lan=spanchan" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Sethuraman Panchanathan’s NSF Profile</a> -</p><p>https://www.nsf.gov/staff/staff_bio.jsp?lan=spanchan</p><p><a href="https://new.nsf.gov/funding/initiatives/regional-innovation-engines" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">NSF Regional Innovation Engines</a> -</p><p>https://new.nsf.gov/funding/initiatives/regional-innovation-engines</p><p><a href="https://new.nsf.gov/focus-areas/artificial-intelligence/nairr" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National AI Research Resource (NAIRR)</a> -</p><p>https://new.nsf.gov/focus-areas/artificial-intelligence/nairr</p><p><a href="https://new.nsf.gov/focus-areas/artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">NSF Focus on Artificial Intelligence</a> -</p><p>https://new.nsf.gov/focus-areas/artificial-intelligence</p><p><a href="https://new.nsf.gov/funding/opportunities/national-artificial-intelligence-research" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">NSF AI Research Funding</a> -</p><p>https://new.nsf.gov/funding/opportunities/national-artificial-intelligence-research</p><p><a 
href="https://new.nsf.gov/funding/initiatives/broadening-participation/granted" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">GRANTED Initiative for Broadening Participation in STEM</a> -</p><p>https://new.nsf.gov/funding/initiatives/broadening-participation/granted</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I welcome Dr. Sethuraman Panchanathan, Director of the U.S. National Science Foundation and a professor at Arizona State University. Sethuraman shares personal insights on the transformative power of artificial intelligence and the...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>30</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[2b7427ce-64bf-4509-bc98-ff36399d4442]]></guid>
  <title><![CDATA[Evaluating the Effectiveness of AI Legislation in Cybersecurity with Bruce Schneier]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">The rapid evolution of artificial intelligence in cybersecurity presents both significant opportunities and daunting challenges. On this episode, I'm joined by Bruce Schneier, who is renowned globally for his expertise in cybersecurity and is dubbed a “security guru” by the Economist. Bruce, a best-selling author and lecturer at Harvard Kennedy School, discusses the fast-paced world of AI and cybersecurity, exploring how these technologies intersect with national security and what that means for future regulations.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:00) I discuss with Bruce the challenges of regulating AI in the US.</span></p><p><span style="background-color: transparent;">(02:28) Bruce explains the role and future potential of AI in cybersecurity.</span></p><p><span style="background-color: transparent;">(05:05) The benefits of AI in defense, enhancing capabilities at computer speeds.</span></p><p><span style="background-color: transparent;">(07:22) The need for robust regulations akin to those in the EU.</span></p><p><span style="background-color: transparent;">(12:56) Bruce draws analogies between AI regulation and pharmaceutical controls.</span></p><p><span style="background-color: transparent;">(19:56) The critical role of knowledgeable staff in supporting legislators.</span></p><p><span style="background-color: transparent;">(22:24) The challenges of effectively regulating AI.</span></p><p><span style="background-color: transparent;">(26:15) The potential of AI to transform enforcement across various sectors.</span></p><p><span style="background-color: transparent;">(30:58) Reflections on the future of AI governance and ethical considerations.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources 
Mentioned:</strong></p><p><br></p><p><br></p><p><a href="https://www.schneier.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bruce Schneier Website</a> - https://www.schneier.com/</p><p><a href="https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Strategy</a> - https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/b2b80b6a-9c11-48ce-be8b-5ee2d634bdfd/832eeb5e54.jpg" />
  <pubDate>Tue, 23 Apr 2024 10:34:05 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="31772464" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/b2b80b6a-9c11-48ce-be8b-5ee2d634bdfd/episode.mp3" />
  <itunes:title><![CDATA[Evaluating the Effectiveness of AI Legislation in Cybersecurity with Bruce Schneier]]></itunes:title>
  <itunes:duration>33:05</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">The rapid evolution of artificial intelligence in cybersecurity presents both significant opportunities and daunting challenges. On this episode, I'm joined by Bruce Schneier, who is renowned globally for his expertise in cybersecurity and is dubbed a “security guru” by the Economist. Bruce, a best-selling author and lecturer at Harvard Kennedy School, discusses the fast-paced world of AI and cybersecurity, exploring how these technologies intersect with national security and what that means for future regulations.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:00) I discuss with Bruce the challenges of regulating AI in the US.</span></p><p><span style="background-color: transparent;">(02:28) Bruce explains the role and future potential of AI in cybersecurity.</span></p><p><span style="background-color: transparent;">(05:05) The benefits of AI in defense, enhancing capabilities at computer speeds.</span></p><p><span style="background-color: transparent;">(07:22) The need for robust regulations akin to those in the EU.</span></p><p><span style="background-color: transparent;">(12:56) Bruce draws analogies between AI regulation and pharmaceutical controls.</span></p><p><span style="background-color: transparent;">(19:56) The critical role of knowledgeable staff in supporting legislators.</span></p><p><span style="background-color: transparent;">(22:24) The challenges of effectively regulating AI.</span></p><p><span style="background-color: transparent;">(26:15) The potential of AI to transform enforcement across various sectors.</span></p><p><span style="background-color: transparent;">(30:58) Reflections on the future of AI governance and ethical considerations.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources 
Mentioned:</strong></p><p><br></p><p><br></p><p><a href="https://www.schneier.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bruce Schneier Website</a> - https://www.schneier.com/</p><p><a href="https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Strategy</a> - https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">The rapid evolution of artificial intelligence in cybersecurity presents both significant opportunities and daunting challenges. On this episode, I'm joined by Bruce Schneier, who is renowned globally for his expertise in cybersecurity and is dubbed a “security guru” by the Economist. Bruce, a best-selling author and lecturer at Harvard Kennedy School, discusses the fast-paced world of AI and cybersecurity, exploring how these technologies intersect with national security and what that means for future regulations.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:00) I discuss with Bruce the challenges of regulating AI in the US.</span></p><p><span style="background-color: transparent;">(02:28) Bruce explains the role and future potential of AI in cybersecurity.</span></p><p><span style="background-color: transparent;">(05:05) The benefits of AI in defense, enhancing capabilities at computer speeds.</span></p><p><span style="background-color: transparent;">(07:22) The need for robust regulations akin to those in the EU.</span></p><p><span style="background-color: transparent;">(12:56) Bruce draws analogies between AI regulation and pharmaceutical controls.</span></p><p><span style="background-color: transparent;">(19:56) The critical role of knowledgeable staff in supporting legislators.</span></p><p><span style="background-color: transparent;">(22:24) The challenges of effectively regulating AI.</span></p><p><span style="background-color: transparent;">(26:15) The potential of AI to transform enforcement across various sectors.</span></p><p><span style="background-color: transparent;">(30:58) Reflections on the future of AI governance and ethical considerations.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources 
Mentioned:</strong></p><p><br></p><p><br></p><p><a href="https://www.schneier.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bruce Schneier Website</a> - https://www.schneier.com/</p><p><a href="https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Strategy</a> - https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[The rapid evolution of artificial intelligence in cybersecurity presents both significant opportunities and daunting challenges. On this episode, I'm joined by Bruce Schneier, who is renowned globally for his expertise in cybersecurity and is dubbe...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>29</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[4d2bce8e-d14f-4e8a-9c5f-c366a4b226e9]]></guid>
  <title><![CDATA[AI's Potential in Public Services with Trooper Sanders]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/troopersanders/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Trooper Sanders</a><span style="background-color: transparent;">, CEO of </span><a href="https://www.linkedin.com/company/benefits-data-trust/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Benefits Data Trust</a><span style="background-color: transparent;"> and a member of the White House National Artificial Intelligence Advisory Committee. Trooper’s expertise in leveraging AI to enhance the efficiency and humanity of America’s social safety net offers unique insights into the potential and challenges of AI in public services.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:27) The role of Benefits Data Trust in connecting people to essential benefits using AI.</span></p><p><span style="background-color: transparent;">(04:54) The components of trustworthy AI: reliability, public interest alignment, security, transparency, explainability, privacy and harm mitigation.</span></p><p><span style="background-color: transparent;">(09:38) The ‘tortoise and hare’ challenge in aligning AI advancements with legislative processes.</span></p><p><span style="background-color: transparent;">(16:17) The significance of voluntary industry commitments in shaping AI’s ethical use.</span></p><p><span style="background-color: transparent;">(20:32) Ethical considerations in deploying AI, focusing on its societal impact and the readiness of systems for AI integration.</span></p><p><span style="background-color: transparent;">(22:53) Addressing biases in AI to ensure fairness and equitable benefits across all socioeconomic groups.</span></p><p><span style="background-color: transparent;">(27:52) 
Amplifying diverse voices in the AI discussion to encompass a wide range of societal perspectives.</span></p><p><span style="background-color: transparent;">(34:22) The potential workforce disruption by AI and the necessity of supportive measures for affected individuals.</span></p><p><span style="background-color: transparent;">(37:26) Considering the potentially massive impact of AI-driven career changes across various professions.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/troopersanders/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Trooper Sanders</a> -</p><p>https://www.linkedin.com/in/troopersanders/</p><p><a href="https://www.linkedin.com/company/benefits-data-trust/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Benefits Data Trust</a> | LinkedIn -</p><p>https://www.linkedin.com/company/benefits-data-trust/</p><p><a href="https://bdtrust.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Benefits Data Trust</a> | Website -</p><p>https://bdtrust.org/</p><p><a href="https://www.whitehouse.gov/ostp/ostps-teams/nstc/select-committee-on-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">White House National Artificial Intelligence Advisory Committee</a> -</p><p>https://www.whitehouse.gov/ostp/ostps-teams/nstc/select-committee-on-artificial-intelligence/</p><p><a href="https://bdtrust.org/bdt-launches-ai-learning-lab/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">BDT Launches AI and Human Services Learning Hub</a> -</p><p>https://bdtrust.org/bdt-launches-ai-learning-lab/</p><p><a href="https://bdtrust.org/our-vision-for-an-intelligent-human-services-and-benefits-access-system" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Our 
Vision for an Intelligent Human Services and Benefits Access System</a> -</p><p>https://bdtrust.org/our-vision-for-an-intelligent-human-services-and-benefits-access-system</p><p><a href="https://bdtrust.org/media-coverage-humans-must-control-human-serving-ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Humans Must Control Human-Serving AI</a> -</p><p>https://bdtrust.org/media-coverage-humans-must-control-human-serving-ai/</p><p><a href="https://bdtrust.org/trooper-sanders/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Trooper Sanders’ Bio</a> -</p><p>https://bdtrust.org/trooper-sanders/</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/64830eb7-dfaf-4807-afd8-4c1fb8998cab/51b5a5fa04.jpg" />
  <pubDate>Fri, 19 Apr 2024 15:06:14 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="39998318" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/64830eb7-dfaf-4807-afd8-4c1fb8998cab/episode.mp3" />
  <itunes:title><![CDATA[AI's Potential in Public Services with Trooper Sanders]]></itunes:title>
  <itunes:duration>41:39</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/troopersanders/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Trooper Sanders</a><span style="background-color: transparent;">, CEO of </span><a href="https://www.linkedin.com/company/benefits-data-trust/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Benefits Data Trust</a><span style="background-color: transparent;"> and a member of the White House National Artificial Intelligence Advisory Committee. Trooper’s expertise in leveraging AI to enhance the efficiency and humanity of America’s social safety net offers unique insights into the potential and challenges of AI in public services.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:27) The role of Benefits Data Trust in connecting people to essential benefits using AI.</span></p><p><span style="background-color: transparent;">(04:54) The components of trustworthy AI: reliability, public interest alignment, security, transparency, explainability, privacy and harm mitigation.</span></p><p><span style="background-color: transparent;">(09:38) The ‘tortoise and hare’ challenge in aligning AI advancements with legislative processes.</span></p><p><span style="background-color: transparent;">(16:17) The significance of voluntary industry commitments in shaping AI’s ethical use.</span></p><p><span style="background-color: transparent;">(20:32) Ethical considerations in deploying AI, focusing on its societal impact and the readiness of systems for AI integration.</span></p><p><span style="background-color: transparent;">(22:53) Addressing biases in AI to ensure fairness and equitable benefits across all socioeconomic groups.</span></p><p><span style="background-color: transparent;">(27:52) 
Amplifying diverse voices in the AI discussion to encompass a wide range of societal perspectives.</span></p><p><span style="background-color: transparent;">(34:22) The potential workforce disruption by AI and the necessity of supportive measures for affected individuals.</span></p><p><span style="background-color: transparent;">(37:26) Considering the potentially massive impact of AI-driven career changes across various professions.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/troopersanders/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Trooper Sanders</a> -</p><p>https://www.linkedin.com/in/troopersanders/</p><p><a href="https://www.linkedin.com/company/benefits-data-trust/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Benefits Data Trust</a> | LinkedIn -</p><p>https://www.linkedin.com/company/benefits-data-trust/</p><p><a href="https://bdtrust.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Benefits Data Trust</a> | Website -</p><p>https://bdtrust.org/</p><p><a href="https://www.whitehouse.gov/ostp/ostps-teams/nstc/select-committee-on-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">White House National Artificial Intelligence Advisory Committee</a> -</p><p>https://www.whitehouse.gov/ostp/ostps-teams/nstc/select-committee-on-artificial-intelligence/</p><p><a href="https://bdtrust.org/bdt-launches-ai-learning-lab/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">BDT Launches AI and Human Services Learning Hub</a> -</p><p>https://bdtrust.org/bdt-launches-ai-learning-lab/</p><p><a href="https://bdtrust.org/our-vision-for-an-intelligent-human-services-and-benefits-access-system" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Our 
Vision for an Intelligent Human Services and Benefits Access System</a> -</p><p>https://bdtrust.org/our-vision-for-an-intelligent-human-services-and-benefits-access-system</p><p><a href="https://bdtrust.org/media-coverage-humans-must-control-human-serving-ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Humans Must Control Human-Serving AI</a> -</p><p>https://bdtrust.org/media-coverage-humans-must-control-human-serving-ai/</p><p><a href="https://bdtrust.org/trooper-sanders/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Trooper Sanders’ Bio</a> -</p><p>https://bdtrust.org/trooper-sanders/</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/troopersanders/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Trooper Sanders</a><span style="background-color: transparent;">, CEO of </span><a href="https://www.linkedin.com/company/benefits-data-trust/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Benefits Data Trust</a><span style="background-color: transparent;"> and a member of the White House National Artificial Intelligence Advisory Committee. Trooper’s expertise in leveraging AI to enhance the efficiency and humanity of America’s social safety net offers unique insights into the potential and challenges of AI in public services.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:27) The role of Benefits Data Trust in connecting people to essential benefits using AI.</span></p><p><span style="background-color: transparent;">(04:54) The components of trustworthy AI: reliability, public interest alignment, security, transparency, explainability, privacy and harm mitigation.</span></p><p><span style="background-color: transparent;">(09:38) The ‘tortoise and hare’ challenge in aligning AI advancements with legislative processes.</span></p><p><span style="background-color: transparent;">(16:17) The significance of voluntary industry commitments in shaping AI’s ethical use.</span></p><p><span style="background-color: transparent;">(20:32) Ethical considerations in deploying AI, focusing on its societal impact and the readiness of systems for AI integration.</span></p><p><span style="background-color: transparent;">(22:53) Addressing biases in AI to ensure fairness and equitable benefits across all socioeconomic groups.</span></p><p><span style="background-color: transparent;">(27:52) 
Amplifying diverse voices in the AI discussion to encompass a wide range of societal perspectives.</span></p><p><span style="background-color: transparent;">(34:22) The potential workforce disruption by AI and the necessity of supportive measures for affected individuals.</span></p><p><span style="background-color: transparent;">(37:26) Considering the potentially massive impact of AI-driven career changes across various professions.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/troopersanders/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Trooper Sanders</a> -</p><p>https://www.linkedin.com/in/troopersanders/</p><p><a href="https://www.linkedin.com/company/benefits-data-trust/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Benefits Data Trust</a> | LinkedIn -</p><p>https://www.linkedin.com/company/benefits-data-trust/</p><p><a href="https://bdtrust.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Benefits Data Trust</a> | Website -</p><p>https://bdtrust.org/</p><p><a href="https://www.whitehouse.gov/ostp/ostps-teams/nstc/select-committee-on-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">White House National Artificial Intelligence Advisory Committee</a> -</p><p>https://www.whitehouse.gov/ostp/ostps-teams/nstc/select-committee-on-artificial-intelligence/</p><p><a href="https://bdtrust.org/bdt-launches-ai-learning-lab/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">BDT Launches AI and Human Services Learning Hub</a> -</p><p>https://bdtrust.org/bdt-launches-ai-learning-lab/</p><p><a href="https://bdtrust.org/our-vision-for-an-intelligent-human-services-and-benefits-access-system" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Our 
Vision for an Intelligent Human Services and Benefits Access System</a> -</p><p>https://bdtrust.org/our-vision-for-an-intelligent-human-services-and-benefits-access-system</p><p><a href="https://bdtrust.org/media-coverage-humans-must-control-human-serving-ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Humans Must Control Human-Serving AI</a> -</p><p>https://bdtrust.org/media-coverage-humans-must-control-human-serving-ai/</p><p><a href="https://bdtrust.org/trooper-sanders/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Trooper Sanders’ Bio</a> -</p><p>https://bdtrust.org/trooper-sanders/</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I’m joined by Trooper Sanders, CEO of Benefits Data Trust and a member of the White House National Artificial Intelligence Advisory Committee. Trooper’s expertise in leveraging AI to enhance the efficiency and humanity of America’s...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>28</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[b6dd8615-dabc-4521-be5a-177a29c2032c]]></guid>
  <title><![CDATA[The Impact of AI on Global Military Strategies with Dr. Paul Lushenko]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">I'm thrilled to be joined by </span><a href="https://www.linkedin.com/in/paul-lushenko-phd-5b805113/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Paul Lushenko</a><span style="background-color: transparent;">, a Lieutenant Colonel in the U.S. Army and Director of Special Operations at the </span><a href="https://www.linkedin.com/school/united-states-army-war-college/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. Army War College</a><span style="background-color: transparent;">. Dr. Lushenko brings a wealth of knowledge from the front line of AI implementation in military strategy. He joins me to share his insights into the delicate balance between innovation and regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:28) The necessity of addressing AI’s impact on warfare and crisis escalation.</span></p><p><span style="background-color: transparent;">(06:37) The gaps in global governance regarding AI and autonomous weapon systems.</span></p><p><span style="background-color: transparent;">(08:30) U.S. 
policies on the responsible use of AI in military operations.</span></p><p><span style="background-color: transparent;">(16:29) The importance of cutting-edge research in informing legislative actions on AI.</span></p><p><span style="background-color: transparent;">(18:49) The risk of biases in AI systems used in national security.</span></p><p><span style="background-color: transparent;">(20:09) Discussion on automation bias and its consequences in military operations.</span></p><p><span style="background-color: transparent;">(24:44) Dr. Lushenko argues for the adoption of a strategic framework to guide AI development in military contexts.</span></p><p><span style="background-color: transparent;">(32:49) Emphasis on the importance of careful management and extensive testing to build trust in AI systems within the military.</span></p><p><span style="background-color: transparent;">(39:51) The critical need for data-driven decision-making in high-stakes environments, advocating for leveraging expert insights.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/paul-lushenko-phd-5b805113/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Paul Lushenko</a> -</p><p>https://www.linkedin.com/in/paul-lushenko-phd-5b805113/</p><p><a href="https://www.linkedin.com/school/united-states-army-war-college/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. 
Army War College</a> -</p><p>https://www.linkedin.com/school/united-states-army-war-college/</p><p><a href="https://www.state.gov/wp-content/uploads/2023/10/Latest-Version-Political-Declaration-on-Responsible-Military-Use-of-AI-and-Autonomy.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Political Declaration on Responsible Use of AI in Military Technologies</a> -</p><p>https://www.state.gov/wp-content/uploads/2023/10/Latest-Version-Political-Declaration-on-Responsible-Military-Use-of-AI-and-Autonomy.pdf</p><p><a href="https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Memorandum on Ethical Use of AI - White House 2024</a> -</p><p>https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/ce356cee-aec9-41b4-9a5a-bf9fbccf2a87/f4ce83e7f7.jpg" />
  <pubDate>Thu, 18 Apr 2024 11:32:08 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="39905950" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/ce356cee-aec9-41b4-9a5a-bf9fbccf2a87/episode.mp3" />
  <itunes:title><![CDATA[The Impact of AI on Global Military Strategies with Dr. Paul Lushenko]]></itunes:title>
  <itunes:duration>41:34</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">I'm thrilled to be joined by </span><a href="https://www.linkedin.com/in/paul-lushenko-phd-5b805113/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Paul Lushenko</a><span style="background-color: transparent;">, a Lieutenant Colonel in the U.S. Army and Director of Special Operations at the </span><a href="https://www.linkedin.com/school/united-states-army-war-college/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. Army War College</a><span style="background-color: transparent;">. Dr. Lushenko brings a wealth of knowledge from the front line of AI implementation in military strategy. He joins me to share his insights into the delicate balance between innovation and regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:28) The necessity of addressing AI’s impact on warfare and crisis escalation.</span></p><p><span style="background-color: transparent;">(06:37) The gaps in global governance regarding AI and autonomous weapon systems.</span></p><p><span style="background-color: transparent;">(08:30) U.S. 
policies on the responsible use of AI in military operations.</span></p><p><span style="background-color: transparent;">(16:29) The importance of cutting-edge research in informing legislative actions on AI.</span></p><p><span style="background-color: transparent;">(18:49) The risk of biases in AI systems used in national security.</span></p><p><span style="background-color: transparent;">(20:09) Discussion on automation bias and its consequences in military operations.</span></p><p><span style="background-color: transparent;">(24:44) Dr. Lushenko argues for the adoption of a strategic framework to guide AI development in military contexts.</span></p><p><span style="background-color: transparent;">(32:49) Emphasis on the importance of careful management and extensive testing to build trust in AI systems within the military.</span></p><p><span style="background-color: transparent;">(39:51) The critical need for data-driven decision-making in high-stakes environments, advocating for leveraging expert insights.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/paul-lushenko-phd-5b805113/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Paul Lushenko</a> -</p><p>https://www.linkedin.com/in/paul-lushenko-phd-5b805113/</p><p><a href="https://www.linkedin.com/school/united-states-army-war-college/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. 
Army War College</a> -</p><p>https://www.linkedin.com/school/united-states-army-war-college/</p><p><a href="https://www.state.gov/wp-content/uploads/2023/10/Latest-Version-Political-Declaration-on-Responsible-Military-Use-of-AI-and-Autonomy.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Political Declaration on Responsible Military Use of AI and Autonomy</a> -</p><p>https://www.state.gov/wp-content/uploads/2023/10/Latest-Version-Political-Declaration-on-Responsible-Military-Use-of-AI-and-Autonomy.pdf</p><p><a href="https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Memorandum M-24-10 on Governance, Innovation and Risk Management for Agency Use of AI - White House 2024</a> -</p><p>https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">I'm thrilled to be joined by </span><a href="https://www.linkedin.com/in/paul-lushenko-phd-5b805113/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Paul Lushenko</a><span style="background-color: transparent;">, a Lieutenant Colonel in the U.S. Army and Director of Special Operations at the </span><a href="https://www.linkedin.com/school/united-states-army-war-college/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. Army War College</a><span style="background-color: transparent;">. Dr. Lushenko brings a wealth of knowledge from the front line of AI implementation in military strategy. He joins me to share his insights into the delicate balance between innovation and regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:28) The necessity of addressing AI’s impact on warfare and crisis escalation.</span></p><p><span style="background-color: transparent;">(06:37) The gaps in global governance regarding AI and autonomous weapon systems.</span></p><p><span style="background-color: transparent;">(08:30) U.S. 
policies on the responsible use of AI in military operations.</span></p><p><span style="background-color: transparent;">(16:29) The importance of cutting-edge research in informing legislative actions on AI.</span></p><p><span style="background-color: transparent;">(18:49) The risk of biases in AI systems used in national security.</span></p><p><span style="background-color: transparent;">(20:09) Discussion on automation bias and its consequences in military operations.</span></p><p><span style="background-color: transparent;">(24:44) Dr. Lushenko argues for the adoption of a strategic framework to guide AI development in military contexts.</span></p><p><span style="background-color: transparent;">(32:49) Emphasis on the importance of careful management and extensive testing to build trust in AI systems within the military.</span></p><p><span style="background-color: transparent;">(39:51) The critical need for data-driven decision-making in high-stakes environments, advocating for leveraging expert insights.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/paul-lushenko-phd-5b805113/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Paul Lushenko</a> -</p><p>https://www.linkedin.com/in/paul-lushenko-phd-5b805113/</p><p><a href="https://www.linkedin.com/school/united-states-army-war-college/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">U.S. 
Army War College</a> -</p><p>https://www.linkedin.com/school/united-states-army-war-college/</p><p><a href="https://www.state.gov/wp-content/uploads/2023/10/Latest-Version-Political-Declaration-on-Responsible-Military-Use-of-AI-and-Autonomy.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Political Declaration on Responsible Military Use of AI and Autonomy</a> -</p><p>https://www.state.gov/wp-content/uploads/2023/10/Latest-Version-Political-Declaration-on-Responsible-Military-Use-of-AI-and-Autonomy.pdf</p><p><a href="https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Memorandum M-24-10 on Governance, Innovation and Risk Management for Agency Use of AI - White House 2024</a> -</p><p>https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[I'm thrilled to be joined by Dr. Paul Lushenko, a Lieutenant Colonel in the U.S. Army and Director of Special Operations at the U.S. Army War College. Dr. Lushenko brings a wealth of knowledge from the front line of AI implementation in military st...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>27</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[6c9a77e5-e738-4900-819a-98bcf7efbc3e]]></guid>
  <title><![CDATA[Harnessing AI for Equitable Education with Randi Weingarten, President of American Federation of Teachers]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I welcome </span><a href="https://www.linkedin.com/in/randi-weingarten-05896224/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Randi Weingarten</a><span style="background-color: transparent;">, President of the </span><a href="https://www.linkedin.com/company/american-federation-of-teachers/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">American Federation of Teachers (AFT)</a><span style="background-color: transparent;">. She discusses why implementing AI in education requires a collaborative effort. Join us as we explore the challenges and opportunities of AI in shaping equitable and effective educational environments.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:08) Introduction of Randi Weingarten and her role in the AFT.</span></p><p><span style="background-color: transparent;">(05:00) The critical issue of ensuring equitable access to AI technologies in education.</span></p><p><span style="background-color: transparent;">(08:06) Addressing bias and discrimination within AI-driven educational systems.</span></p><p><span style="background-color: transparent;">(11:53) The importance of inclusive participation in the implementation of educational technologies.</span></p><p><span style="background-color: transparent;">(13:09) The evolving necessity for educators to acquire new skills in response to AI advancements.</span></p><p><span style="background-color: transparent;">(17:26) The role of personalized teaching as a complement, not a replacement, for traditional educational methods.</span></p><p><span style="background-color: transparent;">(18:08) Concerns surrounding data privacy and security within AI-driven platforms.</span></p><p><span style="background-color: transparent;">(20:25) The 
need for regulation and oversight in the application of AI in educational settings.</span></p><p><span style="background-color: transparent;">(25:22) The potential for productive industry collaboration in developing AI tools for education.</span></p><p><span style="background-color: transparent;">(30:28) Advocating for a just transition fund to support workers displaced by AI and technological advancements.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/randi-weingarten-05896224/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Randi Weingarten</a> - https://www.linkedin.com/in/randi-weingarten-05896224/</p><p><a href="https://www.aft.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">American Federation of Teachers</a> - https://www.aft.org/</p><p><a href="https://www.aft.org/press-release/afts-weingarten-calls-ai-guardrails-smart-regulation-ensure-new-technology-benefits" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Testimony to Senator Schumer by Randi Weingarten on equity in AI</a> - https://www.aft.org/press-release/afts-weingarten-calls-ai-guardrails-smart-regulation-ensure-new-technology-benefits</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. 
If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/1b1b3201-c9a7-4a8a-8e7f-b111a6e5a0c3/429c2de04b.jpg" />
  <pubDate>Mon, 01 Apr 2024 18:52:29 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="35360225" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/1b1b3201-c9a7-4a8a-8e7f-b111a6e5a0c3/episode.mp3" />
  <itunes:title><![CDATA[Harnessing AI for Equitable Education with Randi Weingarten, President of American Federation of Teachers]]></itunes:title>
  <itunes:duration>36:49</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I welcome </span><a href="https://www.linkedin.com/in/randi-weingarten-05896224/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Randi Weingarten</a><span style="background-color: transparent;">, President of the </span><a href="https://www.linkedin.com/company/american-federation-of-teachers/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">American Federation of Teachers (AFT)</a><span style="background-color: transparent;">. She discusses why implementing AI in education requires a collaborative effort. Join us as we explore the challenges and opportunities of AI in shaping equitable and effective educational environments.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:08) Introduction of Randi Weingarten and her role in the AFT.</span></p><p><span style="background-color: transparent;">(05:00) The critical issue of ensuring equitable access to AI technologies in education.</span></p><p><span style="background-color: transparent;">(08:06) Addressing bias and discrimination within AI-driven educational systems.</span></p><p><span style="background-color: transparent;">(11:53) The importance of inclusive participation in the implementation of educational technologies.</span></p><p><span style="background-color: transparent;">(13:09) The evolving necessity for educators to acquire new skills in response to AI advancements.</span></p><p><span style="background-color: transparent;">(17:26) The role of personalized teaching as a complement, not a replacement, for traditional educational methods.</span></p><p><span style="background-color: transparent;">(18:08) Concerns surrounding data privacy and security within AI-driven platforms.</span></p><p><span style="background-color: transparent;">(20:25) The 
need for regulation and oversight in the application of AI in educational settings.</span></p><p><span style="background-color: transparent;">(25:22) The potential for productive industry collaboration in developing AI tools for education.</span></p><p><span style="background-color: transparent;">(30:28) Advocating for a just transition fund to support workers displaced by AI and technological advancements.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/randi-weingarten-05896224/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Randi Weingarten</a> - https://www.linkedin.com/in/randi-weingarten-05896224/</p><p><a href="https://www.aft.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">American Federation of Teachers</a> - https://www.aft.org/</p><p><a href="https://www.aft.org/press-release/afts-weingarten-calls-ai-guardrails-smart-regulation-ensure-new-technology-benefits" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Testimony to Senator Schumer by Randi Weingarten on equity in AI</a> - https://www.aft.org/press-release/afts-weingarten-calls-ai-guardrails-smart-regulation-ensure-new-technology-benefits</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. 
If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I welcome </span><a href="https://www.linkedin.com/in/randi-weingarten-05896224/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Randi Weingarten</a><span style="background-color: transparent;">, President of the </span><a href="https://www.linkedin.com/company/american-federation-of-teachers/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">American Federation of Teachers (AFT)</a><span style="background-color: transparent;">. She discusses why implementing AI in education requires a collaborative effort. Join us as we explore the challenges and opportunities of AI in shaping equitable and effective educational environments.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:08) Introduction of Randi Weingarten and her role in the AFT.</span></p><p><span style="background-color: transparent;">(05:00) The critical issue of ensuring equitable access to AI technologies in education.</span></p><p><span style="background-color: transparent;">(08:06) Addressing bias and discrimination within AI-driven educational systems.</span></p><p><span style="background-color: transparent;">(11:53) The importance of inclusive participation in the implementation of educational technologies.</span></p><p><span style="background-color: transparent;">(13:09) The evolving necessity for educators to acquire new skills in response to AI advancements.</span></p><p><span style="background-color: transparent;">(17:26) The role of personalized teaching as a complement, not a replacement, for traditional educational methods.</span></p><p><span style="background-color: transparent;">(18:08) Concerns surrounding data privacy and security within AI-driven platforms.</span></p><p><span style="background-color: transparent;">(20:25) 
The need for regulation and oversight in the application of AI in educational settings.</span></p><p><span style="background-color: transparent;">(25:22) The potential for productive industry collaboration in developing AI tools for education.</span></p><p><span style="background-color: transparent;">(30:28) Advocating for a just transition fund to support workers displaced by AI and technological advancements.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/randi-weingarten-05896224/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Randi Weingarten</a> - https://www.linkedin.com/in/randi-weingarten-05896224/</p><p><a href="https://www.aft.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">American Federation of Teachers</a> - https://www.aft.org/</p><p><a href="https://www.aft.org/press-release/afts-weingarten-calls-ai-guardrails-smart-regulation-ensure-new-technology-benefits" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Testimony to Senator Schumer by Randi Weingarten on equity in AI</a> - https://www.aft.org/press-release/afts-weingarten-calls-ai-guardrails-smart-regulation-ensure-new-technology-benefits</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. 
If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I welcome Randi Weingarten, President of the American Federation of Teachers (AFT). She discusses why implementing AI in education requires a collaborative effort. Join us as we explore the challenges and opportunities of AI in sha...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>26</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[220e9174-a069-42f6-8d30-ed4cb8ec0a7f]]></guid>
  <title><![CDATA[Crafting Effective AI Policies for National Security With Insights From Anja Manuel]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">AI regulation is not a simple field, particularly in the realm of national security, and it requires a nuanced approach. In this episode, I welcome </span><a href="https://www.linkedin.com/in/anja-manuel-26805023/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Anja Manuel</a><span style="background-color: transparent;">, the Executive Director of the </span><a href="https://www.linkedin.com/showcase/the-aspen-strategy-group/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Aspen Strategy Group and the Aspen Security Forum</a><span style="background-color: transparent;">, as well as Co-Founder and Partner at </span><a href="https://www.linkedin.com/company/ricehadleygates-llc/about/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Rice, Hadley, Gates &amp; Manuel, LLC</a><span style="background-color: transparent;">. Anja’s insights make the path forward clearer, framing effective AI legislation and emphasizing the need for global cooperation and ethical considerations. 
Her perspective, deeply rooted in national security expertise, underscores the critical balance between innovation and safeguarding against misuse.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:17) The functionality of intelligence committees across party lines.</span></p><p><span style="background-color: transparent;">(00:59) AI in warfare reflects a shift from World War I tactics to modern tech battles.</span></p><p><span style="background-color: transparent;">(03:10) The rapid innovation in military technology and the US’s efforts to adapt.</span></p><p><span style="background-color: transparent;">(03:53) Risks of unregulated AI, including in cyber, autonomous weapons and bio-tech.</span></p><p><span style="background-color: transparent;">(07:09) AI regulation is needed both globally and nationally.</span></p><p><span style="background-color: transparent;">(11:21) International collaboration plays a vital role in AI regulation.</span></p><p><span style="background-color: transparent;">(13:39) Ethical considerations unique to AI applications in national security.</span></p><p><span style="background-color: transparent;">(14:31) National security agencies’ openness to regulatory frameworks.</span></p><p><span style="background-color: transparent;">(15:35) Public-private collaboration in addressing national security considerations.</span></p><p><span style="background-color: transparent;">(17:08) Establishing standards in AI technology for national security is necessary.</span></p><p><span style="background-color: transparent;">(18:28) Regulation of autonomous weapons and international agreements.</span></p><p><span style="background-color: transparent;">(19:32) Balancing secrecy in national security operations with public scrutiny of AI use.</span></p><p><span style="background-color: transparent;">(20:17) AI’s role and risks in intelligence and 
privacy.</span></p><p><span style="background-color: transparent;">(21:13) Regulating AI in cybersecurity and other areas is a challenge.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/anja-manuel-26805023/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Anja Manuel</a> - https://www.linkedin.com/in/anja-manuel-26805023/</p><p><a href="https://www.aspeninstitute.org/programs/aspen-strategy-group/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Aspen Strategy Group</a> - https://www.aspeninstitute.org/programs/aspen-strategy-group/</p><p><a href="https://www.aspensecurityforum.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Aspen Security Forum</a> - https://www.aspensecurityforum.org/</p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/fa55d55a-fe94-42a8-9462-3a5459126048/ca44e07f1d.jpg" />
  <pubDate>Tue, 26 Mar 2024 11:14:57 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="23608467" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/fa55d55a-fe94-42a8-9462-3a5459126048/episode.mp3" />
  <itunes:title><![CDATA[Crafting Effective AI Policies for National Security With Insights From Anja Manuel]]></itunes:title>
  <itunes:duration>24:35</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">AI regulation is not a simple field, particularly in the realm of national security, and it requires a nuanced approach. In this episode, I welcome </span><a href="https://www.linkedin.com/in/anja-manuel-26805023/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Anja Manuel</a><span style="background-color: transparent;">, the Executive Director of the </span><a href="https://www.linkedin.com/showcase/the-aspen-strategy-group/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Aspen Strategy Group and the Aspen Security Forum</a><span style="background-color: transparent;">, as well as Co-Founder and Partner at </span><a href="https://www.linkedin.com/company/ricehadleygates-llc/about/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Rice, Hadley, Gates &amp; Manuel, LLC</a><span style="background-color: transparent;">. Anja’s insights make the path forward clearer, framing effective AI legislation and emphasizing the need for global cooperation and ethical considerations. 
Her perspective, deeply rooted in national security expertise, underscores the critical balance between innovation and safeguarding against misuse.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:17) The functionality of intelligence committees across party lines.</span></p><p><span style="background-color: transparent;">(00:59) AI in warfare reflects a shift from World War I tactics to modern tech battles.</span></p><p><span style="background-color: transparent;">(03:10) The rapid innovation in military technology and the US’s efforts to adapt.</span></p><p><span style="background-color: transparent;">(03:53) Risks of unregulated AI, including in cyber, autonomous weapons and bio-tech.</span></p><p><span style="background-color: transparent;">(07:09) AI regulation is needed both globally and nationally.</span></p><p><span style="background-color: transparent;">(11:21) International collaboration plays a vital role in AI regulation.</span></p><p><span style="background-color: transparent;">(13:39) Ethical considerations unique to AI applications in national security.</span></p><p><span style="background-color: transparent;">(14:31) National security agencies’ openness to regulatory frameworks.</span></p><p><span style="background-color: transparent;">(15:35) Public-private collaboration in addressing national security considerations.</span></p><p><span style="background-color: transparent;">(17:08) Establishing standards in AI technology for national security is necessary.</span></p><p><span style="background-color: transparent;">(18:28) Regulation of autonomous weapons and international agreements.</span></p><p><span style="background-color: transparent;">(19:32) Balancing secrecy in national security operations with public scrutiny of AI use.</span></p><p><span style="background-color: transparent;">(20:17) AI’s role and risks in intelligence and 
privacy.</span></p><p><span style="background-color: transparent;">(21:13) Regulating AI in cybersecurity and other areas is a challenge.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/anja-manuel-26805023/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Anja Manuel</a> - https://www.linkedin.com/in/anja-manuel-26805023/</p><p><a href="https://www.aspeninstitute.org/programs/aspen-strategy-group/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Aspen Strategy Group</a> - https://www.aspeninstitute.org/programs/aspen-strategy-group/</p><p><a href="https://www.aspensecurityforum.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Aspen Security Forum</a> - https://www.aspensecurityforum.org/</p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">AI regulation is not a simple field, particularly in the realm of national security, and it requires a nuanced approach. In this episode, I welcome </span><a href="https://www.linkedin.com/in/anja-manuel-26805023/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Anja Manuel</a><span style="background-color: transparent;">, the Executive Director of the </span><a href="https://www.linkedin.com/showcase/the-aspen-strategy-group/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Aspen Strategy Group and the Aspen Security Forum</a><span style="background-color: transparent;">, as well as Co-Founder and Partner at </span><a href="https://www.linkedin.com/company/ricehadleygates-llc/about/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Rice, Hadley, Gates &amp; Manuel, LLC</a><span style="background-color: transparent;">. Anja’s insights make the path forward clearer, framing effective AI legislation and emphasizing the need for global cooperation and ethical considerations. 
Her perspective, deeply rooted in national security expertise, underscores the critical balance between innovation and safeguarding against misuse.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:17) The functionality of intelligence committees across party lines.</span></p><p><span style="background-color: transparent;">(00:59) AI in warfare reflects a shift from World War I tactics to modern tech battles.</span></p><p><span style="background-color: transparent;">(03:10) The rapid innovation in military technology and the US’s efforts to adapt.</span></p><p><span style="background-color: transparent;">(03:53) Risks of unregulated AI, including in cyber, autonomous weapons and bio-tech.</span></p><p><span style="background-color: transparent;">(07:09) AI regulation is needed both globally and nationally.</span></p><p><span style="background-color: transparent;">(11:21) International collaboration plays a vital role in AI regulation.</span></p><p><span style="background-color: transparent;">(13:39) Ethical considerations unique to AI applications in national security.</span></p><p><span style="background-color: transparent;">(14:31) National security agencies’ openness to regulatory frameworks.</span></p><p><span style="background-color: transparent;">(15:35) Public-private collaboration in addressing national security considerations.</span></p><p><span style="background-color: transparent;">(17:08) Establishing standards in AI technology for national security is necessary.</span></p><p><span style="background-color: transparent;">(18:28) Regulation of autonomous weapons and international agreements.</span></p><p><span style="background-color: transparent;">(19:32) Balancing secrecy in national security operations with public scrutiny of AI use.</span></p><p><span style="background-color: transparent;">(20:17) AI’s role and risks in intelligence and 
privacy.</span></p><p><span style="background-color: transparent;">(21:13) Regulating AI in cybersecurity and other areas is a challenge.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/anja-manuel-26805023/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Anja Manuel</a> - https://www.linkedin.com/in/anja-manuel-26805023/</p><p><a href="https://www.aspeninstitute.org/programs/aspen-strategy-group/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Aspen Strategy Group</a> - https://www.aspeninstitute.org/programs/aspen-strategy-group/</p><p><a href="https://www.aspensecurityforum.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Aspen Security Forum</a> - https://www.aspensecurityforum.org/</p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[AI regulation is not a simple field, particularly in the realm of national security, and it requires a nuanced approach. In this episode, I welcome Anja Manuel, the Executive Director of the Aspen Strategy Group and the Aspen Security Forum, as wel...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>25</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[1af814c4-1160-4b86-a16d-b4e51ba61d9a]]></guid>
  <title><![CDATA[Shaping the Future of Manufacturing With AI Insights with Dr. Gunter Beitinger]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/gunter-dr-beitinger/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Gunter Beitinger</a><span style="background-color: transparent;">, Senior Vice President of Manufacturing and Head of Factory Digitalization and Product Carbon Footprint at </span><a href="https://www.linkedin.com/showcase/siemens-industry-/?trk=public_post-text" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Siemens</a><span style="background-color: transparent;">. Dr. Beitinger lends a comprehensive view on AI’s role in transforming manufacturing, emphasizing its potential to enhance productivity, ensure workforce well-being and drive sustainable practices without displacing human labor.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:17) Dr. 
Beitinger’s extensive background and role at Siemens.</span></p><p><span style="background-color: transparent;">(05:13) Specific examples of AI-driven improvements in Siemens’ operations.</span></p><p><span style="background-color: transparent;">(07:52) The measurable productivity gains attributed to AI in manufacturing.</span></p><p><span style="background-color: transparent;">(10:02) The impact of AI on employment and the importance of re-skilling.</span></p><p><span style="background-color: transparent;">(13:06) The necessity for a collaborative approach between governments and the private sector in workforce development.</span></p><p><span style="background-color: transparent;">(16:24) The role of AI in improving the working conditions of industrial workers.</span></p><p><span style="background-color: transparent;">(26:53) The potential for smaller companies to leverage AI and compete with industry giants.</span></p><p><span style="background-color: transparent;">(36:49) AI’s future role in creating digital twins and the industrial metaverse.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/gunter-dr-beitinger/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. 
Gunter Beitinger</a> -</p><p>https://www.linkedin.com/in/gunter-dr-beitinger/</p><p><a href="https://www.linkedin.com/showcase/siemens-industry-/?trk=public_post-text" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Siemens</a> | LinkedIn -</p><p>https://www.linkedin.com/showcase/siemens-industry-/?trk=public_post-text</p><p><a href="https://www.siemens.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Siemens</a> | Website -</p><p>https://www.siemens.com/</p><p><a href="https://blog.siemens.com/space/artificial-intelligence-in-industry/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://blog.siemens.com/space/artificial-intelligence-in-industry/</a></p><p><a href="https://blog.siemens.com/2023/07/the-need-to-rethink-production/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://blog.siemens.com/2023/07/the-need-to-rethink-production/</a></p><p><a href="https://www.siemens.com/global/en/products/automation/topic-areas/industrial-operations-x.html#GetyourfreeticketforHannoverMesse2023" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://www.siemens.com/global/en/products/automation/topic-areas/industrial-operations-x.html#GetyourfreeticketforHannoverMesse2023</a></p><p><a href="https://www.siemens.com/global/en/company/innovation/research-development/next-gen-industrial-ai.html" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://www.siemens.com/global/en/company/innovation/research-development/next-gen-industrial-ai.html</a></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/c3c4b3a0-8b2a-4a7d-bee7-e66d1908640c/50fc46da34.jpg" />
  <pubDate>Tue, 19 Mar 2024 13:10:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="38562629" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/c3c4b3a0-8b2a-4a7d-bee7-e66d1908640c/episode.mp3" />
  <itunes:title><![CDATA[Shaping the Future of Manufacturing With AI Insights with Dr. Gunter Beitinger]]></itunes:title>
  <itunes:duration>40:10</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/gunter-dr-beitinger/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Gunter Beitinger</a><span style="background-color: transparent;">, Senior Vice President of Manufacturing and Head of Factory Digitalization and Product Carbon Footprint at </span><a href="https://www.linkedin.com/showcase/siemens-industry-/?trk=public_post-text" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Siemens</a><span style="background-color: transparent;">. Dr. Beitinger lends a comprehensive view on AI’s role in transforming manufacturing, emphasizing its potential to enhance productivity, ensure workforce well-being and drive sustainable practices without displacing human labor.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:17) Dr. 
Beitinger’s extensive background and role at Siemens.</span></p><p><span style="background-color: transparent;">(05:13) Specific examples of AI-driven improvements in Siemens’ operations.</span></p><p><span style="background-color: transparent;">(07:52) The measurable productivity gains attributed to AI in manufacturing.</span></p><p><span style="background-color: transparent;">(10:02) The impact of AI on employment and the importance of re-skilling.</span></p><p><span style="background-color: transparent;">(13:06) The necessity for a collaborative approach between governments and the private sector in workforce development.</span></p><p><span style="background-color: transparent;">(16:24) The role of AI in improving the working conditions of industrial workers.</span></p><p><span style="background-color: transparent;">(26:53) The potential for smaller companies to leverage AI and compete with industry giants.</span></p><p><span style="background-color: transparent;">(36:49) AI’s future role in creating digital twins and the industrial metaverse.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/gunter-dr-beitinger/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. 
Gunter Beitinger</a> -</p><p>https://www.linkedin.com/in/gunter-dr-beitinger/</p><p><a href="https://www.linkedin.com/showcase/siemens-industry-/?trk=public_post-text" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Siemens</a> | LinkedIn -</p><p>https://www.linkedin.com/showcase/siemens-industry-/?trk=public_post-text</p><p><a href="https://www.siemens.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Siemens</a> | Website -</p><p>https://www.siemens.com/</p><p><a href="https://blog.siemens.com/space/artificial-intelligence-in-industry/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://blog.siemens.com/space/artificial-intelligence-in-industry/</a></p><p><a href="https://blog.siemens.com/2023/07/the-need-to-rethink-production/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://blog.siemens.com/2023/07/the-need-to-rethink-production/</a></p><p><a href="https://www.siemens.com/global/en/products/automation/topic-areas/industrial-operations-x.html#GetyourfreeticketforHannoverMesse2023" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://www.siemens.com/global/en/products/automation/topic-areas/industrial-operations-x.html#GetyourfreeticketforHannoverMesse2023</a></p><p><a href="https://www.siemens.com/global/en/company/innovation/research-development/next-gen-industrial-ai.html" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://www.siemens.com/global/en/company/innovation/research-development/next-gen-industrial-ai.html</a></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/gunter-dr-beitinger/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Gunter Beitinger</a><span style="background-color: transparent;">, Senior Vice President of Manufacturing and Head of Factory Digitalization and Product Carbon Footprint at </span><a href="https://www.linkedin.com/showcase/siemens-industry-/?trk=public_post-text" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Siemens</a><span style="background-color: transparent;">. Dr. Beitinger lends a comprehensive view on AI’s role in transforming manufacturing, emphasizing its potential to enhance productivity, ensure workforce well-being and drive sustainable practices without displacing human labor.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:17) Dr. 
Beitinger’s extensive background and role at Siemens.</span></p><p><span style="background-color: transparent;">(05:13) Specific examples of AI-driven improvements in Siemens’ operations.</span></p><p><span style="background-color: transparent;">(07:52) The measurable productivity gains attributed to AI in manufacturing.</span></p><p><span style="background-color: transparent;">(10:02) The impact of AI on employment and the importance of re-skilling.</span></p><p><span style="background-color: transparent;">(13:06) The necessity for a collaborative approach between governments and the private sector in workforce development.</span></p><p><span style="background-color: transparent;">(16:24) The role of AI in improving the working conditions of industrial workers.</span></p><p><span style="background-color: transparent;">(26:53) The potential for smaller companies to leverage AI and compete with industry giants.</span></p><p><span style="background-color: transparent;">(36:49) AI’s future role in creating digital twins and the industrial metaverse.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/gunter-dr-beitinger/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. 
Gunter Beitinger</a> -</p><p>https://www.linkedin.com/in/gunter-dr-beitinger/</p><p><a href="https://www.linkedin.com/showcase/siemens-industry-/?trk=public_post-text" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Siemens</a> | LinkedIn -</p><p>https://www.linkedin.com/showcase/siemens-industry-/?trk=public_post-text</p><p><a href="https://www.siemens.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Siemens</a> | Website -</p><p>https://www.siemens.com/</p><p><a href="https://blog.siemens.com/space/artificial-intelligence-in-industry/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://blog.siemens.com/space/artificial-intelligence-in-industry/</a></p><p><a href="https://blog.siemens.com/2023/07/the-need-to-rethink-production/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://blog.siemens.com/2023/07/the-need-to-rethink-production/</a></p><p><a href="https://www.siemens.com/global/en/products/automation/topic-areas/industrial-operations-x.html#GetyourfreeticketforHannoverMesse2023" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://www.siemens.com/global/en/products/automation/topic-areas/industrial-operations-x.html#GetyourfreeticketforHannoverMesse2023</a></p><p><a href="https://www.siemens.com/global/en/company/innovation/research-development/next-gen-industrial-ai.html" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">https://www.siemens.com/global/en/company/innovation/research-development/next-gen-industrial-ai.html</a></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I’m joined by Dr. Gunter Beitinger, Senior Vice President of Manufacturing and Head of Factory Digitalization and Product Carbon Footprint at Siemens. Dr. Beitinger lends a comprehensive view on AI’s role in transforming manufactur...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>24</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[028be19d-9cf2-4b12-ac5d-d4b55f2486bc]]></guid>
  <title><![CDATA[Exploring AI’s Impact on National Security and Legislation with Sarah Kreps]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/sarah-kreps-51a3b7257/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Sarah Kreps</a><span style="background-color: transparent;">, the John L Wetherell Professor in the Department of Government, Adjunct Professor of Law, and the Director of the Tech Policy Institute at </span><a href="https://www.linkedin.com/school/cornell-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Cornell</a><span style="background-color: transparent;"> Brooks School of Public Policy. Her expertise in international politics, technology and national security offers a valuable perspective on shaping AI legislation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:20) The significant impact of industry and NGOs on AI regulation and congressional awareness.</span></p><p><span style="background-color: transparent;">(03:27) AI's multifaceted applications and its national security implications.</span></p><p><span style="background-color: transparent;">(05:07) Advanced efficiency of AI in misinformation campaigns and the importance of legislative responses.</span></p><p><span style="background-color: transparent;">(10:58) Proactive measures by AI firms like OpenAI for electoral fidelity and misinformation control.</span></p><p><span style="background-color: transparent;">(14:23) The challenge of balancing AI innovation with security and economic considerations in legislation.</span></p><p><span style="background-color: transparent;">(20:30) Concerns about potential AI monopolies and the economic consequences.</span></p><p><span style="background-color: transparent;">(28:16) Ethical and practical aspects of AI assistance in legislative processes.</span></p><p><span 
style="background-color: transparent;">(30:13) The critical need for human involvement in AI-augmented military decisions.</span></p><p><span style="background-color: transparent;">(35:32) National security agencies' approach to AI regulatory frameworks.</span></p><p><span style="background-color: transparent;">(39:13) The imperative of Congress's engagement with diverse sectors for comprehensive AI legislation.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/sarah-kreps-51a3b7257/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Sarah Kreps</a> - https://www.linkedin.com/in/sarah-kreps-51a3b7257/</p><p><a href="https://www.linkedin.com/school/cornell-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Cornell</a> - https://www.linkedin.com/school/cornell-university/</p><p><a href="https://www.brookings.edu/articles/democratizing-harm-artificial-intelligence-in-the-hands-of-non-state-actors/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Sarah Kreps’ paper for the Brookings Institution</a> - https://www.brookings.edu/articles/democratizing-harm-artificial-intelligence-in-the-hands-of-non-state-actors/</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.american.edu/sis/news/20230523-four-questions-on-ai-global-governance-following-the-g7-hiroshima-summit.cfm" target="_blank" style="background-color: 
Discussions on AI Global Governance</a>">
transparent; color: rgb(17, 85, 204);">Discussions on AI Global Governance</a> - https://www.american.edu/sis/news/20230523-four-questions-on-ai-global-governance-following-the-g7-hiroshima-summit.cfm</p><p><a href="https://government.cornell.edu/sarah-kreps" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Sarah Kreps - Cornell University</a> - </p><p>https://government.cornell.edu/sarah-kreps</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/a985e8bc-68e4-4b33-ac93-7934fbc22d84/d77f7c9d03.jpg" />
  <pubDate>Thu, 14 Mar 2024 14:19:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="42982965" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/a985e8bc-68e4-4b33-ac93-7934fbc22d84/episode.mp3" />
  <itunes:title><![CDATA[Exploring AI’s Impact on National Security and Legislation with Sarah Kreps]]></itunes:title>
  <itunes:duration>44:46</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/sarah-kreps-51a3b7257/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Sarah Kreps</a><span style="background-color: transparent;">, the John L Wetherell Professor in the Department of Government, Adjunct Professor of Law, and the Director of the Tech Policy Institute at </span><a href="https://www.linkedin.com/school/cornell-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Cornell</a><span style="background-color: transparent;"> Brooks School of Public Policy. Her expertise in international politics, technology and national security offers a valuable perspective on shaping AI legislation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:20) The significant impact of industry and NGOs on AI regulation and congressional awareness.</span></p><p><span style="background-color: transparent;">(03:27) AI's multifaceted applications and its national security implications.</span></p><p><span style="background-color: transparent;">(05:07) Advanced efficiency of AI in misinformation campaigns and the importance of legislative responses.</span></p><p><span style="background-color: transparent;">(10:58) Proactive measures by AI firms like OpenAI for electoral fidelity and misinformation control.</span></p><p><span style="background-color: transparent;">(14:23) The challenge of balancing AI innovation with security and economic considerations in legislation.</span></p><p><span style="background-color: transparent;">(20:30) Concerns about potential AI monopolies and the economic consequences.</span></p><p><span style="background-color: transparent;">(28:16) Ethical and practical aspects of AI assistance in legislative processes.</span></p><p><span 
style="background-color: transparent;">(30:13) The critical need for human involvement in AI-augmented military decisions.</span></p><p><span style="background-color: transparent;">(35:32) National security agencies' approach to AI regulatory frameworks.</span></p><p><span style="background-color: transparent;">(39:13) The imperative of Congress's engagement with diverse sectors for comprehensive AI legislation.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/sarah-kreps-51a3b7257/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Sarah Kreps</a> - https://www.linkedin.com/in/sarah-kreps-51a3b7257/</p><p><a href="https://www.linkedin.com/school/cornell-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Cornell</a> - https://www.linkedin.com/school/cornell-university/</p><p><a href="https://www.brookings.edu/articles/democratizing-harm-artificial-intelligence-in-the-hands-of-non-state-actors/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Sarah Kreps’ paper for the Brookings Institution</a> - https://www.brookings.edu/articles/democratizing-harm-artificial-intelligence-in-the-hands-of-non-state-actors/</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.american.edu/sis/news/20230523-four-questions-on-ai-global-governance-following-the-g7-hiroshima-summit.cfm" target="_blank" style="background-color: 
Discussions on AI Global Governance</a>">
transparent; color: rgb(17, 85, 204);">Discussions on AI Global Governance</a> - https://www.american.edu/sis/news/20230523-four-questions-on-ai-global-governance-following-the-g7-hiroshima-summit.cfm</p><p><a href="https://government.cornell.edu/sarah-kreps" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Sarah Kreps - Cornell University</a> - </p><p>https://government.cornell.edu/sarah-kreps</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/sarah-kreps-51a3b7257/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Sarah Kreps</a><span style="background-color: transparent;">, the John L Wetherell Professor in the Department of Government, Adjunct Professor of Law, and the Director of the Tech Policy Institute at </span><a href="https://www.linkedin.com/school/cornell-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Cornell</a><span style="background-color: transparent;"> Brooks School of Public Policy. Her expertise in international politics, technology and national security offers a valuable perspective on shaping AI legislation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:20) The significant impact of industry and NGOs on AI regulation and congressional awareness.</span></p><p><span style="background-color: transparent;">(03:27) AI's multifaceted applications and its national security implications.</span></p><p><span style="background-color: transparent;">(05:07) Advanced efficiency of AI in misinformation campaigns and the importance of legislative responses.</span></p><p><span style="background-color: transparent;">(10:58) Proactive measures by AI firms like OpenAI for electoral fidelity and misinformation control.</span></p><p><span style="background-color: transparent;">(14:23) The challenge of balancing AI innovation with security and economic considerations in legislation.</span></p><p><span style="background-color: transparent;">(20:30) Concerns about potential AI monopolies and the economic consequences.</span></p><p><span style="background-color: transparent;">(28:16) Ethical and practical aspects of AI assistance in legislative 
processes.</span></p><p><span style="background-color: transparent;">(30:13) The critical need for human involvement in AI-augmented military decisions.</span></p><p><span style="background-color: transparent;">(35:32) National security agencies' approach to AI regulatory frameworks.</span></p><p><span style="background-color: transparent;">(39:13) The imperative of Congress's engagement with diverse sectors for comprehensive AI legislation.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/sarah-kreps-51a3b7257/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Sarah Kreps</a> - https://www.linkedin.com/in/sarah-kreps-51a3b7257/</p><p><a href="https://www.linkedin.com/school/cornell-university/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Cornell</a> - https://www.linkedin.com/school/cornell-university/</p><p><a href="https://www.brookings.edu/articles/democratizing-harm-artificial-intelligence-in-the-hands-of-non-state-actors/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Sarah Kreps’ paper for the Brookings Institution</a> - https://www.brookings.edu/articles/democratizing-harm-artificial-intelligence-in-the-hands-of-non-state-actors/</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.american.edu/sis/news/20230523-four-questions-on-ai-global-governance-following-the-g7-hiroshima-summit.cfm" 
target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Discussions on AI Global Governance</a> - https://www.american.edu/sis/news/20230523-four-questions-on-ai-global-governance-following-the-g7-hiroshima-summit.cfm</p><p><a href="https://government.cornell.edu/sarah-kreps" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Sarah Kreps - Cornell University</a> - https://government.cornell.edu/sarah-kreps</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I’m joined by Sarah Kreps, the John L Wetherell Professor in the Department of Government, Adjunct Professor of Law, and the Director of the Tech Policy Institute at Cornell Brooks School of Public Policy. Her expertise in internat...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>23</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[281763b0-1b04-4765-be55-1d57d71ffb51]]></guid>
  <title><![CDATA[The Ethical Boundaries of AI and Robotics with Professor Emeritus Ronald Arkin]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by Professor </span><a href="https://www.linkedin.com/in/ronald-arkin-a3a9206/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ronald Arkin</a><span style="background-color: transparent;">, a renowned expert in robotics and roboethics from the </span><a href="https://www.linkedin.com/school/georgia-institute-of-technology/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Georgia Institute of Technology</a><span style="background-color: transparent;">. Our discussion focuses on AI and robotics. We explore the ethical implications and the necessity for regulatory frameworks that ensure responsible development and deployment.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:40) Ethical guidelines for AI and robotics.</span></p><p><span style="background-color: transparent;">(03:19) IEEE’s role in creating soft law guidelines.</span></p><p><span style="background-color: transparent;">(06:56) Robotics’ overshadowing by large language models.</span></p><p><span style="background-color: transparent;">(10:13) The necessity of oversight and compliance in AI development.</span></p><p><span style="background-color: transparent;">(15:30) Ethical considerations for emotionally expressive robots.</span></p><p><span style="background-color: transparent;">(23:41) Liability frameworks for ethical lapses in robotics.</span></p><p><span style="background-color: transparent;">(27:43) The debate on open-sourcing robotics software.</span></p><p><span style="background-color: transparent;">(29:52) The impact of robotics on workforce and employment.</span></p><p><span style="background-color: 
transparent;">(33:37) Human rights implications in robotic deployment.</span></p><p><span style="background-color: transparent;">(42:55) Final insights on cautious advancement in AI regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://sites.cc.gatech.edu/aimosaic/faculty/arkin/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ronald Arkin</a> - https://sites.cc.gatech.edu/aimosaic/faculty/arkin/</p><p><a href="https://www.linkedin.com/in/ronald-arkin-a3a9206/" target="_blank" style="background-color: rgb(248, 248, 248); color: rgba(var(--sk_highlight_hover,11,76,140),1);">Ronald Arkin</a> | LinkedIn - https://www.linkedin.com/in/ronald-arkin-a3a9206/</p><p><a href="https://sites.cc.gatech.edu/ai/robot-lab/" target="_blank" style="background-color: rgb(248, 248, 248); color: rgba(var(--sk_highlight_hover,11,76,140),1);">Georgia Tech Mobile Robot Lab</a><span style="background-color: rgb(248, 248, 248); color: rgba(var(--sk_highlight_hover,11,76,140),1);"> - </span>https://sites.cc.gatech.edu/ai/robot-lab/</p><p><a href="https://www.linkedin.com/school/georgia-institute-of-technology/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Georgia Institute of Technology</a> - https://www.linkedin.com/school/georgia-institute-of-technology/</p><p><a href="https://standards.ieee.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">IEEE Standards Association</a> - https://standards.ieee.org/</p><p><a href="https://treaties.un.org/pages/ViewDetails.aspx?chapter=26&amp;clang=_en&amp;mtdsg_no=XXVI-2&amp;src=TREATY" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">United Nations Convention on Certain Conventional Weapons</a> - 
https://treaties.un.org/pages/ViewDetails.aspx?chapter=26&amp;clang=_en&amp;mtdsg_no=XXVI-2&amp;src=TREATY</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/2efdeb19-fce2-44e8-982d-fe35dc8ac0e7/3121fe8a5f.jpg" />
  <pubDate>Fri, 08 Mar 2024 20:05:00 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="41011489" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/2efdeb19-fce2-44e8-982d-fe35dc8ac0e7/episode.mp3" />
  <itunes:title><![CDATA[The Ethical Boundaries of AI and Robotics with Professor Emeritus Ronald Arkin]]></itunes:title>
  <itunes:duration>42:43</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by Professor </span><a href="https://www.linkedin.com/in/ronald-arkin-a3a9206/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ronald Arkin</a><span style="background-color: transparent;">, a renowned expert in robotics and roboethics from the </span><a href="https://www.linkedin.com/school/georgia-institute-of-technology/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Georgia Institute of Technology</a><span style="background-color: transparent;">. Our discussion focuses on AI and robotics. We explore the ethical implications and the necessity for regulatory frameworks that ensure responsible development and deployment.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:40) Ethical guidelines for AI and robotics.</span></p><p><span style="background-color: transparent;">(03:19) IEEE’s role in creating soft law guidelines.</span></p><p><span style="background-color: transparent;">(06:56) Robotics’ overshadowing by large language models.</span></p><p><span style="background-color: transparent;">(10:13) The necessity of oversight and compliance in AI development.</span></p><p><span style="background-color: transparent;">(15:30) Ethical considerations for emotionally expressive robots.</span></p><p><span style="background-color: transparent;">(23:41) Liability frameworks for ethical lapses in robotics.</span></p><p><span style="background-color: transparent;">(27:43) The debate on open-sourcing robotics software.</span></p><p><span style="background-color: transparent;">(29:52) The impact of robotics on workforce and employment.</span></p><p><span style="background-color: 
transparent;">(33:37) Human rights implications in robotic deployment.</span></p><p><span style="background-color: transparent;">(42:55) Final insights on cautious advancement in AI regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://sites.cc.gatech.edu/aimosaic/faculty/arkin/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ronald Arkin</a> - https://sites.cc.gatech.edu/aimosaic/faculty/arkin/</p><p><a href="https://www.linkedin.com/in/ronald-arkin-a3a9206/" target="_blank" style="background-color: rgb(248, 248, 248); color: rgba(var(--sk_highlight_hover,11,76,140),1);">Ronald Arkin</a> | LinkedIn - https://www.linkedin.com/in/ronald-arkin-a3a9206/</p><p><a href="https://sites.cc.gatech.edu/ai/robot-lab/" target="_blank" style="background-color: rgb(248, 248, 248); color: rgba(var(--sk_highlight_hover,11,76,140),1);">Georgia Tech Mobile Robot Lab</a><span style="background-color: rgb(248, 248, 248); color: rgba(var(--sk_highlight_hover,11,76,140),1);"> - </span>https://sites.cc.gatech.edu/ai/robot-lab/</p><p><a href="https://www.linkedin.com/school/georgia-institute-of-technology/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Georgia Institute of Technology</a> - https://www.linkedin.com/school/georgia-institute-of-technology/</p><p><a href="https://standards.ieee.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">IEEE Standards Association</a> - https://standards.ieee.org/</p><p><a href="https://treaties.un.org/pages/ViewDetails.aspx?chapter=26&amp;clang=_en&amp;mtdsg_no=XXVI-2&amp;src=TREATY" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">United Nations Convention on Certain Conventional Weapons</a> - 
https://treaties.un.org/pages/ViewDetails.aspx?chapter=26&amp;clang=_en&amp;mtdsg_no=XXVI-2&amp;src=TREATY</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by Professor </span><a href="https://www.linkedin.com/in/ronald-arkin-a3a9206/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ronald Arkin</a><span style="background-color: transparent;">, a renowned expert in robotics and roboethics from the </span><a href="https://www.linkedin.com/school/georgia-institute-of-technology/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Georgia Institute of Technology</a><span style="background-color: transparent;">. Our discussion focuses on AI and robotics. We explore the ethical implications and the necessity for regulatory frameworks that ensure responsible development and deployment.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:40) Ethical guidelines for AI and robotics.</span></p><p><span style="background-color: transparent;">(03:19) IEEE’s role in creating soft law guidelines.</span></p><p><span style="background-color: transparent;">(06:56) Robotics’ overshadowing by large language models.</span></p><p><span style="background-color: transparent;">(10:13) The necessity of oversight and compliance in AI development.</span></p><p><span style="background-color: transparent;">(15:30) Ethical considerations for emotionally expressive robots.</span></p><p><span style="background-color: transparent;">(23:41) Liability frameworks for ethical lapses in robotics.</span></p><p><span style="background-color: transparent;">(27:43) The debate on open-sourcing robotics software.</span></p><p><span style="background-color: transparent;">(29:52) The impact of robotics on workforce and employment.</span></p><p><span style="background-color: 
transparent;">(33:37) Human rights implications in robotic deployment.</span></p><p><span style="background-color: transparent;">(42:55) Final insights on cautious advancement in AI regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://sites.cc.gatech.edu/aimosaic/faculty/arkin/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Ronald Arkin</a> - https://sites.cc.gatech.edu/aimosaic/faculty/arkin/</p><p><a href="https://www.linkedin.com/in/ronald-arkin-a3a9206/" target="_blank" style="background-color: rgb(248, 248, 248); color: rgba(var(--sk_highlight_hover,11,76,140),1);">Ronald Arkin</a> | LinkedIn - https://www.linkedin.com/in/ronald-arkin-a3a9206/</p><p><a href="https://sites.cc.gatech.edu/ai/robot-lab/" target="_blank" style="background-color: rgb(248, 248, 248); color: rgba(var(--sk_highlight_hover,11,76,140),1);">Georgia Tech Mobile Robot Lab</a><span style="background-color: rgb(248, 248, 248); color: rgba(var(--sk_highlight_hover,11,76,140),1);"> - </span>https://sites.cc.gatech.edu/ai/robot-lab/</p><p><a href="https://www.linkedin.com/school/georgia-institute-of-technology/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Georgia Institute of Technology</a> - https://www.linkedin.com/school/georgia-institute-of-technology/</p><p><a href="https://standards.ieee.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">IEEE Standards Association</a> - https://standards.ieee.org/</p><p><a href="https://treaties.un.org/pages/ViewDetails.aspx?chapter=26&amp;clang=_en&amp;mtdsg_no=XXVI-2&amp;src=TREATY" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">United Nations Convention on Certain Conventional Weapons</a> - 
https://treaties.un.org/pages/ViewDetails.aspx?chapter=26&amp;clang=_en&amp;mtdsg_no=XXVI-2&amp;src=TREATY</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I’m joined by Professor Ronald Arkin, a renowned expert in robotics and roboethics from the Georgia Institute of Technology. Our discussion focuses on AI and robotics. We explore the ethical implications and the necessity for regul...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>22</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[c0a62107-680b-4a2a-9745-e14c49fab8ea]]></guid>
  <title><![CDATA[Navigating AI Innovation and Ethics in Legislation with Steve Mills]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I welcome </span><a href="https://www.linkedin.com/in/stevndmills/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Steve Mills,</a><span style="background-color: transparent;"> Global Chief AI Ethics Officer for </span><a href="https://www.linkedin.com/company/boston-consulting-group/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Boston Consulting Group</a><span style="background-color: transparent;"> and Global AI Lead for the Public Sector. Steve shares insights into the intersection of AI innovation and ethical responsibility, guiding us through the often-confusing topic of AI regulation and ethics.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:26) The role clear regulations play in fostering innovation.</span></p><p><span style="background-color: transparent;">(02:43) The importance of consultation with industry to set achievable regulations.</span></p><p><span style="background-color: transparent;">(04:07) Addressing the uncertainty surrounding AI regulation.</span></p><p><span style="background-color: transparent;">(06:19) The necessity of sector-specific AI regulations.</span></p><p><span style="background-color: transparent;">(07:33) The debate over establishing a separate AI regulatory body.</span></p><p><span style="background-color: transparent;">(09:22) Adapting AI policy to keep pace with technological advancements.</span></p><p><span style="background-color: transparent;">(11:40) Enhancing AI literacy and upskilling the workforce.</span></p><p><span style="background-color: transparent;">(13:06) Ethical considerations in AI deployment, focusing on trustworthiness and harmlessness.</span></p><p><span style="background-color: transparent;">(15:01) Strategies for ensuring AI systems are 
fair and equitable.</span></p><p><span style="background-color: transparent;">(20:10) The discussion on open-source AI and combating monopolies.</span></p><p><span style="background-color: transparent;">(22:00) The importance of transparency in AI usage by companies.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/stevndmills/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Steve Mills</a> - https://www.linkedin.com/in/stevndmills/</p><p><a href="https://www.linkedin.com/company/boston-consulting-group/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Boston Consulting Group</a> - https://www.linkedin.com/company/boston-consulting-group/</p><p><a href="https://www.bcg.com/capabilities/artificial-intelligence/responsible-ai" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Responsible AI Ethics</a> - https://www.bcg.com/capabilities/artificial-intelligence/responsible-ai</p><p><a href="https://www.bcg.com/publications/2022/a-responsible-ai-leader-does-more-than-just-avoiding-risk" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Study on the impact of AI in the workforce</a> - https://www.bcg.com/publications/2022/a-responsible-ai-leader-does-more-than-just-avoiding-risk</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/8924aa68-a0a5-47a5-ad9e-76e58c68bea7/08363c651a.jpg" />
  <pubDate>Thu, 07 Mar 2024 07:13:29 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="24369153" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/8924aa68-a0a5-47a5-ad9e-76e58c68bea7/episode.mp3" />
  <itunes:title><![CDATA[Navigating AI Innovation and Ethics in Legislation with Steve Mills]]></itunes:title>
  <itunes:duration>25:23</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I welcome </span><a href="https://www.linkedin.com/in/stevndmills/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Steve Mills,</a><span style="background-color: transparent;"> Global Chief AI Ethics Officer for </span><a href="https://www.linkedin.com/company/boston-consulting-group/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Boston Consulting Group</a><span style="background-color: transparent;"> and Global AI Lead for the Public Sector. Steve shares insights into the intersection of AI innovation and ethical responsibility, guiding us through the often-confusing topic of AI regulation and ethics.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:26) The role clear regulations play in fostering innovation.</span></p><p><span style="background-color: transparent;">(02:43) The importance of consultation with industry to set achievable regulations.</span></p><p><span style="background-color: transparent;">(04:07) Addressing the uncertainty surrounding AI regulation.</span></p><p><span style="background-color: transparent;">(06:19) The necessity of sector-specific AI regulations.</span></p><p><span style="background-color: transparent;">(07:33) The debate over establishing a separate AI regulatory body.</span></p><p><span style="background-color: transparent;">(09:22) Adapting AI policy to keep pace with technological advancements.</span></p><p><span style="background-color: transparent;">(11:40) Enhancing AI literacy and upskilling the workforce.</span></p><p><span style="background-color: transparent;">(13:06) Ethical considerations in AI deployment, focusing on trustworthiness and harmlessness.</span></p><p><span style="background-color: transparent;">(15:01) Strategies for ensuring AI systems 
are fair and equitable.</span></p><p><span style="background-color: transparent;">(20:10) The discussion on open-source AI and combating monopolies.</span></p><p><span style="background-color: transparent;">(22:00) The importance of transparency in AI usage by companies.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/stevndmills/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Steve Mills</a> - https://www.linkedin.com/in/stevndmills/</p><p><a href="https://www.linkedin.com/company/boston-consulting-group/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Boston Consulting Group</a> - https://www.linkedin.com/company/boston-consulting-group/</p><p><a href="https://www.bcg.com/capabilities/artificial-intelligence/responsible-ai" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Responsible AI Ethics</a> - https://www.bcg.com/capabilities/artificial-intelligence/responsible-ai</p><p><a href="https://www.bcg.com/publications/2022/a-responsible-ai-leader-does-more-than-just-avoiding-risk" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Study on the impact of AI in the workforce</a> - https://www.bcg.com/publications/2022/a-responsible-ai-leader-does-more-than-just-avoiding-risk</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I welcome </span><a href="https://www.linkedin.com/in/stevndmills/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Steve Mills,</a><span style="background-color: transparent;"> Global Chief AI Ethics Officer for </span><a href="https://www.linkedin.com/company/boston-consulting-group/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Boston Consulting Group</a><span style="background-color: transparent;"> and Global AI Lead for the Public Sector. Steve shares insights into the intersection of AI innovation and ethical responsibility, guiding us through the often-confusing topic of AI regulation and ethics.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:26) The role clear regulations play in fostering innovation.</span></p><p><span style="background-color: transparent;">(02:43) The importance of consultation with industry to set achievable regulations.</span></p><p><span style="background-color: transparent;">(04:07) Addressing the uncertainty surrounding AI regulation.</span></p><p><span style="background-color: transparent;">(06:19) The necessity of sector-specific AI regulations.</span></p><p><span style="background-color: transparent;">(07:33) The debate over establishing a separate AI regulatory body.</span></p><p><span style="background-color: transparent;">(09:22) Adapting AI policy to keep pace with technological advancements.</span></p><p><span style="background-color: transparent;">(11:40) Enhancing AI literacy and upskilling the workforce.</span></p><p><span style="background-color: transparent;">(13:06) Ethical considerations in AI deployment, focusing on trustworthiness and harmlessness.</span></p><p><span style="background-color: transparent;">(15:01) Strategies for ensuring AI systems 
are fair and equitable.</span></p><p><span style="background-color: transparent;">(20:10) The discussion on open-source AI and combating monopolies.</span></p><p><span style="background-color: transparent;">(22:00) The importance of transparency in AI usage by companies.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/stevndmills/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Steve Mills</a> - https://www.linkedin.com/in/stevndmills/</p><p><a href="https://www.linkedin.com/company/boston-consulting-group/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Boston Consulting Group</a> - https://www.linkedin.com/company/boston-consulting-group/</p><p><a href="https://www.bcg.com/capabilities/artificial-intelligence/responsible-ai" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Responsible AI Ethics</a> - https://www.bcg.com/capabilities/artificial-intelligence/responsible-ai</p><p><a href="https://www.bcg.com/publications/2022/a-responsible-ai-leader-does-more-than-just-avoiding-risk" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Study on the impact of AI in the workforce</a> - https://www.bcg.com/publications/2022/a-responsible-ai-leader-does-more-than-just-avoiding-risk</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I welcome Steve Mills, Global Chief AI Ethics Officer for Boston Consulting Group and Global AI Lead for the Public Sector. Steve shares insights into the intersection of AI innovation and ethical responsibility, guiding us through...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>21</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[9ff95ae0-3cd9-431d-869b-a4ea7df7cd5f]]></guid>
  <title><![CDATA[The Impact of Rapid AI Evolution with Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss (EPP group) in the European Parliament]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I welcome </span><a href="https://www.linkedin.com/in/kzenner/?originalSubdomain=be" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Kai Zenner</a><span style="background-color: transparent;">, Head of Office and Digital Policy Advisor at the </span><a href="https://www.linkedin.com/company/european-parliament/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">European Parliament</a><span style="background-color: transparent;">. We discuss the complexities and challenges of Artificial Intelligence, especially focusing on the legislative efforts within the EU to regulate AI technologies.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:36) Diverse perspectives in AI legislation play a significant role.</span></p><p><span style="background-color: transparent;">(02:34) The EU AI Act’s status and its risk-based, innovation-friendly approach.</span></p><p><span style="background-color: transparent;">(07:11) The recommendation for a vertical, industry-specific approach to AI legislation.</span></p><p><span style="background-color: transparent;">(08:32) Measures in the AI Act to prevent AI power concentration and ensure transparency.</span></p><p><span style="background-color: transparent;">(11:50) The global approach of the EU AI Act and its focus on international alignment.</span></p><p><span style="background-color: transparent;">(14:28) Ethical considerations in AI development addressed by the AI Act.</span></p><p><span style="background-color: transparent;">(16:21) Implementation and enforcement mechanisms of the EU AI Act.</span></p><p><span style="background-color: transparent;">(23:31) The involvement of industry experts, researchers and civil society in developing the AI Act.</span></p><p><span 
style="background-color: transparent;">(29:51) The importance of educating the public on AI issues.</span></p><p><span style="background-color: transparent;">(33:12) Concerns about deepfake technology and election interference.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/kzenner/?originalSubdomain=be" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Kai Zenner</a> - https://www.linkedin.com/in/kzenner/?originalSubdomain=be</p><p><a href="https://www.linkedin.com/company/european-parliament/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">European Parliament</a> - https://www.linkedin.com/company/european-parliament/</p><p><a href="https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/e36f6adf-4287-4870-9f94-c0e8a622d1f2/63a0d5016b.jpg" />
  <pubDate>Mon, 04 Mar 2024 01:48:16 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="36693097" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/e36f6adf-4287-4870-9f94-c0e8a622d1f2/episode.mp3" />
  <itunes:title><![CDATA[The Impact of Rapid AI Evolution with Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss (EPP group) in the European Parliament]]></itunes:title>
  <itunes:duration>38:13</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I welcome </span><a href="https://www.linkedin.com/in/kzenner/?originalSubdomain=be" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Kai Zenner</a><span style="background-color: transparent;">, Head of Office and Digital Policy Advisor at the </span><a href="https://www.linkedin.com/company/european-parliament/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">European Parliament</a><span style="background-color: transparent;">. We discuss the complexities and challenges of Artificial Intelligence, especially focusing on the legislative efforts within the EU to regulate AI technologies.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:36) Diverse perspectives in AI legislation play a significant role.</span></p><p><span style="background-color: transparent;">(02:34) The EU AI Act’s status and its risk-based, innovation-friendly approach.</span></p><p><span style="background-color: transparent;">(07:11) The recommendation for a vertical, industry-specific approach to AI legislation.</span></p><p><span style="background-color: transparent;">(08:32) Measures in the AI Act to prevent AI power concentration and ensure transparency.</span></p><p><span style="background-color: transparent;">(11:50) The global approach of the EU AI Act and its focus on international alignment.</span></p><p><span style="background-color: transparent;">(14:28) Ethical considerations in AI development addressed by the AI Act.</span></p><p><span style="background-color: transparent;">(16:21) Implementation and enforcement mechanisms of the EU AI Act.</span></p><p><span style="background-color: transparent;">(23:31) The involvement of industry experts, researchers and civil society in developing the AI Act.</span></p><p><span 
style="background-color: transparent;">(29:51) The importance of educating the public on AI issues.</span></p><p><span style="background-color: transparent;">(33:12) Concerns about deepfake technology and election interference.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/kzenner/?originalSubdomain=be" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Kai Zenner</a> - https://www.linkedin.com/in/kzenner/?originalSubdomain=be</p><p><a href="https://www.linkedin.com/company/european-parliament/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">European Parliament</a> - https://www.linkedin.com/company/european-parliament/</p><p><a href="https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I welcome </span><a href="https://www.linkedin.com/in/kzenner/?originalSubdomain=be" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Kai Zenner</a><span style="background-color: transparent;">, Head of Office and Digital Policy Advisor at the </span><a href="https://www.linkedin.com/company/european-parliament/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">European Parliament</a><span style="background-color: transparent;">. We discuss the complexities and challenges of Artificial Intelligence, especially focusing on the legislative efforts within the EU to regulate AI technologies.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:36) Diverse perspectives in AI legislation play a significant role.</span></p><p><span style="background-color: transparent;">(02:34) The EU AI Act’s status and its risk-based, innovation-friendly approach.</span></p><p><span style="background-color: transparent;">(07:11) The recommendation for a vertical, industry-specific approach to AI legislation.</span></p><p><span style="background-color: transparent;">(08:32) Measures in the AI Act to prevent AI power concentration and ensure transparency.</span></p><p><span style="background-color: transparent;">(11:50) The global approach of the EU AI Act and its focus on international alignment.</span></p><p><span style="background-color: transparent;">(14:28) Ethical considerations in AI development addressed by the AI Act.</span></p><p><span style="background-color: transparent;">(16:21) Implementation and enforcement mechanisms of the EU AI Act.</span></p><p><span style="background-color: transparent;">(23:31) The involvement of industry experts, researchers and civil society in developing the AI Act.</span></p><p><span 
style="background-color: transparent;">(29:51) The importance of educating the public on AI issues.</span></p><p><span style="background-color: transparent;">(33:12) Concerns about deepfake technology and election interference.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/kzenner/?originalSubdomain=be" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Kai Zenner</a> - https://www.linkedin.com/in/kzenner/?originalSubdomain=be</p><p><a href="https://www.linkedin.com/company/european-parliament/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">European Parliament</a> - https://www.linkedin.com/company/european-parliament/</p><p><a href="https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I welcome Kai Zenner, Head of Office and Digital Policy Advisor at the European Parliament. We discuss the complexities and challenges of Artificial Intelligence, especially focusing on the legislative efforts within the EU to regu...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>20</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[1c04955e-17e5-49dd-b89a-f812946c822b]]></guid>
  <title><![CDATA[The Role of AI in Society with Lexy Kassan, Lead Data and AI Strategist of Databricks]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/lexykassan/?originalSubdomain=uk" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Lexy Kassan</a><span style="background-color: transparent;">, Lead Data and AI Strategist of </span><a href="https://www.linkedin.com/company/databricks/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Databricks</a><span style="background-color: transparent;"> and Founder and Host of the </span><a href="https://www.linkedin.com/company/dsethics/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Data Science Ethics Podcast</a><span style="background-color: transparent;">. Lexy brings a wealth of knowledge from her dual role as an AI ethicist and industry insider, providing an in-depth perspective on how legislation can shape the future of AI without curbing its potential.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:44) The global impact of the EU AI Act.</span></p><p><span style="background-color: transparent;">(03:46) The necessity for risk-based AI model assessments.</span></p><p><span style="background-color: transparent;">(08:20) Ethical challenges hidden within AI applications.</span></p><p><span style="background-color: transparent;">(11:45) Strategies for inclusive AI benefiting marginalized communities.</span></p><p><span style="background-color: transparent;">(13:29) Core ethical principles for AI systems.</span></p><p><span style="background-color: transparent;">(19:50) The complexity of creating unbiased AI data sets.</span></p><p><span style="background-color: transparent;">(21:58) Categories of unacceptable risks in AI according to the EU Act.</span></p><p><span style="background-color: transparent;">(27:18) Accountability in AI deployment.</span></p><p><span style="background-color: transparent;">(30:53) The role of open-source models in AI development.</span></p><p><span style="background-color: transparent;">(36:24) Businesses seek clear regulatory guidelines.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/lexykassan/?originalSubdomain=uk" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Lexy Kassan</a> - https://www.linkedin.com/in/lexykassan/?originalSubdomain=uk</p><p><a href="https://www.linkedin.com/company/dsethics/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Data Science Ethics Podcast</a> - https://www.linkedin.com/company/dsethics/</p><p><a href="https://artificialintelligenceact.eu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://artificialintelligenceact.eu/</p><p><a href="https://www.databricks.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Databricks</a> - https://www.databricks.com/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/1978782c-9d24-4891-af4c-a1dc407be362/5550bf0dc4.jpg" />
  <pubDate>Thu, 29 Feb 2024 07:48:49 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="37583768" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/1978782c-9d24-4891-af4c-a1dc407be362/episode.mp3" />
  <itunes:title><![CDATA[The Role of AI in Society with Lexy Kassan, Lead Data and AI Strategist of Databricks]]></itunes:title>
  <itunes:duration>39:08</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/lexykassan/?originalSubdomain=uk" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Lexy Kassan</a><span style="background-color: transparent;">, Lead Data and AI Strategist of </span><a href="https://www.linkedin.com/company/databricks/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Databricks</a><span style="background-color: transparent;"> and Founder and Host of the </span><a href="https://www.linkedin.com/company/dsethics/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Data Science Ethics Podcast</a><span style="background-color: transparent;">. Lexy brings a wealth of knowledge from her dual role as an AI ethicist and industry insider, providing an in-depth perspective on how legislation can shape the future of AI without curbing its potential.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:44) The global impact of the EU AI Act.</span></p><p><span style="background-color: transparent;">(03:46) The necessity for risk-based AI model assessments.</span></p><p><span style="background-color: transparent;">(08:20) Ethical challenges hidden within AI applications.</span></p><p><span style="background-color: transparent;">(11:45) Strategies for inclusive AI benefiting marginalized communities.</span></p><p><span style="background-color: transparent;">(13:29) Core ethical principles for AI systems.</span></p><p><span style="background-color: transparent;">(19:50) The complexity of creating unbiased AI data sets.</span></p><p><span style="background-color: transparent;">(21:58) Categories of unacceptable risks in AI according to the EU Act.</span></p><p><span style="background-color: transparent;">(27:18) Accountability in AI deployment.</span></p><p><span style="background-color: transparent;">(30:53) The role of open-source models in AI development.</span></p><p><span style="background-color: transparent;">(36:24) Businesses seek clear regulatory guidelines.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/lexykassan/?originalSubdomain=uk" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Lexy Kassan</a> - https://www.linkedin.com/in/lexykassan/?originalSubdomain=uk</p><p><a href="https://www.linkedin.com/company/dsethics/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Data Science Ethics Podcast</a> - https://www.linkedin.com/company/dsethics/</p><p><a href="https://artificialintelligenceact.eu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://artificialintelligenceact.eu/</p><p><a href="https://www.databricks.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Databricks</a> - https://www.databricks.com/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/lexykassan/?originalSubdomain=uk" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Lexy Kassan</a><span style="background-color: transparent;">, Lead Data and AI Strategist of </span><a href="https://www.linkedin.com/company/databricks/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Databricks</a><span style="background-color: transparent;"> and Founder and Host of the </span><a href="https://www.linkedin.com/company/dsethics/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Data Science Ethics Podcast</a><span style="background-color: transparent;">. Lexy brings a wealth of knowledge from her dual role as an AI ethicist and industry insider, providing an in-depth perspective on how legislation can shape the future of AI without curbing its potential.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:44) The global impact of the EU AI Act.</span></p><p><span style="background-color: transparent;">(03:46) The necessity for risk-based AI model assessments.</span></p><p><span style="background-color: transparent;">(08:20) Ethical challenges hidden within AI applications.</span></p><p><span style="background-color: transparent;">(11:45) Strategies for inclusive AI benefiting marginalized communities.</span></p><p><span style="background-color: transparent;">(13:29) Core ethical principles for AI systems.</span></p><p><span style="background-color: transparent;">(19:50) The complexity of creating unbiased AI data sets.</span></p><p><span style="background-color: transparent;">(21:58) Categories of unacceptable risks in AI according to the EU Act.</span></p><p><span style="background-color: transparent;">(27:18) Accountability in AI deployment.</span></p><p><span style="background-color: transparent;">(30:53) The role of open-source models in AI development.</span></p><p><span style="background-color: transparent;">(36:24) Businesses seek clear regulatory guidelines.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/lexykassan/?originalSubdomain=uk" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Lexy Kassan</a> - https://www.linkedin.com/in/lexykassan/?originalSubdomain=uk</p><p><a href="https://www.linkedin.com/company/dsethics/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Data Science Ethics Podcast</a> - https://www.linkedin.com/company/dsethics/</p><p><a href="https://artificialintelligenceact.eu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://artificialintelligenceact.eu/</p><p><a href="https://www.databricks.com/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Databricks</a> - https://www.databricks.com/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I’m joined by Lexy Kassan, Lead Data and AI Strategist of Databricks and Founder and Host of the Data Science Ethics Podcast. Lexy brings a wealth of knowledge from her dual role as an AI ethicist and industry insider, providing an...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>19</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[e22a1a7b-b020-42ad-8794-68e11bf714d3]]></guid>
  <title><![CDATA[Existential Risk in AI with Otto Barten]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">In a world racing toward the development of Artificial General Intelligence (AGI), the balance between innovation and existential risk becomes a pivotal conversation. In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/ottobarten/?originalSubdomain=nl" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Otto Barten</a><span style="background-color: transparent;">, Founder of the </span><a href="https://www.linkedin.com/company/existential-risk-observatory/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Existential Risk Observatory</a><span style="background-color: transparent;">. We focus on the critical issue of AGI and its potential to pose existential risks to humanity. Otto shares valuable insights into the necessity of global policy innovation and raising public awareness to navigate these uncharted waters responsibly.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:18) Public awareness of AI risks is rising rapidly.</span></p><p><span style="background-color: transparent;">(01:39) The Existential Risk Observatory’s mission is to mitigate human extinction risks.</span></p><p><span style="background-color: transparent;">(02:51) The European Union’s political consensus on the EU AI Act.</span></p><p><span style="background-color: transparent;">(04:11) Otto explains multiple AI threat models leading to existential risks.</span></p><p><span style="background-color: transparent;">(07:01) Why distinguish between AGI and current AI capabilities?</span></p><p><span style="background-color: transparent;">(09:18) Recent statements on AGI from Sam Altman and Mark Zuckerberg.</span></p><p><span style="background-color: transparent;">(12:15) The potential dangers of open-sourcing AGI.</span></p><p><span style="background-color: transparent;">(14:17) The current regulatory landscapes and potential improvements.</span></p><p><span style="background-color: transparent;">(17:01) The concept of a “pause button” for AI development is introduced.</span></p><p><span style="background-color: transparent;">(20:13) Balancing AI development with ethical considerations and existential risks.</span></p><p><span style="background-color: transparent;">(23:51) Increasing public and legislative awareness of AI risks.</span></p><p><span style="background-color: transparent;">(29:01) The significance of transparency and accountability in AI development.</span></p><p><br></p><p><strong style="background-color: transparent; color: rgb(13, 13, 13);">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/ottobarten/?originalSubdomain=nl" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Otto Barten</a> - https://www.linkedin.com/in/ottobarten/?originalSubdomain=nl</p><p><a href="https://www.linkedin.com/company/existential-risk-observatory/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Existential Risk Observatory</a> - https://www.linkedin.com/company/existential-risk-observatory/</p><p><span style="background-color: transparent;">European Union AI Act</span></p><p><span style="background-color: transparent;">The Bletchley Process for global AI safety summits</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/07daed2a-d049-4906-82d7-f49ed6d74e40/06e4edb3c1.jpg" />
  <pubDate>Tue, 27 Feb 2024 20:01:21 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="36339086" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/07daed2a-d049-4906-82d7-f49ed6d74e40/episode.mp3" />
  <itunes:title><![CDATA[Existential Risk in AI with Otto Barten]]></itunes:title>
  <itunes:duration>37:51</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">In a world racing toward the development of Artificial General Intelligence (AGI), the balance between innovation and existential risk becomes a pivotal conversation. In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/ottobarten/?originalSubdomain=nl" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Otto Barten</a><span style="background-color: transparent;">, Founder of the </span><a href="https://www.linkedin.com/company/existential-risk-observatory/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Existential Risk Observatory</a><span style="background-color: transparent;">. We focus on the critical issue of AGI and its potential to pose existential risks to humanity. Otto shares valuable insights into the necessity of global policy innovation and raising public awareness to navigate these uncharted waters responsibly.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:18) Public awareness of AI risks is rising rapidly.</span></p><p><span style="background-color: transparent;">(01:39) The Existential Risk Observatory’s mission is to mitigate human extinction risks.</span></p><p><span style="background-color: transparent;">(02:51) The European Union’s political consensus on the EU AI Act.</span></p><p><span style="background-color: transparent;">(04:11) Otto explains multiple AI threat models leading to existential risks.</span></p><p><span style="background-color: transparent;">(07:01) Why distinguish between AGI and current AI capabilities?</span></p><p><span style="background-color: transparent;">(09:18) Recent statements on AGI from Sam Altman and Mark Zuckerberg.</span></p><p><span style="background-color: transparent;">(12:15) The potential dangers of open-sourcing AGI.</span></p><p><span style="background-color: transparent;">(14:17) The current regulatory landscapes and potential improvements.</span></p><p><span style="background-color: transparent;">(17:01) The concept of a “pause button” for AI development is introduced.</span></p><p><span style="background-color: transparent;">(20:13) Balancing AI development with ethical considerations and existential risks.</span></p><p><span style="background-color: transparent;">(23:51) Increasing public and legislative awareness of AI risks.</span></p><p><span style="background-color: transparent;">(29:01) The significance of transparency and accountability in AI development.</span></p><p><br></p><p><strong style="background-color: transparent; color: rgb(13, 13, 13);">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/ottobarten/?originalSubdomain=nl" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Otto Barten</a> - https://www.linkedin.com/in/ottobarten/?originalSubdomain=nl</p><p><a href="https://www.linkedin.com/company/existential-risk-observatory/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Existential Risk Observatory</a> - https://www.linkedin.com/company/existential-risk-observatory/</p><p><span style="background-color: transparent;">European Union AI Act</span></p><p><span style="background-color: transparent;">The Bletchley Process for global AI safety summits</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p><p><br></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">In a world racing toward the development of Artificial General Intelligence (AGI), the balance between innovation and existential risk becomes a pivotal conversation. In this episode, I’m joined by </span><a href="https://www.linkedin.com/in/ottobarten/?originalSubdomain=nl" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Otto Barten</a><span style="background-color: transparent;">, Founder of the </span><a href="https://www.linkedin.com/company/existential-risk-observatory/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Existential Risk Observatory</a><span style="background-color: transparent;">. We focus on the critical issue of artificial general intelligence (AGI) and its potential to pose existential risks to humanity. Otto shares valuable insights into the necessity of global policy innovation and raising public awareness to navigate these uncharted waters responsibly.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:18) Public awareness of AI risks is rising rapidly.</span></p><p><span style="background-color: transparent;">(01:39) The Existential Risk Observatory’s mission is to mitigate human extinction risks.</span></p><p><span style="background-color: transparent;">(02:51) The European Union’s political consensus on the EU AI Act.</span></p><p><span style="background-color: transparent;">(04:11) Otto explains multiple AI threat models leading to existential risks.</span></p><p><span style="background-color: transparent;">(07:01) Why distinguish between AGI and current AI capabilities?</span></p><p><span style="background-color: transparent;">(09:18) Sam Altman and Mark Zuckerberg made recent statements on AGI.</span></p><p><span style="background-color: transparent;">(12:15) The potential dangers of 
open-sourcing AGI.</span></p><p><span style="background-color: transparent;">(14:17) The current regulatory landscapes and potential improvements.</span></p><p><span style="background-color: transparent;">(17:01) The concept of a “pause button” for AI development is introduced.</span></p><p><span style="background-color: transparent;">(20:13) Balancing AI development with ethical considerations and existential risks.</span></p><p><span style="background-color: transparent;">(23:51) Increasing public and legislative awareness of AI risks.</span></p><p><span style="background-color: transparent;">(29:01) The significance of transparency and accountability in AI development.</span></p><p><br></p><p><strong style="background-color: transparent; color: rgb(13, 13, 13);">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/ottobarten/?originalSubdomain=nl" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Otto Barten</a> - https://www.linkedin.com/in/ottobarten/?originalSubdomain=nl</p><p><a href="https://www.linkedin.com/company/existential-risk-observatory/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Existential Risk Observatory</a> - https://www.linkedin.com/company/existential-risk-observatory/</p><p><span style="background-color: transparent;">European Union AI Act - </span></p><p><span style="background-color: transparent;">The Bletchley Process for global AI safety summits - </span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p><p><br></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In a world racing toward the development of Artificial General Intelligence (AGI), the balance between innovation and existential risk becomes a pivotal conversation. In this episode, I’m joined by Otto Barten, Founder of the Existential Risk Obser...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>20</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[8a93eb95-b5ef-4d71-93a5-fafb1c9482c1]]></guid>
  <title><![CDATA[A Vision for a Balanced AI Future with Daniel Jeffries of AI Infrastructure Alliance and Kentauros AI]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I'm joined by </span><a href="https://www.linkedin.com/in/danjeffries/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Daniel Jeffries</a><span style="background-color: transparent;">, Managing Director of the </span><a href="https://www.linkedin.com/company/ai-infrastructure-alliance/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Infrastructure Alliance</a><span style="background-color: transparent;"> and CEO of </span><a href="https://www.linkedin.com/company/kentauros-ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Kentauros</a><span style="background-color: transparent;">, to explore the complexities of AI's potential and the critical need for balanced, forward-thinking legislation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:05) Recent executive orders on AI, watermarking and model size regulation.</span></p><p><span style="background-color: transparent;">(03:54) Autonomous weapons and the need for regulation in areas exempted by governments.</span></p><p><span style="background-color: transparent;">(07:01) Liability in AI-induced harm and the challenge of assigning responsibility.</span></p><p><span style="background-color: transparent;">(07:52) The rapid evolution of AI and the legislative challenge to keep pace.</span></p><p><span style="background-color: transparent;">(10:37) The risk of regulatory capture and the importance of preventing AI monopolies.</span></p><p><span style="background-color: transparent;">(13:29) The role of open source in fostering innovation.</span></p><p><span style="background-color: transparent;">(16:32) Skepticism towards the feasibility of a global consensus on AI regulation.</span></p><p><span style="background-color: 
transparent;">(18:21) Advocacy for industry-specific regulations, emphasizing use-case and industry nuances.</span></p><p><span style="background-color: transparent;">(22:33) Recommendations for policymakers to focus on real-world problems.</span></p><p><br></p><p><strong style="background-color: transparent; color: rgb(13, 13, 13);">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/danjeffries/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Daniel Jeffries</a> - https://www.linkedin.com/in/danjeffries/</p><p><a href="https://www.linkedin.com/company/ai-infrastructure-alliance/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Infrastructure Alliance</a> - https://www.linkedin.com/company/ai-infrastructure-alliance/</p><p><a href="https://www.linkedin.com/company/kentauros-ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Kentauros</a> - https://www.linkedin.com/company/kentauros-ai/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/60ece08b-93e2-4c29-8d75-4573fc189103/d1b08ac87e.jpg" />
  <pubDate>Fri, 16 Feb 2024 21:42:21 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="28254501" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/60ece08b-93e2-4c29-8d75-4573fc189103/episode.mp3" />
  <itunes:title><![CDATA[A Vision for a Balanced AI Future with Daniel Jeffries of AI Infrastructure Alliance and Kentauros AI]]></itunes:title>
  <itunes:duration>29:25</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I'm joined by </span><a href="https://www.linkedin.com/in/danjeffries/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Daniel Jeffries</a><span style="background-color: transparent;">, Managing Director of the </span><a href="https://www.linkedin.com/company/ai-infrastructure-alliance/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Infrastructure Alliance</a><span style="background-color: transparent;"> and CEO of </span><a href="https://www.linkedin.com/company/kentauros-ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Kentauros</a><span style="background-color: transparent;">, to explore the complexities of AI's potential and the critical need for balanced, forward-thinking legislation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:05) Recent executive orders on AI, watermarking and model size regulation.</span></p><p><span style="background-color: transparent;">(03:54) Autonomous weapons and the need for regulation in areas exempted by governments.</span></p><p><span style="background-color: transparent;">(07:01) Liability in AI-induced harm and the challenge of assigning responsibility.</span></p><p><span style="background-color: transparent;">(07:52) The rapid evolution of AI and the legislative challenge to keep pace.</span></p><p><span style="background-color: transparent;">(10:37) The risk of regulatory capture and the importance of preventing AI monopolies.</span></p><p><span style="background-color: transparent;">(13:29) The role of open source in fostering innovation.</span></p><p><span style="background-color: transparent;">(16:32) Skepticism towards the feasibility of a global consensus on AI regulation.</span></p><p><span 
style="background-color: transparent;">(18:21) Advocacy for industry-specific regulations, emphasizing use-case and industry nuances.</span></p><p><span style="background-color: transparent;">(22:33) Recommendations for policymakers to focus on real-world problems.</span></p><p><br></p><p><strong style="background-color: transparent; color: rgb(13, 13, 13);">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/danjeffries/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Daniel Jeffries</a> - https://www.linkedin.com/in/danjeffries/</p><p><a href="https://www.linkedin.com/company/ai-infrastructure-alliance/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Infrastructure Alliance</a> - https://www.linkedin.com/company/ai-infrastructure-alliance/</p><p><a href="https://www.linkedin.com/company/kentauros-ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Kentauros</a> - https://www.linkedin.com/company/kentauros-ai/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I'm joined by </span><a href="https://www.linkedin.com/in/danjeffries/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Daniel Jeffries</a><span style="background-color: transparent;">, Managing Director of the </span><a href="https://www.linkedin.com/company/ai-infrastructure-alliance/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Infrastructure Alliance</a><span style="background-color: transparent;"> and CEO of </span><a href="https://www.linkedin.com/company/kentauros-ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Kentauros</a><span style="background-color: transparent;">, to explore the complexities of AI's potential and the critical need for balanced, forward-thinking legislation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:05) Recent executive orders on AI, watermarking and model size regulation.</span></p><p><span style="background-color: transparent;">(03:54) Autonomous weapons and the need for regulation in areas exempted by governments.</span></p><p><span style="background-color: transparent;">(07:01) Liability in AI-induced harm and the challenge of assigning responsibility.</span></p><p><span style="background-color: transparent;">(07:52) The rapid evolution of AI and the legislative challenge to keep pace.</span></p><p><span style="background-color: transparent;">(10:37) The risk of regulatory capture and the importance of preventing AI monopolies.</span></p><p><span style="background-color: transparent;">(13:29) The role of open source in fostering innovation.</span></p><p><span style="background-color: transparent;">(16:32) Skepticism towards the feasibility of a global consensus on AI regulation.</span></p><p><span 
style="background-color: transparent;">(18:21) Advocacy for industry-specific regulations, emphasizing use-case and industry nuances.</span></p><p><span style="background-color: transparent;">(22:33) Recommendations for policymakers to focus on real-world problems.</span></p><p><br></p><p><strong style="background-color: transparent; color: rgb(13, 13, 13);">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/danjeffries/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Daniel Jeffries</a> - https://www.linkedin.com/in/danjeffries/</p><p><a href="https://www.linkedin.com/company/ai-infrastructure-alliance/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI Infrastructure Alliance</a> - https://www.linkedin.com/company/ai-infrastructure-alliance/</p><p><a href="https://www.linkedin.com/company/kentauros-ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Kentauros</a> - https://www.linkedin.com/company/kentauros-ai/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I'm joined by Daniel Jeffries, Managing Director of the AI Infrastructure Alliance and CEO of Kentauros, to explore the complexities of AI's potential and the critical need for balanced, forward-thinking legislation. Key Takeaways: (...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>18</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[e2bb93e3-fab4-4c77-8ba8-5e15a2741ae6]]></guid>
  <title><![CDATA[Crafting Equitable AI Policies for Work and Education with Alex Swartsel]]></title>
  <description><![CDATA[<p><span style="background-color: transparent; color: rgb(13, 13, 13);">On this episode, I welcome </span><a href="https://www.linkedin.com/in/alexswartsel/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Alex Swartsel</a><span style="background-color: transparent; color: rgb(13, 13, 13);">, Managing Director of Insights</span><span style="background-color: transparent;"> at </span><span style="background-color: transparent; color: rgb(13, 13, 13);">JFFLabs. We discuss AI’s role in the employment landscape’s transformation, highlighting the delicate balance between leveraging AI for growth and mitigating its potential disruptions.</span></p><p><br></p><p><strong style="background-color: transparent; color: rgb(13, 13, 13);">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:16) AI’s transformative impact on employment.</span></p><p><span style="background-color: transparent;">(02:35) The role AI plays in job transformation and skill enhancement.</span></p><p><span style="background-color: transparent;">(04:30) The automation and augmentation of tasks by AI.</span></p><p><span style="background-color: transparent;">(06:10) Rethinking education and skill development in the age of AI.</span></p><p><span style="background-color: transparent;">(09:22) The significance of soft skills in conjunction with technical knowledge.</span></p><p><span style="background-color: transparent;">(11:00) AI’s potential to customize learning experiences.</span></p><p><span style="background-color: transparent;">(17:20) The pivotal role of community colleges in workforce training.</span></p><p><span style="background-color: transparent;">(21:33) The imperative of reskilling and the government’s role.</span></p><p><span style="background-color: transparent;">(29:51) Using AI for personalized education and career guidance.</span></p><p><span style="background-color: transparent;">(35:09) 
Promoting AI as a tool for human advancement.</span></p><p><br></p><p><strong style="background-color: transparent; color: rgb(13, 13, 13);">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/alexswartsel/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Alex Swartsel</a> - https://www.linkedin.com/in/alexswartsel/</p><p><a href="https://www.jff.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">JFFLabs’ New Center for Artificial Intelligence and the Future of Work</a> - https://www.jff.org/</p><p><a href="https://info.jff.org/ai-ready" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">The AI-Ready Workforce</a><span style="background-color: transparent;"> report - </span>https://info.jff.org/ai-ready</p><p><a href="https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">IMF Report on AI’s Impact on Jobs</a> - https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/c032303f-5e5b-4877-8f5b-673a11925e56/594edd6ade.jpg" />
  <pubDate>Wed, 14 Feb 2024 10:39:26 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="34395195" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/c032303f-5e5b-4877-8f5b-673a11925e56/episode.mp3" />
  <itunes:title><![CDATA[Crafting Equitable AI Policies for Work and Education with Alex Swartsel]]></itunes:title>
  <itunes:duration>35:49</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent; color: rgb(13, 13, 13);">On this episode, I welcome </span><a href="https://www.linkedin.com/in/alexswartsel/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Alex Swartsel</a><span style="background-color: transparent; color: rgb(13, 13, 13);">, Managing Director of Insights</span><span style="background-color: transparent;"> at </span><span style="background-color: transparent; color: rgb(13, 13, 13);">JFFLabs. We discuss AI’s role in the employment landscape’s transformation, highlighting the delicate balance between leveraging AI for growth and mitigating its potential disruptions.</span></p><p><br></p><p><strong style="background-color: transparent; color: rgb(13, 13, 13);">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:16) AI’s transformative impact on employment.</span></p><p><span style="background-color: transparent;">(02:35) The role AI plays in job transformation and skill enhancement.</span></p><p><span style="background-color: transparent;">(04:30) The automation and augmentation of tasks by AI.</span></p><p><span style="background-color: transparent;">(06:10) Rethinking education and skill development in the age of AI.</span></p><p><span style="background-color: transparent;">(09:22) The significance of soft skills in conjunction with technical knowledge.</span></p><p><span style="background-color: transparent;">(11:00) AI’s potential to customize learning experiences.</span></p><p><span style="background-color: transparent;">(17:20) The pivotal role of community colleges in workforce training.</span></p><p><span style="background-color: transparent;">(21:33) The imperative of reskilling and the government’s role.</span></p><p><span style="background-color: transparent;">(29:51) Using AI for personalized education and career guidance.</span></p><p><span style="background-color: transparent;">(35:09) 
Promoting AI as a tool for human advancement.</span></p><p><br></p><p><strong style="background-color: transparent; color: rgb(13, 13, 13);">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/alexswartsel/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Alex Swartsel</a> - https://www.linkedin.com/in/alexswartsel/</p><p><a href="https://www.jff.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">JFFLabs’ New Center for Artificial Intelligence and the Future of Work</a> - https://www.jff.org/</p><p><a href="https://info.jff.org/ai-ready" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">The AI-Ready Workforce</a><span style="background-color: transparent;"> report - </span>https://info.jff.org/ai-ready</p><p><a href="https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">IMF Report on AI’s Impact on Jobs</a> - https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent; color: rgb(13, 13, 13);">On this episode, I welcome </span><a href="https://www.linkedin.com/in/alexswartsel/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Alex Swartsel</a><span style="background-color: transparent; color: rgb(13, 13, 13);">, Managing Director of Insights</span><span style="background-color: transparent;"> at </span><span style="background-color: transparent; color: rgb(13, 13, 13);">JFFLabs. We discuss AI’s role in the employment landscape’s transformation, highlighting the delicate balance between leveraging AI for growth and mitigating its potential disruptions.</span></p><p><br></p><p><strong style="background-color: transparent; color: rgb(13, 13, 13);">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:16) AI’s transformative impact on employment.</span></p><p><span style="background-color: transparent;">(02:35) The role AI plays in job transformation and skill enhancement.</span></p><p><span style="background-color: transparent;">(04:30) The automation and augmentation of tasks by AI.</span></p><p><span style="background-color: transparent;">(06:10) Rethinking education and skill development in the age of AI.</span></p><p><span style="background-color: transparent;">(09:22) The significance of soft skills in conjunction with technical knowledge.</span></p><p><span style="background-color: transparent;">(11:00) AI’s potential to customize learning experiences.</span></p><p><span style="background-color: transparent;">(17:20) The pivotal role of community colleges in workforce training.</span></p><p><span style="background-color: transparent;">(21:33) The imperative of reskilling and the government’s role.</span></p><p><span style="background-color: transparent;">(29:51) Using AI for personalized education and career guidance.</span></p><p><span style="background-color: transparent;">(35:09) 
Promoting AI as a tool for human advancement.</span></p><p><br></p><p><strong style="background-color: transparent; color: rgb(13, 13, 13);">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/alexswartsel/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Alex Swartsel</a> - https://www.linkedin.com/in/alexswartsel/</p><p><a href="https://www.jff.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">JFFLabs’ New Center for Artificial Intelligence and the Future of Work</a> - https://www.jff.org/</p><p><a href="https://info.jff.org/ai-ready" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">The AI-Ready Workforce</a><span style="background-color: transparent;"> report - </span>https://info.jff.org/ai-ready</p><p><a href="https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">IMF Report on AI’s Impact on Jobs</a> - https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I welcome Alex Swartsel, Managing Director of Insights at JFFLabs. We discuss AI’s role in the employment landscape’s transformation, highlighting the delicate balance between leveraging AI for growth and mitigating its potential d...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>17</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[0d0f717c-9fc7-4d12-84fe-5f494425ff48]]></guid>
  <title><![CDATA[Envisioning a Harmonious Future Between AI and Humanity with Avi Loeb]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I'm joined by Professor Avi Loeb, Professor of Science at Harvard University, Director of the Institute for Theory and Computation within the Harvard Smithsonian Center for Astrophysics, Head of the Galileo Project, Chair of Harvard's Department of Astronomy and best-selling author. Avi provides an astrophysicist's perspective on the ethical and regulatory frameworks necessary to ensure the responsible use of artificial intelligence.&nbsp;</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:36) The essential role of academia in fostering dialogue across differing viewpoints.</span></p><p><span style="background-color: transparent;">(06:58) Professor Loeb's concerns about AI's unpredictability.</span></p><p><span style="background-color: transparent;">(09:18) The importance of training AI systems with value-aligned datasets to moderate societal risks.</span></p><p><span style="background-color: transparent;">(10:59) Assigning responsibility for AI's actions.</span></p><p><span style="background-color: transparent;">(14:29) The need for international treaties to regulate AI's use in national security and warfare.</span></p><p><span style="background-color: transparent;">(17:58) Addressing internal disinformation and the role of AI in amplifying societal divisions.</span></p><p><span style="background-color: transparent;">(22:40) Engaging the public in AI regulation discussions to ensure diverse perspectives.</span></p><p><span style="background-color: transparent;">(26:37) The potential for AI to revolutionize space exploration and decision-making in remote environments.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><br></p><p><a href="https://projects.iq.harvard.edu/galileo/home" target="_blank" 
style="background-color: transparent; color: rgb(17, 85, 204);">Harvard University's Galileo Project</a> - https://projects.iq.harvard.edu/galileo/home</p><p><a href="https://rubinobservatory.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Rubin Observatory</a> - https://rubinobservatory.org/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/377909c5-107d-4392-8b66-cf148dc9cc81/5b8073ed01.jpg" />
  <pubDate>Thu, 08 Feb 2024 22:39:54 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="34155286" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/377909c5-107d-4392-8b66-cf148dc9cc81/episode.mp3" />
  <itunes:title><![CDATA[Envisioning a Harmonious Future Between AI and Humanity with Avi Loeb]]></itunes:title>
  <itunes:duration>35:34</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I'm joined by Professor Avi Loeb, Professor of Science at Harvard University, Director of the Institute for Theory and Computation within the Harvard Smithsonian Center for Astrophysics, Head of the Galileo Project, Chair of Harvard's Department of Astronomy and best-selling author. Avi provides an astrophysicist's perspective on the ethical and regulatory frameworks necessary to ensure the responsible use of artificial intelligence.&nbsp;</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:36) The essential role of academia in fostering dialogue across differing viewpoints.</span></p><p><span style="background-color: transparent;">(06:58) Professor Loeb's concerns about AI's unpredictability.</span></p><p><span style="background-color: transparent;">(09:18) The importance of training AI systems with value-aligned datasets to moderate societal risks.</span></p><p><span style="background-color: transparent;">(10:59) Assigning responsibility for AI's actions.</span></p><p><span style="background-color: transparent;">(14:29) The need for international treaties to regulate AI's use in national security and warfare.</span></p><p><span style="background-color: transparent;">(17:58) Addressing internal disinformation and the role of AI in amplifying societal divisions.</span></p><p><span style="background-color: transparent;">(22:40) Engaging the public in AI regulation discussions to ensure diverse perspectives.</span></p><p><span style="background-color: transparent;">(26:37) The potential for AI to revolutionize space exploration and decision-making in remote environments.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><br></p><p><a href="https://projects.iq.harvard.edu/galileo/home" target="_blank" 
style="background-color: transparent; color: rgb(17, 85, 204);">Harvard University's Galileo Project</a> - https://projects.iq.harvard.edu/galileo/home</p><p><a href="https://rubinobservatory.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Rubin Observatory</a> - https://rubinobservatory.org/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I'm joined by Professor Avi Loeb, Professor of Science at Harvard University, Director of the Institute for Theory and Computation within the Harvard Smithsonian Center for Astrophysics, Head of the Galileo Project, Chair of Harvard's Department of Astronomy and best-selling author. Avi provides an astrophysicist's perspective on the ethical and regulatory frameworks necessary to ensure the responsible use of artificial intelligence.&nbsp;</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:36) The essential role of academia in fostering dialogue across differing viewpoints.</span></p><p><span style="background-color: transparent;">(06:58) Professor Loeb's concerns about AI's unpredictability.</span></p><p><span style="background-color: transparent;">(09:18) The importance of training AI systems with value-aligned datasets to moderate societal risks.</span></p><p><span style="background-color: transparent;">(10:59) Assigning responsibility for AI's actions.</span></p><p><span style="background-color: transparent;">(14:29) The need for international treaties to regulate AI's use in national security and warfare.</span></p><p><span style="background-color: transparent;">(17:58) Addressing internal disinformation and the role of AI in amplifying societal divisions.</span></p><p><span style="background-color: transparent;">(22:40) Engaging the public in AI regulation discussions to ensure diverse perspectives.</span></p><p><span style="background-color: transparent;">(26:37) The potential for AI to revolutionize space exploration and decision-making in remote environments.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><br></p><p><a href="https://projects.iq.harvard.edu/galileo/home" target="_blank" 
style="background-color: transparent; color: rgb(17, 85, 204);">Harvard University's Galileo Project</a> - https://projects.iq.harvard.edu/galileo/home</p><p><a href="https://rubinobservatory.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Rubin Observatory</a> - https://rubinobservatory.org/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I'm joined by Professor Avi Loeb, Professor of Science at Harvard University, Director of the Institute for Theory and Computation within the Harvard Smithsonian Center for Astrophysics, Head of the Galileo Project, Chair of Harvar...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>16</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[42417909-685b-429d-a711-d45f91d06320]]></guid>
  <title><![CDATA[The Potential Effect of AI and Autonomous Flying Robots on National Security with Timothy Bean of Fortem Technologies]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">In this latest episode, I'm joined by </span><a href="https://www.linkedin.com/in/meghalred/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Timothy Bean</a><span style="background-color: transparent;">, President and COO of </span><a href="https://www.linkedin.com/company/fortem-technologies/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Fortem Technologies</a><span style="background-color: transparent;">, to explore the intricate interplay between artificial intelligence, national security and the legislative landscape that surrounds it.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:42) The evolution of national security tools and the advent of AI.</span></p><p><span style="background-color: transparent;">(03:49) The importance of data privacy in AI legislation and national security.</span></p><p><span style="background-color: transparent;">(05:07) The challenges of regulating AI in a rapidly advancing technological landscape.</span></p><p><span style="background-color: transparent;">(10:13) How legislative bodies should adapt and embrace AI to keep pace with technological advancements.</span></p><p><span style="background-color: transparent;">(12:13) The impending impact of quantum computing on AI and national security.</span></p><p><span style="background-color: transparent;">(15:38) The US faces an arms race in AI and quantum computing against global competitors like China and Russia.&nbsp;</span></p><p><span style="background-color: transparent;">(17:25) Public-private partnerships in enhancing national security through AI.</span></p><p><span style="background-color: transparent;">(18:39) The role of transparency and accountability in AI applications for national security.</span></p><p><span 
style="background-color: transparent;">(22:16) Debating the merits of open-sourcing AI models in the context of national security.</span></p><p><span style="background-color: transparent;">(24:55) The significance of educating the public on data privacy and the potential of AI.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/meghalred/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Timothy Bean</a> -</p><p>https://www.linkedin.com/in/meghalred/</p><p><a href="https://www.linkedin.com/company/fortem-technologies/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Fortem Technologies</a> -</p><p>https://www.linkedin.com/company/fortem-technologies/</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> -</p><p>https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.ai.mil/blog_02_26_21-ai_ethics_principles-highlighting_the_progress_and_future_of_responsible_ai.html" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Department of Defense AI Ethics Principles</a> -</p><p>https://www.ai.mil/blog_02_26_21-ai_ethics_principles-highlighting_the_progress_and_future_of_responsible_ai.html</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/9758cf23-2a4b-4e06-9f78-552981c062b0/8010ab5d4d.jpg" />
  <pubDate>Tue, 06 Feb 2024 04:30:09 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="32132782" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/9758cf23-2a4b-4e06-9f78-552981c062b0/episode.mp3" />
  <itunes:title><![CDATA[The Potential Effect of AI and Autonomous Flying Robots on National Security with Timothy Bean of Fortem Technologies]]></itunes:title>
  <itunes:duration>33:28</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">In this latest episode, I'm joined by </span><a href="https://www.linkedin.com/in/meghalred/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Timothy Bean</a><span style="background-color: transparent;">, President and COO of </span><a href="https://www.linkedin.com/company/fortem-technologies/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Fortem Technologies</a><span style="background-color: transparent;">, to explore the intricate interplay between artificial intelligence, national security and the legislative landscape that surrounds it.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:42) The evolution of national security tools and the advent of AI.</span></p><p><span style="background-color: transparent;">(03:49) The importance of data privacy in AI legislation and national security.</span></p><p><span style="background-color: transparent;">(05:07) The challenges of regulating AI in a rapidly advancing technological landscape.</span></p><p><span style="background-color: transparent;">(10:13) How legislative bodies should adapt and embrace AI to keep pace with technological advancements.</span></p><p><span style="background-color: transparent;">(12:13) The impending impact of quantum computing on AI and national security.</span></p><p><span style="background-color: transparent;">(15:38) The US faces an arms race in AI and quantum computing against global competitors like China and Russia.&nbsp;</span></p><p><span style="background-color: transparent;">(17:25) Public-private partnerships in enhancing national security through AI.</span></p><p><span style="background-color: transparent;">(18:39) The role of transparency and accountability in AI applications for national security.</span></p><p><span 
style="background-color: transparent;">(22:16) Debating the merits of open-sourcing AI models in the context of national security.</span></p><p><span style="background-color: transparent;">(24:55) The significance of educating the public on data privacy and the potential of AI.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/meghalred/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Timothy Bean</a> -</p><p>https://www.linkedin.com/in/meghalred/</p><p><a href="https://www.linkedin.com/company/fortem-technologies/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Fortem Technologies</a> -</p><p>https://www.linkedin.com/company/fortem-technologies/</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> -</p><p>https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.ai.mil/blog_02_26_21-ai_ethics_principles-highlighting_the_progress_and_future_of_responsible_ai.html" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Department of Defense AI Ethics Principles</a> -</p><p>https://www.ai.mil/blog_02_26_21-ai_ethics_principles-highlighting_the_progress_and_future_of_responsible_ai.html</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">In this latest episode, I'm joined by </span><a href="https://www.linkedin.com/in/meghalred/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Timothy Bean</a><span style="background-color: transparent;">, President and COO of </span><a href="https://www.linkedin.com/company/fortem-technologies/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Fortem Technologies</a><span style="background-color: transparent;">, to explore the intricate interplay between artificial intelligence, national security and the legislative landscape that surrounds it.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:42) The evolution of national security tools and the advent of AI.</span></p><p><span style="background-color: transparent;">(03:49) The importance of data privacy in AI legislation and national security.</span></p><p><span style="background-color: transparent;">(05:07) The challenges of regulating AI in a rapidly advancing technological landscape.</span></p><p><span style="background-color: transparent;">(10:13) How legislative bodies should adapt and embrace AI to keep pace with technological advancements.</span></p><p><span style="background-color: transparent;">(12:13) The impending impact of quantum computing on AI and national security.</span></p><p><span style="background-color: transparent;">(15:38) The US faces an arms race in AI and quantum computing against global competitors like China and Russia.&nbsp;</span></p><p><span style="background-color: transparent;">(17:25) Public-private partnerships in enhancing national security through AI.</span></p><p><span style="background-color: transparent;">(18:39) The role of transparency and accountability in AI applications for national security.</span></p><p><span 
style="background-color: transparent;">(22:16) Debating the merits of open-sourcing AI models in the context of national security.</span></p><p><span style="background-color: transparent;">(24:55) The significance of educating the public on data privacy and the potential of AI.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/meghalred/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Timothy Bean</a> -</p><p>https://www.linkedin.com/in/meghalred/</p><p><a href="https://www.linkedin.com/company/fortem-technologies/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Fortem Technologies</a> -</p><p>https://www.linkedin.com/company/fortem-technologies/</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> -</p><p>https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.ai.mil/blog_02_26_21-ai_ethics_principles-highlighting_the_progress_and_future_of_responsible_ai.html" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Department of Defense AI Ethics Principles</a> -</p><p>https://www.ai.mil/blog_02_26_21-ai_ethics_principles-highlighting_the_progress_and_future_of_responsible_ai.html</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. 
And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this latest episode, I'm joined by Timothy Bean, President and COO of Fortem Technologies, to explore the intricate interplay between artificial intelligence, national security and the legislative landscape that surrounds it. Key Takeaways: (02:42...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>15</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[895d7dbc-37e4-478f-9983-a4947804e337]]></guid>
  <title><![CDATA[AI Education and Policy with Nathan Grant of TeachAI]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I'm thrilled to chat with </span><a href="https://www.linkedin.com/in/nathan-grant-t/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nathan Grant</a><span style="background-color: transparent;">, Policy Fellow of TeachAI, an initiative championed by notable organizations including </span><a href="https://www.linkedin.com/company/code-org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Code.org</a><span style="background-color: transparent;">, ETS, ISTE, Khan Academy and the World Economic Forum. Nathan shares invaluable insights on integrating AI education within K-12, emphasizing the importance of a balanced approach to harness AI's potential while mitigating its risks.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:16) Introduction of Nathan Grant and the TeachAI initiative.</span></p><p><span style="background-color: transparent;">(02:14) TeachAI's broad coalition, including tech giants and educational stakeholders.</span></p><p><span style="background-color: transparent;">(03:45) Perspectives on President Biden's Executive Order on AI.</span></p><p><span style="background-color: transparent;">(06:27) AI literacy's critical role across all subjects in K-12 education.</span></p><p><span style="background-color: transparent;">(07:30) Addressing the digital and AI divide for equitable education.</span></p><p><span style="background-color: transparent;">(09:03) Engaging students in the AI legislation dialogue.</span></p><p><span style="background-color: transparent;">(12:44) Concerns over banning AI tools like ChatGPT in schools.</span></p><p><span style="background-color: transparent;">(14:33) The risk of AI tool monopolization by a few large tech companies.</span></p><p><span 
style="background-color: transparent;">(16:00) The importance of education in demonstrating AI's potential and ensuring its responsible use.</span></p><p><span style="background-color: transparent;">(18:59) The potential for standardized AI education guidelines.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/nathan-grant-t/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nathan Grant</a> - https://www.linkedin.com/in/nathan-grant-t/</p><p><a href="https://www.linkedin.com/company/code-org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Code.org</a> - https://www.linkedin.com/company/code-org/</p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden's Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/</p><p><a href="https://www.teachai.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">TeachAI initiative</a> - https://www.teachai.org/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/2fa20a99-2e96-46ab-901b-ff3b29600a55/818abe970e.jpg" />
  <pubDate>Fri, 02 Feb 2024 23:55:40 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="25809477" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/2fa20a99-2e96-46ab-901b-ff3b29600a55/episode.mp3" />
  <itunes:title><![CDATA[AI Education and Policy with Nathan Grant of TeachAI]]></itunes:title>
  <itunes:duration>26:53</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I'm thrilled to chat with </span><a href="https://www.linkedin.com/in/nathan-grant-t/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nathan Grant</a><span style="background-color: transparent;">, Policy Fellow of TeachAI, an initiative championed by notable organizations including </span><a href="https://www.linkedin.com/company/code-org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Code.org</a><span style="background-color: transparent;">, ETS, ISTE, Khan Academy and the World Economic Forum. Nathan shares invaluable insights on integrating AI education within K-12, emphasizing the importance of a balanced approach to harness AI's potential while mitigating its risks.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:16) Introduction of Nathan Grant and the TeachAI initiative.</span></p><p><span style="background-color: transparent;">(02:14) TeachAI's broad coalition, including tech giants and educational stakeholders.</span></p><p><span style="background-color: transparent;">(03:45) Perspectives on President Biden's Executive Order on AI.</span></p><p><span style="background-color: transparent;">(06:27) AI literacy's critical role across all subjects in K-12 education.</span></p><p><span style="background-color: transparent;">(07:30) Addressing the digital and AI divide for equitable education.</span></p><p><span style="background-color: transparent;">(09:03) Engaging students in the AI legislation dialogue.</span></p><p><span style="background-color: transparent;">(12:44) Concerns over banning AI tools like ChatGPT in schools.</span></p><p><span style="background-color: transparent;">(14:33) The risk of AI tool monopolization by a few large tech companies.</span></p><p><span 
style="background-color: transparent;">(16:00) The importance of education in demonstrating AI's potential and ensuring its responsible use.</span></p><p><span style="background-color: transparent;">(18:59) The potential for standardized AI education guidelines.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/nathan-grant-t/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nathan Grant</a> - https://www.linkedin.com/in/nathan-grant-t/</p><p><a href="https://www.linkedin.com/company/code-org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Code.org</a> - https://www.linkedin.com/company/code-org/</p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden's Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/</p><p><a href="https://www.teachai.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">TeachAI initiative</a> - https://www.teachai.org/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I'm thrilled to chat with </span><a href="https://www.linkedin.com/in/nathan-grant-t/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nathan Grant</a><span style="background-color: transparent;">, Policy Fellow of TeachAI, an initiative championed by notable organizations including </span><a href="https://www.linkedin.com/company/code-org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Code.org</a><span style="background-color: transparent;">, ETS, ISTE, Khan Academy and the World Economic Forum. Nathan shares invaluable insights on integrating AI education within K-12, emphasizing the importance of a balanced approach to harness AI's potential while mitigating its risks.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:16) Introduction of Nathan Grant and the TeachAI initiative.</span></p><p><span style="background-color: transparent;">(02:14) TeachAI's broad coalition, including tech giants and educational stakeholders.</span></p><p><span style="background-color: transparent;">(03:45) Perspectives on President Biden's Executive Order on AI.</span></p><p><span style="background-color: transparent;">(06:27) AI literacy's critical role across all subjects in K-12 education.</span></p><p><span style="background-color: transparent;">(07:30) Addressing the digital and AI divide for equitable education.</span></p><p><span style="background-color: transparent;">(09:03) Engaging students in the AI legislation dialogue.</span></p><p><span style="background-color: transparent;">(12:44) Concerns over banning AI tools like ChatGPT in schools.</span></p><p><span style="background-color: transparent;">(14:33) The risk of AI tool monopolization by a few large tech companies.</span></p><p><span 
style="background-color: transparent;">(16:00) The importance of education in demonstrating AI's potential and ensuring its responsible use.</span></p><p><span style="background-color: transparent;">(18:59) The potential for standardized AI education guidelines.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/nathan-grant-t/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nathan Grant</a> - https://www.linkedin.com/in/nathan-grant-t/</p><p><a href="https://www.linkedin.com/company/code-org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Code.org</a> - https://www.linkedin.com/company/code-org/</p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden's Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/</p><p><a href="https://www.teachai.org/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">TeachAI initiative</a> - https://www.teachai.org/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I'm thrilled to chat with Nathan Grant, Policy Fellow of TeachAI, an initiative championed by notable organizations including Code.org, ETS, ISTE, Khan Academy and the World Economic Forum. Nathan shares invaluable insights on inte...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>14</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[91219c96-7bb8-44f6-a38a-7111c493055f]]></guid>
  <title><![CDATA[Unpacking AI's Ethical Implications and Future with Expert Beth Rudden]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">In a world where AI shapes our daily lives, ethical considerations are paramount. In this episode, I have the pleasure of speaking with </span><a href="https://www.linkedin.com/in/brudden/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Beth Rudden</a><span style="background-color: transparent;">, CEO of </span><a href="https://www.linkedin.com/company/bast-ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bast AI</a><span style="background-color: transparent;"> and a trailblazer in AI ethics. Her journey from IBM to leading Bast AI offers a unique lens on the intricate relationship between AI, ethics and technology.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:25) Insights into diverse perspectives on AI regulation.</span></p><p><span style="background-color: transparent;">(02:24) Beth discusses the ethical risks in AI development.</span></p><p><span style="background-color: transparent;">(03:38) The importance of education in AI ethics and technology.</span></p><p><span style="background-color: transparent;">(05:05) Emphasizing explainable AI in regulation.</span></p><p><span style="background-color: transparent;">(06:35) Discussing the role of data privacy and dignity.</span></p><p><span style="background-color: transparent;">(09:01) The necessity of transparency in AI systems.</span></p><p><span style="background-color: transparent;">(12:16) The impact of AI on social media and communication.</span></p><p><span style="background-color: transparent;">(15:33) Core ethical principles in AI development.</span></p><p><span style="background-color: transparent;">(19:25) The role of accountability in AI systems.</span></p><p><span style="background-color: transparent;">(22:09) The concept of AI as a community 
utility.</span></p><p><span style="background-color: transparent;">(26:39) Beth's views on creating unbiased AI systems.</span></p><p><span style="background-color: transparent;">(30:17) The importance of human rights and privacy in AI.</span></p><p><span style="background-color: transparent;">(34:27) Addressing AI's role in societal issues.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/brudden/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Beth Rudden</a> - https://www.linkedin.com/in/brudden/</p><p><a href="https://www.penguinrandomhouse.com/books/670356/unmasking-ai-by-joy-buolamwini/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Joy Buolamwini's "Unmasking AI"</a> - https://www.penguinrandomhouse.com/books/670356/unmasking-ai-by-joy-buolamwini/</p><p><a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206</p><p><a href="https://bast.ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bast AI Website</a> - https://bast.ai/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/00a6db2b-b01d-45ed-9a14-47c47fbdd6ca/94acb70526.jpg" />
  <pubDate>Thu, 01 Feb 2024 21:34:54 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="36851541" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/00a6db2b-b01d-45ed-9a14-47c47fbdd6ca/episode.mp3" />
  <itunes:title><![CDATA[Unpacking AI's Ethical Implications and Future with Expert Beth Rudden]]></itunes:title>
  <itunes:duration>38:23</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">In a world where AI shapes our daily lives, ethical considerations are paramount. In this episode, I have the pleasure of speaking with </span><a href="https://www.linkedin.com/in/brudden/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Beth Rudden</a><span style="background-color: transparent;">, CEO of </span><a href="https://www.linkedin.com/company/bast-ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bast AI</a><span style="background-color: transparent;"> and a trailblazer in AI ethics. Her journey from IBM to leading Bast AI offers a unique lens on the intricate relationship between AI, ethics and technology.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:25) Insights into diverse perspectives on AI regulation.</span></p><p><span style="background-color: transparent;">(02:24) Beth discusses the ethical risks in AI development.</span></p><p><span style="background-color: transparent;">(03:38) The importance of education in AI ethics and technology.</span></p><p><span style="background-color: transparent;">(05:05) Emphasizing explainable AI in regulation.</span></p><p><span style="background-color: transparent;">(06:35) Discussing the role of data privacy and dignity.</span></p><p><span style="background-color: transparent;">(09:01) The necessity of transparency in AI systems.</span></p><p><span style="background-color: transparent;">(12:16) The impact of AI on social media and communication.</span></p><p><span style="background-color: transparent;">(15:33) Core ethical principles in AI development.</span></p><p><span style="background-color: transparent;">(19:25) The role of accountability in AI systems.</span></p><p><span style="background-color: transparent;">(22:09) The concept of AI as a community 
utility.</span></p><p><span style="background-color: transparent;">(26:39) Beth's views on creating unbiased AI systems.</span></p><p><span style="background-color: transparent;">(30:17) The importance of human rights and privacy in AI.</span></p><p><span style="background-color: transparent;">(34:27) Addressing AI's role in societal issues.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/brudden/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Beth Rudden</a> - https://www.linkedin.com/in/brudden/</p><p><a href="https://www.penguinrandomhouse.com/books/670356/unmasking-ai-by-joy-buolamwini/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Joy Buolamwini's "Unmasking AI"</a> - https://www.penguinrandomhouse.com/books/670356/unmasking-ai-by-joy-buolamwini/</p><p><a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206</p><p><a href="https://bast.ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bast AI Website</a> - https://bast.ai/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">In a world where AI shapes our daily lives, ethical considerations are paramount. In this episode, I have the pleasure of speaking with </span><a href="https://www.linkedin.com/in/brudden/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Beth Rudden</a><span style="background-color: transparent;">, CEO of </span><a href="https://www.linkedin.com/company/bast-ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bast AI</a><span style="background-color: transparent;"> and a trailblazer in AI ethics. Her journey from IBM to leading Bast AI offers a unique lens on the intricate relationship between AI, ethics and technology.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:25) Insights into diverse perspectives on AI regulation.</span></p><p><span style="background-color: transparent;">(02:24) Beth discusses the ethical risks in AI development.</span></p><p><span style="background-color: transparent;">(03:38) The importance of education in AI ethics and technology.</span></p><p><span style="background-color: transparent;">(05:05) Emphasizing explainable AI in regulation.</span></p><p><span style="background-color: transparent;">(06:35) Discussing the role of data privacy and dignity.</span></p><p><span style="background-color: transparent;">(09:01) The necessity of transparency in AI systems.</span></p><p><span style="background-color: transparent;">(12:16) The impact of AI on social media and communication.</span></p><p><span style="background-color: transparent;">(15:33) Core ethical principles in AI development.</span></p><p><span style="background-color: transparent;">(19:25) The role of accountability in AI systems.</span></p><p><span style="background-color: transparent;">(22:09) The concept of AI as a community 
utility.</span></p><p><span style="background-color: transparent;">(26:39) Beth's views on creating unbiased AI systems.</span></p><p><span style="background-color: transparent;">(30:17) The importance of human rights and privacy in AI.</span></p><p><span style="background-color: transparent;">(34:27) Addressing AI's role in societal issues.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/brudden/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Beth Rudden</a> - https://www.linkedin.com/in/brudden/</p><p><a href="https://www.penguinrandomhouse.com/books/670356/unmasking-ai-by-joy-buolamwini/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Joy Buolamwini's "Unmasking AI"</a> - https://www.penguinrandomhouse.com/books/670356/unmasking-ai-by-joy-buolamwini/</p><p><a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206</p><p><a href="https://bast.ai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bast AI Website</a> - https://bast.ai/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In a world where AI shapes our daily lives, ethical considerations are paramount. In this episode, I have the pleasure of speaking with Beth Rudden, CEO of Bast AI and a trailblazer in AI ethics. Her journey from IBM to leading Bast AI offers a uni...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>13</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[1bfd3e65-5f3e-47d5-a1fd-4dfdb4860cb3]]></guid>
  <title><![CDATA[Educating Society on Responsible Use of AI with Haniyeh Mahmoudian at DataRobot and NAIAC]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">Creating a safe and ethical AI system starts at its conception. On this episode, I have the pleasure of speaking with </span><a href="https://www.linkedin.com/in/haniyeh-mahmoudian-ph-d-78a18072" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Haniyeh Mahmoudian</a><span style="background-color: transparent;">, Ph.D., distinguished Global AI Ethicist at </span><a href="https://www.linkedin.com/company/datarobot" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">DataRobot</a><span style="background-color: transparent;"> and Advisor to NAIAC (National AI Advisory Committee). We discuss AI regulation, ethical considerations and the importance of education around responsible use of AI.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:09) Insights into President Biden’s AI Executive Order.</span></p><p><span style="background-color: transparent;">(04:32) The importance of public-private partnerships in AI education and workforce upskilling.</span></p><p><span style="background-color: transparent;">(06:35) The need for realistic job qualifications in AI-related fields.</span></p><p><span style="background-color: transparent;">(08:23) The EU AI Act, its risk framework for AI use cases and the need for flexible and adaptable legislative frameworks in AI regulation.</span></p><p><span style="background-color: transparent;">(11:42) The US's approach to AI regulation compared to the EU.</span></p><p><span style="background-color: transparent;">(15:59) Ethical risks in AI development, particularly the lack of education in AI literacy.&nbsp;</span></p><p><span style="background-color: transparent;">(18:55) Ensuring historically marginalized communities can participate in and benefit from AI advancements.</span></p><p><span 
style="background-color: transparent;">(21:04) The need for robust governance processes and accountability at every stage of AI development and deployment.</span></p><p><span style="background-color: transparent;">(23:53) Challenges and benefits of democratizing AI technology access.</span></p><p><span style="background-color: transparent;">(25:50) The necessity of companies disclosing their use of AI systems to end-users.&nbsp;</span></p><p><span style="background-color: transparent;">(27:12) Concerns about the impact of AI, particularly deepfakes, on democracy.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/haniyeh-mahmoudian-ph-d-78a18072" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Haniyeh Mahmoudian</a> - https://www.linkedin.com/in/haniyeh-mahmoudian-ph-d-78a18072</p><p><a href="https://www.linkedin.com/company/datarobot" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">DataRobot</a> - https://www.linkedin.com/company/datarobot</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><a 
href="https://ai.gov/naiac/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National AI Advisory Committee Recommendations</a> - https://ai.gov/naiac/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/2d27f4d4-c03d-4c63-b0a3-052a0108b869/b54fc36987.jpg" />
  <pubDate>Thu, 25 Jan 2024 14:21:53 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="1048577" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/2d27f4d4-c03d-4c63-b0a3-052a0108b869/episode.mp3" />
  <itunes:title><![CDATA[Educating Society on Responsible Use of AI with Haniyeh Mahmoudian at DataRobot and NAIAC]]></itunes:title>
  <itunes:duration>31:06</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">Creating a safe and ethical AI system starts at its conception. On this episode, I have the pleasure of speaking with </span><a href="https://www.linkedin.com/in/haniyeh-mahmoudian-ph-d-78a18072" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Haniyeh Mahmoudian</a><span style="background-color: transparent;">, Ph.D., distinguished Global AI Ethicist at </span><a href="https://www.linkedin.com/company/datarobot" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">DataRobot</a><span style="background-color: transparent;"> and Advisor to NAIAC (National AI Advisory Committee). We discuss AI regulation, ethical considerations and the importance of education around responsible use of AI.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:09) Insights into President Biden’s AI Executive Order.</span></p><p><span style="background-color: transparent;">(04:32) The importance of public-private partnerships in AI education and workforce upskilling.</span></p><p><span style="background-color: transparent;">(06:35) The need for realistic job qualifications in AI-related fields.</span></p><p><span style="background-color: transparent;">(08:23) The EU AI Act, its risk framework for AI use cases and the need for flexible and adaptable legislative frameworks in AI regulation.</span></p><p><span style="background-color: transparent;">(11:42) The US's approach to AI regulation compared to the EU.</span></p><p><span style="background-color: transparent;">(15:59) Ethical risks in AI development, particularly the lack of education in AI literacy.&nbsp;</span></p><p><span style="background-color: transparent;">(18:55) Ensuring historically marginalized communities can participate in and benefit from AI advancements.</span></p><p><span 
style="background-color: transparent;">(21:04) The need for robust governance processes and accountability at every stage of AI development and deployment.</span></p><p><span style="background-color: transparent;">(23:53) Challenges and benefits of democratizing AI technology access.</span></p><p><span style="background-color: transparent;">(25:50) The necessity of companies disclosing their use of AI systems to end-users.&nbsp;</span></p><p><span style="background-color: transparent;">(27:12) Concerns about the impact of AI, particularly deepfakes, on democracy.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/haniyeh-mahmoudian-ph-d-78a18072" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Haniyeh Mahmoudian</a> - https://www.linkedin.com/in/haniyeh-mahmoudian-ph-d-78a18072</p><p><a href="https://www.linkedin.com/company/datarobot" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">DataRobot</a> - https://www.linkedin.com/company/datarobot</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><a 
href="https://ai.gov/naiac/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National AI Advisory Committee Recommendations</a> - https://ai.gov/naiac/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">Creating a safe and ethical AI system starts at its conception. On this episode, I have the pleasure of speaking with </span><a href="https://www.linkedin.com/in/haniyeh-mahmoudian-ph-d-78a18072" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Haniyeh Mahmoudian</a><span style="background-color: transparent;">, Ph.D., distinguished Global AI Ethicist at </span><a href="https://www.linkedin.com/company/datarobot" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">DataRobot</a><span style="background-color: transparent;"> and Advisor to NAIAC (National AI Advisory Committee). We discuss AI regulation, ethical considerations and the importance of education around responsible use of AI.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:09) Insights into President Biden’s AI Executive Order.</span></p><p><span style="background-color: transparent;">(04:32) The importance of public-private partnerships in AI education and workforce upskilling.</span></p><p><span style="background-color: transparent;">(06:35) The need for realistic job qualifications in AI-related fields.</span></p><p><span style="background-color: transparent;">(08:23) The EU AI Act, its risk framework for AI use cases and the need for flexible and adaptable legislative frameworks in AI regulation.</span></p><p><span style="background-color: transparent;">(11:42) The US's approach to AI regulation compared to the EU.</span></p><p><span style="background-color: transparent;">(15:59) Ethical risks in AI development, particularly the lack of education in AI literacy.&nbsp;</span></p><p><span style="background-color: transparent;">(18:55) Ensuring historically marginalized communities can participate in and benefit from AI advancements.</span></p><p><span 
style="background-color: transparent;">(21:04) The need for robust governance processes and accountability at every stage of AI development and deployment.</span></p><p><span style="background-color: transparent;">(23:53) Challenges and benefits of democratizing AI technology access.</span></p><p><span style="background-color: transparent;">(25:50) The necessity of companies disclosing their use of AI systems to end-users.&nbsp;</span></p><p><span style="background-color: transparent;">(27:12) Concerns about the impact of AI, particularly deepfakes, on democracy.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/haniyeh-mahmoudian-ph-d-78a18072" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Haniyeh Mahmoudian</a> - https://www.linkedin.com/in/haniyeh-mahmoudian-ph-d-78a18072</p><p><a href="https://www.linkedin.com/company/datarobot" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">DataRobot</a> - https://www.linkedin.com/company/datarobot</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><a 
href="https://ai.gov/naiac/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National AI Advisory Committee Recommendations</a> - https://ai.gov/naiac/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Creating a safe and ethical AI system starts at its conception. On this episode, I have the pleasure of speaking with Haniyeh Mahmoudian, Ph.D., distinguished Global AI Ethicist at DataRobot and Advisor to NAIAC (National AI Advisory Committee). We...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>12</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[2ab077e1-fd59-440a-81d9-ff0c5c50f0b8]]></guid>
  <title><![CDATA[Delving Into the Future of Responsible AI with Dr. Ravit Dotan]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">This era of rapid technological advancement can make finding the equilibrium between innovation and responsible governance difficult. On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/ravit-dotan/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Ravit Dotan</a><span style="background-color: transparent;">, Founder and CEO of </span><a href="https://www.linkedin.com/company/techbetter/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">TechBetter</a><span style="background-color: transparent;">, Responsible AI Advocate of </span><a href="https://www.linkedin.com/company/briaai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bria</a><span style="background-color: transparent;"> and AI Ethicist. We discuss the complexities of AI regulation in our modern world. We also focus on the pivotal role policies and ethics play in steering the course of AI toward a future that benefits all.&nbsp;</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:18) Discussing President Biden’s Executive Order on AI and its implications for a new era of regulation.</span></p><p><span style="background-color: transparent;">(03:02) Contrasting the divergent paths of the US and UK in AI regulation.</span></p><p><span style="background-color: transparent;">(07:18) Investigating AI regulation’s influence on innovation.</span></p><p><span style="background-color: transparent;">(08:22) Assessing the ethical risks of misinformation within AI systems.</span></p><p><span style="background-color: transparent;">(12:13) Addressing the amplification of biases in AI decision-making.</span></p><p><span style="background-color: transparent;">(16:42) The challenge of achieving fairness in 
AI.</span></p><p><span style="background-color: transparent;">(17:40) The necessity of banning harmful AI applications.</span></p><p><span style="background-color: transparent;">(19:52) The role of AI ethics officers in organizations.</span></p><p><span style="background-color: transparent;">(21:30) Analyzing responsibility in AI-related incidents.</span></p><p><span style="background-color: transparent;">(24:26) The influence of major tech companies on AI’s direction.</span></p><p><span style="background-color: transparent;">(30:50) Discussing strategies against AI deepfakes in political campaigns.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/ravit-dotan/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Ravit Dotan</a> - https://www.linkedin.com/in/ravit-dotan/</p><p><a href="https://www.linkedin.com/company/techbetter/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">TechBetter</a> - https://www.linkedin.com/company/techbetter/</p><p><a href="https://www.linkedin.com/company/briaai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bria</a> - https://www.linkedin.com/company/briaai/</p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/</p><p><a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - 
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/5bc08d95-4ac8-496f-8d07-40679a3b673c/c7f687906c.jpg" />
  <pubDate>Mon, 22 Jan 2024 13:35:33 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="1048577" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/5bc08d95-4ac8-496f-8d07-40679a3b673c/episode.mp3" />
  <itunes:title><![CDATA[Delving Into the Future of Responsible AI with Dr. Ravit Dotan]]></itunes:title>
  <itunes:duration>33:48</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">This era of rapid technological advancement can make finding the equilibrium between innovation and responsible governance difficult. On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/ravit-dotan/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Ravit Dotan</a><span style="background-color: transparent;">, Founder and CEO of </span><a href="https://www.linkedin.com/company/techbetter/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">TechBetter</a><span style="background-color: transparent;">, Responsible AI Advocate of </span><a href="https://www.linkedin.com/company/briaai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bria</a><span style="background-color: transparent;"> and AI Ethicist. We discuss the complexities of AI regulation in our modern world. We also focus on the pivotal role policies and ethics play in steering the course of AI toward a future that benefits all.&nbsp;</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:18) Discussing President Biden’s Executive Order on AI and its implications for a new era of regulation.</span></p><p><span style="background-color: transparent;">(03:02) Contrasting the divergent paths of the US and UK in AI regulation.</span></p><p><span style="background-color: transparent;">(07:18) Investigating AI regulation’s influence on innovation.</span></p><p><span style="background-color: transparent;">(08:22) Assessing the ethical risks of misinformation within AI systems.</span></p><p><span style="background-color: transparent;">(12:13) Addressing the amplification of biases in AI decision-making.</span></p><p><span style="background-color: transparent;">(16:42) The challenge of achieving fairness in 
AI.</span></p><p><span style="background-color: transparent;">(17:40) The necessity of banning harmful AI applications.</span></p><p><span style="background-color: transparent;">(19:52) The role of AI ethics officers in organizations.</span></p><p><span style="background-color: transparent;">(21:30) Analyzing responsibility in AI-related incidents.</span></p><p><span style="background-color: transparent;">(24:26) The influence of major tech companies on AI’s direction.</span></p><p><span style="background-color: transparent;">(30:50) Discussing strategies against AI deepfakes in political campaigns.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/ravit-dotan/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Ravit Dotan</a> - https://www.linkedin.com/in/ravit-dotan/</p><p><a href="https://www.linkedin.com/company/techbetter/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">TechBetter</a> - https://www.linkedin.com/company/techbetter/</p><p><a href="https://www.linkedin.com/company/briaai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bria</a> - https://www.linkedin.com/company/briaai/</p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/</p><p><a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - 
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">This era of rapid technological advancement can make finding the equilibrium between innovation and responsible governance difficult. On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/ravit-dotan/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Ravit Dotan</a><span style="background-color: transparent;">, Founder and CEO of </span><a href="https://www.linkedin.com/company/techbetter/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">TechBetter</a><span style="background-color: transparent;">, Responsible AI Advocate of </span><a href="https://www.linkedin.com/company/briaai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bria</a><span style="background-color: transparent;"> and AI Ethicist. We discuss the complexities of AI regulation in our modern world. We also focus on the pivotal role policies and ethics play in steering the course of AI toward a future that benefits all.&nbsp;</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:18) Discussing President Biden’s Executive Order on AI and its implications for a new era of regulation.</span></p><p><span style="background-color: transparent;">(03:02) Contrasting the divergent paths of the US and UK in AI regulation.</span></p><p><span style="background-color: transparent;">(07:18) Investigating AI regulation’s influence on innovation.</span></p><p><span style="background-color: transparent;">(08:22) Assessing the ethical risks of misinformation within AI systems.</span></p><p><span style="background-color: transparent;">(12:13) Addressing the amplification of biases in AI decision-making.</span></p><p><span style="background-color: transparent;">(16:42) The challenge of achieving fairness in 
AI.</span></p><p><span style="background-color: transparent;">(17:40) The necessity of banning harmful AI applications.</span></p><p><span style="background-color: transparent;">(19:52) The role of AI ethics officers in organizations.</span></p><p><span style="background-color: transparent;">(21:30) Analyzing responsibility in AI-related incidents.</span></p><p><span style="background-color: transparent;">(24:26) The influence of major tech companies on AI’s direction.</span></p><p><span style="background-color: transparent;">(30:50) Discussing strategies against AI deepfakes in political campaigns.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/ravit-dotan/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Dr. Ravit Dotan</a> - https://www.linkedin.com/in/ravit-dotan/</p><p><a href="https://www.linkedin.com/company/techbetter/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">TechBetter</a> - https://www.linkedin.com/company/techbetter/</p><p><a href="https://www.linkedin.com/company/briaai/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Bria</a> - https://www.linkedin.com/company/briaai/</p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/</p><p><a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - 
https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[This era of rapid technological advancement can make finding the equilibrium between innovation and responsible governance difficult. On this episode, I’m joined by Dr. Ravit Dotan, Founder and CEO of TechBetter, Responsible AI Advocate of Bria and...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>11</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[3d961de2-d476-46da-9b37-0f7a953b8671]]></guid>
  <title><![CDATA[Balancing AI Innovation and Civil Liberties with Esha Bhandari of the ACLU]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode of Regulating AI: Innovate Responsibly, I am thrilled to host</span><a href="https://www.linkedin.com/in/eshabhandari/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);"> Esha Bhandari</a><span style="background-color: transparent;">, the Deputy Project Director of the </span><a href="https://www.linkedin.com/company/aclu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">ACLU (American Civil Liberties Union)</a><span style="background-color: transparent;">, who shares her expertise in AI and civil liberties. Esha is also a </span>Member of the Law Enforcement Subcommittee of the National AI Advisory Committee and Adjunct Professor of Clinical Law at the New York University School of Law.</p><p><br></p><p><span style="background-color: transparent;">We explore the complex relationship between artificial intelligence and civil liberties, discussing the implications of AI regulation, the challenges posed by algorithmic bias and the potential impact of AI on various sectors, including law enforcement, housing and employment.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:59) Esha’s perspective on President Biden’s Executive Order on AI, emphasizing the inclusion of civil liberties and civil rights.</span></p><p><span style="background-color: transparent;">(04:01) Challenges in law enforcement and national security contexts regarding AI.</span></p><p><span style="background-color: transparent;">(07:56) A discussion on the potential of a separate government agency for AI regulation.</span></p><p><span style="background-color: transparent;">(10:41) The balancing act between preventing AI from replicating societal biases and fostering innovation.</span></p><p><span style="background-color: transparent;">(12:53) The 
question of liability in AI systems: developer, deployer, or user?</span></p><p><span style="background-color: transparent;">(14:21) Keeping pace with rapid AI advancements in policy and legislation.</span></p><p><span style="background-color: transparent;">(18:51) The ACLU’s stance on open-source technology and AI.</span></p><p><span style="background-color: transparent;">(25:01) The role AI regulation plays on a global scale.</span></p><p><span style="background-color: transparent;">(26:44) Addressing the potential impacts of AI on upcoming elections and protecting civil liberties.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/eshabhandari/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Esha Bhandari</a> -</p><p>https://www.linkedin.com/in/eshabhandari/</p><p><a href="https://www.linkedin.com/company/aclu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">ACLU (American Civil Liberties Union)</a> -</p><p>https://www.linkedin.com/company/aclu/</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> -</p><p>https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Discussions on AI Regulation in the EU</a> 
-</p><p>https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/d67b38fb-3a40-4386-9286-33efa4485d9c/583929c56c.jpg" />
  <pubDate>Thu, 18 Jan 2024 12:13:03 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="1048577" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/d67b38fb-3a40-4386-9286-33efa4485d9c/episode.mp3" />
  <itunes:title><![CDATA[Balancing AI Innovation and Civil Liberties with Esha Bhandari of the ACLU]]></itunes:title>
  <itunes:duration>30:19</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode of Regulating AI: Innovate Responsibly, I am thrilled to host</span><a href="https://www.linkedin.com/in/eshabhandari/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);"> Esha Bhandari</a><span style="background-color: transparent;">, the Deputy Project Director of the </span><a href="https://www.linkedin.com/company/aclu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">ACLU (American Civil Liberties Union)</a><span style="background-color: transparent;">, who shares her expertise in AI and civil liberties. Esha is also a </span>Member of the Law Enforcement Subcommittee of the National AI Advisory Committee and Adjunct Professor of Clinical Law at the New York University School of Law.</p><p><br></p><p><span style="background-color: transparent;">We explore the complex relationship between artificial intelligence and civil liberties, discussing the implications of AI regulation, the challenges posed by algorithmic bias and the potential impact of AI on various sectors, including law enforcement, housing and employment.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:59) Esha’s perspective on President Biden’s Executive Order on AI, emphasizing the inclusion of civil liberties and civil rights.</span></p><p><span style="background-color: transparent;">(04:01) Challenges in law enforcement and national security contexts regarding AI.</span></p><p><span style="background-color: transparent;">(07:56) A discussion on the potential of a separate government agency for AI regulation.</span></p><p><span style="background-color: transparent;">(10:41) The balancing act between preventing AI from replicating societal biases and fostering innovation.</span></p><p><span style="background-color: transparent;">(12:53) The 
question of liability in AI systems: developer, deployer, or user?</span></p><p><span style="background-color: transparent;">(14:21) Keeping pace with rapid AI advancements in policy and legislation.</span></p><p><span style="background-color: transparent;">(18:51) The ACLU’s stance on open-source technology and AI.</span></p><p><span style="background-color: transparent;">(25:01) The role AI regulation plays on a global scale.</span></p><p><span style="background-color: transparent;">(26:44) Addressing the potential impacts of AI on upcoming elections and protecting civil liberties.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/eshabhandari/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Esha Bhandari</a> -</p><p>https://www.linkedin.com/in/eshabhandari/</p><p><a href="https://www.linkedin.com/company/aclu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">ACLU (American Civil Liberties Union)</a> -</p><p>https://www.linkedin.com/company/aclu/</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> -</p><p>https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Discussions on AI Regulation in the EU</a> 
-</p><p>https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode of Regulating AI: Innovate Responsibly, I am thrilled to host</span><a href="https://www.linkedin.com/in/eshabhandari/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);"> Esha Bhandari</a><span style="background-color: transparent;">, the Deputy Project Director of the </span><a href="https://www.linkedin.com/company/aclu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">ACLU (American Civil Liberties Union)</a><span style="background-color: transparent;">, who shares her expertise in AI and civil liberties. Esha is also a </span>Member of the Law Enforcement Subcommittee of the National AI Advisory Committee and Adjunct Professor of Clinical Law at the New York University School of Law.</p><p><br></p><p><span style="background-color: transparent;">We explore the complex relationship between artificial intelligence and civil liberties, discussing the implications of AI regulation, the challenges posed by algorithmic bias and the potential impact of AI on various sectors, including law enforcement, housing and employment.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:59) Esha’s perspective on President Biden’s Executive Order on AI, emphasizing the inclusion of civil liberties and civil rights.</span></p><p><span style="background-color: transparent;">(04:01) Challenges in law enforcement and national security contexts regarding AI.</span></p><p><span style="background-color: transparent;">(07:56) A discussion on the potential of a separate government agency for AI regulation.</span></p><p><span style="background-color: transparent;">(10:41) The balancing act between preventing AI from replicating societal biases and fostering innovation.</span></p><p><span style="background-color: transparent;">(12:53) The 
question of liability in AI systems: developer, deployer, or user?</span></p><p><span style="background-color: transparent;">(14:21) Keeping pace with rapid AI advancements in policy and legislation.</span></p><p><span style="background-color: transparent;">(18:51) The ACLU’s stance on open-source technology and AI.</span></p><p><span style="background-color: transparent;">(25:01) The role AI regulation plays on a global scale.</span></p><p><span style="background-color: transparent;">(26:44) Addressing the potential impacts of AI on upcoming elections and protecting civil liberties.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/eshabhandari/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Esha Bhandari</a> -</p><p>https://www.linkedin.com/in/eshabhandari/</p><p><a href="https://www.linkedin.com/company/aclu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">ACLU (American Civil Liberties Union)</a> -</p><p>https://www.linkedin.com/company/aclu/</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> -</p><p>https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Discussions on AI Regulation in the EU</a> 
-</p><p>https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode of Regulating AI: Innovate Responsibly, I am thrilled to host Esha Bhandari, the Deputy Project Director of the ACLU (American Civil Liberties Union), who shares her expertise in AI and civil liberties. Esha is also a Member of th...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>10</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[c4b64ac9-4335-4c8f-aefb-dc4dafc72a08]]></guid>
  <title><![CDATA[Delving Into AI Ethics, Safety and Global Regulations with Stuart Russell]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I'm delighted to be joined by a leading mind in AI, </span><a href="https://www.linkedin.com/in/stuartjonathanrussell/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Stuart Russell</a><span style="background-color: transparent;">, Professor of Computer Science at UC Berkeley; Former Chair of the Electrical Engineering and Computer Science Program at UC Berkeley; Holder of the Smith-Zadeh Chair in Engineering; Director of the Center for Human-Compatible AI; Author of Artificial Intelligence: A Modern Approach, which is currently part of the curriculum in 1,500 universities in 135 countries and translated into 20 languages.</span></p><p><br></p><p><span style="background-color: transparent;">Our conversation ventures into the depths of AI's potential, its impact on society and the critical role of legislation in shaping a safe and prosperous AI-powered future.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:56) Introduction of Professor Stuart Russell and his significant contributions to AI.</span></p><p><span style="background-color: transparent;">(02:22) Analysis of the Biden Executive Order on AI and its limitations.</span></p><p><span style="background-color: transparent;">(03:49) Evolution and current status of the EU AI Act.</span></p><p><span style="background-color: transparent;">(07:31) The paradox of open-source AI in regulatory contexts.</span></p><p><span style="background-color: transparent;">(08:31) The challenge of controlling AI systems that are more powerful than humans.</span></p><p><span style="background-color: transparent;">(13:08) The necessity of proactive safety measures in AI development.</span></p><p><span style="background-color: transparent;">(15:12) The potential risks and concerns around AI 
agents.</span></p><p><span style="background-color: transparent;">(17:02) Balancing innovation and regulation in AI.</span></p><p><span style="background-color: transparent;">(19:20) Adapting AI legislation to technological advancements.</span></p><p><span style="background-color: transparent;">(21:49) The need for a dedicated regulatory agency for AI.</span></p><p><span style="background-color: transparent;">(26:08) Global collaboration on AI safety and national security.</span></p><p><span style="background-color: transparent;">(30:33) Public perception and education on AI safety.</span></p><p><span style="background-color: transparent;">(34:23) The role of AI in national security and ethical concerns.</span></p><p><span style="background-color: transparent;">(37:04) The impact of AI and deepfakes on the 2024 elections.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/stuartjonathanrussell/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Stuart Russell</a> - https://www.linkedin.com/in/stuartjonathanrussell/</p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/</p><p><a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: 
transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p><p><br></p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/48aaa5d5-1325-4852-a41d-8c3bbe94c257/27e7180724.jpg" />
  <pubDate>Mon, 15 Jan 2024 20:50:26 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="1048577" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/48aaa5d5-1325-4852-a41d-8c3bbe94c257/episode.mp3" />
  <itunes:title><![CDATA[Delving Into AI Ethics, Safety and Global Regulations with Stuart Russell]]></itunes:title>
  <itunes:duration>38:20</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I'm delighted to be joined by a leading mind in AI, </span><a href="https://www.linkedin.com/in/stuartjonathanrussell/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Stuart Russell</a><span style="background-color: transparent;">, Professor of Computer Science at UC Berkeley; Former Chair of the Electrical Engineering and Computer Science Program at UC Berkeley; Holder of the Smith-Zadeh Chair in Engineering; Director of the Center for Human-Compatible AI; Author of Artificial Intelligence: A Modern Approach, which is currently part of the curriculum in 1,500 universities in 135 countries and translated into 20 languages.</span></p><p><br></p><p><span style="background-color: transparent;">Our conversation ventures into the depths of AI's potential, its impact on society and the critical role of legislation in shaping a safe and prosperous AI-powered future.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:56) Introduction of Professor Stuart Russell and his significant contributions to AI.</span></p><p><span style="background-color: transparent;">(02:22) Analysis of the Biden Executive Order on AI and its limitations.</span></p><p><span style="background-color: transparent;">(03:49) Evolution and current status of the EU AI Act.</span></p><p><span style="background-color: transparent;">(07:31) The paradox of open-source AI in regulatory contexts.</span></p><p><span style="background-color: transparent;">(08:31) The challenge of controlling AI systems that are more powerful than humans.</span></p><p><span style="background-color: transparent;">(13:08) The necessity of proactive safety measures in AI development.</span></p><p><span style="background-color: transparent;">(15:12) The potential risks and concerns around AI 
agents.</span></p><p><span style="background-color: transparent;">(17:02) Balancing innovation and regulation in AI.</span></p><p><span style="background-color: transparent;">(19:20) Adapting AI legislation to technological advancements.</span></p><p><span style="background-color: transparent;">(21:49) The need for a dedicated regulatory agency for AI.</span></p><p><span style="background-color: transparent;">(26:08) Global collaboration on AI safety and national security.</span></p><p><span style="background-color: transparent;">(30:33) Public perception and education on AI safety.</span></p><p><span style="background-color: transparent;">(34:23) The role of AI in national security and ethical concerns.</span></p><p><span style="background-color: transparent;">(37:04) The impact of AI and deepfakes on the 2024 elections.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/stuartjonathanrussell/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Stuart Russell</a> - https://www.linkedin.com/in/stuartjonathanrussell/</p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/</p><p><a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: 
transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p><p><br></p><p><br></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I'm delighted to be joined by a leading mind in AI, </span><a href="https://www.linkedin.com/in/stuartjonathanrussell/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Stuart Russell</a><span style="background-color: transparent;">, Professor of Computer Science at UC Berkeley; Former Chair of the Electrical Engineering and Computer Science Program at UC Berkeley; Holder of the Smith-Zadeh Chair in Engineering; Director of the Center for Human-Compatible AI; Author of Artificial Intelligence: A Modern Approach, which is currently part of the curriculum in 1,500 universities in 135 countries and translated into 20 languages.</span></p><p><br></p><p><span style="background-color: transparent;">Our conversation ventures into the depths of AI's potential, its impact on society and the critical role of legislation in shaping a safe and prosperous AI-powered future.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:56) Introduction of Professor Stuart Russell and his significant contributions to AI.</span></p><p><span style="background-color: transparent;">(02:22) Analysis of the Biden Executive Order on AI and its limitations.</span></p><p><span style="background-color: transparent;">(03:49) Evolution and current status of the EU AI Act.</span></p><p><span style="background-color: transparent;">(07:31) The paradox of open-source AI in regulatory contexts.</span></p><p><span style="background-color: transparent;">(08:31) The challenge of controlling AI systems that are more powerful than humans.</span></p><p><span style="background-color: transparent;">(13:08) The necessity of proactive safety measures in AI development.</span></p><p><span style="background-color: transparent;">(15:12) The potential risks and concerns around AI 
agents.</span></p><p><span style="background-color: transparent;">(17:02) Balancing innovation and regulation in AI.</span></p><p><span style="background-color: transparent;">(19:20) Adapting AI legislation to technological advancements.</span></p><p><span style="background-color: transparent;">(21:49) The need for a dedicated regulatory agency for AI.</span></p><p><span style="background-color: transparent;">(26:08) Global collaboration on AI safety and national security.</span></p><p><span style="background-color: transparent;">(30:33) Public perception and education on AI safety.</span></p><p><span style="background-color: transparent;">(34:23) The role of AI in national security and ethical concerns.</span></p><p><span style="background-color: transparent;">(37:04) The impact of AI and deepfakes on the 2024 elections.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/stuartjonathanrussell/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Stuart Russell</a> - https://www.linkedin.com/in/stuartjonathanrussell/</p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/</p><p><a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: 
transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p><p><br></p><p><br></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I'm delighted to be joined by a leading mind in AI, Stuart Russell, Professor of Computer Science at UC Berkeley; Former Chair of the Electrical Engineering and Computer Science Program at UC Berkeley; Holder of the Smith-Zadeh Cha...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>9</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[9fda462a-4c37-48cb-85d2-2b1d28347b11]]></guid>
  <title><![CDATA[Balancing AI Risks and Promises with Congresswoman Anna Eshoo]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">On this episode, I'm joined by Congresswoman Anna Eshoo, Co-Chair of the AI Caucus. </span><em style="background-color: transparent;">Time Magazine</em><span style="background-color: transparent;"> has selected Anna as one of the 100 most influential people in AI, and I’m delighted to hear her invaluable insights into the legislative challenges and opportunities in the world of AI.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:23) The role of the National AI Research Resource in President Biden’s executive order.</span></p><p><span style="background-color: transparent;">(03:20) The urgency for Congress to enact durable AI statutes.</span></p><p><span style="background-color: transparent;">(05:31) Objectives of the Create AI Act in making AI accessible to diverse sectors.</span></p><p><span style="background-color: transparent;">(08:03) The dynamic nature of AI policy and state-level legislation's role.</span></p><p><span style="background-color: transparent;">(10:43) The security implications of open-source AI models.</span></p><p><span style="background-color: transparent;">(12:18) Addressing the threat of deepfakes in elections.</span></p><p><span style="background-color: transparent;">(14:29) Strategies for workforce reskilling and attracting global AI talent.</span></p><p><span style="background-color: transparent;">(18:15) Democratizing AI to avert monopolistic trends.</span></p><p><span style="background-color: transparent;">(20:38) US Rep. 
Eshoo's predictions on the AI legislative timeline.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/anna-eshoo-b0392095/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Anna Eshoo</a> - https://www.linkedin.com/in/anna-eshoo-b0392095/</p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/</p><p><a href="https://www.whitehouse.gov/ostp/news-updates/2023/01/24/national-artificial-intelligence-research-resource-task-force-releases-final-report/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National AI Research Resource</a> - https://www.whitehouse.gov/ostp/news-updates/2023/01/24/national-artificial-intelligence-research-resource-task-force-releases-final-report/</p><p><a href="https://www.congress.gov/bill/117th-congress/house-bill/5924?q=%7B%22search%22%3A%5B%22h.r.+5924%22%2C%22h.r.%22%2C%225924%22%5D%7D&amp;s=1&amp;r=2" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Keep STEM Talent Act 2021</a> - https://www.congress.gov/bill/117th-congress/house-bill/5924?q=%7B%22search%22%3A%5B%22h.r.+5924%22%2C%22h.r.%22%2C%225924%22%5D%7D&amp;s=1&amp;r=2</p><p><a href="https://eshoo.house.gov/sites/evo-subsites/eshoo.house.gov/files/evo-media-document/eshoo_043_xml.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Create AI Act</a> - https://eshoo.house.gov/sites/evo-subsites/eshoo.house.gov/files/evo-media-document/eshoo_043_xml.pdf</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly 
podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/7d53065e-ba2f-4311-abcd-944b065560ba/b2b1ca98c7.jpg" />
  <pubDate>Thu, 11 Jan 2024 05:41:25 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="1048577" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/7d53065e-ba2f-4311-abcd-944b065560ba/episode.mp3" />
  <itunes:title><![CDATA[Balancing AI Risks and Promises with Congresswoman Anna Eshoo]]></itunes:title>
  <itunes:duration>21:32</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">On this episode, I'm joined by Congresswoman Anna Eshoo, Co-Chair of the AI Caucus. </span><em style="background-color: transparent;">Time Magazine</em><span style="background-color: transparent;"> has selected Anna as one of the 100 most influential people in AI, and I’m delighted to hear her invaluable insights into the legislative challenges and opportunities in the world of AI.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:23) The role of the National AI Research Resource in President Biden’s executive order.</span></p><p><span style="background-color: transparent;">(03:20) The urgency for Congress to enact durable AI statutes.</span></p><p><span style="background-color: transparent;">(05:31) Objectives of the Create AI Act in making AI accessible to diverse sectors.</span></p><p><span style="background-color: transparent;">(08:03) The dynamic nature of AI policy and state-level legislation's role.</span></p><p><span style="background-color: transparent;">(10:43) The security implications of open-source AI models.</span></p><p><span style="background-color: transparent;">(12:18) Addressing the threat of deepfakes in elections.</span></p><p><span style="background-color: transparent;">(14:29) Strategies for workforce reskilling and attracting global AI talent.</span></p><p><span style="background-color: transparent;">(18:15) Democratizing AI to avert monopolistic trends.</span></p><p><span style="background-color: transparent;">(20:38) US Rep. 
Eshoo's predictions on the AI legislative timeline.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/anna-eshoo-b0392095/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Anna Eshoo</a> - https://www.linkedin.com/in/anna-eshoo-b0392095/</p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/</p><p><a href="https://www.whitehouse.gov/ostp/news-updates/2023/01/24/national-artificial-intelligence-research-resource-task-force-releases-final-report/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National AI Research Resource</a> - https://www.whitehouse.gov/ostp/news-updates/2023/01/24/national-artificial-intelligence-research-resource-task-force-releases-final-report/</p><p><a href="https://www.congress.gov/bill/117th-congress/house-bill/5924?q=%7B%22search%22%3A%5B%22h.r.+5924%22%2C%22h.r.%22%2C%225924%22%5D%7D&amp;s=1&amp;r=2" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Keep STEM Talent Act 2021</a> - https://www.congress.gov/bill/117th-congress/house-bill/5924?q=%7B%22search%22%3A%5B%22h.r.+5924%22%2C%22h.r.%22%2C%225924%22%5D%7D&amp;s=1&amp;r=2</p><p><a href="https://eshoo.house.gov/sites/evo-subsites/eshoo.house.gov/files/evo-media-document/eshoo_043_xml.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Create AI Act</a> - https://eshoo.house.gov/sites/evo-subsites/eshoo.house.gov/files/evo-media-document/eshoo_043_xml.pdf</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly 
podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">On this episode, I'm joined by Congresswoman Anna Eshoo, Co-Chair of the AI Caucus. </span><em style="background-color: transparent;">Time Magazine</em><span style="background-color: transparent;"> has selected Anna as one of the 100 most influential people in AI, and I’m delighted to hear her invaluable insights into the legislative challenges and opportunities in the world of AI.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:23) The role of the National AI Research Resource in President Biden’s executive order.</span></p><p><span style="background-color: transparent;">(03:20) The urgency for Congress to enact durable AI statutes.</span></p><p><span style="background-color: transparent;">(05:31) Objectives of the Create AI Act in making AI accessible to diverse sectors.</span></p><p><span style="background-color: transparent;">(08:03) The dynamic nature of AI policy and state-level legislation's role.</span></p><p><span style="background-color: transparent;">(10:43) The security implications of open-source AI models.</span></p><p><span style="background-color: transparent;">(12:18) Addressing the threat of deepfakes in elections.</span></p><p><span style="background-color: transparent;">(14:29) Strategies for workforce reskilling and attracting global AI talent.</span></p><p><span style="background-color: transparent;">(18:15) Democratizing AI to avert monopolistic trends.</span></p><p><span style="background-color: transparent;">(20:38) US Rep. 
Eshoo's predictions on the AI legislative timeline.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/anna-eshoo-b0392095/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Anna Eshoo</a> - https://www.linkedin.com/in/anna-eshoo-b0392095/</p><p><a href="https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/</p><p><a href="https://www.whitehouse.gov/ostp/news-updates/2023/01/24/national-artificial-intelligence-research-resource-task-force-releases-final-report/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National AI Research Resource</a> - https://www.whitehouse.gov/ostp/news-updates/2023/01/24/national-artificial-intelligence-research-resource-task-force-releases-final-report/</p><p><a href="https://www.congress.gov/bill/117th-congress/house-bill/5924?q=%7B%22search%22%3A%5B%22h.r.+5924%22%2C%22h.r.%22%2C%225924%22%5D%7D&amp;s=1&amp;r=2" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Keep STEM Talent Act 2021</a> - https://www.congress.gov/bill/117th-congress/house-bill/5924?q=%7B%22search%22%3A%5B%22h.r.+5924%22%2C%22h.r.%22%2C%225924%22%5D%7D&amp;s=1&amp;r=2</p><p><a href="https://eshoo.house.gov/sites/evo-subsites/eshoo.house.gov/files/evo-media-document/eshoo_043_xml.pdf" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Create AI Act</a> - https://eshoo.house.gov/sites/evo-subsites/eshoo.house.gov/files/evo-media-document/eshoo_043_xml.pdf</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly 
podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[On this episode, I'm joined by Congresswoman Anna Eshoo, Co-Chair of the AI Caucus. Time Magazine has selected Anna as one of the 100 most influential people in AI, and I’m delighted to hear her invaluable insights into the legislative challenges and o...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>8</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[20f16db9-7772-445f-a9ab-4eaaae76c79b]]></guid>
  <title><![CDATA[Advocacy for Startups in AI Policy with Nathan Lindfors of Engine]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">Navigating the labyrinth of AI policy is a daunting task, especially for startups. In this episode, I explore this complex world with </span><a href="https://www.linkedin.com/in/nathan-lindfors-24032b150/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nathan Lindfors</a><span style="background-color: transparent;">, who brings unique insights from his role as Policy Director of </span><a href="https://www.linkedin.com/company/engine-advocacy/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Engine</a><span style="background-color: transparent;">, an organization at the forefront of advocating for startup interests in the AI realm.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:40) The mission and goals of Engine in advocating for startups.</span></p><p><span style="background-color: transparent;">(02:40) How startups differ from companies like OpenAI and Anthropic in the AI space.</span></p><p><span style="background-color: transparent;">(04:22) The role of Engine in educating startups on AI policy developments.</span></p><p><span style="background-color: transparent;">(05:33) Nathan’s take on President Biden’s Executive Order on AI.</span></p><p><span style="background-color: transparent;">(09:12) Concerns over regulatory capture impacting startup innovation.</span></p><p><span style="background-color: transparent;">(10:28) The debate around open-sourcing AI models.</span></p><p><span style="background-color: transparent;">(13:17) Addressing the risks of AI tools falling into the hands of bad actors.</span></p><p><span style="background-color: transparent;">(16:46) Liability issues in AI and their impact on startups.</span></p><p><span style="background-color: transparent;">(19:50) Preparing the workforce for the 
future of AI.</span></p><p><span style="background-color: transparent;">(23:25) The need for transparent AI usage disclosures by companies.</span></p><p><span style="background-color: transparent;">(25:28) Discussion on the complexities of global versus regional AI regulations.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/nathan-lindfors-24032b150/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nathan Lindfors</a> -</p><p>https://www.linkedin.com/in/nathan-lindfors-24032b150/</p><p><a href="https://www.linkedin.com/company/engine-advocacy/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Engine</a> -</p><p>https://www.linkedin.com/company/engine-advocacy/</p><p><a href="https://www.engine.is/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Engine Advocacy for Startups</a> -</p><p>https://www.engine.is/</p><p><a href="https://www.whitehouse.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> -</p><p>https://www.whitehouse.gov/</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/3edf263c-cd7c-4009-88a3-803ed6f86343/ea1945d47c.jpg" />
  <pubDate>Wed, 20 Dec 2023 21:06:38 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="1048577" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/3edf263c-cd7c-4009-88a3-803ed6f86343/episode.mp3" />
  <itunes:title><![CDATA[Advocacy for Startups in AI Policy with Nathan Lindfors of Engine]]></itunes:title>
  <itunes:duration>32:08</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">Navigating the labyrinth of AI policy is a daunting task, especially for startups. In this episode, I explore this complex world with </span><a href="https://www.linkedin.com/in/nathan-lindfors-24032b150/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nathan Lindfors</a><span style="background-color: transparent;">, who brings unique insights from his role as Policy Director of </span><a href="https://www.linkedin.com/company/engine-advocacy/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Engine</a><span style="background-color: transparent;">, an organization at the forefront of advocating for startup interests in the AI realm.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:40) The mission and goals of Engine in advocating for startups.</span></p><p><span style="background-color: transparent;">(02:40) How startups differ from companies like OpenAI and Anthropic in the AI space.</span></p><p><span style="background-color: transparent;">(04:22) The role of Engine in educating startups on AI policy developments.</span></p><p><span style="background-color: transparent;">(05:33) Nathan’s take on President Biden’s Executive Order on AI.</span></p><p><span style="background-color: transparent;">(09:12) Concerns over regulatory capture impacting startup innovation.</span></p><p><span style="background-color: transparent;">(10:28) The debate around open-sourcing AI models.</span></p><p><span style="background-color: transparent;">(13:17) Addressing the risks of AI tools falling into the hands of bad actors.</span></p><p><span style="background-color: transparent;">(16:46) Liability issues in AI and their impact on startups.</span></p><p><span style="background-color: transparent;">(19:50) Preparing the workforce for the 
future of AI.</span></p><p><span style="background-color: transparent;">(23:25) The need for transparent AI usage disclosures by companies.</span></p><p><span style="background-color: transparent;">(25:28) Discussion on the complexities of global versus regional AI regulations.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/nathan-lindfors-24032b150/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nathan Lindfors</a> -</p><p>https://www.linkedin.com/in/nathan-lindfors-24032b150/</p><p><a href="https://www.linkedin.com/company/engine-advocacy/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Engine</a> -</p><p>https://www.linkedin.com/company/engine-advocacy/</p><p><a href="https://www.engine.is/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Engine Advocacy for Startups</a> -</p><p>https://www.engine.is/</p><p><a href="https://www.whitehouse.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> -</p><p>https://www.whitehouse.gov/</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">Navigating the labyrinth of AI policy is a daunting task, especially for startups. In this episode, I explore this complex world with </span><a href="https://www.linkedin.com/in/nathan-lindfors-24032b150/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nathan Lindfors</a><span style="background-color: transparent;">, who brings unique insights from his role as Policy Director of </span><a href="https://www.linkedin.com/company/engine-advocacy/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Engine</a><span style="background-color: transparent;">, an organization at the forefront of advocating for startup interests in the AI realm.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:40) The mission and goals of Engine in advocating for startups.</span></p><p><span style="background-color: transparent;">(02:40) How startups differ from companies like OpenAI and Anthropic in the AI space.</span></p><p><span style="background-color: transparent;">(04:22) The role of Engine in educating startups on AI policy developments.</span></p><p><span style="background-color: transparent;">(05:33) Nathan’s take on President Biden’s Executive Order on AI.</span></p><p><span style="background-color: transparent;">(09:12) Concerns over regulatory capture impacting startup innovation.</span></p><p><span style="background-color: transparent;">(10:28) The debate around open-sourcing AI models.</span></p><p><span style="background-color: transparent;">(13:17) Addressing the risks of AI tools falling into the hands of bad actors.</span></p><p><span style="background-color: transparent;">(16:46) Liability issues in AI and their impact on startups.</span></p><p><span style="background-color: transparent;">(19:50) Preparing the workforce for the 
future of AI.</span></p><p><span style="background-color: transparent;">(23:25) The need for transparent AI usage disclosures by companies.</span></p><p><span style="background-color: transparent;">(25:28) Discussion on the complexities of global versus regional AI regulations.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/nathan-lindfors-24032b150/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Nathan Lindfors</a> -</p><p>https://www.linkedin.com/in/nathan-lindfors-24032b150/</p><p><a href="https://www.linkedin.com/company/engine-advocacy/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Engine</a> -</p><p>https://www.linkedin.com/company/engine-advocacy/</p><p><a href="https://www.engine.is/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Engine Advocacy for Startups</a> -</p><p>https://www.engine.is/</p><p><a href="https://www.whitehouse.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> -</p><p>https://www.whitehouse.gov/</p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Navigating the labyrinth of AI policy is a daunting task, especially for startups. In this episode, I explore this complex world with Nathan Lindfors, who brings unique insights from his role as Policy Director of Engine, an organization at the for...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>7</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[21a623fc-c7b3-424d-ae1d-60b0a14815fd]]></guid>
  <title><![CDATA[Balancing AI Advancements With Public Safety and Transparency with Senator Pete Ricketts]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">As artificial intelligence continues to revolutionize our society, the need for thoughtful regulation becomes increasingly crucial. In this episode, I have the honor of discussing these challenges with </span><a href="https://www.linkedin.com/in/pete-ricketts-9090a0162/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senator Pete Ricketts</a><span style="background-color: transparent;"> from Nebraska. With his background in governance and entrepreneurship, Senator Ricketts offers invaluable insights into the legislative aspects of AI. Together, we delve into how to harness AI responsibly for the benefit of all.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:45) Introduction of a bill for watermarking AI-generated materials.</span></p><p><span style="background-color: transparent;">(03:15) Addressing the concerns of deepfakes and intellectual property in the AI sphere.</span></p><p><span style="background-color: transparent;">(04:01) AI’s transformative potential and the critical need for careful regulation.</span></p><p><span style="background-color: transparent;">(05:19) The impact of AI on national security and election processes.</span></p><p><span style="background-color: transparent;">(05:44) The importance of including small businesses and educational institutions in AI legislation.</span></p><p><span style="background-color: transparent;">(07:00) The need for federal preemption over state laws to avoid a patchwork of AI regulations.</span></p><p><span style="background-color: transparent;">(08:08) The role of workforce reskilling and talent attraction in AI development.</span></p><p><span style="background-color: transparent;">(10:03) Predictions for the timeline of comprehensive AI legislation in Congress.</span></p><p><br></p><p><strong 
style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.ricketts.senate.gov/press-releases/ricketts-introduces-bill-to-combat-deepfakes-require-watermarks-on-a-i-generated-content/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senator Ricketts’ AI Watermarking Bill</a> - https://www.ricketts.senate.gov/press-releases/ricketts-introduces-bill-to-combat-deepfakes-require-watermarks-on-a-i-generated-content/</p><p><a href="https://www.csis.org/analysis/addressing-national-security-implications-ai" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National Security Implications of AI</a> - https://www.csis.org/analysis/addressing-national-security-implications-ai</p><p><a href="https://www.brookings.edu/articles/how-ai-will-transform-the-2024-elections/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI’s Role in Elections</a> - https://www.brookings.edu/articles/how-ai-will-transform-the-2024-elections/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/b7b3b3af-2224-483f-81c1-c3000839d45c/77f7e12330.jpg" />
  <pubDate>Mon, 18 Dec 2023 10:36:13 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="1048577" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/b7b3b3af-2224-483f-81c1-c3000839d45c/episode.mp3" />
  <itunes:title><![CDATA[Balancing AI Advancements With Public Safety and Transparency with Senator Pete Ricketts]]></itunes:title>
  <itunes:duration>12:30</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">As artificial intelligence continues to revolutionize our society, the need for thoughtful regulation becomes increasingly crucial. In this episode, I have the honor of discussing these challenges with </span><a href="https://www.linkedin.com/in/pete-ricketts-9090a0162/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senator Pete Ricketts</a><span style="background-color: transparent;"> from Nebraska. With his background in governance and entrepreneurship, Senator Ricketts offers invaluable insights into the legislative aspects of AI. Together, we delve into how to harness AI responsibly for the benefit of all.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:45) Introduction of a bill for watermarking AI-generated materials.</span></p><p><span style="background-color: transparent;">(03:15) Addressing the concerns of deepfakes and intellectual property in the AI sphere.</span></p><p><span style="background-color: transparent;">(04:01) AI’s transformative potential and the critical need for careful regulation.</span></p><p><span style="background-color: transparent;">(05:19) The impact of AI on national security and election processes.</span></p><p><span style="background-color: transparent;">(05:44) The importance of including small businesses and educational institutions in AI legislation.</span></p><p><span style="background-color: transparent;">(07:00) The need for federal preemption over state laws to avoid a patchwork of AI regulations.</span></p><p><span style="background-color: transparent;">(08:08) The role of workforce reskilling and talent attraction in AI development.</span></p><p><span style="background-color: transparent;">(10:03) Predictions for the timeline of comprehensive AI legislation in Congress.</span></p><p><br></p><p><strong 
style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.ricketts.senate.gov/press-releases/ricketts-introduces-bill-to-combat-deepfakes-require-watermarks-on-a-i-generated-content/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senator Ricketts’ AI Watermarking Bill</a> - https://www.ricketts.senate.gov/press-releases/ricketts-introduces-bill-to-combat-deepfakes-require-watermarks-on-a-i-generated-content/</p><p><a href="https://www.csis.org/analysis/addressing-national-security-implications-ai" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National Security Implications of AI</a> - https://www.csis.org/analysis/addressing-national-security-implications-ai</p><p><a href="https://www.brookings.edu/articles/how-ai-will-transform-the-2024-elections/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI’s Role in Elections</a> - https://www.brookings.edu/articles/how-ai-will-transform-the-2024-elections/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">As artificial intelligence continues to revolutionize our society, the need for thoughtful regulation becomes increasingly crucial. In this episode, I have the honor of discussing these challenges with </span><a href="https://www.linkedin.com/in/pete-ricketts-9090a0162/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senator Pete Ricketts</a><span style="background-color: transparent;"> from Nebraska. With his background in governance and entrepreneurship, Senator Ricketts offers invaluable insights into the legislative aspects of AI. Together, we delve into how to harness AI responsibly for the benefit of all.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:45) Introduction of a bill for watermarking AI-generated materials.</span></p><p><span style="background-color: transparent;">(03:15) Addressing the concerns of deepfakes and intellectual property in the AI sphere.</span></p><p><span style="background-color: transparent;">(04:01) AI’s transformative potential and the critical need for careful regulation.</span></p><p><span style="background-color: transparent;">(05:19) The impact of AI on national security and election processes.</span></p><p><span style="background-color: transparent;">(05:44) The importance of including small businesses and educational institutions in AI legislation.</span></p><p><span style="background-color: transparent;">(07:00) The need for federal preemption over state laws to avoid a patchwork of AI regulations.</span></p><p><span style="background-color: transparent;">(08:08) The role of workforce reskilling and talent attraction in AI development.</span></p><p><span style="background-color: transparent;">(10:03) Predictions for the timeline of comprehensive AI legislation in Congress.</span></p><p><br></p><p><strong 
style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.ricketts.senate.gov/press-releases/ricketts-introduces-bill-to-combat-deepfakes-require-watermarks-on-a-i-generated-content/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Senator Ricketts’ AI Watermarking Bill</a> - https://www.ricketts.senate.gov/press-releases/ricketts-introduces-bill-to-combat-deepfakes-require-watermarks-on-a-i-generated-content/</p><p><a href="https://www.csis.org/analysis/addressing-national-security-implications-ai" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National Security Implications of AI</a> - https://www.csis.org/analysis/addressing-national-security-implications-ai</p><p><a href="https://www.brookings.edu/articles/how-ai-will-transform-the-2024-elections/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">AI’s Role in Elections</a> - https://www.brookings.edu/articles/how-ai-will-transform-the-2024-elections/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[As artificial intelligence continues to revolutionize our society, the need for thoughtful regulation becomes increasingly crucial. In this episode, I have the honor of discussing these challenges with Senator Pete Ricketts from Nebraska. With his ...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>6</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[b72cb0ee-41f1-4717-a0b8-2fa6d74207d6]]></guid>
  <title><![CDATA[Exploring the Future of AI Regulation With a Congressional Insight]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">Navigating the complexities of AI isn’t just about technology. It’s about sculpting our future. In this episode, I’m joined by</span><a href="https://www.linkedin.com/in/jayobernolte/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);"> Congressman Jay Obernolte</a><span style="background-color: transparent;">, representing California’s 23rd district and serving as the vice-chair of the congressional AI caucus. With a rich background in AI and a keen eye for policy, Congressman Obernolte offers invaluable insights into the intricate dance of AI innovation and regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:06) Assessing President Biden’s Executive Order on AI and concerns of regulatory overreach.</span></p><p><span style="background-color: transparent;">(04:54) Exploring the Create AI Act’s goal to democratize AI research across academia.</span></p><p><span style="background-color: transparent;">(06:41) Addressing the risk of regulatory capture in the AI industry.</span></p><p><span style="background-color: transparent;">(08:57) Evaluating the role of AI in hiring and the inherent challenges of bias.</span></p><p><span style="background-color: transparent;">(11:05) Debating the need for a new AI regulatory structure.</span></p><p><span style="background-color: transparent;">(14:25) Delving into the implications of open-source AI.</span></p><p><span style="background-color: transparent;">(16:08) Highlighting the role of AI in spreading misinformation and the importance of transparency.</span></p><p><span style="background-color: transparent;">(18:19) Emphasizing the need for diverse perspectives in shaping AI regulation.</span></p><p><span style="background-color: transparent;">(19:44) Advocating for federal over regional or global AI 
regulation models.</span></p><p><span style="background-color: transparent;">(21:42) Offering predictions on the timeline and direction of comprehensive AI legislation in Congress.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/jayobernolte/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Congressman Jay Obernolte</a> - https://www.linkedin.com/in/jayobernolte/</p><p><a href="https://www.whitehouse.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/</p><p><a href="https://www.congress.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Create AI Act</a> - https://www.congress.gov/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/e94f0e85-5c51-44bd-9bdf-8133f95771f1/f2ec4d7186.jpg" />
  <pubDate>Wed, 13 Dec 2023 23:49:43 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="1048577" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/e94f0e85-5c51-44bd-9bdf-8133f95771f1/episode.mp3" />
  <itunes:title><![CDATA[Exploring the Future of AI Regulation With a Congressional Insight]]></itunes:title>
  <itunes:duration>23:57</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">Navigating the complexities of AI isn’t just about technology. It’s about sculpting our future. In this episode, I’m joined by</span><a href="https://www.linkedin.com/in/jayobernolte/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);"> Congressman Jay Obernolte</a><span style="background-color: transparent;">, representing California’s 23rd district and serving as the vice-chair of the congressional AI caucus. With a rich background in AI and a keen eye for policy, Congressman Obernolte offers invaluable insights into the intricate dance of AI innovation and regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:06) Assessing President Biden’s Executive Order on AI and concerns of regulatory overreach.</span></p><p><span style="background-color: transparent;">(04:54) Exploring the Create AI Act’s goal to democratize AI research across academia.</span></p><p><span style="background-color: transparent;">(06:41) Addressing the risk of regulatory capture in the AI industry.</span></p><p><span style="background-color: transparent;">(08:57) Evaluating the role of AI in hiring and the inherent challenges of bias.</span></p><p><span style="background-color: transparent;">(11:05) Debating the need for a new AI regulatory structure.</span></p><p><span style="background-color: transparent;">(14:25) Delving into the implications of open-source AI.</span></p><p><span style="background-color: transparent;">(16:08) Highlighting the role of AI in spreading misinformation and the importance of transparency.</span></p><p><span style="background-color: transparent;">(18:19) Emphasizing the need for diverse perspectives in shaping AI regulation.</span></p><p><span style="background-color: transparent;">(19:44) Advocating for federal over regional or global AI 
regulation models.</span></p><p><span style="background-color: transparent;">(21:42) Offering predictions on the timeline and direction of comprehensive AI legislation in Congress.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/jayobernolte/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Congressman Jay Obernolte</a> - https://www.linkedin.com/in/jayobernolte/</p><p><a href="https://www.whitehouse.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/</p><p><a href="https://www.congress.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Create AI Act</a> - https://www.congress.gov/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">Navigating the complexities of AI isn’t just about technology. It’s about sculpting our future. In this episode, I’m joined by</span><a href="https://www.linkedin.com/in/jayobernolte/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);"> Congressman Jay Obernolte</a><span style="background-color: transparent;">, representing California’s 23rd district and serving as the vice-chair of the congressional AI caucus. With a rich background in AI and a keen eye for policy, Congressman Obernolte offers invaluable insights into the intricate dance of AI innovation and regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:06) Assessing President Biden’s Executive Order on AI and concerns of regulatory overreach.</span></p><p><span style="background-color: transparent;">(04:54) Exploring the Create AI Act’s goal to democratize AI research across academia.</span></p><p><span style="background-color: transparent;">(06:41) Addressing the risk of regulatory capture in the AI industry.</span></p><p><span style="background-color: transparent;">(08:57) Evaluating the role of AI in hiring and the inherent challenges of bias.</span></p><p><span style="background-color: transparent;">(11:05) Debating the need for a new AI regulatory structure.</span></p><p><span style="background-color: transparent;">(14:25) Delving into the implications of open-source AI.</span></p><p><span style="background-color: transparent;">(16:08) Highlighting the role of AI in spreading misinformation and the importance of transparency.</span></p><p><span style="background-color: transparent;">(18:19) Emphasizing the need for diverse perspectives in shaping AI regulation.</span></p><p><span style="background-color: transparent;">(19:44) Advocating for federal over regional or global AI 
regulation models.</span></p><p><span style="background-color: transparent;">(21:42) Offering predictions on the timeline and direction of comprehensive AI legislation in Congress.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/jayobernolte/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Congressman Jay Obernolte</a> - https://www.linkedin.com/in/jayobernolte/</p><p><a href="https://www.whitehouse.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/</p><p><a href="https://www.congress.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Create AI Act</a> - https://www.congress.gov/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Navigating the complexities of AI isn’t just about technology. It’s about sculpting our future. In this episode, I’m joined by Congressman Jay Obernolte, representing California’s 23rd district and serving as the vice-chair of the congressional AI ...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>5</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[7b1f682b-8209-4df4-b183-010162cd2d32]]></guid>
  <title><![CDATA[The Role of MLOps Community in Influencing AI Policymaking]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">Are we ready for the AI revolution? How do we balance innovation with regulation? On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/dpbrinkm/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Demetrios Brinkmann</a><span style="background-color: transparent;">, Founder and CEO of the MLOps Community, to explore AI's impact on global economies, security and workforce, and the challenges in creating effective regulatory frameworks.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:51) The dual role of AI in boosting GDP and posing a threat to workforce and national security.</span></p><p><span style="background-color: transparent;">(01:10) The US Congress' efforts to create a legislative framework for AI.</span></p><p><span style="background-color: transparent;">(02:14) The significance of the MLOps community in AI production.</span></p><p><span style="background-color: transparent;">(03:05) The impact of global AI regulations on the MLOps community.</span></p><p><span style="background-color: transparent;">(03:40) President Biden's Executive Order on AI and the challenges in regulating large language models.</span></p><p><span style="background-color: transparent;">(08:01) The EU's AI Act focusing on risk management and post-market monitoring.</span></p><p><span style="background-color: transparent;">(14:41) Identifying key risks from AI that require regulation.</span></p><p><span style="background-color: transparent;">(21:24) The debate over open-sourcing LLMs.</span></p><p><span style="background-color: transparent;">(26:15) Concerns about regulatory capture by big tech companies.</span></p><p><span style="background-color: transparent;">(30:38) The importance of global or regional AI regulations.</span></p><p><br></p><p><strong 
style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/dpbrinkm/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Demetrios Brinkmann</a> - https://www.linkedin.com/in/dpbrinkm/</p><p><a href="https://ai-infrastructure.org/mlops-community-now/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">MLOps Community</a> - https://ai-infrastructure.org/mlops-community-now/</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden's Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/ef98336b-dad9-4a6e-9c3a-a7e7274d1f16/f7d8e2dcff.jpg" />
  <pubDate>Thu, 07 Dec 2023 18:06:00 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="1048577" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/ef98336b-dad9-4a6e-9c3a-a7e7274d1f16/episode.mp3" />
  <itunes:title><![CDATA[The Role of MLOps Community in Influencing AI Policymaking]]></itunes:title>
  <itunes:duration>37:53</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">Are we ready for the AI revolution? How do we balance innovation with regulation? On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/dpbrinkm/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Demetrios Brinkmann</a><span style="background-color: transparent;">, Founder and CEO of the MLOps Community, to explore AI's impact on global economies, security and workforce, and the challenges in creating effective regulatory frameworks.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:51) The dual role of AI in boosting GDP and posing a threat to workforce and national security.</span></p><p><span style="background-color: transparent;">(01:10) The US Congress' efforts to create a legislative framework for AI.</span></p><p><span style="background-color: transparent;">(02:14) The significance of the MLOps community in AI production.</span></p><p><span style="background-color: transparent;">(03:05) The impact of global AI regulations on the MLOps community.</span></p><p><span style="background-color: transparent;">(03:40) President Biden's Executive Order on AI and the challenges in regulating large language models.</span></p><p><span style="background-color: transparent;">(08:01) The EU's AI Act focusing on risk management and post-market monitoring.</span></p><p><span style="background-color: transparent;">(14:41) Identifying key risks from AI that require regulation.</span></p><p><span style="background-color: transparent;">(21:24) The debate over open-sourcing LLMs.</span></p><p><span style="background-color: transparent;">(26:15) Concerns about regulatory capture by big tech companies.</span></p><p><span style="background-color: transparent;">(30:38) The importance of global or regional AI 
regulations.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/dpbrinkm/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Demetrios Brinkmann</a> - https://www.linkedin.com/in/dpbrinkm/</p><p><a href="https://ai-infrastructure.org/mlops-community-now/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">MLOps Community</a> - https://ai-infrastructure.org/mlops-community-now/</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden's Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">Are we ready for the AI revolution? How do we balance innovation with regulation? On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/dpbrinkm/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Demetrios Brinkmann</a><span style="background-color: transparent;">, Founder and CEO of the MLOps Community, to explore AI's impact on global economies, security and workforce, and the challenges in creating effective regulatory frameworks.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(00:51) The dual role of AI in boosting GDP and posing a threat to workforce and national security.</span></p><p><span style="background-color: transparent;">(01:10) The US Congress' efforts to create a legislative framework for AI.</span></p><p><span style="background-color: transparent;">(02:14) The significance of the MLOps community in AI production.</span></p><p><span style="background-color: transparent;">(03:05) The impact of global AI regulations on the MLOps community.</span></p><p><span style="background-color: transparent;">(03:40) President Biden's Executive Order on AI and the challenges in regulating large language models.</span></p><p><span style="background-color: transparent;">(08:01) The EU's AI Act focusing on risk management and post-market monitoring.</span></p><p><span style="background-color: transparent;">(14:41) Identifying key risks from AI that require regulation.</span></p><p><span style="background-color: transparent;">(21:24) The debate over open-sourcing LLMs.</span></p><p><span style="background-color: transparent;">(26:15) Concerns about regulatory capture by big tech companies.</span></p><p><span style="background-color: transparent;">(30:38) The importance of global or regional AI 
regulations.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/dpbrinkm/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Demetrios Brinkmann</a> - https://www.linkedin.com/in/dpbrinkm/</p><p><a href="https://ai-infrastructure.org/mlops-community-now/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">MLOps Community</a> - https://ai-infrastructure.org/mlops-community-now/</p><p><a href="https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden's Executive Order on AI</a> - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/</p><p><a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">EU AI Act</a> - https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Are we ready for the AI revolution? How do we balance innovation with regulation? On this episode, I’m joined by Demetrios Brinkmann, Founder and CEO of the MLOps Community, to explore AI's impact on global economies, security and workforce, and th...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>4</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[7113c38b-3bb0-4556-a8f2-5139dea8f84a]]></guid>
  <title><![CDATA[The Role of AI in Job Creation and Global Tech Leadership]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">In this episode, I’m joined by former Governor Terry McAuliffe, who shares his insights on the future of AI and its impact on job creation, national security and global technological dominance. With his extensive experience in both politics and entrepreneurship, Governor McAuliffe provides a unique perspective on the necessary steps the United States must take to lead in AI innovation and regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:08) The significance of President Biden’s Executive Order on AI.</span></p><p><span style="background-color: transparent;">(03:46) The need for long-term, consistent AI standards and legislation.</span></p><p><span style="background-color: transparent;">(04:25) Addressing public concerns about AI and job displacement.</span></p><p><span style="background-color: transparent;">(06:16) The importance of establishing a regulatory agency for AI.</span></p><p><span style="background-color: transparent;">(07:37) Promoting AI education starting from kindergarten.</span></p><p><span style="background-color: transparent;">(09:18) Proposing a scholarship program for AI studies.</span></p><p><span style="background-color: transparent;">(10:19) AI’s role in maintaining global leadership and job growth.</span></p><p><span style="background-color: transparent;">(12:34) AI as a crucial aspect of national security.</span></p><p><br></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.whitehouse.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a></p><p><a href="https://www.nsf.gov/" target="_blank"
style="background-color: transparent; color: rgb(17, 85, 204);">National Science Foundation (NSF)</a></p><p><a href="https://www.nist.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National Institute of Standards and Technology (NIST)</a></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/374ba8c7-cd44-4d10-92dd-15bc920ee80a/c5db153534.jpg" />
  <pubDate>Wed, 22 Nov 2023 02:00:00 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="1048577" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/374ba8c7-cd44-4d10-92dd-15bc920ee80a/episode.mp3" />
  <itunes:title><![CDATA[The Role of AI in Job Creation and Global Tech Leadership]]></itunes:title>
  <itunes:duration>13:12</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">In this episode, I’m joined by former Governor Terry McAuliffe, who shares his insights on the future of AI and its impact on job creation, national security and global technological dominance. With his extensive experience in both politics and entrepreneurship, Governor McAuliffe provides a unique perspective on the necessary steps the United States must take to lead in AI innovation and regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:08) The significance of President Biden’s Executive Order on AI.</span></p><p><span style="background-color: transparent;">(03:46) The need for long-term, consistent AI standards and legislation.</span></p><p><span style="background-color: transparent;">(04:25) Addressing public concerns about AI and job displacement.</span></p><p><span style="background-color: transparent;">(06:16) The importance of establishing a regulatory agency for AI.</span></p><p><span style="background-color: transparent;">(07:37) Promoting AI education starting from kindergarten.</span></p><p><span style="background-color: transparent;">(09:18) Proposing a scholarship program for AI studies.</span></p><p><span style="background-color: transparent;">(10:19) AI’s role in maintaining global leadership and job growth.</span></p><p><span style="background-color: transparent;">(12:34) AI as a crucial aspect of national security.</span></p><p><br></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.whitehouse.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a></p><p><a href="https://www.nsf.gov/"
target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National Science Foundation (NSF)</a></p><p><a href="https://www.nist.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National Institute of Standards and Technology (NIST)</a></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">In this episode, I’m joined by former Governor Terry McAuliffe, who shares his insights on the future of AI and its impact on job creation, national security and global technological dominance. With his extensive experience in both politics and entrepreneurship, Governor McAuliffe provides a unique perspective on the necessary steps the United States must take to lead in AI innovation and regulation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:08) The significance of President Biden’s Executive Order on AI.</span></p><p><span style="background-color: transparent;">(03:46) The need for long-term, consistent AI standards and legislation.</span></p><p><span style="background-color: transparent;">(04:25) Addressing public concerns about AI and job displacement.</span></p><p><span style="background-color: transparent;">(06:16) The importance of establishing a regulatory agency for AI.</span></p><p><span style="background-color: transparent;">(07:37) Promoting AI education starting from kindergarten.</span></p><p><span style="background-color: transparent;">(09:18) Proposing a scholarship program for AI studies.</span></p><p><span style="background-color: transparent;">(10:19) AI’s role in maintaining global leadership and job growth.</span></p><p><span style="background-color: transparent;">(12:34) AI as a crucial aspect of national security.</span></p><p><br></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.whitehouse.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a></p><p><a href="https://www.nsf.gov/"
target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National Science Foundation (NSF)</a></p><p><a href="https://www.nist.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National Institute of Standards and Technology (NIST)</a></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[In this episode, I’m joined by former Governor Terry McAuliffe, who shares his insights on the future of AI and its impact on job creation, national security and global technological dominance. With his extensive experience in both politics and ent...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>3</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[0b3cc477-b3a4-48dc-813c-f89fdd4ade04]]></guid>
  <title><![CDATA[Balancing Innovation With Social Responsibility in the Age of AI]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">Individual progress in technology isn’t just about personal achievement; it’s about shaping the future for society. On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/don-beyer-6b444b4/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Congressman Don Beyer</a><span style="background-color: transparent;">, US Representative for Virginia’s 8th District and Vice Chair of the AI Caucus in the </span><a href="https://www.linkedin.com/company/u.s.-house-of-representatives/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">House of Representatives</a><span style="background-color: transparent;">, who brings a unique perspective to the table with his dedication to understanding and shaping AI legislation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:29) Congressman Beyer’s unique approach to learning about AI.</span></p><p><span style="background-color: transparent;">(02:55) The significance of President Biden’s Executive Order on AI.</span></p><p><span style="background-color: transparent;">(03:46) The debate on creating a separate regulatory agency for AI.</span></p><p><span style="background-color: transparent;">(06:36) The importance of democratizing AI through legislation like the Create AI Act.</span></p><p><span style="background-color: transparent;">(08:46) The pros and cons of open-sourcing AI models.</span></p><p><span style="background-color: transparent;">(12:10) AI’s role in political advertising and the need for ethical considerations.</span></p><p><span style="background-color: transparent;">(16:22) How AI will impact workforce and immigration policies.</span></p><p><span style="background-color: transparent;">(20:12) The priorities for AI legislation in 
Congress.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/don-beyer-6b444b4/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Congressman Don Beyer</a> - https://www.linkedin.com/in/don-beyer-6b444b4/</p><p><a href="https://www.linkedin.com/company/u.s.-house-of-representatives/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">House of Representatives</a> - https://www.linkedin.com/company/u.s.-house-of-representatives/</p><p><a href="https://www.whitehouse.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/</p><p><a href="https://www.congress.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Create AI Act</a> - https://www.congress.gov/</p><p><a href="https://www.europarl.europa.eu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Discussions on AI with EU Parliamentarians</a> - https://www.europarl.europa.eu/</p><p><a href="https://www.nsf.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National AI Research Resource</a> - https://www.nsf.gov/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/d4d3a190-4909-40b7-9811-13d6f8b47b46/6ba6f14e9f.jpg" />
  <pubDate>Fri, 17 Nov 2023 11:51:05 -0500</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="1048577" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/d4d3a190-4909-40b7-9811-13d6f8b47b46/episode.mp3" />
  <itunes:title><![CDATA[Balancing Innovation With Social Responsibility in the Age of AI]]></itunes:title>
  <itunes:duration>26:12</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">Individual progress in technology isn’t just about personal achievement; it’s about shaping the future for society. On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/don-beyer-6b444b4/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Congressman Don Beyer</a><span style="background-color: transparent;">, US Representative for Virginia’s 8th District and Vice Chair of the AI Caucus in the </span><a href="https://www.linkedin.com/company/u.s.-house-of-representatives/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">House of Representatives</a><span style="background-color: transparent;">, who brings a unique perspective to the table with his dedication to understanding and shaping AI legislation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:29) Congressman Beyer’s unique approach to learning about AI.</span></p><p><span style="background-color: transparent;">(02:55) The significance of President Biden’s Executive Order on AI.</span></p><p><span style="background-color: transparent;">(03:46) The debate on creating a separate regulatory agency for AI.</span></p><p><span style="background-color: transparent;">(06:36) The importance of democratizing AI through legislation like the Create AI Act.</span></p><p><span style="background-color: transparent;">(08:46) The pros and cons of open-sourcing AI models.</span></p><p><span style="background-color: transparent;">(12:10) AI’s role in political advertising and the need for ethical considerations.</span></p><p><span style="background-color: transparent;">(16:22) How AI will impact workforce and immigration policies.</span></p><p><span style="background-color: transparent;">(20:12) The priorities for AI legislation in 
Congress.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/don-beyer-6b444b4/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Congressman Don Beyer</a> - https://www.linkedin.com/in/don-beyer-6b444b4/</p><p><a href="https://www.linkedin.com/company/u.s.-house-of-representatives/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">House of Representatives</a> - https://www.linkedin.com/company/u.s.-house-of-representatives/</p><p><a href="https://www.whitehouse.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/</p><p><a href="https://www.congress.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Create AI Act</a> - https://www.congress.gov/</p><p><a href="https://www.europarl.europa.eu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Discussions on AI with EU Parliamentarians</a> - https://www.europarl.europa.eu/</p><p><a href="https://www.nsf.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National AI Research Resource</a> - https://www.nsf.gov/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">Individual progress in technology isn’t just about personal achievement; it’s about shaping the future for society. On this episode, I’m joined by </span><a href="https://www.linkedin.com/in/don-beyer-6b444b4/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Congressman Don Beyer</a><span style="background-color: transparent;">, US Representative for Virginia’s 8th District and Vice Chair of the AI Caucus in the </span><a href="https://www.linkedin.com/company/u.s.-house-of-representatives/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">House of Representatives</a><span style="background-color: transparent;">, who brings a unique perspective to the table with his dedication to understanding and shaping AI legislation.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(01:29) Congressman Beyer’s unique approach to learning about AI.</span></p><p><span style="background-color: transparent;">(02:55) The significance of President Biden’s Executive Order on AI.</span></p><p><span style="background-color: transparent;">(03:46) The debate on creating a separate regulatory agency for AI.</span></p><p><span style="background-color: transparent;">(06:36) The importance of democratizing AI through legislation like the Create AI Act.</span></p><p><span style="background-color: transparent;">(08:46) The pros and cons of open-sourcing AI models.</span></p><p><span style="background-color: transparent;">(12:10) AI’s role in political advertising and the need for ethical considerations.</span></p><p><span style="background-color: transparent;">(16:22) How AI will impact workforce and immigration policies.</span></p><p><span style="background-color: transparent;">(20:12) The priorities for AI legislation in 
Congress.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/don-beyer-6b444b4/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Congressman Don Beyer</a> - https://www.linkedin.com/in/don-beyer-6b444b4/</p><p><a href="https://www.linkedin.com/company/u.s.-house-of-representatives/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">House of Representatives</a> - https://www.linkedin.com/company/u.s.-house-of-representatives/</p><p><a href="https://www.whitehouse.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">President Biden’s Executive Order on AI</a> - https://www.whitehouse.gov/</p><p><a href="https://www.congress.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Create AI Act</a> - https://www.congress.gov/</p><p><a href="https://www.europarl.europa.eu/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Discussions on AI with EU Parliamentarians</a> - https://www.europarl.europa.eu/</p><p><a href="https://www.nsf.gov/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">National AI Research Resource</a> - https://www.nsf.gov/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Individual progress in technology isn’t just about personal achievement; it’s about shaping the future for society. On this episode, I’m joined by Congressman Don Beyer, US Representative for Virginia’s 8th District and Vice Chair of the AI Caucus ...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>2</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[244cf1fe-f920-4c36-9f6b-2af9ada70b1c]]></guid>
  <title><![CDATA[Navigating the Challenges of AI Legislation]]></title>
  <description><![CDATA[<p><span style="background-color: transparent;">The potential of AI is limitless, yet its implications are complex and multifaceted. Striking a balance between innovation and regulation is crucial for harnessing its benefits while safeguarding against risks.</span></p><p><br></p><p><span style="background-color: transparent;">In this episode, I sit down with </span><a href="https://www.linkedin.com/in/rajakrishnamoorthi/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Raja Krishnamoorthi</a><span style="background-color: transparent;">, US Congressman representing Illinois’ 8th District, to delve deep into the world of AI, its possibilities, its dangers and how the US is positioning itself in this global race.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:36) The necessity of AI regulation.</span></p><p><span style="background-color: transparent;">(03:06) Debating a potential AI regulatory agency.</span></p><p><span style="background-color: transparent;">(04:09) Concerns about global competitiveness, especially China’s AI advances.</span></p><p><span style="background-color: transparent;">(04:52) Introduction of the P.A.S.T.
model for AI legislation: Privacy, Accountability, Security and Transparency.&nbsp;</span></p><p><span style="background-color: transparent;">(07:00) Concerns about regulatory capture by corporations and the need for diverse perspectives.&nbsp;</span></p><p><span style="background-color: transparent;">(08:35) Thoughts on open-sourcing large AI language models and implications.&nbsp;</span></p><p><span style="background-color: transparent;">(13:10) The geopolitical impact of AI development, especially in China’s context.&nbsp;</span></p><p><span style="background-color: transparent;">(15:48) Worries about deepfake technology and its election impact.</span></p><p><span style="background-color: transparent;">(21:34) Congressional challenges and ambitious goals for AI regulations, with potential timing considerations.</span></p><p><br></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/rajakrishnamoorthi/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Raja Krishnamoorthi</a> - https://www.linkedin.com/in/rajakrishnamoorthi/</p><p><a href="https://www.linkedin.com/company/u.s.-house-of-representatives/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">US Congressman</a> - https://www.linkedin.com/company/u.s.-house-of-representatives/</p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><br></p><p><br></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p><p><br></p><p><br></p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/7dfc88a4-745b-466a-a072-fad13664e5a0/shows/099ef565-66b9-47ee-8d35-103cba64c0cb/episodes/7824ee19-a522-4002-be46-7f49e215e6d3/72c0d22fb5.jpg" />
  <pubDate>Wed, 25 Oct 2023 08:32:00 -0400</pubDate>
  <link>https://regulatingai.org/podcast/</link>
  <author><![CDATA[sanjaypuri.podcast@gmail.com (Sanjay Puri)]]></author>
  <enclosure length="1048577" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/7dfc88a4-745b-466a-a072-fad13664e5a0/episodes/7824ee19-a522-4002-be46-7f49e215e6d3/episode.mp3" />
  <itunes:title><![CDATA[Navigating the Challenges of AI Legislation]]></itunes:title>
  <itunes:duration>22:50</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="background-color: transparent;">The potential of AI is limitless, yet its implications are complex and multifaceted. Striking a balance between innovation and regulation is crucial for harnessing its benefits while safeguarding against risks.</span></p><p><br></p><p><span style="background-color: transparent;">In this episode, I sit down with </span><a href="https://www.linkedin.com/in/rajakrishnamoorthi/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Raja Krishnamoorthi</a><span style="background-color: transparent;">, US Congressman representing Illinois’ 8th District, to delve deep into the world of AI, its possibilities, its dangers and how the US is positioning itself in this global race.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:36) The necessity of AI regulation.</span></p><p><span style="background-color: transparent;">(03:06) Debating a potential AI regulatory agency.</span></p><p><span style="background-color: transparent;">(04:09) Concerns about global competitiveness, especially China’s AI advances.</span></p><p><span style="background-color: transparent;">(04:52) Introduction of the P.A.S.T. model for AI legislation: Privacy, Accountability, Security and Transparency.</span></p><p><span style="background-color: transparent;">(07:00) Concerns about regulatory capture by corporations and the need for diverse perspectives.</span></p><p><span style="background-color: transparent;">(08:35) Thoughts on open-sourcing large AI language models and their implications.</span></p><p><span style="background-color: transparent;">(13:10) The geopolitical impact of AI development, especially in China’s context.</span></p><p><span style="background-color: transparent;">(15:48) Worries about deepfake technology and its impact on elections.</span></p><p><span style="background-color: transparent;">(21:34) Congressional challenges and ambitious goals for AI regulations, with potential timing considerations.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/rajakrishnamoorthi/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Raja Krishnamoorthi</a> - https://www.linkedin.com/in/rajakrishnamoorthi/</p><p><a href="https://www.linkedin.com/company/u.s.-house-of-representatives/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">US House of Representatives</a> - https://www.linkedin.com/company/u.s.-house-of-representatives/</p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="background-color: transparent;">The potential of AI is limitless, yet its implications are complex and multifaceted. Striking a balance between innovation and regulation is crucial for harnessing its benefits while safeguarding against risks.</span></p><p><br></p><p><span style="background-color: transparent;">In this episode, I sit down with </span><a href="https://www.linkedin.com/in/rajakrishnamoorthi/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Raja Krishnamoorthi</a><span style="background-color: transparent;">, US Congressman representing Illinois’ 8th District, to delve deep into the world of AI, its possibilities, its dangers and how the US is positioning itself in this global race.</span></p><p><br></p><p><strong style="background-color: transparent;">Key Takeaways:</strong></p><p><br></p><p><span style="background-color: transparent;">(02:36) The necessity of AI regulation.</span></p><p><span style="background-color: transparent;">(03:06) Debating a potential AI regulatory agency.</span></p><p><span style="background-color: transparent;">(04:09) Concerns about global competitiveness, especially China’s AI advances.</span></p><p><span style="background-color: transparent;">(04:52) Introduction of the P.A.S.T. model for AI legislation: Privacy, Accountability, Security and Transparency.</span></p><p><span style="background-color: transparent;">(07:00) Concerns about regulatory capture by corporations and the need for diverse perspectives.</span></p><p><span style="background-color: transparent;">(08:35) Thoughts on open-sourcing large AI language models and their implications.</span></p><p><span style="background-color: transparent;">(13:10) The geopolitical impact of AI development, especially in China’s context.</span></p><p><span style="background-color: transparent;">(15:48) Worries about deepfake technology and its impact on elections.</span></p><p><span style="background-color: transparent;">(21:34) Congressional challenges and ambitious goals for AI regulations, with potential timing considerations.</span></p><p><br></p><p><strong style="background-color: transparent;">Resources Mentioned:</strong></p><p><br></p><p><a href="https://www.linkedin.com/in/rajakrishnamoorthi/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">Raja Krishnamoorthi</a> - https://www.linkedin.com/in/rajakrishnamoorthi/</p><p><a href="https://www.linkedin.com/company/u.s.-house-of-representatives/" target="_blank" style="background-color: transparent; color: rgb(17, 85, 204);">US House of Representatives</a> - https://www.linkedin.com/company/u.s.-house-of-representatives/</p><p><br></p><p><span style="background-color: transparent;">Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.</span></p><p><br></p><p><span style="background-color: transparent;">#AIRegulation #AISafety #AIStandard</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[The potential of AI is limitless, yet its implications are complex and multifaceted. Striking a balance between innovation and regulation is crucial for harnessing its benefits while safeguarding against risks. In this episode, I sit down with Raja ...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:episode>1</itunes:episode>
  <itunes:season>1</itunes:season>
</item>
</channel>
</rss>