<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:podcast="https://podcastindex.org/namespace/1.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0">
<channel>
  <atom:link href="https://feeds.cohostpodcasting.com/fC5brLPr" rel="self" title="MP3 Audio" type="application/rss+xml"/>
  <atom:link href="https://pubsubhubbub.appspot.com/" rel="hub" xmlns="http://www.w3.org/2005/Atom" />
  <generator>https://cohostpodcasting.com</generator>
  <title><![CDATA[Before AGI]]></title>
  <description><![CDATA[Artificial General Intelligence — the type of AI that reaches or even surpasses human capabilities — is an exciting topic. But an equally important question gets far less attention: what happens before AGI arrives? That is, what should we be doing to prepare?

In Before AGI, I hope to have honest conversations about the goals, the problems, and what lies ahead as we develop AI.
]]></description>
  <itunes:summary><![CDATA[Artificial General Intelligence — the type of AI that reaches or even surpasses human capabilities — is an exciting topic. But an equally important question gets far less attention: what happens before AGI arrives? That is, what should we be doing to prepare?

In Before AGI, I hope to have honest conversations about the goals, the problems, and what lies ahead as we develop AI.
]]></itunes:summary>
  <language>en</language>
  <copyright><![CDATA[Copyright 2024]]></copyright>
<podcast:guid>15811fe7-b105-4791-bd5b-a39e202ae51b</podcast:guid>
  <pubDate>Fri, 31 May 2024 21:45:06 -0400</pubDate>
  <lastBuildDate>Wed, 29 Apr 2026 19:56:50 -0400</lastBuildDate>
  <image>
    <link>https://www.cohostpodcasting.com</link>
    <title><![CDATA[Before AGI]]></title>
    <url>https://files.cohostpodcasting.com/quill-file-prod/21e43657-35ea-4e65-8f5d-885e9d364c06/shows/15811fe7-b105-4791-bd5b-a39e202ae51b/cover-art/original_0c82671c6a7e782ee25cc2f19b4fc5c4.jpg</url>
  </image>
  <link>https://www.cohostpodcasting.com</link>
  <itunes:type>episodic</itunes:type>
  <itunes:author><![CDATA[Aleksander Mądry]]></itunes:author>
  <itunes:explicit>false</itunes:explicit>
  <itunes:image href="https://files.cohostpodcasting.com/quill-file-prod/21e43657-35ea-4e65-8f5d-885e9d364c06/shows/15811fe7-b105-4791-bd5b-a39e202ae51b/cover-art/original_0c82671c6a7e782ee25cc2f19b4fc5c4.jpg"/>
  <itunes:new-feed-url>https://feeds.cohostpodcasting.com/fC5brLPr</itunes:new-feed-url>
  
  <itunes:owner>
    <itunes:name><![CDATA[TJ Bonaventura]]></itunes:name>
    <itunes:email>mit.officialpodcast@gmail.com</itunes:email>
  </itunes:owner>
  <itunes:category text="Technology"/>
  <itunes:category text="Business"/>
<item>
  <guid isPermaLink="false"><![CDATA[ca8ef949-282a-4a5c-8f85-5832cabd3154]]></guid>
  <title><![CDATA[Jakub Pachocki and Szymon Sidor: Building AI]]></title>
  <description><![CDATA[<p>Artificial intelligence is rapidly transforming the world, raising urgent questions about its impact, governance, and the future of human-machine collaboration. As AI systems become more capable, society faces challenges around safety and the balance of power. What does it mean to build and deploy technology that can reason, create, and potentially automate research itself? How do leading researchers navigate the technical and ethical frontiers of this new era?</p><p><br></p><p><a href="https://www.linkedin.com/in/jakub-pachocki/" target="_blank">Jakub Pachocki</a>, Chief Scientist at <a href="https://www.linkedin.com/company/openai/" target="_blank">OpenAI</a>, and <a href="https://www.linkedin.com/in/szymon-sidor-98164044/" target="_blank">Szymon Sidor</a>, Technical Fellow at <a href="https://www.linkedin.com/company/openai/" target="_blank">OpenAI</a>, share their journeys from early programming competitions in Poland to shaping some of the most advanced AI systems in the world. They discuss the evolution of AI research, the technical and emotional challenges of building breakthrough models, and the profound societal questions that come with unprecedented progress.</p><p><br></p><p>2:42 - Origin story: high school to OpenAI</p><p>6:31 - “AI enlightenment” and AlphaGo moment</p><p>17:12 - Early OpenAI culture and impostor syndrome</p><p>23:30 - Power duo dynamic and collaboration</p><p>27:25 - Shift to reasoning models</p><p>36:23 - Possibilities of AGI</p><p>42:12 - OpenAI’s pandemic efforts showed AI’s immaturity</p><p>51:15 - Governance lessons from crisis</p><p>55:39 - AI safety and optimism for the future</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/21e43657-35ea-4e65-8f5d-885e9d364c06/shows/15811fe7-b105-4791-bd5b-a39e202ae51b/episodes/a49422cf-003e-4939-b949-d18d4ffd084c/6e228e9885.jpg" />
  <pubDate>Thu, 31 Jul 2025 09:00:00 -0400</pubDate>
  <link>https://www.cohostpodcasting.com</link>
  <author><![CDATA[mit.officialpodcast@gmail.com (Aleksander Mądry)]]></author>
  <enclosure length="52378308" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/21e43657-35ea-4e65-8f5d-885e9d364c06/episodes/a49422cf-003e-4939-b949-d18d4ffd084c/episode.mp3?v=1b76149032" />
  <itunes:title><![CDATA[Jakub Pachocki and Szymon Sidor: Building AI]]></itunes:title>
  <itunes:duration>54:33</itunes:duration>
  <itunes:summary><![CDATA[<p>Artificial intelligence is rapidly transforming the world, raising urgent questions about its impact, governance, and the future of human-machine collaboration. As AI systems become more capable, society faces challenges around safety and the balance of power. What does it mean to build and deploy technology that can reason, create, and potentially automate research itself? How do leading researchers navigate the technical and ethical frontiers of this new era?</p><p><br></p><p><a href="https://www.linkedin.com/in/jakub-pachocki/" target="_blank">Jakub Pachocki</a>, Chief Scientist at <a href="https://www.linkedin.com/company/openai/" target="_blank">OpenAI</a>, and <a href="https://www.linkedin.com/in/szymon-sidor-98164044/" target="_blank">Szymon Sidor</a>, Technical Fellow at <a href="https://www.linkedin.com/company/openai/" target="_blank">OpenAI</a>, share their journeys from early programming competitions in Poland to shaping some of the most advanced AI systems in the world. They discuss the evolution of AI research, the technical and emotional challenges of building breakthrough models, and the profound societal questions that come with unprecedented progress.</p><p><br></p><p>2:42 - Origin story: high school to OpenAI</p><p>6:31 - “AI enlightenment” and AlphaGo moment</p><p>17:12 - Early OpenAI culture and impostor syndrome</p><p>23:30 - Power duo dynamic and collaboration</p><p>27:25 - Shift to reasoning models</p><p>36:23 - Possibilities of AGI</p><p>42:12 - OpenAI’s pandemic efforts showed AI’s immaturity</p><p>51:15 - Governance lessons from crisis</p><p>55:39 - AI safety and optimism for the future</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Artificial intelligence is rapidly transforming the world, raising urgent questions about its impact, governance, and the future of human-machine collaboration. As AI systems become more capable, society faces challenges around safety and the balance of power. What does it mean to build and deploy technology that can reason, create, and potentially automate research itself? How do leading researchers navigate the technical and ethical frontiers of this new era?</p><p><br></p><p><a href="https://www.linkedin.com/in/jakub-pachocki/" target="_blank">Jakub Pachocki</a>, Chief Scientist at <a href="https://www.linkedin.com/company/openai/" target="_blank">OpenAI</a>, and <a href="https://www.linkedin.com/in/szymon-sidor-98164044/" target="_blank">Szymon Sidor</a>, Technical Fellow at <a href="https://www.linkedin.com/company/openai/" target="_blank">OpenAI</a>, share their journeys from early programming competitions in Poland to shaping some of the most advanced AI systems in the world. They discuss the evolution of AI research, the technical and emotional challenges of building breakthrough models, and the profound societal questions that come with unprecedented progress.</p><p><br></p><p>2:42 - Origin story: high school to OpenAI</p><p>6:31 - “AI enlightenment” and AlphaGo moment</p><p>17:12 - Early OpenAI culture and impostor syndrome</p><p>23:30 - Power duo dynamic and collaboration</p><p>27:25 - Shift to reasoning models</p><p>36:23 - Possibilities of AGI</p><p>42:12 - OpenAI’s pandemic efforts showed AI’s immaturity</p><p>51:15 - Governance lessons from crisis</p><p>55:39 - AI safety and optimism for the future</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Artificial intelligence is rapidly transforming the world, raising urgent questions about its impact, governance, and the future of human-machine collaboration. As AI systems become more capable, society faces challenges around safety and the balan...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[d852a6ac-5d04-47d1-a985-3f42c4e2ae56]]></guid>
  <title><![CDATA[Jonathan Zittrain: AI Agents and Trust]]></title>
  <description><![CDATA[<p>AI systems are beginning to act on our behalf — executing tasks, making decisions, and influencing daily life in ways that often escape direct human oversight. What are the consequences of delegating decision-making to systems we don’t fully understand? How do we design AI to enhance human capabilities without sidelining human judgment?</p><p><br></p><p>In this episode, host Aleksander Mądry welcomes <a href="https://www.linkedin.com/in/zittrain/" target="_blank">Jonathan Zittrain</a>, Professor at <a href="https://www.linkedin.com/school/harvard-law-school/" target="_blank">Harvard Law School</a> and Co-Founder of the <a href="https://www.linkedin.com/company/bkcharvard/people/" target="_blank">Berkman Klein Center for Internet &amp; Society</a>. They explore the legal, ethical, and societal challenges of living alongside increasingly autonomous AI. Drawing on lessons from the internet’s evolution, Zittrain examines how we can structure responsibility, foster innovation, and build systems that truly serve the public good — all while navigating the profound opportunities and risks of this technological transformation.</p><p><br></p><p>09:20 - Three eras of internet evolution</p><p>14:56 - Generative tech and equilibrium challenges</p><p>20:59 - Generativity vs. closed systems</p><p>26:36 - The promise and pitfalls of AI assistants</p><p>32:56 - Privacy, privilege, and AI user rights</p><p>39:01 - Regulation, self-regulation, and global policy</p><p>48:28 - Defining AI agents and their real-world impact</p><p>52:26 - AI’s potential to empower the marginalized</p><p>1:15:41 - Acceleration, agency, and the future</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/21e43657-35ea-4e65-8f5d-885e9d364c06/shows/15811fe7-b105-4791-bd5b-a39e202ae51b/episodes/5d383a06-3496-40f0-9568-3a5edc56aead/2767c9da81.jpg" />
  <pubDate>Tue, 15 Jul 2025 09:00:00 -0400</pubDate>
  <link>https://www.cohostpodcasting.com</link>
  <author><![CDATA[mit.officialpodcast@gmail.com (Aleksander Mądry)]]></author>
  <enclosure length="77965769" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/21e43657-35ea-4e65-8f5d-885e9d364c06/episodes/5d383a06-3496-40f0-9568-3a5edc56aead/episode.mp3" />
  <itunes:title><![CDATA[Jonathan Zittrain: AI Agents and Trust]]></itunes:title>
  <itunes:duration>1:21:12</itunes:duration>
  <itunes:summary><![CDATA[<p>AI systems are beginning to act on our behalf — executing tasks, making decisions, and influencing daily life in ways that often escape direct human oversight. What are the consequences of delegating decision-making to systems we don’t fully understand? How do we design AI to enhance human capabilities without sidelining human judgment?</p><p><br></p><p>In this episode, host Aleksander Mądry welcomes <a href="https://www.linkedin.com/in/zittrain/" target="_blank">Jonathan Zittrain</a>, Professor at <a href="https://www.linkedin.com/school/harvard-law-school/" target="_blank">Harvard Law School</a> and Co-Founder of the <a href="https://www.linkedin.com/company/bkcharvard/people/" target="_blank">Berkman Klein Center for Internet &amp; Society</a>. They explore the legal, ethical, and societal challenges of living alongside increasingly autonomous AI. Drawing on lessons from the internet’s evolution, Zittrain examines how we can structure responsibility, foster innovation, and build systems that truly serve the public good — all while navigating the profound opportunities and risks of this technological transformation.</p><p><br></p><p>09:20 - Three eras of internet evolution</p><p>14:56 - Generative tech and equilibrium challenges</p><p>20:59 - Generativity vs. closed systems</p><p>26:36 - The promise and pitfalls of AI assistants</p><p>32:56 - Privacy, privilege, and AI user rights</p><p>39:01 - Regulation, self-regulation, and global policy</p><p>48:28 - Defining AI agents and their real-world impact</p><p>52:26 - AI’s potential to empower the marginalized</p><p>1:15:41 - Acceleration, agency, and the future</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>AI systems are beginning to act on our behalf — executing tasks, making decisions, and influencing daily life in ways that often escape direct human oversight. What are the consequences of delegating decision-making to systems we don’t fully understand? How do we design AI to enhance human capabilities without sidelining human judgment?</p><p><br></p><p>In this episode, host Aleksander Mądry welcomes <a href="https://www.linkedin.com/in/zittrain/" target="_blank">Jonathan Zittrain</a>, Professor at <a href="https://www.linkedin.com/school/harvard-law-school/" target="_blank">Harvard Law School</a> and Co-Founder of the <a href="https://www.linkedin.com/company/bkcharvard/people/" target="_blank">Berkman Klein Center for Internet &amp; Society</a>. They explore the legal, ethical, and societal challenges of living alongside increasingly autonomous AI. Drawing on lessons from the internet’s evolution, Zittrain examines how we can structure responsibility, foster innovation, and build systems that truly serve the public good — all while navigating the profound opportunities and risks of this technological transformation.</p><p><br></p><p>09:20 - Three eras of internet evolution</p><p>14:56 - Generative tech and equilibrium challenges</p><p>20:59 - Generativity vs. closed systems</p><p>26:36 - The promise and pitfalls of AI assistants</p><p>32:56 - Privacy, privilege, and AI user rights</p><p>39:01 - Regulation, self-regulation, and global policy</p><p>48:28 - Defining AI agents and their real-world impact</p><p>52:26 - AI’s potential to empower the marginalized</p><p>1:15:41 - Acceleration, agency, and the future</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[AI systems are beginning to act on our behalf — executing tasks, making decisions, and influencing daily life in ways that often escape direct human oversight. What are the consequences of delegating decision-making to systems we don’t fully unders...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[d722e8be-a7f4-404c-a5e6-859160adbbcc]]></guid>
  <title><![CDATA[Reid Hoffman: AI and Superagency]]></title>
  <description><![CDATA[<p>As AI becomes more integrated into daily life, it raises urgent questions about trust, accountability, and human agency. What happens when people over-rely on AI or when helpful tools are held back by perfectionism? Can we deploy imperfect systems while still earning public confidence?</p><p><br></p><p><a href="https://www.linkedin.com/in/reidhoffman/" target="_blank">Reid Hoffman</a>, Co-Author of “Superagency: What Could Possibly Go Right with Our AI Future,” joins this episode to explore how AI can enhance, not replace, human judgment. He discusses democratic leadership, global competition, and the design choices that shape user agency. Reid also shares his perspective on long-term AI investment beyond short-term hype.</p><p><br></p><p>03:00 - Superagency and human-centered design</p><p>17:27 - Risk, regulation, and collective action</p><p>21:42 - Rethinking agency in the age of agents</p><p>31:18 - Trust, regulation, and system design</p><p>55:09 - Building trust through real-world AI benefits</p><p>1:06:07 - China’s ambition and multipolar competition</p><p>1:09:00 - Investing in applied AI with real impact</p><p>1:14:00 - Reid’s hope for AI engagement</p>]]></description>
  <itunes:image href="https://files.cohostpodcasting.com/cohost/21e43657-35ea-4e65-8f5d-885e9d364c06/shows/15811fe7-b105-4791-bd5b-a39e202ae51b/episodes/23ec53dd-7f30-4678-9b58-25befbc5b24d/4a9d186954.jpg" />
  <pubDate>Tue, 01 Jul 2025 09:00:00 -0400</pubDate>
  <link>https://www.cohostpodcasting.com</link>
  <author><![CDATA[mit.officialpodcast@gmail.com (Aleksander Mądry)]]></author>
  <enclosure length="70914800" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/21e43657-35ea-4e65-8f5d-885e9d364c06/episodes/23ec53dd-7f30-4678-9b58-25befbc5b24d/episode.mp3" />
  <itunes:title><![CDATA[Reid Hoffman: AI and Superagency]]></itunes:title>
  <itunes:duration>1:13:52</itunes:duration>
  <itunes:summary><![CDATA[<p>As AI becomes more integrated into daily life, it raises urgent questions about trust, accountability, and human agency. What happens when people over-rely on AI or when helpful tools are held back by perfectionism? Can we deploy imperfect systems while still earning public confidence?</p><p><br></p><p><a href="https://www.linkedin.com/in/reidhoffman/" target="_blank">Reid Hoffman</a>, Co-Author of “Superagency: What Could Possibly Go Right with Our AI Future,” joins this episode to explore how AI can enhance, not replace, human judgment. He discusses democratic leadership, global competition, and the design choices that shape user agency. Reid also shares his perspective on long-term AI investment beyond short-term hype.</p><p><br></p><p>03:00 - Superagency and human-centered design</p><p>17:27 - Risk, regulation, and collective action</p><p>21:42 - Rethinking agency in the age of agents</p><p>31:18 - Trust, regulation, and system design</p><p>55:09 - Building trust through real-world AI benefits</p><p>1:06:07 - China’s ambition and multipolar competition</p><p>1:09:00 - Investing in applied AI with real impact</p><p>1:14:00 - Reid’s hope for AI engagement</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>As AI becomes more integrated into daily life, it raises urgent questions about trust, accountability, and human agency. What happens when people over-rely on AI or when helpful tools are held back by perfectionism? Can we deploy imperfect systems while still earning public confidence?</p><p><br></p><p><a href="https://www.linkedin.com/in/reidhoffman/" target="_blank">Reid Hoffman</a>, Co-Author of “Superagency: What Could Possibly Go Right with Our AI Future,” joins this episode to explore how AI can enhance, not replace, human judgment. He discusses democratic leadership, global competition, and the design choices that shape user agency. Reid also shares his perspective on long-term AI investment beyond short-term hype.</p><p><br></p><p>03:00 - Superagency and human-centered design</p><p>17:27 - Risk, regulation, and collective action</p><p>21:42 - Rethinking agency in the age of agents</p><p>31:18 - Trust, regulation, and system design</p><p>55:09 - Building trust through real-world AI benefits</p><p>1:06:07 - China’s ambition and multipolar competition</p><p>1:09:00 - Investing in applied AI with real impact</p><p>1:14:00 - Reid’s hope for AI engagement</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[As AI becomes more integrated into daily life, it raises urgent questions about trust, accountability, and human agency. What happens when people over-rely on AI or when helpful tools are held back by perfectionism? Can we deploy imperfect systems ...]]></itunes:subtitle>
 <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[e5fc26c1-ec90-4ba7-a78c-480848d55f75]]></guid>
  <title><![CDATA[Sendhil Mullainathan: AI and Algorithmic Bias]]></title>
  <description><![CDATA[<p>As AI continues to permeate various aspects of society, its impact on decision-making, bias, and future technological developments is complex. How can we navigate the challenges posed by AI, particularly when it comes to fairness and bias in algorithms? What insights can be drawn from the intersection of economics, computer science, and behavioral studies to guide the responsible development and use of AI?</p><p><br></p><p>In this episode, Sendhil Mullainathan, a prominent economist and professor, delves into these pressing issues. He shares his journey from computer science to behavioral economics and discusses the role of AI in shaping the future of decision-making and societal structures. Sendhil provides a nuanced view of algorithmic bias, its origins, and the challenges in mitigating it. He also explores the potential and pitfalls of AI in healthcare and policymaking, offering insights into how we can harness AI for the greater good while being mindful of its limitations.</p><p><br></p><p>0:00 - Start</p><p>1:51 - Introducing Sendhil</p><p>14:20 - Algorithmic bias</p><p>29:20 - Handling bias</p><p>41:57 - AI and decision making</p><p>57:01 - AI in our future</p><p>1:02:29 - Conclusion and the last question</p>]]></description>
  <pubDate>Fri, 23 Aug 2024 02:01:00 -0400</pubDate>
  <link>https://www.cohostpodcasting.com</link>
  <author><![CDATA[mit.officialpodcast@gmail.com (Aleksander Mądry)]]></author>
  <enclosure length="61257032" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/21e43657-35ea-4e65-8f5d-885e9d364c06/episodes/e61ad106-da0e-4109-823f-d8f143537824/episode.mp3" />
  <itunes:title><![CDATA[Sendhil Mullainathan: AI and Algorithmic Bias]]></itunes:title>
  <itunes:duration>1:03:48</itunes:duration>
  <itunes:summary><![CDATA[<p>As AI continues to permeate various aspects of society, its impact on decision-making, bias, and future technological developments is complex. How can we navigate the challenges posed by AI, particularly when it comes to fairness and bias in algorithms? What insights can be drawn from the intersection of economics, computer science, and behavioral studies to guide the responsible development and use of AI?</p><p><br></p><p>In this episode, Sendhil Mullainathan, a prominent economist and professor, delves into these pressing issues. He shares his journey from computer science to behavioral economics and discusses the role of AI in shaping the future of decision-making and societal structures. Sendhil provides a nuanced view of algorithmic bias, its origins, and the challenges in mitigating it. He also explores the potential and pitfalls of AI in healthcare and policymaking, offering insights into how we can harness AI for the greater good while being mindful of its limitations.</p><p><br></p><p>0:00 - Start</p><p>1:51 - Introducing Sendhil</p><p>14:20 - Algorithmic bias</p><p>29:20 - Handling bias</p><p>41:57 - AI and decision making</p><p>57:01 - AI in our future</p><p>1:02:29 - Conclusion and the last question</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(34, 34, 34);">As AI continues to permeate various aspects of society, its impact on decision-making, bias, and future technological developments is complex. How can we navigate the challenges posed by AI, particularly when it comes to fairness and bias in algorithms? What insights can be drawn from the intersection of economics, computer science, and behavioral studies to guide the responsible development and use of AI?</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">In this episode, Sendhil Mullainathan, a prominent economist and professor, delves into these pressing issues. He shares his journey from computer science to behavioral economics and discusses the role of AI in shaping the future of decision-making and societal structures. Sendhil provides a nuanced view of algorithmic bias, its origins, and the challenges in mitigating it. He also explores the potential and pitfalls of AI in healthcare and policymaking, offering insights into how we can harness AI for the greater good while being mindful of its limitations.</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">0:00 - Start</span></p><p><span style="color: rgb(34, 34, 34);">1:51 - Introducing Sendhil</span></p><p><span style="color: rgb(34, 34, 34);">14:20 - Algorithmic bias</span></p><p><span style="color: rgb(34, 34, 34);">29:20 - Handling Bias</span></p><p><span style="color: rgb(34, 34, 34);">41:57 - AI and Decision Making</span></p><p><span style="color: rgb(34, 34, 34);">57:01 - AI in our Future</span></p><p><span style="color: rgb(34, 34, 34);">1:02:29 - Conclusion and the last question</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[As AI continues to permeate various aspects of society, its impact on decision-making, bias, and future technological developments is complex. How can we navigate the challenges posed by AI, particularly when it comes to fairness and bias in algori...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[8d3298fc-fdec-4cd6-84ea-851604db1bf8]]></guid>
  <title><![CDATA[Luis Videgaray: AI Policy Around the World]]></title>
  <description><![CDATA[<p>AI technology is transforming the world, raising crucial questions about governance, regulation, and societal impact. What measures should governments adopt to regulate AI effectively? How can academic institutions contribute to this development?&nbsp;</p><p><br></p><p>Luis Videgaray is a professor at MIT and the director of the MIT AI Policy for the World Project. Previously, he also served as the Minister of Finance and the Minister of Foreign Affairs in the Mexican Government. In this episode, Luis shares his journey and insights on the role of governments in regulating and utilizing AI. He also covers the evolution of AI policy, the geopolitical landscape, and the future of AI regulation.&nbsp;&nbsp;</p><p><br></p><p>3:47 - Luis’ time in the government&nbsp;</p><p>7:12 - AI in the past&nbsp;</p><p>20:19 - Regulation of AI&nbsp;</p><p>43:13 - Governments using AI&nbsp;</p><p>51:03 - Geopolitics of AI&nbsp;</p><p>1:11:31 - Luis at MIT&nbsp;</p><p>1:16:50 - Luis’ perspectives on AI&nbsp;</p><p>1:25:28 - Conclusion and the last question</p>]]></description>
  <pubDate>Fri, 16 Aug 2024 01:00:00 -0400</pubDate>
  <link>https://www.cohostpodcasting.com</link>
  <author><![CDATA[mit.officialpodcast@gmail.com (Aleksander Mądry)]]></author>
  <enclosure length="84409029" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/21e43657-35ea-4e65-8f5d-885e9d364c06/episodes/6c799bad-d3d0-4ca5-bec6-915fac7ea833/episode.mp3" />
  <itunes:title><![CDATA[Luis Videgaray: AI Policy Around the World]]></itunes:title>
  <itunes:duration>1:27:55</itunes:duration>
  <itunes:summary><![CDATA[<p>AI technology is transforming the world, raising crucial questions about governance, regulation, and societal impact. What measures should governments adopt to regulate AI effectively? How can academic institutions contribute to this development?&nbsp;</p><p><br></p><p>Luis Videgaray is a professor at MIT and the director of the MIT AI Policy for the World Project. Previously, he also served as the Minister of Finance and the Minister of Foreign Affairs in the Mexican Government. In this episode, Luis shares his journey and insights on the role of governments in regulating and utilizing AI. He also covers the evolution of AI policy, the geopolitical landscape, and the future of AI regulation.&nbsp;&nbsp;</p><p><br></p><p>3:47 - Luis’ time in the government&nbsp;</p><p>7:12 - AI in the past&nbsp;</p><p>20:19 - Regulation of AI&nbsp;</p><p>43:13 - Governments using AI&nbsp;</p><p>51:03 - Geopolitics of AI&nbsp;</p><p>1:11:31 - Luis at MIT&nbsp;</p><p>1:16:50 - Luis’ perspectives on AI&nbsp;</p><p>1:25:28 - Conclusion and the last question</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>AI technology is transforming the world, raising crucial questions about governance, regulation, and societal impact. What measures should governments adopt to regulate AI effectively? How can academic institutions contribute to this development?&nbsp;</p><p><br></p><p>Luis Videgaray is a professor at MIT and the director of the MIT AI Policy for the World Project. Previously, he also served as the Minister of Finance and the Minister of Foreign Affairs in the Mexican Government. In this episode, Luis shares his journey and insights on the role of governments in regulating and utilizing AI. He also covers the evolution of AI policy, the geopolitical landscape, and the future of AI regulation.&nbsp;&nbsp;</p><p><br></p><p>3:47 - Luis’ time in the government&nbsp;</p><p>7:12 - AI in the past&nbsp;</p><p>20:19 - Regulation of AI&nbsp;</p><p>43:13 - Governments using AI&nbsp;</p><p>51:03 - Geopolitics of AI&nbsp;</p><p>1:11:31 - Luis at MIT&nbsp;</p><p>1:16:50 - Luis’ perspectives on AI&nbsp;</p><p>1:25:28 - Conclusion and the last question</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[AI technology is transforming the world, raising crucial questions about governance, regulation, and societal impact. What measures should governments adopt to regulate AI effectively? How can academic institutions contribute to this development? L...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[8dd0643f-61d9-4f58-8964-168d41977f58]]></guid>
  <title><![CDATA[Anna Makanju: AI and Government]]></title>
  <description><![CDATA[<p>Governments are playing a crucial role in ensuring the safe and ethical deployment of AI technologies, protecting citizens while fostering innovation. What steps can policymakers take to safeguard our future? How can we, as citizens, contribute to the conversation and advocate for beneficial AI practices?</p><p><br></p><p>Anna Makanju, VP of Global Affairs at OpenAI, has extensive experience in national security and foreign policy roles across the U.S. government, UN, NATO, and Facebook. In this episode, Anna discusses her global tour with Sam Altman, her journey to her current role, and the steps governments can take to ensure a safe AI future.</p><p><br></p><p>3:18 - Anna and OpenAI’s goals</p><p>8:23 - Sam’s tour and government’s role</p><p>19:25 - AI and the government</p><p>38:48 - AI and the military</p><p>51:25 - Anna’s time in the government</p><p>55:52 - Geopolitics of AI</p><p>1:04:28 - AI safety and the future</p><p>1:14:50 - Conclusion and the last question</p>]]></description>
  <pubDate>Fri, 09 Aug 2024 02:00:00 -0400</pubDate>
  <link>https://www.cohostpodcasting.com</link>
  <author><![CDATA[mit.officialpodcast@gmail.com (Aleksander Mądry)]]></author>
  <enclosure length="73086109" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/21e43657-35ea-4e65-8f5d-885e9d364c06/episodes/8c81a2a9-faa4-489a-805c-c8cdb5b54a97/episode.mp3" />
  <itunes:title><![CDATA[Anna Makanju: AI and Government]]></itunes:title>
  <itunes:duration>1:16:07</itunes:duration>
  <itunes:summary><![CDATA[<p>Governments are playing a crucial role in ensuring the safe and ethical deployment of AI technologies, protecting citizens while fostering innovation. What steps can policymakers take to safeguard our future? How can we, as citizens, contribute to the conversation and advocate for beneficial AI practices?</p><p><br></p><p>Anna Makanju, VP of Global Affairs at OpenAI, has extensive experience in national security and foreign policy roles across the U.S. government, UN, NATO, and Facebook. In this episode, Anna discusses her global tour with Sam Altman, her journey to her current role, and the steps governments can take to ensure a safe AI future.</p><p><br></p><p>3:18 - Anna and OpenAI’s goals</p><p>8:23 - Sam’s tour and government’s role</p><p>19:25 - AI and the government</p><p>38:48 - AI and the military</p><p>51:25 - Anna’s time in the government</p><p>55:52 - Geopolitics of AI</p><p>1:04:28 - AI safety and the future</p><p>1:14:50 - Conclusion and the last question</p>]]></itunes:summary>
  <content:encoded><![CDATA[<p>Governments are playing a crucial role in ensuring the safe and ethical deployment of AI technologies, protecting citizens while fostering innovation. What steps can policymakers take to safeguard our future? How can we, as citizens, contribute to the conversation and advocate for beneficial AI practices?</p><p><br></p><p>Anna Makanju, VP of Global Affairs at OpenAI, has extensive experience in national security and foreign policy roles across the U.S. government, UN, NATO, and Facebook. In this episode, Anna discusses her global tour with Sam Altman, her journey to her current role, and the steps governments can take to ensure a safe AI future.</p><p><br></p><p>3:18 - Anna and OpenAI’s goals</p><p>8:23 - Sam’s tour and government’s role</p><p>19:25 - AI and the government</p><p>38:48 - AI and the military</p><p>51:25 - Anna’s time in the government</p><p>55:52 - Geopolitics of AI</p><p>1:04:28 - AI safety and the future</p><p>1:14:50 - Conclusion and the last question</p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Governments are playing a crucial role in ensuring the safe and ethical deployment of AI technologies, protecting citizens while fostering innovation. What steps can policymakers take to safeguard our future? How can we, as citizens, contribute to ...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[5bc02cf7-6228-41ae-b44b-0a3faad620a3]]></guid>
  <title><![CDATA[Reid Hoffman: AI and Human Collaboration]]></title>
  <description><![CDATA[<p><span style="color: rgb(34, 34, 34);">The rapid advancement of AI technology presents both opportunities and challenges. As AI becomes more integrated into various aspects of life, how can companies ensure that these technologies are used innovatively and ethically? What measures can be taken to navigate potential pitfalls and enhance human connections rather than causing harm?</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">Reid Hoffman, Co-founder of Inflection AI, Co-founder and executive chairman of LinkedIn, and Founding investor and former board member at OpenAI, has recently used AI in innovative ways. In this episode, Reid talks about his experience interviewing his AI digital twin, his time on the board of OpenAI, and the potential to integrate AI into our own lives in interesting ways.</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">3:11 - Digital twin</span></p><p><span style="color: rgb(34, 34, 34);">9:07 - Innovative ways to engage with AI</span></p><p><span style="color: rgb(34, 34, 34);">26:05 - Different perspectives on AI</span></p><p><span style="color: rgb(34, 34, 34);">28:41 - Reid’s time with OpenAI</span></p><p><span style="color: rgb(34, 34, 34);">36:00 - Why Inflection?</span></p><p><span style="color: rgb(34, 34, 34);">47:56 - Future of AI</span></p><p><span style="color: rgb(34, 34, 34);">1:01:08 - Board dynamics</span></p><p><span style="color: rgb(34, 34, 34);">1:10:48 - World stage for AI</span></p><p><span style="color: rgb(34, 34, 34);">1:39:19 - Conclusion and the last question</span></p>]]></description>
  <pubDate>Fri, 02 Aug 2024 01:00:00 -0400</pubDate>
  <link>https://www.cohostpodcasting.com</link>
  <author><![CDATA[mit.officialpodcast@gmail.com (Aleksander Mądry)]]></author>
  <enclosure length="96926086" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/21e43657-35ea-4e65-8f5d-885e9d364c06/episodes/46702d5d-2f85-4920-a8be-1ae2cd85825b/episode.mp3" />
  <itunes:title><![CDATA[Reid Hoffman: AI and Human Collaboration]]></itunes:title>
  <itunes:duration>1:40:57</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(34, 34, 34);">The rapid advancement of AI technology presents both opportunities and challenges. As AI becomes more integrated into various aspects of life, how can companies ensure that these technologies are used innovatively and ethically? What measures can be taken to navigate potential pitfalls and enhance human connections rather than causing harm?</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">Reid Hoffman, Co-founder of Inflection AI, Co-founder and executive chairman of LinkedIn, and Founding investor and former board member at OpenAI, has recently used AI in innovative ways. In this episode, Reid talks about his experience interviewing his AI digital twin, his time on the board of OpenAI, and the potential to integrate AI into our own lives in interesting ways.</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">3:11 - Digital twin</span></p><p><span style="color: rgb(34, 34, 34);">9:07 - Innovative ways to engage with AI</span></p><p><span style="color: rgb(34, 34, 34);">26:05 - Different perspectives on AI</span></p><p><span style="color: rgb(34, 34, 34);">28:41 - Reid’s time with OpenAI</span></p><p><span style="color: rgb(34, 34, 34);">36:00 - Why Inflection?</span></p><p><span style="color: rgb(34, 34, 34);">47:56 - Future of AI</span></p><p><span style="color: rgb(34, 34, 34);">1:01:08 - Board dynamics</span></p><p><span style="color: rgb(34, 34, 34);">1:10:48 - World stage for AI</span></p><p><span style="color: rgb(34, 34, 34);">1:39:19 - Conclusion and the last question</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(34, 34, 34);">The rapid advancement of AI technology presents both opportunities and challenges. As AI becomes more integrated into various aspects of life, how can companies ensure that these technologies are used innovatively and ethically? What measures can be taken to navigate potential pitfalls and enhance human connections rather than causing harm?</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">Reid Hoffman, Co-founder of Inflection AI, Co-founder and executive chairman of LinkedIn, and Founding investor and former board member at OpenAI, has recently used AI in innovative ways. In this episode, Reid talks about his experience interviewing his AI digital twin, his time on the board of OpenAI, and the potential to integrate AI into our own lives in interesting ways.</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">3:11 - Digital twin</span></p><p><span style="color: rgb(34, 34, 34);">9:07 - Innovative ways to engage with AI</span></p><p><span style="color: rgb(34, 34, 34);">26:05 - Different perspectives on AI</span></p><p><span style="color: rgb(34, 34, 34);">28:41 - Reid’s time with OpenAI</span></p><p><span style="color: rgb(34, 34, 34);">36:00 - Why Inflection?</span></p><p><span style="color: rgb(34, 34, 34);">47:56 - Future of AI</span></p><p><span style="color: rgb(34, 34, 34);">1:01:08 - Board dynamics</span></p><p><span style="color: rgb(34, 34, 34);">1:10:48 - World stage for AI</span></p><p><span style="color: rgb(34, 34, 34);">1:39:19 - Conclusion and the last question</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[The rapid advancement of AI technology presents both opportunities and challenges. As AI becomes more integrated into various aspects of life, how can companies ensure that these technologies are used innovatively and ethically? What measures can b...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[bad3e3aa-0c7a-4769-84c0-a3e6b7e44f83]]></guid>
  <title><![CDATA[Sal Khan: AI in Education]]></title>
  <description><![CDATA[<p><span style="color: rgb(80, 0, 80);">Recent developments of AI offer an opportunity to transform traditional educational systems through tools that provide personalized learning and equal access to quality education. As we advance in our understanding of AI and its integration into daily life, how can we leverage these tools to dismantle educational barriers? What would a safe and beneficial AI education tool look like, and how should it be implemented to benefit everyone?</span></p><p><br></p><p><span style="color: rgb(80, 0, 80);">Sal Khan, an advocate for educational access and the founder of Khan Academy, has dedicated significant effort to this challenge. In this episode, Sal shares his personal journey with AI and elucidates how it can revolutionize educational access, support teachers, and inspire students. He discusses Khan Academy’s new AI tool, “Khanmigo,” and its potential to transform learning for the better.</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">2:35 - Pre-ChatGPT Khanmigo</span></p><p><span style="color: rgb(34, 34, 34);">14:13 - Sal’s journey with AI</span></p><p><span style="color: rgb(34, 34, 34);">21:09 - AI’s effects on our learning and thinking</span></p><p><span style="color: rgb(34, 34, 34);">38:33 - AI and human relationships</span></p><p><span style="color: rgb(34, 34, 34);">44:01 - Envisioning future AI use</span></p><p><span style="color: rgb(34, 34, 34);">47:41 -&nbsp;Economics of education</span></p><p><span style="color: rgb(34, 34, 34);">56:39 - Standardized testing</span></p><p><span style="color: rgb(34, 34, 34);">1:02:10 - Societal impacts of AI</span></p><p><span style="color: rgb(34, 34, 34);">1:09:48 - Future of AI</span></p><p><span style="color: rgb(34, 34, 34);">1:13:22 - Conclusion and the last question</span></p>]]></description>
  <pubDate>Fri, 26 Jul 2024 01:30:00 -0400</pubDate>
  <link>https://www.cohostpodcasting.com</link>
  <author><![CDATA[mit.officialpodcast@gmail.com (Aleksander Mądry)]]></author>
  <enclosure length="73333532" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/21e43657-35ea-4e65-8f5d-885e9d364c06/episodes/8e9abf80-0fdd-4b9c-bd27-4cf698502c72/episode.mp3" />
  <itunes:title><![CDATA[Sal Khan: AI in Education]]></itunes:title>
  <itunes:duration>1:16:23</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(80, 0, 80);">Recent developments of AI offer an opportunity to transform traditional educational systems through tools that provide personalized learning and equal access to quality education. As we advance in our understanding of AI and its integration into daily life, how can we leverage these tools to dismantle educational barriers? What would a safe and beneficial AI education tool look like, and how should it be implemented to benefit everyone?</span></p><p><br></p><p><span style="color: rgb(80, 0, 80);">Sal Khan, an advocate for educational access and the founder of Khan Academy, has dedicated significant effort to this challenge. In this episode, Sal shares his personal journey with AI and elucidates how it can revolutionize educational access, support teachers, and inspire students. He discusses Khan Academy’s new AI tool, “Khanmigo,” and its potential to transform learning for the better.</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">2:35 - Pre-ChatGPT Khanmigo</span></p><p><span style="color: rgb(34, 34, 34);">14:13 - Sal’s journey with AI</span></p><p><span style="color: rgb(34, 34, 34);">21:09 - AI’s effects on our learning and thinking</span></p><p><span style="color: rgb(34, 34, 34);">38:33 - AI and human relationships</span></p><p><span style="color: rgb(34, 34, 34);">44:01 - Envisioning future AI use</span></p><p><span style="color: rgb(34, 34, 34);">47:41 -&nbsp;Economics of education</span></p><p><span style="color: rgb(34, 34, 34);">56:39 - Standardized testing</span></p><p><span style="color: rgb(34, 34, 34);">1:02:10 - Societal impacts of AI</span></p><p><span style="color: rgb(34, 34, 34);">1:09:48 - Future of AI</span></p><p><span style="color: rgb(34, 34, 34);">1:13:22 - Conclusion and the last question</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(80, 0, 80);">Recent developments of AI offer an opportunity to transform traditional educational systems through tools that provide personalized learning and equal access to quality education. As we advance in our understanding of AI and its integration into daily life, how can we leverage these tools to dismantle educational barriers? What would a safe and beneficial AI education tool look like, and how should it be implemented to benefit everyone?</span></p><p><br></p><p><span style="color: rgb(80, 0, 80);">Sal Khan, an advocate for educational access and the founder of Khan Academy, has dedicated significant effort to this challenge. In this episode, Sal shares his personal journey with AI and elucidates how it can revolutionize educational access, support teachers, and inspire students. He discusses Khan Academy’s new AI tool, “Khanmigo,” and its potential to transform learning for the better.</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">2:35 - Pre-ChatGPT Khanmigo</span></p><p><span style="color: rgb(34, 34, 34);">14:13 - Sal’s journey with AI</span></p><p><span style="color: rgb(34, 34, 34);">21:09 - AI’s effects on our learning and thinking</span></p><p><span style="color: rgb(34, 34, 34);">38:33 - AI and human relationships</span></p><p><span style="color: rgb(34, 34, 34);">44:01 - Envisioning future AI use</span></p><p><span style="color: rgb(34, 34, 34);">47:41 -&nbsp;Economics of education</span></p><p><span style="color: rgb(34, 34, 34);">56:39 - Standardized testing</span></p><p><span style="color: rgb(34, 34, 34);">1:02:10 - Societal impacts of AI</span></p><p><span style="color: rgb(34, 34, 34);">1:09:48 - Future of AI</span></p><p><span style="color: rgb(34, 34, 34);">1:13:22 - Conclusion and the last question</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[Recent developments of AI offer an opportunity to transform traditional educational systems through tools that provide personalized learning and equal access to quality education. As we advance in our understanding of AI and its integration into da...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
<item>
  <guid isPermaLink="false"><![CDATA[897f641d-e2fc-46bc-a868-04b9ad71f5a5]]></guid>
  <title><![CDATA[Erik Brynjolfsson: Economics and AI]]></title>
  <description><![CDATA[<p><span style="color: rgb(34, 34, 34);">While the rapid development of AI technology promises unprecedented productivity gains and innovations, concerns of job displacement and increasing inequality persist. How can we ensure AI complements human labor rather than replacing it? What measures can we take to prevent a dystopian AI future?</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">Erik Brynjolfsson is a professor at Stanford University and has pioneered research on the economics of information technology and AI. In this episode, Erik discusses the potential for AI to enhance job productivity, complement our workforce, and boost economic growth at scales comparable to the Industrial Revolution while also exploring potential negative futures and strategies to avoid them.</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">(03:16) Imagining the future</span></p><p><span style="color: rgb(34, 34, 34);">(09:41) Impacts on Productivity and Labor</span></p><p><span style="color: rgb(34, 34, 34);">(21:14) GPTs are GPTs!</span></p><p><span style="color: rgb(34, 34, 34);">(31:02) Workhelix</span></p><p><span style="color: rgb(34, 34, 34);">(32:07) Self-Driving Cars</span></p><p><span style="color: rgb(34, 34, 34);">(36:25) AI Helping Innovation</span></p><p><span style="color: rgb(34, 34, 34);">(38:02) Negative Impacts</span></p><p><span style="color: rgb(34, 34, 34);">(38:39) Historical Context and Economic Concerns</span></p><p><span style="color: rgb(34, 34, 34);">(42:56) AI's Impact on Job Markets and Productivity</span></p><p><span style="color: rgb(34, 34, 34);">(48:22) Shared Benefits</span></p><p><span style="color: rgb(34, 34, 34);">(49:46) Pessimism and Belief about AI</span></p><p><span style="color: rgb(34, 34, 34);">(01:03:59) Strategies for a Positive AI Future</span></p><p><span style="color: rgb(34, 34, 34);">(01:12:17) Last Question</span></p>]]></description>
  <pubDate>Thu, 11 Jul 2024 22:00:00 -0400</pubDate>
  <link>https://www.cohostpodcasting.com</link>
  <author><![CDATA[mit.officialpodcast@gmail.com (Aleksander Mądry)]]></author>
  <enclosure length="75463448" type="audio/mpeg" url="https://audio-delivery.cohostpodcasting.com/audio/21e43657-35ea-4e65-8f5d-885e9d364c06/episodes/d32fdfad-02e9-46c3-a771-17bdefecb09d/episode.mp3" />
  <itunes:title><![CDATA[Erik Brynjolfsson: Economics and AI]]></itunes:title>
  <itunes:duration>1:18:36</itunes:duration>
  <itunes:summary><![CDATA[<p><span style="color: rgb(34, 34, 34);">While the rapid development of AI technology promises unprecedented productivity gains and innovations, concerns of job displacement and increasing inequality persist. How can we ensure AI complements human labor rather than replacing it? What measures can we take to prevent a dystopian AI future?</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">Erik Brynjolfsson is a professor at Stanford University and has pioneered research on the economics of information technology and AI. In this episode, Erik discusses the potential for AI to enhance job productivity, complement our workforce, and boost economic growth at scales comparable to the Industrial Revolution while also exploring potential negative futures and strategies to avoid them.</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">(03:16) Imagining the future</span></p><p><span style="color: rgb(34, 34, 34);">(09:41) Impacts on Productivity and Labor</span></p><p><span style="color: rgb(34, 34, 34);">(21:14) GPTs are GPTs!</span></p><p><span style="color: rgb(34, 34, 34);">(31:02) Workhelix</span></p><p><span style="color: rgb(34, 34, 34);">(32:07) Self-Driving Cars</span></p><p><span style="color: rgb(34, 34, 34);">(36:25) AI Helping Innovation</span></p><p><span style="color: rgb(34, 34, 34);">(38:02) Negative Impacts</span></p><p><span style="color: rgb(34, 34, 34);">(38:39) Historical Context and Economic Concerns</span></p><p><span style="color: rgb(34, 34, 34);">(42:56) AI's Impact on Job Markets and Productivity</span></p><p><span style="color: rgb(34, 34, 34);">(48:22) Shared Benefits</span></p><p><span style="color: rgb(34, 34, 34);">(49:46) Pessimism and Belief about AI</span></p><p><span style="color: rgb(34, 34, 34);">(01:03:59) Strategies for a Positive AI Future</span></p><p><span style="color: rgb(34, 34, 34);">(01:12:17) Last Question</span></p>]]></itunes:summary>
  <content:encoded><![CDATA[<p><span style="color: rgb(34, 34, 34);">While the rapid development of AI technology promises unprecedented productivity gains and innovations, concerns of job displacement and increasing inequality persist. How can we ensure AI complements human labor rather than replacing it? What measures can we take to prevent a dystopian AI future?</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">Erik Brynjolfsson is a professor at Stanford University and has pioneered research on the economics of information technology and AI. In this episode, Erik discusses the potential for AI to enhance job productivity, complement our workforce, and boost economic growth at scales comparable to the Industrial Revolution while also exploring potential negative futures and strategies to avoid them.</span></p><p><br></p><p><span style="color: rgb(34, 34, 34);">(03:16) Imagining the future</span></p><p><span style="color: rgb(34, 34, 34);">(09:41) Impacts on Productivity and Labor</span></p><p><span style="color: rgb(34, 34, 34);">(21:14) GPTs are GPTs!</span></p><p><span style="color: rgb(34, 34, 34);">(31:02) Workhelix</span></p><p><span style="color: rgb(34, 34, 34);">(32:07) Self-Driving Cars</span></p><p><span style="color: rgb(34, 34, 34);">(36:25) AI Helping Innovation</span></p><p><span style="color: rgb(34, 34, 34);">(38:02) Negative Impacts</span></p><p><span style="color: rgb(34, 34, 34);">(38:39) Historical Context and Economic Concerns</span></p><p><span style="color: rgb(34, 34, 34);">(42:56) AI's Impact on Job Markets and Productivity</span></p><p><span style="color: rgb(34, 34, 34);">(48:22) Shared Benefits</span></p><p><span style="color: rgb(34, 34, 34);">(49:46) Pessimism and Belief about AI</span></p><p><span style="color: rgb(34, 34, 34);">(01:03:59) Strategies for a Positive AI Future</span></p><p><span style="color: rgb(34, 34, 34);">(01:12:17) Last Question</span></p>]]></content:encoded>
  <itunes:subtitle><![CDATA[While the rapid development of AI technology promises unprecedented productivity gains and innovations, concerns of job displacement and increasing inequality persist. How can we ensure AI complements human labor rather than replacing it? What meas...]]></itunes:subtitle>
  <itunes:keywords><![CDATA[]]></itunes:keywords>
  <itunes:explicit>false</itunes:explicit>
  <itunes:episodeType>full</itunes:episodeType>
</item>
</channel>
</rss>