The AI Law Podcast: The Law of AI and Using AI in Law

OpenAI’s Push for Copyright Exemptions – A Game Changer?

OpenAI has formally requested that the U.S. government ease restrictions on AI companies using copyrighted material for training purposes, framing the issue as critical to "strengthen America's lead" in the global artificial intelligence race. The proposal was submitted on March 13, 2025 as part of the Trump administration's "AI Action Plan," which seeks input from various stakeholders while aiming to eliminate "unnecessarily burdensome requirements" that could impede private sector innovation. This episode discusses this proposal, its bases, and its potential effects.

Published on March 14, 2025
Chapter 1

Introduction to AI Training and Copyright Issues

Erick

Welcome to the AI Law Podcast. I am Erick Robinson, a partner at Brown Rudnick in Houston. I am Co-Chair of the firm's Patent Trial and Appeal Board Practice Group. In addition to being a patent litigator and trial lawyer, I am well-versed not only in the law of AI, but also have deep technical experience in AI and related technologies.

Erick

As always, the views and opinions expressed in this podcast do not necessarily represent those of Brown Rudnick. This podcast is presented for informational and educational purposes only.

Erick

Today, I am here with AI expert, Dr. Sonali Mishra!

Sonali Mishra, PhD

Thanks for having me, Erick!

Erick

Absolutely! So, let’s talk about AI training—what it actually means and why these systems need oceans of data to work. I mean, we're talking billions of text files, images, audio clips... everything. It’s like feeding everything ever written to create something that can, uh, learn patterns and replicate human language on steroids.

Sonali Mishra, PhD

Exactly. And this is where it gets messy, right? Because a lot of that 'ocean of data' is, well, copyrighted. AI systems can't exactly distinguish between public domain works and copyrighted ones when training. They just absorb it all.

Erick

Well, yeah, they’re not picky. But how did we even get to this point? I mean, copyright laws have always struggled to keep up with technological innovation. You’ve got things like photocopiers, VCRs, even MP3s pushing the limits. And now, AI's the next wave.

Sonali Mishra, PhD

Right, and unlike VCRs or MP3 players, AI doesn’t just copy—it creates. AI-generated content raises totally new questions about ownership. Like, if an AI writes an original poem, who owns it? The programmer? The user who gave the prompt? The AI itself?

Erick

I think we both know it’s not the AI itself. At least, not until they let robots have legal rights, which… let's hope doesn't happen anytime soon.

Sonali Mishra, PhD

But that’s the thing—countries across the globe are tackling this issue differently. The EU, for example, is playing it conservative, pushing for stricter copyright protections even in AI training datasets. Meanwhile, you’ve got countries like Japan or Singapore trying to be more permissive to encourage AI innovation.

Erick

And then there’s the U.S., where the Wild West approach always seems to dominate until something forces regulation. OpenAI has been pretty vocal about lobbying for exemptions, though. And that could reshape the entire framework of copyright law.

Sonali Mishra, PhD

And whether those exemptions actually help or hurt creators is still... well, very unclear. What about the musicians, artists, or journalists whose work ends up in an AI training set? They might argue this isn't just innovation—it’s exploitation.

Erick

Ah, the eternal balance—progress versus protection. And this isn’t the first time we’ve seen this catch-22 in copyright law. It’s just the stakes are much higher now with AI potentially rewriting the creative and legal playbook.

Sonali Mishra, PhD

And OpenAI’s role in the U.S. debate might be the tipping point. But whether it’s a game changer or just another chapter in this saga—

Erick

Yeah, that remains to be seen.

Chapter 2

OpenAI’s Position on Copyright Exemptions

Erick

So, as we were saying, OpenAI’s role in this debate could really be the tipping point. They’re pushing hard for copyright exemptions, arguing that without them, the U.S. might lose its edge in the AI race to countries with more flexible copyright standards. It’s a bold move, but also kind of predictable when you think about it.

Sonali Mishra, PhD

It’s a fair point. I mean, when you look at the global landscape, countries like China seem to have far fewer restrictions on training data. If the U.S. wants to keep up, let alone lead, you could argue this flexibility is critical.

Erick

Right. And OpenAI’s not just talking about staying competitive for the sake of it. They’re saying it’s crucial for driving AI innovation—things like better language models or breakthroughs in natural language processing. And innovation, theoretically, benefits everyone, right?

Sonali Mishra, PhD

Well, theoretically. But there’s a catch—they’re relying on concepts like ‘fair use’ to make this work. Fair use is already, uh, pretty murky when it comes to AI training. Does using copyrighted works in training datasets really fit within those parameters?

Erick

It’s murky, for sure. But OpenAI’s argument hinges on the idea that AI needs to be trained broadly to learn patterns at scale, not to replicate creative works directly. They’re walking a fine line between saying, “We’re, we’re not stealing,” and, “We need this data to function.”

Sonali Mishra, PhD

And the tech industry isn’t united on this either. Some companies back OpenAI’s call for looser restrictions, but others—especially the ones with big media divisions—seem to be really wary of carving out these exemptions.

Erick

Exactly. You’ve got companies on both sides of the equation. A company producing AI tools might favor these exemptions, while a company producing, I don’t know, blockbuster films or chart-topping songs, they’re likely to push back hard. It’s a classic case of conflicting interests within the same industry.

Sonali Mishra, PhD

At the same time, OpenAI’s messaging feels like a pretty confident gamble. They’re basically betting that the broader economic benefits—better AI tools, job creation, keeping up with competitors—will outweigh the concerns about, well, copyright harm.

Erick

But that’s the question, isn’t it? Do these purported benefits justify the potential fallout? Because if the courts don’t buy this justification, it could mean harsher restrictions across the board, not just for OpenAI but for AI development in the U.S. as a whole.

Chapter 3

Copyright Holders' Concerns and Counterarguments

Sonali Mishra, PhD

So, while OpenAI is presenting these potential benefits as a winning argument, let’s switch gears and consider the concerns coming from content creators. They see their work—whether it's writing, art, music—being absorbed into AI training datasets, and their first reflex is to ask, “What’s in it for me?”

Erick

Exactly. And their second question is probably something like, “How do I even stop this if I wanted to?” Copyright law wasn't exactly designed to handle situations where millions of works can be ingested in a split second.

Sonali Mishra, PhD

Right. And for a lot of creators, it feels like they’re losing control over their intellectual property. Even if the AI isn’t reproducing their work verbatim, it’s still fundamentally built on their creativity. That’s a hard pill to swallow for some.

Erick

Especially when you’re talking about industries like publishing or music, where profits are already razor-thin. Throw in AI, and there’s this fear—maybe even a legitimate one—that it could wipe out jobs or diminish the value of what they create.

Sonali Mishra, PhD

It’s more than just fear. Look at the lawsuits already cropping up against AI companies. Illustrators, musicians—they’re saying, “Hey, this isn’t just innovation. This is exploitation.” And they’re asking courts to weigh in.

Erick

And the courts definitely have their hands full. I mean, how do you measure something like ‘transformative use’ in the context of AI training? Is it enough to say, “Well, the AI creates entirely new things, so it’s fine”? Or does it come down to compensation?

Sonali Mishra, PhD

Compensation is such a sticking point. Some people argue for revenue-sharing models—where creators get a cut for their data being used. Others are talking about outright licensing agreements. But those ideas are, well, let’s just say they’re not exactly popular with the AI companies pushing to minimize restrictions.

Erick

Of course they’re not. If you tell companies they need to pay every single content creator included in their training datasets—

Sonali Mishra, PhD

Which could be millions—

Erick

Right, it could become financially and logistically impossible. So instead, they’re arguing that the benefits to society, to innovation, outweigh the need for individual compensation.

Sonali Mishra, PhD

But that’s where the debate keeps circling back. Because if creators feel they’re being exploited with no upside, it’s hard to get their buy-in. And if they manage to push for tighter regulations, the entire AI development process could hit a major roadblock.

Erick

That’s the gamble, though, isn’t it? Either create a system that works for everyone—or watch the legal and creative pushback slow down innovation. The stakes couldn’t be higher here.

Chapter 4

The Legal Landscape: Copyright Law and AI

Erick

It’s clear that creators are asking tough questions—and demanding answers. That brings us to the legal backbone of this whole issue: U.S. copyright law. It’s been evolving for centuries, adapting—sometimes begrudgingly—to new technologies. With AI, though, it’s like the law is scrambling to keep up, especially when we look at how training datasets are being handled.

Sonali Mishra, PhD

Right. Copyright law, at its heart, is about protecting original expression, not ideas or facts. That’s why the idea of training AI on creative works—it’s so legally provocative. The AI can't steal someone’s thought process, but if it trains on their book or artwork...

Erick

Exactly. And then you’ve got the concept of 'fair use.' Courts have wrestled with this for years—it allows unauthorized use of copyrighted material in certain cases. It covers things like criticism, commentary, teaching. But AI training? That’s uncharted territory.

Sonali Mishra, PhD

And the 'fair use' test isn’t exactly straightforward either. Judges look at factors like the purpose of the use—is it transformative? Does it harm the market for the original work? Training an AI doesn’t fall neatly into any of those categories.

Erick

Which brings us to court cases that are shaping this debate. Take Google Books, the Authors Guild case from, uh, about a decade ago. Google scanned millions of books and made them searchable without republishing the content outright. The courts ruled in favor of 'fair use', but AI training adds complexity that wasn't present in that case.

Sonali Mishra, PhD

Right. In Google’s case, they weren’t creating new books. But if AI models are generating poems, paintings, or even software based on training data? That starts to look a lot less like fair use and more like appropriation—or at least, that’s what plaintiffs are arguing now.

Erick

And that’s honestly the crux of the debate. We're not just talking artistic works here—it's data from news outlets, scientific papers, product manuals—things AI needs to 'learn' from. So, should the law carve out explicit exemptions for training data?

Sonali Mishra, PhD

And those exemptions would sit outside fair use entirely, right? Like a new legal category specifically for AI training. That’s an ambitious shift—and definitely a polarizing one. For businesses leaning heavily into AI, those exemptions could be a game-changer.

Erick

And for content creators, it might feel like writing off their rights entirely. This is why lawmakers are treading so carefully. It’s a balancing act between fostering innovation and protecting individual creators. Neither side is gonna get everything they want.

Sonali Mishra, PhD

Policy changes here are a double-edged sword. Stricter protections could stifle AI breakthroughs, but overly broad exemptions might crush creative industries. Shareholders might love one side more, but courts have to look at the broader implications.

Erick

And speaking of broader implications, it’s not just courts taking this on. Agencies like the U.S. Copyright Office are reassessing their rules, and Congress is, well, taking its sweet time figuring out if this needs to be legislated at all. It’s not exactly a swift-moving machine.

Sonali Mishra, PhD

And it can’t move too slowly, can it? Because the speed of AI's progress is, well, kind of jaw-dropping. If the U.S. doesn’t establish a workable legal framework soon—

Erick

We risk regulatory chaos—or worse, falling behind other countries entirely.

Chapter 5

International Perspectives on AI and Copyright

Erick

So, if the U.S. doesn’t act quickly, we risk falling behind—and that’s not just speculation. Take a look at what other countries are doing. The EU? They’re building on their Digital Copyright Directive to craft an AI regime that’s, let’s just say, not exactly lenient. It’s a more rigid approach compared to what we’ve seen here, but it’s setting a precedent—and fast.

Sonali Mishra, PhD

Right. The EU's directive is pretty strict. They want AI systems training on copyrighted works to get explicit permission, which... I mean, sounds good for creators but not great for innovation. Think about the sheer volume of licensing that would require.

Erick

It’s a bureaucratic nightmare. And then there’s Japan, where—surprise, surprise—they’ve taken a much more, uh, pragmatic approach. Japan’s stance is that as long as the end product doesn’t compete directly with the original work, AI training is fair game.

Sonali Mishra, PhD

That’s interesting, though, because Japan’s exemptions are partly why their AI sector is thriving. They’re basically saying, “We’re not gonna stop you from innovating, as long as you aren’t outright copying.” It feels... well, balanced, doesn’t it?

Erick

Balanced, perhaps. But then there’s China, where the government’s strategy is less about balance and more about brute force. They’ve adopted policies that, essentially, prioritize state-driven AI growth above all else. Copyright? It’s almost a, a secondary concern.

Sonali Mishra, PhD

Right. And that’s partly because China sees AI as a national competitive advantage. Their copyright laws are flexible when it serves government ambitions, which gives their companies a, let’s face it, massive head start.

Erick

And that brings us back to the U.S., where—honestly—it’s kind of the Wild West. We’ve got a cacophony of policies, court cases, and lobbying efforts but no cohesive strategy. Meanwhile, you’ve got countries like Singapore and South Korea, quietly modeling their approach somewhere between Japan and China.

Sonali Mishra, PhD

It’s funny how far apart these approaches are. But at the same time, they’re all pushing for influence in global AI governance bodies. The World Intellectual Property Organization, for example, seems like it’s trying to play referee in this chaotic game.

Erick

True, but governance isn’t easy when the players have such different priorities. The EU wants stricter rules; China is likely lobbying for standards that align with its broader policies. And the U.S.? Well, we mostly seem focused on holding on to our tech dominance.

Sonali Mishra, PhD

It makes you wonder if we’ll ever see something like international copyright agreements for AI—kind of like the Berne Convention but tailored for training data. Wouldn’t that be game-changing?

Erick

Game-changing, sure. Realistic? That’s, uh, another story. You’ve got creators, lawmakers, and tech giants globally—all pulling in different directions. And let’s not forget, this isn’t just about regulation. It’s about competitive advantage.

Chapter 6

The China Factor: What if China Leads in AI?

Erick

Speaking of China, what happens if—or maybe when—they take the lead in AI? We just touched on how their flexible copyright laws give them an edge, but pair that with their access to massive amounts of data and government-backed initiatives, and it’s clear they’re setting themselves up to dominate.

Sonali Mishra, PhD

Exactly. And it’s not just population size. They have a massive ecosystem that operates—let’s be honest—outside the ethical constraints we see in the West. AI developers there don’t have to worry about lawsuits over data breaches or informed consent. It’s efficient, but ethically... fraught.

Erick

“Fraught” is one way to put it. And then there’s the fact that China’s AI policies are state-controlled. Unlike the free-market chaos we’ve got here, their government steers the ship outright. They’re not just competing against private companies—they’re competing as a nation.

Sonali Mishra, PhD

And that centralization gives them a huge edge. State-backed AI projects get unlimited resources and direction. Compare that to, say, the U.S., where private companies drive progress but also constantly butt heads with regulators and... each other.

Erick

Right. It’s like we’re playing chess, and they’re playing... I don’t know, blitzkrieg. The question is, what does that do to global competition? If China doesn’t just lead technologically but also controls how AI shapes industries worldwide...

Sonali Mishra, PhD

It’s game over for a lot of industries. I mean, imagine AI innovation bottlenecked by policies crafted in Beijing. They’d dominate not just the technology but the global standards for AI. And for the West, catching up would get exponentially harder.

Erick

And it’s not just about economics either. The geopolitical implications are staggering. A world where China leads in AI could see their values baked into international tech. Privacy? Free expression? Those concepts might take a permanent backseat.

Sonali Mishra, PhD

And let’s not forget innovation itself. China’s ability to move quickly—due to fewer restrictions—could stifle creativity elsewhere. Countries still worrying about legal boundaries might struggle to keep up, while China shapes the playing field.

Erick

There's also the matter of ethics, or maybe we should say the lack thereof. Training AI without privacy considerations or copyright limits might produce results faster, sure. But what does that cost on a human level?

Sonali Mishra, PhD

And can the rest of the world afford to care? That’s the tough part. You can’t outpace a country that views your ethical debates as weaknesses. It creates this perverse incentive to, well, turn a blind eye.

Erick

So, we’re left in a bind. Compete by bending the rules, or stick to them and hope innovation prevails anyway. Meanwhile, China plows ahead with fewer hindrances. The global balance of power might depend on how other nations respond to this.

Chapter 7

The Economic Impact of Copyright Exemptions for AI

Erick

Alright, before we delve further, let’s zoom out for a second. We’ve been talking about these global dynamics—China’s rapid advancements, ethical challenges, and centralized approach. Now, one big factor in this race is copyright exemptions. Let’s talk about the ripple effects here. The first argument that gets thrown around a lot? Innovation. Exemptions could unleash a wave of AI development, driving economic growth. I mean, that’s the theory, right?

Sonali Mishra, PhD

It is. And to some extent, it’s already happening. Think about all the industries AI is disrupting—education, healthcare, even agriculture. With fewer copyright restrictions, companies could train AI faster, integrate it into tools more effectively, and, well, compete globally.

Erick

Exactly. It’s a multiplier effect. Faster innovation means better tools, better tools mean higher productivity, and higher productivity fuels economic growth. Companies save time, businesses scale faster—everyone wins.

Sonali Mishra, PhD

Not everyone. What about the creative industries? Those jobs that rely on copyright protections to, you know, stay afloat? If we’re saying economic growth at large is the goal, someone’s always paying the price for it.

Erick

Fair point. And that’s where the job market gets murky. On one hand, AI could create entirely new fields, new opportunities. But on the other hand, the risk of displacement—especially in content creation industries—is very real.

Sonali Mishra, PhD

Very real, and very immediate. AI can churn out marketing copy, generate code, compose music... If it’s cheaper and faster, businesses will opt for that over people. So, does economic growth offset those job losses? Or are we just trading human labor for machine efficiency?

Erick

Look, the productivity gains are undeniable. AI is already improving efficiency across sectors, whether it's automating repetitive tasks or solving problems at scales humans can’t even touch. But the issue is how those gains get distributed. Does every sector benefit—or just the big players?

Sonali Mishra, PhD

Right, and that’s where monopolization comes in, doesn’t it? The companies with the resources to develop advanced AI might just dominate entire industries, leaving smaller businesses—or even governments—struggling to keep up. The playing field isn’t exactly level here.

Erick

And the irony is, copyright exemptions could actually cement that dominance. More data equals more powerful models. And who’s got access to the most data? The tech giants pushing for these exemptions in the first place.

Sonali Mishra, PhD

It’s a feedback loop. They get broader exemptions, gain even more competitive advantages, and leave everyone else scrambling to catch up. How do you balance that with the need to foster innovation overall?

Erick

Well, that’s the tightrope policymakers have to walk. Too much protection stifles AI development; too little risks undermining entire industries and, frankly, public trust. There’s this myth that you can just legislate your way into a perfect compromise, but... history shows that’s more wishful thinking than reality.

Sonali Mishra, PhD

And the longer it takes to strike that balance, the more entrenched the big players become. It’s like trying to play referee after the game’s already started—and the score is something like, I don’t know, 10-0.

Erick

Yeah, except in this game, the referees are still figuring out the rules. Meanwhile, everyone’s arguing about whether the benefits are actually worth the risks. And that’s not a question with an easy answer.

Chapter 8

Ethical Considerations in AI Training

Erick

We were just talking about balance—finding that midpoint between fostering innovation and preventing monopolization. But let’s dig deeper into another challenge that complicates this balance: transparency in AI training. For starters, no one really knows exactly what data these models are being trained on—not even the developers half the time. It’s like trying to audit a black box, but the box just spits out Shakespeare one day and bad karaoke lyrics the next.

Sonali Mishra, PhD

Right, and that lack of transparency isn't just a technical headache—it’s actively problematic. Imagine your art or your writing ends up in a training dataset, and no one even asks you. Or worse, you only find out after the AI starts mimicking your style. Feels like a breach of trust, doesn’t it?

Erick

It does. And it’s compounded by bias. These models, as advanced as they are, can reflect—and amplify—the biases in their training data. You feed it skewed perspectives, you get skewed outputs. And the scary part is, it’s tough to spot unless you’re looking for it.

Sonali Mishra, PhD

And who’s really looking for it? Most companies just wanna ship the next big AI product, not sort through millions of training samples for implicit bias. But the consequences can be massive. You know, like misinformation, reinforcing stereotypes... even ethical dilemmas around AI-generated content being seen as fact when it’s just, well, wrong.

Erick

Exactly. Now throw in the ethical dilemma of using copyrighted material without permission. Even if it’s technically 'fair use,' it feels exploitative, especially when the original creators don’t see a dime from the process—or even know their work was used. It’s a moral gray area that could have legal ramifications down the line.

Sonali Mishra, PhD

And it goes deeper than just legality, doesn’t it? Let’s say the AI generates something totally original—or at least it looks original. How authentic is that piece if the foundation is someone else’s work? It’s like a remix of a remix, but without credit or compensation.

Erick

So that’s the big ethical quandary: How do we balance fostering AI innovation with protecting the integrity of original work? Because right now, it feels like we’re leaning too far towards utility and turning a blind eye to the ethical cost.

Sonali Mishra, PhD

Some people are calling for frameworks—basically, ethical guardrails for using data in AI. Licensing, transparency mandates, even limitations on what types of data can be used. But creating that kind of standard isn’t easy. Who decides? And who enforces it?

Erick

And even if we could figure that out, would it truly be enforceable? The global nature of AI means regulations in one country might not mean anything elsewhere. It’s an enforcement nightmare, but that doesn’t mean the ethical concerns should be ignored, either.

Sonali Mishra, PhD

Ignoring them isn’t an option. If we don’t address them now, we risk setting a precedent where AI development always comes at the expense of human creators. That’s... well, kind of dystopian, isn’t it?

Erick

It is. And solving it might require bold compromises—or, you know, entirely new approaches to compensation and collaboration for creators.

Chapter 9

Possible Solutions and Middle Ground Approaches

Erick

Building on what we discussed earlier about bold compromises and new approaches for collaboration, one possible solution is implementing compensation models for copyright holders—like licensing agreements. The idea is pretty straightforward: AI companies would pay creators to use their works in training datasets. Sounds simple, doesn’t it?

Sonali Mishra, PhD

Simple maybe, but in practice? It's a logistical nightmare. Licensing millions of individual works isn’t exactly efficient. And let’s not forget, when you’re dealing with datasets this massive, creators could get lost in the shuffle completely. Is there even a fair way to distribute royalties?

Erick

Yeah, fair point. But if licensing isn’t feasible, what about focusing AI training on public domain works or materials under something like Creative Commons licenses? It’s already cleared for use, and it could sidestep a lot of the legal headaches entirely.

Sonali Mishra, PhD

It could, but there’s a catch. Public domain and Creative Commons materials are limited. Relying solely on them could potentially stagnate innovation because, well, the dataset wouldn’t be as diverse. AI thrives on variety—the broader the scope, the better the output.

Erick

Right. So maybe it’s about striking a balance—focus on freely available data but expand it responsibly where necessary. And that’s where government regulations might come in, yeah? Some kind of framework that incentivizes innovation but doesn’t completely bulldoze over creators’ rights.

Sonali Mishra, PhD

That sounds good in theory, but government intervention can be, um, let’s say, slow-moving. AI’s evolving so fast, regulations might end up outdated before they’re even implemented. What about industry-led initiatives instead? Companies could create their own standards for ethical and fair use—kind of like self-regulation.

Erick

Self-regulation is an option, sure. But history shows it’s… inconsistent. Companies aren’t exactly famous for policing themselves when profits are on the line. And even if a few industry leaders set the bar high, there’s nothing stopping smaller players from ignoring those guidelines entirely.

Sonali Mishra, PhD

That’s fair. But it doesn’t have to be an either-or situation. Maybe the answer is a combination—a bit of government oversight, a bit of industry responsibility. Plus, you could foster AI that assists creativity rather than replaces it. Collaboration over competition, you know?

Erick

Collaboration sounds nice, but the challenge is convincing companies and creators to trust one another when the stakes are this high. Everyone’s worried about losing control of their piece of the pie.

Sonali Mishra, PhD

True, but the stakes are precisely why we can’t just throw up our hands. Whether it’s licensing, open datasets, regulations, or industry standards, something’s gotta give. Innovation can’t come at the cost of dismantling creative industries.

Erick

And it might take a bit of all those approaches to find the middle ground. Compensation, public interest, ethical assurances—they’re not mutually exclusive ideas. They could work in tandem if, and it’s a big if, the stakeholders are willing to cooperate.

Chapter 10

Closing Thoughts and Call to Action

Erick

So, if we’re talking about finding middle ground—and let’s face it, we have to—we’re looking at this messy overlap between innovation, copyright, and creativity. It’s clear that no one’s happy with the current setup. Creators feel sidelined, lawmakers are racing to catch up, and AI companies are just trying to stay ahead of the curve. How do we bridge that?

Sonali Mishra, PhD

Exactly. And it’s not just about laws, right? It’s about values. Innovation’s important, sure, but if we’re steamrolling creators or skirting ethics in the process, how much progress is too much?

Erick

Right, and the stakes go far beyond copyright. We’re talking about fundamental questions of ownership, fairness, and trust. These issues, they’re not going away. In fact, as AI advances, they’re just gonna get, well, messier.

Sonali Mishra, PhD

That’s why businesses, creators, and policymakers need to start preparing now. Understand the trade-offs, build frameworks, and, honestly, start asking tough questions. Because the worst-case scenario? We wait too long to act and find out it’s too late to fix.

Erick

And for our listeners, whether you’re a lawyer, creator, entrepreneur, or just curious about how AI touches every part of life—a debate this complex needs engagement from everyone. Stay informed, voice your perspectives, and don’t, uh, don’t underestimate the influence you can have.

Sonali Mishra, PhD

Exactly. The more we talk about AI, copyright, and innovation, the closer we get to solutions that, hopefully, work for everyone. It’s a dialogue—an ongoing one—that needs everyone at the table.

Erick

And with that, I think we’ve covered a lot of ground today. As always, it’s been fascinating wading through this legal and ethical maze.

Sonali Mishra, PhD

Couldn’t agree more. And thank you to all our listeners for joining us on this journey.

Erick

Yeah, we appreciate you tuning in. On that note, we’ll close out here—until next time, keep asking the big questions and challenging the status quo. Take care.

About the podcast

Welcome to The AI Law Podcast by Erick Robinson. We navigate the intersection of artificial intelligence and the law. This podcast explores two critical dimensions: (1) the law of AI and (2) using AI in the practice of law. So let's explore the evolving legal landscape surrounding AI, from regulatory developments and litigation trends to IP, ethics, and liability issues, and also examine applications of AI in legal practice, including automation, legal research, and contract analysis.

This podcast is brought to you by Jellypod, Inc.

© 2025 All rights reserved.