Published on March 6, 2025
AI's Legal Challenges and Innovations
The AI Law Podcast: The Law of AI and Using AI in Law


This episode examines AI's impact on intellectual property rights, privacy, and accountability, featuring landmark cases on AI inventorship and GDPR compliance challenges. The discussion also highlights explainable AI, regulatory efforts to combat bias in algorithms, and real-world examples like IBM and MIT's AI-assisted innovations. Insights from legal professionals and cultural perspectives offer a well-rounded view of AI's shifting legal landscape.

Chapter 1

AI and Intellectual Property: A Legal Revolution

Erick

Welcome to the AI Law Podcast. I am Erick Robinson, a partner at Brown Rudnick in Houston. I am Co-Chair of the firm's Patent Trial and Appeal Board Practice Group. In addition to being a patent litigator and trial lawyer, I am not only well-versed in the law of AI but also have deep technical experience in AI and related technologies.

Erick

As always, the views and opinions expressed in this podcast do not necessarily represent those of Brown Rudnick. This podcast is presented for informational and educational purposes only.

Erick

Today, I am here with my friend, fellow attorney, and AI expert, Dr. Sonali Mishra.

Sonali Mishra, PhD

Thanks for having me today, Erick!

Erick

Sonali, when we talk about AI and intellectual property, I think we’re looking at what is possibly the most revolutionary change to IP law in decades. I mean, historically, IP law has always hinged on the question of human authorship and inventorship. But now—

Sonali

Right, Erick, but the issue is that AI doesn’t exactly fit those boxes, does it? I mean, if an AI generates a novel invention or a stunning piece of art, who exactly owns that?

Erick

Well, that’s exactly the crux of the debate right now. Take copyright law, for instance. In the U.S., the Copyright Office has started to acknowledge works that include AI-generated content—but only if there’s sufficient human involvement. They're looking for some tangible creative input from a person. Without it, the work is essentially ineligible for protection.

Sonali

Which feels, in a way, like the law is scrambling to find middle ground. Because on the one hand, letting AI outputs go unprotected could discourage innovation. But on the other hand, giving AI-generated works blanket coverage might flood the system with claims that lack any real human touch. Kind of a Catch-22, isn’t it?

Erick

Exactly. And patents are even messier. There's been this growing push for AI systems to be recognized as co-inventors. You’ve got examples like that IBM and MIT collaboration, where an AI played a critical role in inventing a semiconductor material. The question is, does it make sense to put an AI on a patent application when it’s not, well, legally a person?

Sonali

But Erick, think about the patents we’re potentially losing out on by not adapting. Imagine all the breakthroughs that could’ve been recognized if these systems had a formal way to share credit with their human collaborators. AI-assisted patents could really change the game for innovation.

Erick

That’s true, but creating a new legal category for "AI-assisted patents" would be no small feat. You’ve got considerations like accountability, licensing, enforcement—

Sonali

And the ethical side of it, too. Let’s not forget, some of these systems rely on training data that may not have been ethically sourced. How do we handle fair use laws in cases where AI insights were built on proprietary datasets?

Erick

Yeah, the genie’s out of the bottle there. But I think we’re also seeing some progress. Frameworks for licensing AI training data are starting to appear. The key, I think, will be creating structures that encourage innovation without opening the door to rampant abuse.

Sonali

You know, we’re going to have to rethink so much about fair use if AI keeps growing at this pace. It’s not just about intellectual property anymore. It’s bleeding into privacy, consent—

Erick

And liability. Don’t forget that. So far, IP law is leading the way, but it’s only one piece of the puzzle.

Chapter 2

Privacy and Data Protection in the AI Era

Sonali

Building on how AI is reshaping intellectual property, there’s another area we can’t ignore—privacy. AI and GDPR compliance—it’s an understatement to say this is a minefield, right?

Erick

Yeah, a complete minefield. AI systems need an almost insatiable amount of data to function effectively. The problem is, GDPR wasn’t designed for systems with this level of complexity in processing personal data.

Sonali

Exactly! Think about AI profiling. These models can map out behaviors, preferences—down to details most people don’t even realize they’re giving away. And honestly, GDPR compliance is becoming... well, harder to navigate with every iteration of these technologies.

Erick

You’re right. One major issue is transparency. GDPR requires organizations to clearly explain how data is being used. AI systems, especially those relying on neural networks, often function as black boxes. Explaining decision-making? Not exactly straightforward.

Sonali

And the regulators aren’t exactly taking it easy either. Just look at the fines some big companies have faced for lack of transparency. I mean, if they can’t figure it out, what hope does a smaller AI company have? You see what I mean?

Erick

Oh, absolutely. The disparity is staggering. Then there’s the matter of consent. Many AI deployments—and I mean, especially those in areas like targeted advertising or financial tools—they... well, they stretch the limits of what people actually understand they’re consenting to.

Sonali

Which is where this concept of “AI-specific consent” could come into play. Imagine interactive consent. Real-time demonstrations of decisions, showing users exactly how their data is being processed. It’s ambitious, sure, but that’s what it’s going to take to get meaningful consent with these systems.

Erick

Ambitious, but also necessary. Otherwise, we risk undermining the entire premise of data protection laws. What concerns me is that we’re not even scratching the surface on cross-border data transfers. That’s yet another layer of complexity.

Chapter 3

Liability and Accountability: EU's AI Liability Directive

Erick

And speaking of complexity, let’s shift to liability and accountability—a natural next step when we talk about transparency and consent. The EU’s proposed AI Liability Directive is a critical piece of legislation aimed at updating legal frameworks to handle the unique challenges AI introduces when things go wrong. At its core, it’s about assigning responsibility in scenarios where AI systems cause harm. But the thing is—

Sonali

Wait, are we talking about the directive that includes the rebuttable presumption of causality? Because that’s groundbreaking. If harm occurs and the provider’s fault is established, courts can presume the causal link between that fault and the AI system’s output unless the provider can prove otherwise. That’s a huge shift in legal thinking.

Erick

Exactly. And that presumption is what makes this directive so powerful. In theory, it simplifies things for, say, consumers who might not have the technical expertise to figure out how or why an AI system malfunctioned. But, uh, what’s your take on how this might play out in practice?

Sonali

Honestly, it’s a double-edged sword. Sure, it lowers the burden of proof for users. But it also puts AI companies on edge—forcing them to be ultra-transparent about their systems, which is a great thing for accountability, but—

Erick

But it’s also risky for innovation, right? I mean, the more disclosure obligations you pile on, the likelier it is that smaller players might just throw in the towel. The big tech companies can handle it. Startups? Not so much.

Sonali

Exactly. And think about how this directive ties into product liability laws. AI isn't a traditional "product," but its effects—when it goes rogue, like giving unsafe medical advice, for instance—can still lead to serious harm. Courts are already struggling to fit AI into existing product liability frameworks. It’s messy.

Erick

Messy is right. You’ve got these hybrid AI systems that blur the lines between software and product. The liability directive recognizes that—it urges companies to adopt something like "AI-specific due diligence." But how enforceable is that in, say, a global marketplace?

Sonali

Exactly. And let’s not forget that the directive also pushes for algorithmic transparency. Which, I mean, is vital, but also feels nearly impossible in cases where developers themselves don’t fully understand how their AI systems make decisions. If even they’re in the dark, how do you prove anything?

Erick

Or defend yourself against claims. You know, one workaround could be algorithmic audits—a sort of diagnostic checkup for high-risk AI systems. But would they hold up in courts when lawsuits come rolling in?

Sonali

And that’s why this directive is so fascinating. It’s progressive, and it closes some critical gaps in consumer protection. But at the same time, it’s creating entirely new legal questions—especially when we look at real-world scenarios. Like, what happens if your autonomous car goes haywire during an over-the-air update? Who shoulders the blame there?

Erick

Great question. Is it the car company? The developer of the autonomous system? Or the software provider for the update? These are precisely the kinds of questions that’ll keep liability lawyers busy for the next decade.

Chapter 4

Cross-Cultural Perspectives on Privacy

Sonali

You know, Erick, speaking of accountability, privacy plays a huge role too. Managing AI projects in places like New Delhi and Dallas, I’ve seen how vastly different attitudes towards privacy can shape expectations and approaches—and that impacts everything from transparency to trust.

Erick

Really? I mean, I know there are cultural differences, but give me an example. What’s something you’ve run into?

Sonali

Well, in New Delhi, there’s a growing focus on collectivism. Privacy isn’t always seen as an individual right—it’s part of the community's welfare. Take consent. People there are more likely to trust institutions to handle their data responsibly, especially if they can see tangible benefits, like healthcare improvements. It’s not perfect, but it feels, um, different from the data paranoia I see here in Dallas.

Erick

That’s fascinating. And in Dallas, or broadly in the U.S., it’s almost the opposite. “Paranoia” is a strong word, but there’s definitely more skepticism around data usage. People want control, or at least the feeling of control—

Sonali

Exactly! And that’s where projects get tricky. In India, I’ve worked on AI initiatives where we leaned on implied consent—it was manageable within the legal framework. But in Texas, oh, you need explicit consent, clear disclosures, and audits just to stay afloat legally. The compliance burden—it’s huge.

Erick

I can see that. And yet, doesn’t that stricter framework give end-users more confidence? I mean, sure, it’s tedious for companies, but aren’t we building trust and safeguarding rights?

Sonali

Ideally, yes. But there’s a cost. I’ve seen smaller startups in Dallas struggle to scale because legal compliance ate all their budgets. In New Delhi, businesses—especially AI-driven ones—still experiment freely while regulators... you know, play catch-up. It's a fine line between fostering innovation and protecting privacy.

Erick

Interesting point. How about enforcement? India’s Digital Personal Data Protection Act has come up so many times in my readings, but is it as rigorous on the ground as the GDPR is in Europe?

Sonali

Not quite. It’s still maturing, but there’s momentum. In fact, the big difference is in penalties. GDPR fines can be truly debilitating. Remember that French telecom case? That made global news. In India, enforcement is often more forgiving, especially for first-time offenses. It's like they want companies to fix issues, not shut down entirely. It’s cooperative—

Erick

Whereas GDPR enforcement feels more like an iron fist. But cooperation sounds... refreshing, doesn’t it?

Sonali

It is. But you’ve got to balance that with effectiveness. I’ve worked on cross-border AI projects where this difference created chaos. Like, having to explain to an Indian partner why user data couldn’t meet Europe’s stricter localization needs? Oh, that conversation gets heated fast.

Erick

And it probably doesn’t help when laws evolve at very different paces. I imagine global compliance is a logistical nightmare.

Sonali

You have no idea. And that’s before we even touch on how AI uses data differently across markets. Let’s say you’re using a neural network trained on U.S. datasets but deploying it in India. You’re looking at misleading conclusions unless you account for localized biases. That goes deeper than privacy—it’s about outcomes.

Erick

Misalignments in outcomes. That makes me think—where does one draw the line between adhering to local privacy expectations and maintaining global consistency?

Chapter 5

Transparency, Explainability, and Addressing Bias in AI

Erick

That actually reminds me—balancing local privacy rules and global frameworks is one thing, but what about the bigger challenge: transparency? When the system itself is a black box, I mean, even the creators struggle to explain how neural networks make decisions. How do you regulate something like that?

Sonali

Exactly! And that's the heart of the problem, right? If nobody can explain what an algorithm does, how can you trust it? Companies are under pressure—transparent systems that still deliver results? That’s a tough balance to strike.

Erick

It is. And that lack of trust is why regulators are stepping in. Take the EU's AI Act—they’re mandating algorithmic impact assessments for high-risk systems. Developers have to identify and disclose biases, risks, limitations... it's a lot to manage.

Sonali

And let’s be honest, Erick, most developers don’t have the tools—or, frankly, the expertise—for that level of disclosure. You can’t just crack open a deep learning algorithm and explain it like a recipe. It’s complicated.

Erick

Right. That’s why Explainable AI, XAI, has gained so much traction recently. These techniques—whether it’s visualizing how a model weighs variables or simplifying outputs into human-readable formats—are designed to bridge that understanding gap.
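
To make the "visualizing how a model weighs variables" idea concrete, here is a minimal sketch using scikit-learn's permutation feature importance. The dataset is synthetic and the feature names (income, debt ratio, and so on) are hypothetical illustrations, not anything referenced in the conversation.

```python
# Minimal sketch: permutation feature importance as one XAI technique.
# The data and feature names below are synthetic/hypothetical.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a credit-decision dataset.
X, y = make_classification(n_samples=2000, n_features=4,
                           n_informative=3, n_redundant=1, random_state=0)
feature_names = ["income", "debt_ratio", "payment_history", "account_age"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# the larger the drop, the more the model relies on that variable.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>16}: {score:.3f}")
```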

Sonali

But isn’t that a little ambitious? I mean, the black box problem hasn’t been solved yet, right? XAI can only go so far. There’s still a big gap between what’s explainable and what makes sense to end-users—or even regulators. You agree?

Erick

Oh, definitely. And there’s a tension here. Oversimplify, and you risk losing accuracy. Stick to purely technical explanations, and you alienate the very people the transparency rules are supposed to help. It’s... a tightrope.

Sonali

Not to mention the cost! Smaller companies are at a clear disadvantage. Developing explainable systems takes resources that startups just don’t have. I mean, how are they supposed to compete with the big players?

Erick

Which is a fair point. But without transparency, these systems are going to face an even bigger barrier: public trust—or rather, the lack of it. Regulators aren’t just passing these requirements to be difficult. They’re trying to ensure accountability in high-stakes applications like healthcare and criminal justice.

Sonali

Right, but I wonder—how enforceable are these mandates? Think about algorithmic impact assessments. Who’s qualified to audit them? Who decides what’s fair or biased? This isn’t as clear-cut as, say, a financial compliance audit.

Erick

True. And verifying algorithmic fairness or accuracy isn’t even standard yet. We’re still in the early days of standardized practices for algorithm auditing. That’s why there’s this push for more structured frameworks across industries.

Sonali

Frameworks that... might come too late for most companies. By the time one sector adopts a standard, AI has already evolved into something entirely different. It’s like regulators are chasing a moving target.

Erick

A moving target, yes, but that’s no excuse to give up. Take algorithmic impact assessments—they might not be perfect, but they force conversations about accountability. They make developers pause and ask, “Are we putting something harmful out there?”

Sonali

True. And I’ve seen tangible benefits in applying XAI techniques. Like, when AI explains a credit decision, it builds confidence. But the real issue? Who defines success here—developers, users, courts, or regulators? Because their priorities don’t always align.

Erick

That’s the debate, isn’t it? Explainability isn’t the end goal; it’s a means to an end. Regulations like the EU AI Act are trying to make AI safer, fairer, and ultimately, more useful. But there's still so much work to do.

Chapter 6

Combatting Bias and Discrimination

Sonali

Speaking of making AI safer and fairer, let’s talk about bias. This is one of those issues that hits the headlines regularly—hiring tools rejecting qualified candidates, loan algorithms discriminating against certain groups. It’s scary how much systemic bias can creep into systems that are supposed to be neutral.

Erick

Right. And it all comes down to the training data, doesn’t it? AI models are only as unbiased as the datasets they’re trained on. If the data reflects societal inequalities, then the system is going to reinforce those same inequalities. Garbage in, garbage out.

Sonali

Exactly, and it’s not like developers are intentionally embedding bias—most of the time, they’re not even aware of it. But here’s the thing, Erick: even when companies recognize bias after deployment, their response is often, well, inadequate. You’ve seen those hollow apologies, right? “We’re working to fix it.” But what about legal accountability?

Erick

You’re right to point that out. Addressing bias has real legal consequences. Anti-discrimination laws already apply to automated decision-making systems, but enforcement is lagging. Take the U.S., for example—companies using biased hiring AI could be violating both federal and state anti-discrimination statutes. And yet, who’s monitoring these systems?

Sonali

And that’s where fairness standards come in. There’s real movement in industries like healthcare and finance to develop standardized metrics for algorithmic fairness. But Erick, they’re so inconsistent! One sector uses one definition of fairness, another sector uses something else entirely. At what point do regulators step in and standardize this for everyone?

Erick

Well, they’re starting to. We’ve seen attempts at sector-specific fairness guidelines, but they’re still fragmented. And frankly, until we get global agreement—which, let’s face it, is ambitious—we’re going to keep seeing these patchwork approaches. It doesn’t help that debiasing is technically challenging. AI systems don’t just rely on one variable; they process thousands at once. How do you identify and remove bias in that kind of complexity?

Sonali

Yeah, that’s the technical side. But let’s talk incentives. Companies aren’t going to spend the time or money on debiasing unless they’re forced to. Public pressure helps, sure, but without legal consequences, how many organizations are actually going to prioritize fairness?

Erick

Exactly. And just look at the initiatives aimed at addressing training data bias—it’s a step forward, but slow. Licensing frameworks for ethically sourced training datasets are gaining traction, but we’re nowhere near having those be industry-wide standards. That means a lot of these AI systems are still being built on skewed foundations.

Sonali

And it’s not just about skewed datasets. Think about the downstream effects. A biased algorithm in hiring doesn’t just result in unfair practices—it can perpetuate discrimination at scale. Perfect example being—

Chapter 7

Case Study: Amazon’s AI Hiring Tool

Sonali

Like I was saying, a biased algorithm in hiring can have massive downstream effects. Take Amazon’s hiring tool, for example—that fiasco where their system systematically discounted resumes from women. It’s such a clear case of how an algorithm designed to be neutral ended up amplifying existing biases instead.

Erick

Oh, I remember. And the root cause? It was all in the data. The system was trained on ten years’ worth of hiring data that reflected Amazon’s historical hiring patterns. Guess what? Those patterns were already skewed towards hiring men for technical roles. So, the algorithm basically learned to prefer male candidates because the data told it that was the norm.
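
As an illustration of how a system can "learn" a preference like that purely from historical labels, here is a toy sketch with entirely synthetic numbers; nothing here reflects Amazon's actual data or model.

```python
# Toy sketch (entirely synthetic): a model trained on skewed historical
# hiring decisions reproduces the skew, even though "skill" is drawn from
# the same distribution for both groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)          # 0 = group A, 1 = group B
skill = rng.normal(0.0, 1.0, n)        # identical skill distributions

# Historical labels depended on skill AND, unfairly, on group membership.
hired = (skill + 1.5 * group + rng.normal(0.0, 0.5, n)) > 1.0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# At identical skill, the model recommends group B far more often,
# because that is what the historical labels taught it.
same_skill = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(same_skill)[:, 1])
```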

Sonali

Right! And what makes it worse is how long this went unnoticed. These biases weren't discovered until the tool had been in use for a while. Erick, doesn’t that just scream for more proactive algorithmic testing—before deployment?

Erick

Absolutely. But here’s an interesting question: whose fault is it legally? Is it Amazon for deploying a flawed system? Or the developers for not catching the bias? That’s the legal gray area we’re in right now. It’s not like the algorithm consciously made biased decisions—it simply did what it was trained to do.

Sonali

And that's the scary part—these weren’t wild edge cases. This was systemic, baked into the core logic of the tool. For me, it highlights the need for mandatory fairness audits. If companies had to prove their AI systems were unbiased before rolling them out, situations like this could be avoided.

Erick

It’s an appealing idea, but enforcement would be a nightmare. Who sets the standards for what "unbiased" even means? Are we talking about demographic parity? Statistical thresholds? And more importantly, who has the authority to conduct these audits?
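
For what those terms mean in practice, here is a minimal sketch of two common checks run on hypothetical selection counts; the numbers are invented for illustration, and regulatory guidance on the four-fifths rule is more nuanced than a single ratio.

```python
# Minimal sketch: demographic parity difference and the "four-fifths" (80%)
# selection-rate comparison, computed from hypothetical counts.
selections = {
    "group_a": {"selected": 120, "applicants": 400},
    "group_b": {"selected": 60, "applicants": 300},
}

rates = {g: v["selected"] / v["applicants"] for g, v in selections.items()}
parity_gap = max(rates.values()) - min(rates.values())
impact_ratio = min(rates.values()) / max(rates.values())

print(f"Selection rates: {rates}")
print(f"Demographic parity difference: {parity_gap:.2f}")
print(f"Four-fifths ratio: {impact_ratio:.2f} "
      f"({'meets' if impact_ratio >= 0.8 else 'falls short of'} the 80% benchmark)")
```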

Sonali

Well, we've already seen regulators push for algorithmic impact assessments in high-risk systems. Maybe hiring tools should fall under the same category. After all, they directly affect people’s livelihoods. But it’s not just about audits—it’s also about accountability. If an AI system discriminates like this, who’s held responsible?

Erick

Exactly. And that’s where the legal angle gets tricky. Do you treat the AI system like a defective product? Or do you hold the developers accountable under negligence laws? Courts are still figuring this out, but in the meantime, companies are finding themselves increasingly on the defensive.

Sonali

And they should be. Because this isn’t just a PR disaster—it’s a legal one waiting to happen. Anti-discrimination laws clearly apply here, even if the tool’s bias was unintentional. But I wonder, Erick, do you think existing laws are enough to address this?

Erick

Not really. The laws weren’t written with AI in mind. Sure, you can apply anti-discrimination statutes, but most of them were crafted for human decision-makers, not algorithms. That’s why we need adaptations—policies that directly address AI accountability and transparency in hiring practices.

Sonali

And transparency is key. Companies like Amazon can say they’ve fixed the issue, but we have no way of knowing if those fixes are robust—or if the next hiring tool will make the same mistakes. Which brings us back to explainability. If hiring algorithms are black boxes, solving bias is almost impossible.

Erick

True. And that’s where a lot of these systems hit their limits. Even with explainable AI techniques, it’s hard to assure people, much less courts, that systemic bias is fully addressed. It’s not just technical—it’s philosophical. Can AI ever make decisions free of human prejudices?

Sonali

That’s the question, isn’t it? And until we figure that out, regulations will have to keep playing catch-up. But one thing’s for sure: the more prominent these cases become, the harder it’ll be for companies to ignore the legal and ethical stakes involved.

Chapter 8

Conclusion

Erick

You know, Sonali, this whole debate really shows just how much AI is reshaping the legal landscape. When you think about accountability, transparency, intellectual property—every area is having to evolve just to keep up with the pace of these advancements.

Sonali

Right, and what’s striking to me is how layered it all is—like, you don’t just have one set of challenges. Each legal question opens the door to five more. This isn't a case of tweaking a few statutes and calling it a day.

Erick

Exactly. And it’s not just about lawmakers. Judges, attorneys, businesses—everyone’s struggling to navigate this landscape. You take something like AI-assisted patents. I mean, the idea that an AI might need legal recognition down the line? That’s a game-changer.

Sonali

It is, but let’s not overlook privacy. AI’s hunger for data is reshaping how we think about consent and protection. We’re already seeing frameworks emerge, but... honestly, Erick, do you think they’re moving fast enough?

Erick

Fast enough? Hardly. Especially with global discrepancies in enforcement. The GDPR might set the gold standard, but compliance hurdles are immense. And then you’ve got instruments like, uh, the AI Liability Directive trying to tackle accountability in entirely new ways. It’s a mess, but it’s... progress, I suppose.

Sonali

Progress, yes, but incomplete. For me, one of the biggest risks lies in bias. We’ve seen how algorithms can scale discrimination—like that hiring tool from Amazon—and the legal system isn’t equipped to fully address those failures yet.

Erick

No argument there. And transparency is still the Achilles’ heel of AI. If we can’t explain how a system makes decisions, we can’t properly challenge—or defend—those decisions in court. It’s a foundational flaw.

Sonali

But Erick, flaws like this also represent opportunities, you know? Transparency, fairness audits, collaborative frameworks—these are areas where law and technology can work together. If anything, the gaps show us what needs to be fixed next.

Erick

True, and I think the next wave of innovation will be as much legal as it is technological. Specialized AI statutes, industry-specific regulations, globally standardized frameworks. It’s going to take time, but the groundwork is being laid, piece by piece.

Sonali

And that’s what’s exciting about this space. Sure, there are challenges—massive ones. But there’s also so much potential. AI is reshaping not just how we work, but how we think about fairness, accountability, and justice. The law isn't just adapting; it’s evolving right alongside the technology.

Erick

Well said. On that note, I think we’ve covered a lot of ground today. From intellectual property quagmires to the tangled mess of bias and beyond. Sonali, I’ve got to say, it’s always a pleasure having these conversations. You somehow make legal headaches feel manageable.

Sonali

Likewise, Erick. And, hey, here’s hoping our listeners are now a little more prepared for the whirlwind that is AI and the law. A lot to think about, but also a lot to look forward to.

Erick

Absolutely. And that’s all for today, folks. Thanks for joining us, and we’ll see you next time.

Sonali

Until next time, take care!

About the podcast

Welcome to The AI Law Podcast by Erick Robinson. We navigate the intersection of artificial intelligence and the law. This podcast explores two critical dimensions: (1) the law of AI and (2) using AI in the practice of law. So let's explore the evolving legal landscape surrounding AI, from regulatory developments and litigation trends to IP, ethics, and liability issues, and also examine applications of AI in legal practice, including automation, legal research, and contract analysis.

This podcast is brought to you by Jellypod, Inc.

© 2025 All rights reserved.