This episode examines AI's impact on intellectual property rights, privacy, and accountability, featuring landmark cases on AI inventorship and GDPR compliance challenges. The discussion also highlights explainable AI, regulatory efforts to combat bias in algorithms, and real-world examples like IBM and MIT's AI-assisted innovations. Insights from legal professionals and cultural perspectives offer a well-rounded view of AI's shifting legal landscape.
Erick
Welcome to the AI Law Podcast. I am Erick Robinson, a partner at Brown Rudnick in Houston. I am Co-Chair of the firm's Patent Trial and Appeal Board Practice Group. In addition to being a patent litigator and trial lawyer, I am well-versed not only in the law of AI, but also have deep technical experience in AI and related technologies.
Erick
As always, the views and opinions expressed in this podcast do not necessarily represent those of Brown Rudnick. This podcast is presented for informational and educational purposes only.
Erick
Today, I am here with my friend, fellow attorney, and AI expert, Dr. Sonali Mishra.
Sonali Mishra, PhD
Thanks for having me today, Erick!
Erick
Sonali, when we talk about AI and intellectual property, I think we're looking at what is possibly the most revolutionary change to IP law in decades. I mean, historically, IP law has always hinged on the question of human authorship and inventorship. But now...
Sonali
Right, Erick, but the issue is that AI doesn't exactly fit those boxes, does it? I mean, if an AI generates a novel invention or a stunning piece of art, who exactly owns that?
Erick
Well, that's exactly the crux of the debate right now. Take copyright law, for instance. In the U.S., the Copyright Office has started to acknowledge works that include AI-generated content, but only if there's sufficient human involvement. They're looking for some tangible creative input from a person. Without it, the work is essentially ineligible for protection.
Sonali
Which feels, in a way, like the law is scrambling to find middle ground. Because on the one hand, letting AI outputs go unprotected could discourage innovation. But on the other hand, giving AI-generated works blanket coverage might flood the system with claims that lack any real human touch. Kind of a Catch-22, isn't it?
Erick
Exactly. And patents are even messier. There's been this growing push for AI systems to be recognized as co-inventors. You've got examples like that IBM and MIT collaboration, where an AI played a critical role in inventing a semiconductor material. The question is, does it make sense to put an AI on a patent application when it's not... well, legally a person?
Sonali
But Erick, think about the patents we're potentially losing out on by not adapting. Imagine all the breakthroughs that could've been recognized if these systems had a formal way to share credit with their human collaborators. AI-assisted patents could really change the game for innovation.
Erick
That's true, but creating a new legal category for "AI-assisted patents" would be no small feat. You've got considerations like accountability, licensing, enforcement...
Sonali
And the ethical side of it, too. Let's not forget, some of these systems rely on training data that may not have been ethically sourced. How do we handle fair use in cases where AI insights were built on proprietary datasets?
Erick
Yeah, the genie's out of the bottle there. But I think we're also seeing some progress. Frameworks for licensing AI training data are starting to appear. The key, I think, will be creating structures that encourage innovation without opening the door to rampant abuse.
Sonali
You know, we're going to have to rethink so much about fair use if AI keeps growing at this pace. It's not just about intellectual property anymore. It's bleeding into privacy, consent...
Erick
And liability. Don't forget that. So far, IP law is leading the way, but it's only one piece of the puzzle.
Sonali
Building on how AI is reshaping intellectual property, there's another area we can't ignore: privacy. AI and GDPR compliance... it's an understatement to say this is a minefield, right?
Erick
Yeah, a complete minefield. AI systems need an almost insatiable amount of data to function effectively. The problem is, GDPR wasn't designed for systems with this level of complexity in processing personal data.
Sonali
Exactly! Think about AI profiling. These models can map out behaviors, preferences... down to details most people don't even realize they're giving away. And honestly, GDPR compliance is becoming... well, harder to navigate with every iteration of these technologies.
Erick
You're right. One major issue is transparency. GDPR requires organizations to clearly explain how data is being used. AI, especially systems relying on neural networks, often functions as a black box. Explaining decision-making? Not exactly straightforward.
Sonali
And the regulators aren't exactly taking it easy either. Just look at the fines some big companies have faced for lack of transparency. I mean, if they can't figure it out, what hope does a smaller AI company have? You see what I mean?
Erick
Oh, absolutely. The disparity is staggering. Then there's the matter of consent. Many AI deployments, and I mean especially those in areas like targeted advertising or financial tools, they... well, they stretch the limits of what people actually understand they're consenting to.
Sonali
Which is where this concept of "AI-specific consent" could come into play. Imagine interactive consent: real-time demonstrations of decisions, showing users exactly how their data is being processed. It's ambitious, sure, but that's what it's going to take to get meaningful consent with these systems.
Erick
Ambitious, but also necessary. Otherwise, we risk undermining the entire premise of data protection laws. What concerns me is that we're not even scratching the surface on cross-border data transfers. That's yet another layer of complexity.
Erick
And speaking of complexity, let's shift to liability and accountability, a natural next step when we talk about transparency and consent. The EU's AI Liability Directive is a critical piece of legislation updating legal frameworks to handle the unique challenges AI introduces when things go wrong. At its core, it's about assigning responsibility in scenarios where AI systems cause harm. But the thing is...
Sonali
Wait, are we talking about the directive that includes the presumption of causality? Because that's groundbreaking. If harm occurs, claimants can rely on a rebuttable presumption of a causal link between the provider's fault and the harm, unless the provider can prove otherwise. That's a huge shift in legal thinking.
Erick
Exactly. And that presumption is what makes this directive so powerful. In theory, it simplifies things for, say, consumers who might not have the technical expertise to figure out how or why an AI system malfunctioned. But, uh, what's your take on how this might play out in practice?
Sonali
Honestly, it's a double-edged sword. Sure, it lowers the burden of proof for users. But it also puts AI companies on edge, forcing them to be ultra-transparent about their systems, which is a great thing for accountability, but...
Erick
But it's also risky for innovation, right? I mean, the more disclosure obligations you pile on, the likelier it is that smaller players might just throw in the towel. The big tech companies can handle it. Startups? Not so much.
Sonali
Exactly. And think about how this directive ties into product liability laws. AI isn't a traditional "product," but when it goes rogue, like giving unsafe medical advice, its effects can still lead to serious harm. Courts are already struggling to fit AI into existing product liability frameworks. It's messy.
Erick
Messy is right. You've got these hybrid AI systems that blur the lines between software and product. The liability directive recognizes that; it urges companies to adopt something like "AI-specific due diligence." But how enforceable is that in, say, a global marketplace?
Sonali
Exactly. And let's not forget that the directive also pushes for algorithmic transparency. Which, I mean, is vital, but also feels nearly impossible in cases where developers themselves don't fully understand how their AI systems make decisions. If even they're in the dark, how do you prove anything?
Erick
Or defend yourself against claims. You know, one workaround could be algorithmic audits, a sort of diagnostic checkup for high-risk AI systems. But would they hold up in court when lawsuits come rolling in?
Sonali
And that's why this directive is so fascinating. It's progressive, and it closes some critical gaps in consumer protection. But at the same time, it's creating entirely new legal questions, especially when we look at real-world scenarios. Like, what happens if your autonomous car goes haywire during an over-the-air update? Who shoulders the blame there?
Erick
Great question. Is it the car company? The developer of the autonomous system? Or the software provider for the update? These are precisely the kinds of questions that'll keep liability lawyers busy for the next decade.
Sonali
You know, Erick, speaking of accountability, privacy plays a huge role too. Managing AI projects in places like New Delhi and Dallas, I've seen how vastly different attitudes towards privacy can shape expectations and approaches, and that impacts everything from transparency to trust.
Erick
Really? I mean, I know there are cultural differences, but give me an example. What's something you've run into?
Sonali
Well, in New Delhi, there's a growing focus on collectivism. Privacy isn't always seen as an individual right; it's part of the community's welfare. Take consent. People there are more likely to trust institutions to handle their data responsibly, especially if they can see tangible benefits, like healthcare improvements. It's not perfect, but it feels, um, different from the data paranoia I see here in Dallas.
Erick
That's fascinating. And in Dallas, or broadly in the U.S., it's almost the opposite. "Paranoia" is a strong word, but there's definitely more skepticism around data usage. People want control, or at least the feeling of control...
Sonali
Exactly! And that's where projects get tricky. In India, I've worked on AI initiatives where we leaned on implied consent; it was manageable within the legal framework. But in Texas, oh, you need explicit consent, clear disclosures, and audits just to stay afloat legally. The compliance burden is huge.
Erick
I can see that. And yet, doesn't that stricter framework give end-users more confidence? I mean, sure, it's tedious for companies, but aren't we building trust and safeguarding rights?
Sonali
Ideally, yes. But there's a cost. I've seen smaller startups in Dallas struggle to scale because legal compliance ate up their entire budgets. In New Delhi, businesses, especially AI-driven ones, still experiment freely while regulators... you know, play catch-up. It's a fine line between fostering innovation and protecting privacy.
Erick
Interesting point. How about enforcement? India's Data Protection Act has come up so many times in my reading, but is it as rigorous on the ground as GDPR is here?
Sonali
Not quite. It's still maturing, but there's momentum. In fact, the big difference is in penalties. GDPR fines can be truly debilitating. Remember that French telecom case? That made global news. In India, enforcement is often more forgiving, especially for first-time offenses. It's like they want companies to fix issues, not shut down entirely. It's cooperative...
Erick
Whereas GDPR enforcement feels more like an iron fist. But cooperation sounds... refreshing, doesn't it?
Sonali
It is. But you've got to balance that with effectiveness. I've worked on cross-border AI projects where this difference created chaos. Like, having to explain to an Indian partner why user data couldn't meet Europe's stricter localization needs? Oh, that conversation gets heated fast.
Erick
And it probably doesn't help when laws evolve at very different paces. I imagine global compliance is a logistical nightmare.
Sonali
You have no idea. And that's before we even touch on how AI uses data differently across markets. Let's say you're using a neural network trained on U.S. datasets but deploying it in India. You're looking at misleading conclusions unless you account for localized biases. That goes deeper than privacy; it's about outcomes.
Erick
Misalignments in outcomes. That makes me think: where does one draw the line between adhering to local privacy expectations and maintaining global consistency?
Erick
That actually reminds me: balancing local privacy rules and global frameworks is one thing, but what about the bigger challenge, transparency? When the system itself is a black box, I mean, even the creators struggle to explain how neural networks make decisions. How do you regulate something like that?
Sonali
Exactly! And that's the heart of the problem, right? If nobody can explain what an algorithm does, how can you trust it? Companies are under pressure: transparent systems that still deliver results? That's a tough balance to strike.
Erick
It is. And that lack of trust is why regulators are stepping in. Take the EU's AI Act: they're mandating algorithmic impact assessments for high-risk systems. Developers have to identify and disclose biases, risks, limitations... it's a lot to manage.
Sonali
And let's be honest, Erick, most developers don't have the tools, or frankly the expertise, for that level of disclosure. You can't just crack open a deep learning algorithm and explain it like a recipe. It's complicated.
Erick
Right. That's why Explainable AI, or XAI, has gained so much traction recently. These techniques, whether it's visualizing how a model weighs variables or simplifying outputs into human-readable formats, are designed to bridge that understanding gap.
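To make "visualizing how a model weighs variables" concrete, here is a minimal sketch (not from the episode) using scikit-learn's permutation importance on a hypothetical credit-decision model; the feature names and synthetic data are illustrative assumptions, not a real lender's system.

```python
# Minimal XAI sketch: rank which inputs drive a model's decisions.
# Hypothetical credit-decision example; features and data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_history_len", "num_inquiries"]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic target: approval driven mostly by income and debt ratio.
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: how much performance drops when each feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

A ranking like this is only one narrow slice of explainability, but it is the kind of human-readable output the hosts are describing.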
Sonali
But isn't that a little ambitious? I mean, the black box problem hasn't been solved yet, right? XAI can only go so far. There's still a big gap between what's explainable and what makes sense to end-users, or even regulators. You agree?
Erick
Oh, definitely. And there's a tension here. Oversimplify, and you risk losing accuracy. Stick to purely technical explanations, and you alienate the very people the transparency rules are supposed to help. It's... a tightrope.
Sonali
Not to mention the cost! Smaller companies are at a clear disadvantage. Developing explainable systems takes resources that startups just don't have. I mean, how are they supposed to compete with the big players?
Erick
Which is a fair point. But without transparency, these systems are going to face an even bigger barrier: public trust, or rather the lack of it. Regulators aren't just passing these requirements to be difficult. They're trying to ensure accountability in high-stakes applications like healthcare and criminal justice.
Sonali
Right, but I wonder: how enforceable are these mandates? Think about algorithmic impact assessments. Who's qualified to audit them? Who decides what's fair or biased? This isn't as clear-cut as, say, a financial compliance audit.
Erick
True. And verifying algorithmic fairness or accuracy isn't even standard yet. We're still in the early days of standardized practices for algorithm auditing. That's why there's this push for more structured frameworks across industries.
Sonali
Frameworks that... might come too late for most companies. By the time one sector adopts a standard, AI has already evolved into something entirely different. It's like regulators are chasing a moving target.
Erick
A moving target, yes, but that's no excuse to give up. Take algorithmic impact assessments: they might not be perfect, but they force conversations about accountability. They make developers pause and ask, "Are we putting something harmful out there?"
Sonali
True. And I've seen tangible benefits in applying XAI techniques. Like, when AI explains a credit decision, it builds confidence. But the real issue? Who defines success here: developers, users, courts, or regulators? Because their priorities don't always align.
Erick
That's the debate, isn't it? Explainability isn't the end goal; it's a means to an end. Regulations like the EU AI Act are trying to make AI safer, fairer, and ultimately more useful. But there's still so much work to do.
Sonali
Speaking of making AI safer and fairer, let's talk about bias. This is one of those issues that hits the headlines regularly: hiring tools rejecting qualified candidates, loan algorithms discriminating against certain groups. It's scary how much systemic bias can creep into systems that are supposed to be neutral.
Erick
Right. And it all comes down to the training data, doesn't it? AI models are only as unbiased as the datasets they're trained on. If the data reflects societal inequalities, then the system is going to reinforce those same inequalities. Garbage in, garbage out.
Sonali
Exactly, and it's not like developers are intentionally embedding bias; most of the time, they're not even aware of it. But here's the thing, Erick: even when companies recognize bias after deployment, their response is often, well, inadequate. You've seen those hollow apologies, right? "We're working to fix it." But what about legal accountability?
Erick
You're right to point that out. Addressing bias has real legal consequences. Anti-discrimination laws already apply to automated decision-making systems, but enforcement is lagging. Take the U.S., for example: companies using biased hiring AI could be violating both federal and state anti-discrimination statutes. And yet, who's monitoring these systems?
Sonali
And that's where fairness standards come in. There's real movement in industries like healthcare and finance to develop standardized metrics for algorithmic fairness. But Erick, they're so inconsistent! One sector uses one definition of fairness, another sector uses something else entirely. At what point do regulators step in and standardize this for everyone?
Erick
Well, they're starting to. We've seen attempts at sector-specific fairness guidelines, but they're still fragmented. And frankly, until we get global agreement, which, let's face it, is ambitious, we're going to keep seeing these patchwork approaches. It doesn't help that debiasing is technically challenging. AI systems don't just rely on one variable; they process thousands at once. How do you identify and remove bias in that kind of complexity?
Sonali
Yeah, that's the technical side. But let's talk incentives. Companies aren't going to spend the time or money on debiasing unless they're forced to. Public pressure helps, sure, but without legal consequences, how many organizations are actually going to prioritize fairness?
Erick
Exactly. And just look at the initiatives aimed at addressing training data bias: a step forward, but slow. Licensing frameworks for ethically sourced training datasets are gaining traction, but we're nowhere near having those be industry-wide standards. That means a lot of these AI systems are still being built on skewed foundations.
Sonali
And it's not just about skewed datasets. Think about the downstream effects. A biased algorithm in hiring doesn't just result in unfair practices; it can perpetuate discrimination at scale. Perfect example being...
Sonali
Like I was saying, a biased algorithm in hiring can have massive downstream effects. Take Amazon's hiring tool, for example: that fiasco where their system systematically discounted resumes from women. It's such a clear case of how an algorithm designed to be neutral ended up amplifying existing biases instead.
Erick
Oh, I remember. And the root cause? It was all in the data. The system was trained on ten years' worth of hiring data that reflected Amazon's historical hiring patterns. Guess what? Those patterns were already skewed towards hiring men for technical roles. So the algorithm basically learned to prefer male candidates because the data told it that was the norm.
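As a rough illustration of the mechanism Erick describes, the sketch below is purely synthetic (it is not Amazon's system or data): a simple classifier is trained on historical decisions that penalized one group, and it then reproduces that penalty for otherwise identical candidates.

```python
# Sketch: a classifier trained on skewed historical decisions inherits the skew.
# Fully synthetic data; illustrative only, not any company's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, size=n)        # 0 = group A, 1 = group B
skill = rng.normal(size=n)                # skill is distributed identically in both groups

# Historical hiring labels: skill matters, but group B was systematically penalized.
hired = (skill - 0.8 * group + 0.2 * rng.normal(size=n) > 0).astype(int)

features = np.column_stack([skill, group])
model = LogisticRegression().fit(features, hired)

# Score two otherwise-identical candidates, one from each group.
candidates = np.array([[0.5, 0], [0.5, 1]])
probs = model.predict_proba(candidates)[:, 1]
print(f"P(hire | same skill, group A) = {probs[0]:.2f}")
print(f"P(hire | same skill, group B) = {probs[1]:.2f}")
```

The model never sees an instruction to discriminate; it simply fits the historical pattern, which is exactly the "the data told it that was the norm" problem.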
Sonali
Right! And what makes it worse is how long this went unnoticed. These biases weren't discovered until the tool had been in use for a while. Erick, doesn't that just scream for more proactive algorithmic testing, before deployment?
Erick
Absolutely. But here's an interesting question: whose fault is it, legally? Is it Amazon for deploying a flawed system? Or the developers for not catching the bias? That's the legal gray area we're in right now. It's not like the algorithm consciously made biased decisions; it simply did what it was trained to do.
Sonali
And that's the scary part: these weren't wild edge cases. This was systemic, baked into the core logic of the tool. For me, it highlights the need for mandatory fairness audits. If companies had to prove their AI systems were unbiased before rolling them out, situations like this could be avoided.
Erick
It's an appealing idea, but enforcement would be a nightmare. Who sets the standards for what "unbiased" even means? Are we talking about demographic parity? Statistical thresholds? And more importantly, who has the authority to conduct these audits?
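For a concrete sense of what a demographic-parity check involves, here is a minimal sketch (hypothetical audit data, not from the episode) that compares selection rates across two groups and applies the common four-fifths rule of thumb; the threshold choice is itself a policy question, as the hosts note.

```python
# Demographic parity sketch: compare a hiring model's selection rates across groups.
# Hypothetical audit data; group labels and predictions are illustrative.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = "advance candidate"
groups      = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# Selection rate per group, and the gap between the best- and worst-treated group.
rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
gap = max(rates.values()) - min(rates.values())
print("Selection rates by group:", rates)
print(f"Demographic parity gap: {gap:.2f}")

# One common (but not universal) screening rule: flag if the lowest rate is
# below 80% of the highest rate.
ratio = min(rates.values()) / max(rates.values())
print(f"Ratio of lowest to highest rate: {ratio:.2f} (flag if below 0.80)")
```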
Sonali
Well, we've already seen regulators push for algorithmic impact assessments in high-risk systems. Maybe hiring tools should fall under the same category. After all, they directly affect people's livelihoods. But it's not just about audits; it's also about accountability. If an AI system discriminates like this, who's held responsible?
Erick
Exactly. And that's where the legal angle gets tricky. Do you treat the AI system like a defective product? Or do you hold the developers accountable under negligence laws? Courts are still figuring this out, but in the meantime, companies are finding themselves increasingly on the defensive.
Sonali
And they should be. Because this isn't just a PR disaster; it's a legal one waiting to happen. Anti-discrimination laws clearly apply here, even if the tool's bias was unintentional. But I wonder, Erick, do you think existing laws are enough to address this?
Erick
Not really. The laws weren't written with AI in mind. Sure, you can apply anti-discrimination statutes, but most of them were crafted for human decision-makers, not algorithms. That's why we need adaptations: policies that directly address AI accountability and transparency in hiring practices.
Sonali
And transparency is key. Companies like Amazon can say they've fixed the issue, but we have no way of knowing if those fixes are robust, or if the next hiring tool will make the same mistakes. Which brings us back to explainability. If hiring algorithms are black boxes, solving bias is almost impossible.
Erick
True. And that's where a lot of these systems hit their limits. Even with explainable AI techniques, it's hard to assure people, much less courts, that systemic bias is fully addressed. It's not just technical; it's philosophical. Can AI ever make decisions free of human prejudices?
Sonali
That's the question, isn't it? And until we figure that out, regulations will have to keep playing catch-up. But one thing's for sure: the more prominent these cases become, the harder it'll be for companies to ignore the legal and ethical stakes involved.
Erick
You know, Sonali, this whole debate really shows just how much AI is reshaping the legal landscape. When you think about accountability, transparency, intellectual property... every area is having to evolve just to keep up with the pace of these advancements.
Sonali
Right, and what's striking to me is how layered it all is. Like, you don't just have one set of challenges. Each legal question opens the door to five more. This isn't a case of tweaking a few statutes and calling it a day.
Erick
Exactly. And it's not just about lawmakers. Judges, attorneys, businesses: everyone's struggling to navigate this landscape. You take something like AI-assisted patents. I mean, the idea that an AI might need legal recognition down the line? That's a game-changer.
Sonali
It is, but let's not overlook privacy. AI's hunger for data is reshaping how we think about consent and protection. We're already seeing frameworks emerge, but... honestly, Erick, do you think they're moving fast enough?
Erick
Fast enough? Hardly. Especially with global discrepancies in enforcement. The GDPR might set the gold standard, but the compliance hurdles are immense. And then you've got instruments like, uh, the AI Liability Directive trying to tackle accountability in entirely new ways. It's a mess, but it's... progress, I suppose.
Sonali
Progress, yes, but incomplete. For me, one of the biggest risks lies in bias. We've seen how algorithms can scale discrimination, like that hiring tool from Amazon, and the legal system isn't equipped to fully address those failures yet.
Erick
No argument there. And transparency is still the Achilles' heel of AI. If we can't explain how a system makes decisions, we can't properly challenge, or defend, those decisions in court. It's a foundational flaw.
Sonali
But Erick, flaws like this also represent opportunities, you know? Transparency, fairness audits, collaborative frameworks: these are areas where law and technology can work together. If anything, the gaps show us what needs to be fixed next.
Erick
True, and I think the next wave of innovation will be as much legal as it is technological. Specialized AI statutes, industry-specific regulations, globally standardized frameworks. It's going to take time, but the groundwork is being laid, piece by piece.
Sonali
And that's what's exciting about this space. Sure, there are challenges, massive ones. But there's also so much potential. AI is reshaping not just how we work, but how we think about fairness, accountability, and justice. The law isn't just adapting; it's evolving right alongside the technology.
Erick
Well said. On that note, I think we've covered a lot of ground today, from intellectual property quagmires to the tangled mess of bias and beyond. Sonali, I've got to say, it's always a pleasure having these conversations. You somehow make legal headaches feel manageable.
Sonali
Likewise, Erick. And, hey, here's hoping our listeners are now a little more prepared for the whirlwind that is AI and the law. A lot to think about, but also a lot to look forward to.
Erick
Absolutely. And that's all for today, folks. Thanks for joining us, and we'll see you next time.
Sonali
Until next time, take care!
About the podcast
Welcome to The AI Law Podcast by Erick Robinson. We navigate the intersection of artificial intelligence and the law. This podcast explores two critical dimensions: (1) the law of AI and (2) using AI in the practice of law. So let's explore the evolving legal landscape surrounding AI, from regulatory developments and litigation trends to IP, ethics, and liability issues, and also examine applications of AI in legal practice, including automation, legal research, and contract analysis.
This podcast is brought to you by Jellypod, Inc.
© 2025 All rights reserved.