
When Algorithms Make the Call: AI, Employment Law and the New Architecture of Workplace Responsibility

Some panels feel like previews. This one felt like a diagnosis.

In reflecting on a recent JAMS panel on artificial intelligence (AI) and employment law, one point came through clearly: the conversation was not about distant or hypothetical risk. It was about decisions already being made, harms already materializing and a legal framework already under strain. The takeaway was direct: AI is not altering what employers are responsible for. It is altering how those responsibilities are triggered, tested and exposed.

The legal framework governing the workplace—antidiscrimination law, harassment doctrine, accommodation obligations—remains intact. What is changing rapidly is the operational environment in which those frameworks must function.

Four voices shaped the conversation. Each spoke in a personal capacity, drawing on experience across legal doctrine, regulatory enforcement, litigation and enterprise governance. What emerged was not just informed perspective, but alignment: a shared view that responsibility has not shifted, only the mechanisms through which it is exercised.

The Mechanism Changes; Liability Does Not

Evandro Gigante opened with a framing that anchored the discussion. 

AI does not create a new category of legal risk. It alters the mechanism through which familiar risks manifest. Employers may view AI as a buffer, a layer that distances them from decision-making. The law does not. Liability attaches to outcomes, not tools. Whether a hiring decision is made by a manager, a spreadsheet or an algorithm, the legal analysis remains the same.

Gigante illustrated this across three domains.

First, AI-enabled harassment. Voice cloning, deepfakes and synthetic media introduce new forms of misconduct, but not new categories of harm. A fabricated image can be as damaging as a real one. The employer’s obligation to investigate and remediate remains unchanged.

Second, hiring and screening. Employers often rely on third-party AI tools, but delegation is not abdication. The decision still belongs to the employer, along with responsibility for ensuring that the process is nondiscriminatory, validated and subject to meaningful human oversight.

Third, workplace accommodations. Employees are increasingly requesting AI-based tools as part of accommodation frameworks. Employers must navigate confidentiality, reliability and supervision while assessing whether alternatives exist. The structure of the legal analysis remains familiar, even as the tools evolve.

Across these areas, Gigante emphasized operational imperatives—governance, validation, documentation, transparency and human oversight—not as aspirational goals, but as present requirements.

A Legal Framework Under Stress

Ivie Serioux tested how well that framework can withstand real-world conditions.

She began with what she described as the “decision-maker problem” in the age of agentic AI: systems that act, not merely assist. These tools can source candidates, rank resumes and reject applicants without human interaction. The intuitive question is whether this autonomy shifts responsibility. It does not. Courts continue to look to the human principal: the entity that selected and deployed the system. Employer and vendor may share liability, but autonomy does not displace accountability.

Serioux then examined New York City’s Local Law 144, which requires bias audits and candidate disclosures for automated hiring tools. Despite the law’s clear requirements, compliance has been minimal. A 2024 Cornell study found that only 18 of 391 covered employers had posted audit results. A subsequent audit by the New York State Comptroller characterized enforcement as ineffective. The implication is practical: In the absence of consistent enforcement, compliance becomes an obligation employers must police for themselves. Exposure exists regardless of whether regulators act.

To me, her most consequential point concerned evidence.

The assumption that audio or visual material reflects reality can no longer be taken for granted. Deepfakes and synthetic media are not only tools of harassment; they are beginning to appear in internal investigations and litigation contexts. This introduces a fundamental shift in evidentiary analysis.

Employers can no longer rely on surface credibility. Verification must become procedural. That means corroborating digital evidence with system logs, access records, metadata and witness accounts. It also means preserving audit trails that can withstand scrutiny after the fact. In practice, this raises the evidentiary bar: Organizations must now be able to demonstrate not only what happened, but how they know what happened.

This shift has operational consequences. Investigative protocols must adapt. HR and legal teams must be trained to question the authenticity of digital artifacts. And organizations must assume that evidentiary disputes will become more frequent, more technical and more central to litigation outcomes.

Overlaying these issues is a fragmented enforcement landscape. Federal signals are inconsistent, while state and local jurisdictions are moving in divergent directions. For organizations operating across jurisdictions, the result is not uncertainty alone, but immediate exposure that cannot wait for regulatory clarity.

When Things Go Wrong

Kristine D’Amato addressed what happens when AI-driven employment decisions fail.

She explained why these claims differ from traditional disputes. A single flawed system can generate hundreds or thousands of adverse decisions simultaneously. The Equal Employment Opportunity Commission (EEOC) sued virtual education service iTutorGroup for using AI-based screening software to reject women over age 55 and men over age 60, producing more than 200 discriminatory outcomes from a single configuration failure. Scale, not novelty, is what makes these claims distinct.

D’Amato then turned to the insurance market. Employment practices liability insurance (EPLI) carriers are responding to AI risk with new underwriting scrutiny and, in some cases, outright exclusions. Policies now commonly carve out categories such as bias-related claims, AI-dependent decisions and regulatory enforcement actions.

More importantly, underwriting questions are functioning as a de facto governance audit. Carriers are asking whether organizations conduct bias audits, allow candidates to request non-AI alternatives, implement human review of AI outputs and include vendor indemnities. These are not abstract inquiries. They are proxies for litigation readiness.

Documentation, she emphasized, is decisive. Validation studies, adverse-impact analyses, audit trails and records of human oversight materially affect both coverage and defensibility. The absence of these elements is not a technical gap, but a structural vulnerability.

She identified three particularly acute risks: the absence of bias audits, the lack of a human alternative for applicants and the failure to document oversight. Organizations that cannot address these areas are exposed simultaneously in litigation and in the insurance market.

Inside Organizations: From Principle to Practice

Rippi Karda focused on execution: what organizations must do in practice.

First, vendor contracts. Vendor claims about fairness and compliance have little value unless they are enforceable. Organizations must translate these claims into concrete obligations: measurable deliverables, defined timelines, enforceable service levels and clear dispute mechanisms. Without this, reliance on vendor assurances will not withstand scrutiny.

Second, due diligence. Effective evaluation requires understanding how a system works, including its training data and known failure modes. This process must involve legal, HR, IT and compliance stakeholders, with attention to confidentiality and intellectual property risks.

Third, human oversight. Meaningful oversight requires both comprehension and authority. Reviewers must be able to evaluate outputs and override them where necessary. Formal approval without understanding is not oversight. It is exposure.

Fourth, governance. Effective frameworks borrow from privacy and cybersecurity: written policies, training, defined accountability and continuous review. These structures must be robust enough to withstand current scrutiny and adaptable enough to evolve with the law.

A Converging Message—and an Invitation

Across legal doctrine, enforcement reality, litigation exposure and operational practice, the message converged: AI does not reduce employer responsibility. It redistributes it, often in less visible but more consequential ways.

One implication follows directly from that convergence: Disputes in this space will be complex, technical and interdisciplinary.

This is where mediation has a distinct role to play.

As explored in my December 2025 Reuters Legal News piece on AI-driven employment disputes, these cases occupy a hybrid space: part technology, part civil rights, part human misunderstanding. Mediation can move parties from adversarial positions toward shared inquiry, enabling remedies that are difficult to achieve through litigation alone: voluntary audits, revised datasets, independent monitoring and structured transparency.

This becomes particularly relevant in light of two dynamics described by the panel. First, the scale of AI-related claims means disputes may involve large groups of affected individuals. Second, the growing uncertainty around digital evidence increases the risk of prolonged factual disputes. Mediation offers a forum better suited to navigating both.

It also facilitates translation among communities that often approach the same problem from fundamentally different perspectives: lawyers, engineers and HR professionals.

The risks described by the panel are real. So are the tools available to address them.

What remains is the willingness to act, not in anticipation of perfect regulatory clarity, but in recognition that the central legal question has not changed.

Who is responsible?

The answer is not unclear. It is simply easier to obscure.

