Written by William S. Galkin of GalkinLaw LLC on August 22, 2025
AI in the Workplace Today
Artificial intelligence is rapidly reshaping the workplace. Employers are deploying AI to parse resumes, monitor productivity, score employee performance, and even draft internal communications. Generative AI tools are increasingly being used to summarize meetings, draft contracts, and assist in employee training. The promise is efficiency, cost savings, and more objective decision-making. But alongside these benefits come significant legal and governance risks. Employers face an urgent question: how do we integrate AI responsibly without exposing the organization to discrimination claims, privacy violations, or regulatory penalties?
Emerging Legal Risks in the U.S.
The legal challenges are no longer theoretical. In the area of recruitment and hiring, the Equal Employment Opportunity Commission (EEOC) has issued guidance making clear that employers remain liable for discrimination even if the decision-making process is automated[EEOC Technical Assistance, May 18, 2023]. State and local regulators are also stepping in. New York City Local Law 144 requires annual bias audits of automated employment decision tools and mandates disclosure to job candidates[NYC Admin. Code § 20-870 et seq.]. Illinois’s Artificial Intelligence Video Interview Act requires employers using AI to analyze video interviews to obtain consent and provide transparency to applicants[820 ILCS 42/5 (2019)].
Employee privacy is another major concern. Many AI-driven workplace tools rely on sensitive employee data such as voice recordings, biometrics, or behavioral patterns. In states with comprehensive privacy statutes, such as the California Privacy Rights Act (Cal. Civ. Code § 1798.100 et seq.), the Colorado Privacy Act (Colo. Rev. Stat. § 6-1-1301 et seq.), and the Virginia Consumer Data Protection Act (Va. Code Ann. § 59.1-575 et seq.), employees are entitled to notice, purpose limitation, and, in some instances, opt-out rights. Misuse of monitoring tools may also implicate state wiretap laws, and overbroad surveillance risks challenges under Section 7 of the National Labor Relations Act, which protects employees’ rights to organize and engage in concerted activities.
Transparency and explainability are becoming as critical as privacy. A worker denied a promotion based on an AI-generated score may demand to know how that decision was made, and courts are likely to scrutinize whether the employer can articulate a rational, nondiscriminatory basis for it. Vendor claims of “bias-free” outputs are unlikely to satisfy legal requirements without supporting evidence.
The Role of DPIAs in Governance
One of the most effective tools for managing these risks is the Data Protection Impact Assessment (DPIA). DPIAs originated under the EU General Data Protection Regulation (GDPR, Art. 35), where they are mandatory for high-risk processing activities, including automated decision-making. While the U.S. has not adopted a federal DPIA requirement, the concept is gaining traction. The Colorado Privacy Act requires data protection assessments for high-risk processing, including profiling that presents a foreseeable risk of harm. The California Privacy Rights Act empowers the California Privacy Protection Agency to issue regulations mandating risk assessments for AI and automated decision-making, and draft rules are already under consideration.
A DPIA is not simply paperwork; it is a structured governance process. It requires organizations to map how an AI tool is used, identify the categories of data it processes, evaluate risks such as bias, discrimination, or privacy intrusion, and document mitigation strategies. In doing so, employers are forced to confront whether a hiring algorithm could replicate historic discrimination, whether monitoring software is proportionate to legitimate business needs, whether employee data is handled securely, and whether vendor claims are contractually enforceable.
Why DPIAs Are Becoming a Best Practice
The strategic value of DPIAs extends well beyond compliance. A completed DPIA demonstrates diligence and can serve as a defense in litigation by showing that the employer took reasonable steps to identify and mitigate risks. It also builds employee trust, signaling that the company is approaching AI adoption thoughtfully rather than recklessly.
DPIAs strengthen vendor management by shifting the burden of proof back onto technology providers. Employers can require vendors to supply testing results, submit to independent audits, and agree to contractual safeguards rather than relying on marketing assurances. Finally, DPIAs provide future-proofing. As additional states enact AI-specific legislation and federal agencies move toward more robust rulemaking, employers with established DPIA processes will adapt quickly and with less disruption than those starting from scratch.
Conclusion: AI Without Governance Is a Liability
AI in the workplace is not simply an HR or IT innovation. It is a legal and governance challenge that implicates civil rights law, privacy law, labor law, and contract law. The risks of bias, privacy violations, lack of transparency, and vendor misrepresentation are real and growing. But they are also manageable.
DPIAs and their U.S. counterparts are emerging as the key governance tool for ensuring that workplace AI adoption is not only efficient but also legally sound and sustainable. For employers, the choice is stark: AI without governance is a liability, while AI with governance can be a genuine competitive advantage.