I recently attended The Sedona Conference Working Group 11 mid-year meeting in Fort Lauderdale, FL, where I joined a panel discussing the draft Commentary on the Colorado AI Act (CAIA) and the Colorado Privacy Act (CPA). As more states pass AI and privacy laws, one thing is clear: aligning these frameworks is more complex than expected.
There is a critical need for a comprehensive regulatory framework that addresses the gaps in existing privacy laws and the new issues posed by AI. Here are some key points from our discussion:
Regulatory Gaps Slowing AI Governance
AI and privacy laws often overlap and conflict, making harmonization essential before addressing broader risks. Some critical areas of overlap and conflict include:
- Collection: AI models need vast datasets for effective training. This conflicts with data privacy laws like the GDPR, which compel companies to collect only the minimum amount of data necessary.
- Accuracy: Privacy laws require data to be accurate, but AI systems (especially generative AI) can produce false or inaccurate information, making it difficult to meet the reliability standards of new AI regulations.
- Consent: Privacy law demands informed consent for data use, while the complexity of AI makes it difficult for people to understand, and meaningfully consent to, all the potential uses of their data. Honoring revocation of consent also becomes more complex with AI.
- Jurisdiction: Different jurisdictions are taking different approaches to AI and privacy, creating potential conflicts for companies that operate across multiple regulatory environments.
The Compliance Risks of AI-Driven Hiring Tools
One major area of focus for AI governance continues to be the use of AI in recruiting and hiring—specifically, when and how the CPA and CAIA apply to candidates’ personal data. AI use in recruitment and hiring management has risen sharply in the past few years as companies look to streamline these processes. However, not all employers have considered the implications of how AI handles applicant data, and those that have may be unclear on how to reconcile the competing demands of privacy and AI regulation.
Many issues can arise if employers are not vigilant about whether their AI tools comply with data privacy laws. By nature, AI tools collect and store large amounts of data, including sensitive candidate information. Every time someone inputs data into a publicly accessible AI platform, there is a risk that confidential information will be shared.
Employers currently rely on a patchwork of data privacy laws that govern how this information may be used and stored. The federal American Privacy Rights Act, which Congress has not yet passed, would potentially unify these measures, requiring employers to notify applicants (and employees) of AI use and give them the opportunity to opt out. In the meantime, employers need to monitor for security breaches and remain compliant with currently applicable laws.
How GenAI Complicates Privacy Rights
Data isn’t just stored; it’s transformed (vectorized, embedded, and combined). This makes it nearly impossible to access, correct, move, or delete original inputs as the CPA requires. Data privacy transparency requirements add yet another layer of difficulty.
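To see why transformed data resists these requests, consider a minimal Python sketch. It uses a toy hash-based "embedding" (an illustrative stand-in, not any real model) to show that once inputs are vectorized and combined into an index, no individual's data can be isolated for correction or deletion:

```python
import hashlib

DIM = 16  # toy embedding dimension (chosen for illustration only)

def embed(text: str) -> list[float]:
    """Map text to a fixed-size vector by hashing word tokens.

    This stands in for a real embedding model: the mapping is one-way,
    so the original words cannot be read back out of the vector.
    """
    vec = [0.0] * DIM
    for token in text.lower().split():
        h = int(hashlib.sha256(token.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    return vec

def combine(*vectors: list[float]) -> list[float]:
    """Average several vectors, as stored indexes often do.

    After combining, no component can be attributed to a single input,
    so a deletion request cannot target one person's data.
    """
    n = len(vectors)
    return [sum(vals) / n for vals in zip(*vectors)]

# Hypothetical candidate records, for illustration only
resume_a = embed("Jane Doe 555-0100 ten years of sales experience")
resume_b = embed("John Roe 555-0199 five years of engineering experience")
index_entry = combine(resume_a, resume_b)

# The stored vector contains no recoverable names or phone numbers,
# and neither resume can be individually removed from it.
print(index_entry)
```

Real embedding models are far higher-dimensional, but the legal consequence is the same: a right-to-delete or right-to-correct request arrives after the original text has been dissolved into numbers shared across many records.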
Some of the major data privacy and legal challenges GenAI presents include:
- Personally identifiable information (PII) disclosure: Once ingested, PII is difficult or even impossible to remove, resulting in privacy violations if the AI model reproduces PII in its outputs or enables re-identification by combining seemingly innocuous data points (the mosaic effect).
- Prompt risks: Users could unwittingly input sensitive information into public GenAI tools, and the data can then be exposed to other users.
- Opaqueness and bias: GenAI is increasingly used to automate decision making, raising ethical and legal concerns about both the lack of transparency in how decisions are made and algorithmic bias.
- Deepfakes: GenAI can create highly realistic images, audio, and video (deepfakes), enabling identity theft, extortion, and the unauthorized use of a person’s likeness or voice.
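One practical guardrail for the prompt risk above is scrubbing obvious identifiers before any text reaches a public GenAI tool. Here is a minimal regex-based Python sketch (illustrative only—the patterns cover a few common US formats, and a real deployment would need far more robust, locale-aware detection):

```python
import re

# Simple patterns for a few common US-format identifiers.
# Illustration only; not a substitute for a real redaction pipeline.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely PII with labeled placeholders before the prompt
    is sent to any external, publicly accessible GenAI service."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

raw = "Summarize: Jane Doe, jane.doe@example.com, SSN 123-45-6789, cell 303-555-0100"
print(redact(raw))
# → "Summarize: Jane Doe, [EMAIL], SSN [SSN], cell [PHONE]"
```

Note that even this sketch misses the candidate's name—detecting names and other contextual PII requires dedicated tooling, which is one reason policy controls on what may be pasted into public tools matter as much as technical filters.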
Final Thoughts
Our team is exploring ways to harmonize potentially conflicting privacy and AI laws, using the intersection of Colorado’s CPA and CAIA as a test case for guidance. There is evidence that we are moving in the right direction, albeit slowly: strong monitoring and validation methods for GenAI processes are proving effective, and “human in the loop” safeguards and proven sampling strategies help maintain control over inputs and outputs. Please stay tuned for the next update as our work continues.


