AI is changing discovery faster than most teams can realistically absorb. What once required weeks of human review can now be completed in hours, and document volumes that previously felt unmanageable are suddenly within reach. Costs are shifting, timelines are compressing, and expectations, on all sides, are evolving just as quickly.
As that shift happens, a more complicated question emerges. If AI makes doing more possible, does that automatically make doing more reasonable? And if review is faster and less expensive, does that change how proportionality should be evaluated?
In practice, the answer is not nearly as straightforward as the technology might suggest.
Because while AI is redefining speed and scale, proportionality has never been about what’s possible. It is about what can be defended. Courts still expect reasonableness. Opposing counsel still challenges the process. And ultimately, legal teams are still responsible for standing behind every discovery decision that is made.
This is where the gap begins to appear.
Technical teams are moving quickly to implement AI-driven workflows that reduce costs and increase efficiency. At the same time, legal teams are working to ensure those workflows hold up under scrutiny. Both are moving forward, but not always in alignment, and that misalignment is where risk lives.
Across real matters, the tension is already being worked through. Teams are not just experimenting with generative AI; they are testing it against established standards. They are measuring recall and precision, applying structured validation methods, and adapting familiar TAR frameworks to new workflows. They are evaluating hallucination risk, managing privilege concerns, and making deliberate decisions about when and how AI should be used.
In other words, they are moving beyond possibility and into proof.
That is the difference between talking about AI and actually using it in a defensible way.
It is also the focus of our upcoming webinar: Reality Check: Rethinking Proportionality in the GenAI Age.
In this session, you’ll hear from Michael Milicevic and William Wallace Belt, who bring the technical and legal perspectives required to navigate this shift. Together, they will walk through how AI-driven workflows are being applied, measured, and defended in practice, grounded in real CDS matters and anonymized metrics.
Because at this point, the question is no longer whether AI can be used in discovery. The question is whether it can be used in a way that stands up when it matters most.
If you’re responsible for discovery outcomes, whether from a legal or technical standpoint, this is a conversation worth being part of.
Join us on April 27th for your own reality check, direct from the experts putting AI to work in real matters.
Register now to see for yourself how AI-driven discovery holds up under scrutiny.