The OnTheMove Clinical Blog

Breaking Down the Barriers: How AI Can Finally Help CRAs Improve Monitoring

Adam Prowse | 5 minute read

Clinical Research Associates spend countless hours translating field observations into structured visit reports, a problem that is a “classic” area for AI to add value.

However, even though other sectors have successfully adopted AI, in Clinical Operations it has, outside of some limited use cases, made little difference to CRAs' workload and has not consistently improved quality.

Navigating Regulatory Barriers

One perceived barrier to AI is the regulatory framework in Clinical Operations. However, within the context of Monitoring & Visit Reporting, it is possible to embed AI into a CRA's process while complying with the guidance in ICH E6(R3). By utilizing AI effectively within a CRA's workflow, Sponsors and CROs can see (and crucially, demonstrate) increased compliance and improved quality. Put simply, AI that helps CRAs spot issues early isn't a challenge to regulators.

There is a similar challenge with Validation: by its nature, generative AI does not return word-for-word identical outputs to a given question, even when the substance of those outputs is consistent. That would be a serious barrier to using AI in place of any part of a CRA's work. However, used appropriately, AI in Monitoring and, in particular, in Visit Reporting isn't necessarily part of the system to be validated – it's an aid and productivity tool informing the CRA's controlled use of the validated system.

Another way to look at Validation is that CRAs themselves aren't validated — they're trained, and their output is reviewed. If correctly conceived and implemented, bringing AI into monitoring is more analogous to adding a new CRA assistant than a new system, so that same training-and-review approach (along with a robust audit trail of the AI's processes) delivers regulatory compliance. This is consistent with the FDA's recent exclusion from its AI guidance of use cases that seek operational efficiencies (including in internal workflows and report writing) and "do not impact patient safety".

Current Use Cases and AI Potential

We already see AI adding some value in Visit Reporting, for example through translations and turning a quick comment into a professional questionnaire answer (while maintaining a clear audit trail). However, while valuable, these limited use cases only deliver a small portion of the gains that current generation LLM AI can support.

AI is improving rapidly beyond those early adopter uses, but the real challenge now is using it in a way that genuinely helps CRAs, rather than just becoming another software-led process that interrupts their visits.

Success depends on how well the AI actually fits into a CRA's day: how it informs their actions and decisions without eroding their scope for professional judgment. It is that balance that delivers on the regulatory requirements.

We are now at the point where AI can reliably sift through data, spot patterns and flag risks that might otherwise remain hidden. When that analysis works well, CRAs are freed from being chained to report writing. They can arrive on site with clearer understanding of the site's particular issues, engage meaningfully with key site personnel, and then spend less time afterward translating observations into their structured reports.

Preserving CRA Judgment

We see in other domains how easily people begin accepting AI suggestions unthinkingly. In Monitoring, that's dangerous. If CRAs become mere validators of AI-generated output, efficiency gains might come at the cost of depth, insight, and accountability.

For me, the CTMS user interface is where the success of AI empowering CRAs will be determined. How AI suggestions are presented and how they fit into the CRA's thought process is what makes the difference.

Yes, AI can draft responses or suggest follow-ups—but if a CRA is just clicking 'accept' or 'edit', the system isn't really helping them think. Thoughtful decision-making requires context, not just convenience. Giving the CRA the right information inline in their workflow, at the right time, helps them engage more deeply and improves the quality of their work.

Picture this: a CRA makes their notes (voice or text) and AI instantly connects the dots. It flags inconsistencies, links findings to past visits, and suggests follow-ups. All of this happens inline, not buried in a separate module. The CRA can see not only what is being suggested, but why: how it relates to historical data, and how alternative interpretations might differ, all at the point that they're making their decisions on the AI's suggestions.

That kind of design doesn't take decisions away from CRAs; it genuinely helps them make (and record) those decisions better. CRAs become more productive, while also being able to demonstrably deliver the quality Monitoring and Visit Reporting envisaged by ICH E6(R3).

Designing AI Workflows That Support CRAs

When we think about deploying AI to CRAs, it's tempting to treat that as a largely technical exercise: model training and prompts, APIs, validation protocols. But that puts priorities in the wrong order. The true power of today's AI lies in handling unstructured inputs—notes, voice, observations.

So, the challenge now isn't smarter models; it's smarter workflows that let CRAs use AI safely. CRAs should have full visibility: access to analytics, historical entries, and related follow-ups—all at their fingertips as they make each AI-prompted decision.

That's when AI actually helps – not by replacing a CRA's judgment, but by sharpening it.

About the author

Adam Prowse is Chief Operating Officer at OnTheMove Software, having been with the company since its inception in 2012. OnTheMove has a long track record of providing robust and innovative solutions for both Sponsors and CROs. OnTheMove for Veeva enhances the Site Monitoring process by presenting the CRA with the information they need, when and where they need it. This improves monitoring quality and reduces the time spent navigating multiple systems and performing report write-up.