The OnTheMove Clinical Blog

Will AI kill central monitoring?

Duncan Scattergood | 4 minute read

At the risk of being a little circular, I thought I would ask ChatGPT more or less that exact question. Then, in the spirit of "AI proposes and humans decide", I will comment on where I agree and, importantly, where I disagree with the AI.

The ChatGPT Answer

That’s a sharp question — it gets at the tension between traditional centralized monitoring in clinical trials and the new wave of AI-enabled monitoring and risk detection.

Here’s the breakdown:

  1. What “central monitoring” is today
    • Central monitoring means analyzing trial data remotely and centrally (as opposed to just on-site visits) to identify risks, errors, or fraud.
    • It’s mandated in many regulatory frameworks (e.g., ICH E6(R2), FDA guidance on risk-based monitoring).
    • Humans — often statisticians and data managers — design rules and checks, review outliers, and decide escalation.
  2. Where AI comes in
    • Machine learning models can spot anomalies in patient data, site performance, and safety trends more quickly and with higher sensitivity than pre-defined rules.
    • AI can also predict risks before they manifest (e.g., predicting which sites are likely to underreport adverse events, or which data patterns may signal protocol deviations).
    • Natural language processing (NLP) can process clinical notes, queries, and adverse event narratives more efficiently than manual review.
  3. Will AI “kill” central monitoring?
    • Not entirely. AI won’t eliminate the need for central monitoring — it will reshape it:
      • Central monitoring is not just about detection; it’s also about interpretation and regulatory compliance.
      • Regulators expect human oversight, not just algorithmic alerts.
      • AI models themselves require monitoring for bias, drift, and validation — so ironically, AI may add another layer to central oversight.
    • In practice, AI will likely:
      • Automate 60–80% of detection tasks currently done via static rules.
      • Free humans to focus on adjudication, context, and decision-making.
      • Make central monitoring more proactive and continuous, rather than periodic.
  4. The most realistic future
    • Hybrid model: AI performs first-pass detection, risk scoring, and prioritization. Human experts validate findings, contextualize them, and communicate with sites.
    • Regulatory angle: Unless ICH or FDA explicitly allow “AI-only” monitoring (unlikely in the short term), central monitoring remains indispensable.
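The contrast ChatGPT draws in point 2, between pre-defined static rules and data-driven anomaly detection, can be illustrated with a minimal sketch. All site names, rates, and thresholds below are invented for illustration; this is not any vendor's implementation:

```python
# Minimal sketch (hypothetical data): a static rule with a poorly chosen
# fixed threshold misses an underreporting site that a simple cross-site
# statistical check catches.
from statistics import mean, stdev

# Hypothetical per-site KRI: adverse events reported per enrolled subject.
site_ae_rates = {
    "Site-01": 0.42,
    "Site-02": 0.38,
    "Site-03": 0.45,
    "Site-04": 0.05,  # unusually low: possible AE underreporting
    "Site-05": 0.40,
}

def static_rule_flags(rates, threshold=0.04):
    """Pre-defined rule: flag any site whose AE rate falls below a fixed
    threshold. Sensitivity depends entirely on choosing the threshold well."""
    return [site for site, rate in rates.items() if rate < threshold]

def zscore_flags(rates, z_cut=1.5):
    """Data-driven check: flag sites more than z_cut sample standard
    deviations from the cross-site mean, in either direction."""
    values = list(rates.values())
    mu, sigma = mean(values), stdev(values)
    return [site for site, rate in rates.items()
            if abs(rate - mu) / sigma > z_cut]

print(static_rule_flags(site_ae_rates))  # the fixed rule misses Site-04
print(zscore_flags(site_ae_rates))       # the statistical check flags it
```

Real ML-based central monitoring is of course far richer than a z-score, but the point stands: rules encode yesterday's expectations, while data-driven checks adapt to what the trial's data actually look like.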
My thoughts

As is often the case, I think that is a pretty remarkable attempt and one that has saved me a lot of thinking, writing, and editing of drafts.

But, actually, I think the answer is a little more radical than is being suggested here. AI could well kill today’s concept of a central monitoring function.

A little history

Let us step back in time and consider the two key drivers for adopting central monitoring.

One was supporting a more standardized, data-driven approach across sites and reducing reliance on individual CRA judgment during site visits: if a particular key risk indicator (KRI) is exceeded, the central monitor directs the CRA to the specific points to review in more detail.
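The "KRI exceeded, so review these detailed points" idea is essentially a lookup from indicators to checklists. A minimal sketch, with entirely hypothetical KRI names, thresholds, and review points:

```python
# Hypothetical sketch: map each KRI to the detailed review points a CRA
# would work through when its threshold is breached. None of these names
# or numbers come from a real risk plan.

KRI_THRESHOLDS = {
    "query_rate": 0.15,        # open queries per data point
    "screen_fail_rate": 0.40,  # screen failures per screened subject
}

REVIEW_POINTS = {
    "query_rate": [
        "Review data entry timeliness with the site",
        "Walk through eCRF completion guidelines",
    ],
    "screen_fail_rate": [
        "Re-confirm the site's understanding of eligibility criteria",
        "Review the pre-screening process",
    ],
}

def review_plan(site_kris):
    """Return the detailed review points for every KRI the site exceeds."""
    plan = {}
    for kri, value in site_kris.items():
        if value > KRI_THRESHOLDS.get(kri, float("inf")):
            plan[kri] = REVIEW_POINTS[kri]
    return plan

# Example: this site's query rate is high but its screen-fail rate is fine.
print(review_plan({"query_rate": 0.22, "screen_fail_rate": 0.30}))
```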

The other was that CRA time is expensive. Put brutally, if some tasks can be transferred to cheaper resources, then each CRA can either absorb more workload or free up time for higher-value activities.

Notwithstanding the ironic need to oversee the AI itself, we can achieve both objectives with the AI alone. If the AI's output is presented directly to the CRA, who is best placed to "focus on adjudication, context, and decision-making", then the CRA is empowered to manage their own sites. The separate central monitoring function may pass into history, not long after its creation.

... central monitoring to be proactive and continuous ...

As ChatGPT pointed out, AI will allow central monitoring to be proactive and continuous. The AI will alert the CRA to potential problems as soon as they are detected. The CRA will then be able to proactively work with sites, outside of the regular visit cycle, to prevent those potential problems from becoming issues that affect the overall effectiveness of a study. That’s a transformative change.

Similarly, if the AI detects thematic issues across multiple sites, then it can alert the relevant study manager.
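The routing described here (single-site findings to the responsible CRA, cross-site themes escalated to the study manager) can be sketched as a simple triage step. The role split is from the text above; the cutoff and finding names are invented:

```python
# Hypothetical alert-routing sketch. A finding seen at only one or two
# sites goes to those sites' CRAs; the same finding appearing at several
# sites escalates to the study manager as a thematic issue. The cutoff
# of 3 sites is an arbitrary illustration.
from collections import defaultdict

THEMATIC_CUTOFF = 3

def route_alerts(findings):
    """findings: list of (site, finding) pairs produced by AI detection.
    Returns (cra_alerts, manager_alerts)."""
    by_finding = defaultdict(set)
    for site, finding in findings:
        by_finding[finding].add(site)

    cra_alerts = []      # (site, finding) pairs for the responsible CRAs
    manager_alerts = []  # findings escalated as cross-site themes
    for finding, sites in by_finding.items():
        if len(sites) >= THEMATIC_CUTOFF:
            manager_alerts.append(finding)
        else:
            cra_alerts.extend((site, finding) for site in sorted(sites))
    return cra_alerts, manager_alerts

cra, mgr = route_alerts([
    ("Site-01", "late AE entry"),
    ("Site-02", "late AE entry"),
    ("Site-03", "late AE entry"),
    ("Site-04", "missing consent date"),
])
print(cra)  # only the isolated finding goes to a CRA
print(mgr)  # the three-site pattern escalates as a theme
```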

Will this change happen overnight? Of course not. Beyond the AI itself, work will be required to embed the AI output into the CTMS. Not only must the output be in an easily understood and actionable form, it must also be woven into day-to-day processes: if it is presented via a separate function that the CRA can ignore, there is a danger that behaviors will not change.

That last point prompts a related thought: just as reducing the amount of source data verification (SDV) was a major cultural shift, organizations will need a change management program to realize the benefits of AI monitoring.

But in 3 to 5 years, will the Central Monitor job role exist? I suspect not.

What do you think?

About the author

Duncan Scattergood is Managing Director at OnTheMove Software and has worked with Clinical Trial Management Systems (CTMS) for over 10 years. OnTheMove for Veeva enhances the Site Monitoring process by presenting the CRA with the information they need, when and where they need it. This improves monitoring quality and reduces the time spent navigating multiple systems and performing report write-up.