Poor data can derail a clinical trial. It can keep results from being published and keep key treatments from reaching the market. On the business side, poor data is costly: drug developers waste time and money on trials that don’t produce usable results.
More trial leaders, sponsors and clinical research organizations (CROs) are working to limit the amount of bad data they generate through better quality monitoring. Here’s why risk-based quality monitoring should be combined with traditional quality assurance to flag potential data issues.
Compliance and Quality Assurance are Key Issues in 2021
More teams will likely invest in compliance and quality management systems to address these priorities. Even if a sponsor, CRO or trial developer isn’t prioritizing quality monitoring this year, they will likely invest in it as new regulations are released.
The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) is publishing new guidelines later this year. These recommendations are known as R3 because they are the third revision of the guidelines, which debuted in 1996 alongside the rise of electronic health records (EHRs).
In April 2021, the ICH shared a work-in-progress document on its updated principles. According to a summary by the Collaborative Institutional Training Initiative (CITI Program), one of the central themes is “including quality by design in every part of the research, especially factors critical to the quality.” The guidelines also emphasize that clinical research involves many parties. Each of the stakeholders or participants involved in your trial needs a clear understanding of compliance and quality assurance.
Quality Assurance is Ongoing, Not Retroactive
One of the main changes in quality assurance over the past decade has been the transition to ongoing monitoring, which allows for earlier intervention.
“With ongoing RBM [risk-based monitoring], cumulative data can be examined at subject and site levels, flagging potential errors that must be queried or systematic errors/errors in process that may occur at a site,” says Sheelagh Aird, senior director of clinical data operations at global biometrics CRO Phastar. “The data monitoring team can then take remedial action. This could trigger an on-site monitoring visit, or further site training.”
This means that teams can jump in as soon as they realize there is a problem, rather than combing through and rejecting bad data once the study is nearing completion. This leads to better patient outcomes and fewer abandoned studies.
“RBQM [risk-based quality management] allows sites to shift focus to critical data and processes, which improves patient safety while improving the quality of critical data,” according to a whitepaper by the Atlantic Research Group. “Thus, RBQM is a valuable concept in clinical research, providing higher quality data while potentially reducing the overall cost and time to approval of an investigational agent.”
Technology allows people to identify problems immediately. Risk-based quality monitoring means sponsors and CROs can stop potential issues before they become major crises.
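To make the idea concrete, here is a minimal sketch of the kind of site-level check Aird describes, written in plain Python with hypothetical site names, error counts and a simple rule of thumb; real RBQM platforms rely on validated, far more sophisticated statistical monitoring.

```python
from statistics import median

# Hypothetical cumulative data-entry error counts and total fields per site.
site_data = {
    "Site A": {"errors": 12, "fields": 4800},
    "Site B": {"errors": 9, "fields": 5100},
    "Site C": {"errors": 61, "fields": 4950},  # unusually error-prone
    "Site D": {"errors": 14, "fields": 5020},
}

# Per-site error rate from the cumulative data.
rates = {site: d["errors"] / d["fields"] for site, d in site_data.items()}
typical = median(rates.values())

# Simple rule of thumb: flag any site whose error rate is more than double
# the study-wide median. A flag like this might trigger a data query,
# additional site training or an on-site monitoring visit.
for site, rate in rates.items():
    if rate > 2 * typical:
        print(f"{site}: error rate {rate:.2%} vs study median {typical:.2%} -- review")
```

The specific threshold matters less than the workflow: because the check runs on cumulative data throughout the study, an outlying site surfaces while there is still time to intervene.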
Team Training and Onboarding is Critical
RBQM is invaluable to clinical trial leaders; however, it has the same limitations as other forms of technology: It is only as strong as the people using it. Sponsors, CROs and other clinical trial stakeholders need to focus on team training and company core values when introducing risk-based quality monitoring technology.
“Early risk management activities require project teams to have a strong understanding of the study endpoints and supporting CTQ [Critical to Quality] factors,” write Volker Hack and Brian Barnes, the executive director of global clinical development and the former director, respectively, at contract research organization PPD. “Those factors function as a compass to provide a common direction across the project team, helping them to focus not only on what is important, but, more notably, what is not important.”
Not only should teams receive training on the new tools, but they should also have a clear understanding of how this technology benefits patients. Additionally, some industry leaders believe the training should be interactive and include checks to ensure everyone fully understands the new systems.
“Many clinical research professionals think that reading something and documenting that they read it in a log means that they have been trained,” says Vatche Bartekian, president of Vantage BioTrials. “This, however, does not mean that the training was effective.”
From a senior leadership perspective, executives need to consider how the use of RBQM will change the clinical trial process. How much time will these tools take up? What impact will this have on lower-level employees?
According to an industry whitepaper by the Association of Clinical Research Organizations, “strong consideration should be taken to realize the impact of system integration prior to establishing an RBQM framework. This consideration should account for the assessment of system integrations to ensure that optimal knowledge and information sharing capabilities are established.”
Quality Monitoring is a Transparency Issue
Clinical organizations should be vocal about the benefits and results of their quality assurance efforts and RBQM tools. These systems increase the chances that clinical trials will be accepted for publication. They assure patients that they are in good hands. They demonstrate to the FDA that checks are in place to catch errors.
“No longer is quality accomplished by way of checklists and box checking, but instead by integrating steps that are specific and measurable,” says Meaghan Powers, senior consultant of clinical operations at Halloran Consulting Group. “Quality should be discussed openly, and issues such as poor trial design, misconduct, and data collection should be deliberated to avoid future occurrences.”
Quality management requires transparency and open discussion to keep issues from recurring. By prioritizing quality and discussing it openly, teams can show how committed they are to providing the cleanest data possible.