http://www.appliedclinicaltrialsonline.com/does-transcelerate-rbm-guidance-offer-standardization
Does the TransCelerate RbM Guidance Offer Standardization?
Jan 28, 2015, By Moe Alsumidaie, Applied Clinical Trials

The concept of risk-based monitoring (RbM) is evolving, as nonprofit organizations continue to collaborate with the biopharmaceutical/medical device industry to investigate, pilot and implement RbM practices. While the industry seems to be getting closer to implementing RbM programs, the subject is still in its infancy as the industry explores this paradigm. The biopharmaceutical/medical device industry is still trying to establish standardization in its business processes and technological infrastructures in order to conduct RbM seamlessly; however, current RbM practices lack standardization. This, in essence, defeats the core purpose of RbM, which is to generate reproducible and consistent results in monitoring and study execution quality.

In many industries, defining a process requires standardization in order to yield quality, reproducible results. Although clinical trials are far from being a manufacturing process, clinical trial design is becoming a standardized and scalable operation. For example, the FDA successfully executed the Master Lung Protocol, which achieved numerous study outcomes for several therapies. Moreover, standardization is moving into other therapeutic areas, such as CNS. “There are organizations that plan to conduct proof of concept and confirmatory Alzheimer’s studies by means of a shared protocol,” said Beat Widler, Partner at Widler Schiemann Ltd. As biopharmaceutical/medical device organizations implement standardized operational and technological infrastructures, standardization will be an essential aspect of seamless clinical operations. This article will explore current practices and guidances on RbM, and then recommend other options for consideration.

TransCelerate Biopharma’s Position on RbM Standardization

TransCelerate is currently leading the way in evangelizing its RbM ideology. TransCelerate has developed a process which advises study teams to evaluate risk from numerous and comprehensive standpoints. While these tools can assist study teams with categorically interpreting risk factors during trial design, the process introduces a great deal of subjectivity, which can cause RbM results to vary from one trial to another. To demonstrate, the Risk Assessment and Categorization Tool (RACT) asks study teams to evaluate risk factors such as the risk impact of compound interactions with other medications and the severity of subject population conditions; subjective interpretation of these risks introduces inconsistency into risk assessments, which brings variability into clinical trial risk assessment design and implementation. Additionally, the RACT offers considerations for study teams to think about as they design their risk plans, which adds another level of volatility to clinical trial risk assessments.

What is Consequently Happening in the Industry?

While TransCelerate’s methodology offers guidance on the process of developing RbM programs, the variability associated with that process introduces challenges in scaling up RbM across business operations and clinical technology functions.
Individual study teams are defining their own risk assessments and methodologies, and based on the subjective focus of the RACT’s categorical discussion questions, study teams are formulating unmeasurable, unquantifiable risk outcomes in clinical operations, which adds to the confusion of operationalizing RbM on an enterprise level. To elaborate, defining compound risk levels relative to standard-of-care applications is an unquantifiable factor from a clinical project management standpoint. “If subjectivity were to be introduced in a manufacturing process, nobody would trust the final result and risk process,” said Peter Schiemann, Partner at Widler Schiemann Ltd. “When the study team changes, and risk assessments change, the risk interpretations and plans can be completely different,” added Schiemann.

What Should RbM Standards Look Like in Clinical Trials?

The key to establishing standards in clinical trial RbM is to understand not only quantifiable factors, but also factors that are applicable and standard across clinical trials in general. Medtronic, for example, implemented a toolkit of quantifiable and scalable risk assessments that were successful in evaluating site risk. These analytical risk toolkits can be focused on a variety of risk objectives, such as enrollment performance, study endpoints, primary/secondary objectives, inclusion/exclusion criteria, protocol deviations, data quality and integrity, and even project management financial risks. Building a general toolkit that can be applied to all clinical trials across a variety of therapeutic areas can enhance standardization, consistency and quality outcomes during RbM execution.

What about differing therapeutic areas? It is important to emphasize that every therapeutic area will exhibit distinct risk factors. Correspondingly, risk assessments, such as safety assessments and clinical trial failure prediction models, will differ across therapeutic areas. However, all trials share substantial similarities, which can be exploited for standardization. “Once standardizations are implemented, RbM should run like an engine,” indicated Schiemann.

History Repeating or a Shift in the RbM Paradigm?

Successfully deploying RbM across an organization is a challenge, as biopharmaceutical/medical device enterprises are struggling to establish sufficient IT infrastructures for data streamlining and are redefining roles and responsibilities. Introducing subjectivity rather than standardization in RbM will only add to the confusion of RbM deployment. Shouldn’t the industry start thinking in an objective and standardized manner? “Clinical personnel are afraid to introduce standards because if standards are introduced, they’re no longer the ones calling the shots... everyone thinks that their trial is unique and therefore need to reinvent the wheel, and it’s counterproductive,” suggested Schiemann.
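To make the idea of a quantifiable, trial-agnostic risk toolkit more concrete, here is a minimal sketch in Python. The indicators, values and equal weighting are illustrative assumptions, not a description of Medtronic's or any published toolkit; the point is simply that normalizing each indicator against the across-site distribution yields comparable, reproducible site scores in any study or therapeutic area.

# Minimal, hypothetical sketch of a quantifiable, scalable site risk assessment:
# each indicator is normalized against the across-site distribution (z-score),
# so the same toolkit can be reused in any trial. Names and values are invented.
from statistics import mean, pstdev

def site_risk_scores(sites):
    """sites: dict of site_id -> dict of indicator -> value (higher = riskier)."""
    indicators = sorted({k for v in sites.values() for k in v})
    scores = {}
    for ind in indicators:
        values = [sites[s].get(ind, 0.0) for s in sites]
        mu, sigma = mean(values), pstdev(values) or 1.0
        for s in sites:
            z = (sites[s].get(ind, 0.0) - mu) / sigma
            scores.setdefault(s, {})[ind] = round(z, 2)
    # Overall score: mean z-score across indicators (equal weights assumed here)
    return {s: round(mean(zs.values()), 2) for s, zs in scores.items()}

sites = {
    "site_101": {"protocol_deviation_rate": 0.02, "query_rate": 0.5, "enrollment_shortfall": 0.1},
    "site_102": {"protocol_deviation_rate": 0.09, "query_rate": 2.1, "enrollment_shortfall": 0.4},
    "site_103": {"protocol_deviation_rate": 0.03, "query_rate": 0.7, "enrollment_shortfall": 0.0},
}
print(site_risk_scores(sites))  # site_102 stands out, regardless of therapeutic area

Because the score is computed the same way for every trial, two different study teams looking at the same data would arrive at the same ranking, which is the reproducibility the article argues subjective checklists cannot deliver.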
http://www.appliedclinicaltrialsonline.com/conquering-rbm-story-medtronics-risk-based-management-methodology
Conquering RBM: A Story on Medtronic's Risk Based Management Methodology
Oct 14, 2014, By Moe Alsumidaie, Applied Clinical Trials

I have been tracking and writing about risk-based monitoring (RBM) since the inception of FDA's initial draft guidance in 2011, and I am finding that biopharmaceutical sponsors are still struggling to implement RBM. After speaking to Jonathan Helfgott, Associate Director for Risk Science at the FDA, I asked him why the FDA kept their RBM guidance open to interpretation; Helfgott suggested that every company is different, and they should feel free to exercise whichever methodologies they feel are appropriate for their clinical trials. The industry has tried to establish some consistency by standardizing RBM methodologies. Amgen, for example, has leveraged its FSP model to execute RBM; Bristol-Myers Squibb (BMS) appears to be getting closer to establishing an infrastructure that supports RBM; and TransCelerate continues to release elaborate and complex RBM guidance documents. Personally, I had not seen actionable advances in RBM until I met Peggy Fay, Director of Global Clinical Monitoring, Medtronic Clinical Operations.

About a year ago I met Peggy at a New York ACRP symposium where several topics were presented, one of which was risk-based monitoring. Peggy was facing a monstrous challenge: she was overseeing a multi-center medical device clinical trial with 1,500 patients, and due to the complex protocol design, 2,970 data points needed to be collected, monitored and verified. Given Medtronic's monitoring methodologies at that time, Peggy estimated it would cost more than $21 million to monitor that data. “It was not financially viable,” said Peggy, as she indicated that the number of monitors needed to cover 45 sites and to review 218 CRF pages would blow through the study budget. “We had no choice but to find a more efficient way to better manage and monitor the data, to ensure data integrity and subject safety,” added Peggy.

Facing the Organization

Many biopharmaceutical enterprises are still struggling to execute successful RBM pilots because of organizational challenges. BMS, for example, had to implement a change management program that focused on education and communication of a new set of roles and responsibilities in order to enable RBM. Peggy took a different approach: why not mold RBM to existing roles and responsibilities? Why not build a system and process that is intuitive and easy to use, given current study team skillsets and operational roles? “When I ask people about RBM, they think Risk-Based Monitoring. We think about Risk-Based Management,” said Peggy. Peggy stressed that risk-based management is not strictly about monitoring, but rather quality risk-based management across a clinical trial. Medtronic's risk-based management initiatives were fueled by several internal and external drivers; internal drivers for change included questionable data integrity, site management/performance issues, and unproductive resolution of findings due to communication concerns between the sponsor and monitors. External drivers included minimal changes in protocol violation and nonconformance rates, and regulatory pressure for improved quality risk management.
Peggy rounded up a variety of clinical operations groups and started having meetings on changing internal processes to formulate and support the risk-based management infrastructure. “It can be difficult to put cross functional teams together, getting them to focus on a common goal at the same time you are trying to change a ‘mindset’. But, we were forced to collaborate, to be creative as collectively we were all faced with an unsustainable situation, we had no choice,” said Peggy.

Medtronic's 'Plan, Do, Check, Act' Process

Peggy believes that risk-based management is a systematic approach to identifying and mitigating risks. This process involves adaptive monitoring, defining critical versus noncritical risks, meeting study and regulatory needs, and developing tools to incorporate risk management strategies into clinical trial design. Medtronic created a risk management process to address these issues, namely Plan, Do, Check, Act, which is supported by FDA guidance. Jean Mulinde, Senior Advisor at CDER, FDA, indicated in a recent DIA workshop that the FDA considers protocol design the blueprint for quality and demonstrated how poorly designed protocols resulted in an increase in FDA audits and adverse findings. Medtronic Clinical Operations (MCO) has embraced this philosophy: in the Plan portion of the process, they have incorporated quality-by-design methodologies on critical study endpoints during protocol design. During this process, a cross functional team conducts a risk assessment, impact analysis and risk mitigation strategy, defining critical and noncritical risks associated with the study, patient population and/or study sites. By assigning weighted values to risk indicators, the team sets monitoring intensity and assigns centralized/on-site monitoring activities. Subsequently, the study team and functional service providers generate risk-based monitoring, data management, safety and SAP plans that each identify corresponding communication, escalation and action plans if Key Risk Indicator (KRI) signals are triggered. The MCO monitoring team uses a gradated KRI model, as illustrated in Figure 1.

The Do and Check portions of the process focus on study execution. In the Do portion, Medtronic targets measuring critical data (i.e., predefined data points that are relevant to primary/secondary endpoints and are considered high risk) from two perspectives: (1) at each site, and (2) aggregated across sites in the trial. Centralized monitors focus on data quality, site responsiveness, subject retention, and protocol compliance, whereas on-site monitors focus on targeted source document verification (critical data) and random statistical sampling. In the Check portion, Medtronic analyzes a variety of risks, such as intrinsic risks (i.e., safety profiles, site experience, study populations, AE tracking), design risks (i.e., protocol simplification, number of protocol amendments, and study endpoint approach), and study operational risks (i.e., site staff turnover, site workload, enrollment rates, AE reporting levels). Figure 2 (Site Risk Indicators) demonstrates Key Performance Indicators (KPIs) for specific KRIs. Figure 2 shows that this particular study site was experiencing a resource constraint; the study site was enrolling too many patients to handle the workload associated with data management discrepancy resolution and AE reporting.
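Medtronic's weighted, gradated KRI model is not spelled out in the article, but the general idea, grading each KRI against thresholds, weighting it, and letting the combined score drive monitoring intensity, can be sketched roughly as follows. All KRI names, weights, thresholds and intensity tiers below are invented for illustration; this is not Medtronic's actual model.

# Rough illustration of a weighted, gradated KRI model of the kind described
# above. Weights, thresholds and tier names are hypothetical.
KRI_CONFIG = {
    # kri_name: (weight, [threshold_elevated, threshold_critical])
    "query_aging_days":       (2.0, [7, 14]),
    "ae_reporting_lag_days":  (3.0, [2, 5]),
    "open_deviations":        (2.5, [3, 6]),
    "enrollment_vs_capacity": (1.5, [1.0, 1.3]),  # ratio of enrolled to planned capacity
}

def grade(value, thresholds):
    """0 = within tolerance, 1 = elevated, 2 = critical."""
    low, high = thresholds
    return 0 if value < low else (1 if value < high else 2)

def monitoring_intensity(site_kris):
    score = sum(w * grade(site_kris[k], t) for k, (w, t) in KRI_CONFIG.items())
    if score >= 10:
        return score, "for-cause on-site visit"
    if score >= 5:
        return score, "targeted SDV at next visit"
    return score, "centralized monitoring only"

# Example: a site enrolling beyond capacity, with slow AE reporting and aging queries
print(monitoring_intensity({
    "query_aging_days": 16, "ae_reporting_lag_days": 6,
    "open_deviations": 2, "enrollment_vs_capacity": 1.4,
}))  # -> (13.0, 'for-cause on-site visit')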
“We not only focused on under-enrolling sites, but we were concerned with over-enrolling sites; were they prepared to take on that many patients? How deep were the bags under the coordinator’s eyes? This tool allowed us to better oversee site activities and help sites improve performance through early intervention,” mentioned Peggy. In the Act portion of the process, the MCO study team and monitoring group are able to initiate a proactive approach via CAPAs to mitigate issues before they escalate, which minimizes costs and improves data integrity/transparency, especially during FDA audits. Further, the MCO monitors are able to continually evaluate risk and performance indicators, driving escalation plans in response to variances or issues of non-compliance, thus ensuring a self-improving cycle.

No Issues with FDA Audits

I have observed that many clinical operations personnel have a tendency to exhibit a level of hesitancy when it comes to validating RBM systems during FDA audits. Contrary to status quo beliefs, well-designed RBM systems and processes protect biopharmaceutical enterprises from adverse findings during FDA audits, and the Medtronic case study demonstrates that this fear is a fallacy. To confirm the value of the RBM approach, the MCO operations team awaited results from multiple site audits; no significant concerns or observations were noted during these audits. Peggy confirmed the risk-based management model allowed the study team to generate a well-designed protocol that focused only on critical data; together with the cross functional team, the collaborative process improved transparency, data quality and integrity, and supported a proactive response to event signals compared to traditional monitoring and operational models.

Be Flexible and Practical

In summary, while the industry continues to run around in circles trying to figure out the best-fit model for RBM, the key to implementing successful RBM strategies is executing a practical process that is flexible toward an organization's specific infrastructure. The Medtronic Clinical Operations (MCO) approach is a great example of how a medical device company was able to successfully (1) change clinical operations cultures to adopt risk-based management philosophies and processes in study and clinical operational design, (2) develop simple and practical analytical tools to not only quantify and measure risk, but also improve data transparency, (3) mitigate risks through CAPAs and proactive targeted monitoring, and (4) continually improve risk-based management systems. Despite the challenges associated with internal bureaucratic hindrances and restrictive cultures, it pays to take the leap in order to benefit from risk-based management systems, and Peggy has successfully adopted innovative methodologies in RBM that many biopharmaceutical companies eagerly vie for. I hope to see operational advancements in RBM methodologies at CBI's Risk-Based Monitoring in Clinical Studies Conference, November 6-7 in Philadelphia.
http://www.appliedclinicaltrialsonline.com/5-essential-cornerstones-rbm-technology
5 Essential Cornerstones of RbM Technology
Feb 23, 2015, By Courtney McBean, Applied Clinical Trials

If you've spent time researching risk-based monitoring (RBM) over the past year, you know dozens of academics, CROs, clinical trial practitioners, and statisticians have proposed varied approaches based on sponsor needs and trial designs. Unfortunately, despite an abundance of theoretical perspectives on RBM, clinical trial teams soon discover a significant gap still exists between perspectives and actual tools to enable real-world RBM implementation. The challenge for sponsors remains how to translate the theory and methodologies of risk-based monitoring to their unique setting. Often overlooked in early evaluations of RBM are the tools and technologies required for successful implementation. Without the right tools to track and prioritize the shift from traditional monitoring to RBM, successful implementation is doubtful, posing detrimental consequences not only for the monitors, but also for the overall trial. For successful risk-based monitoring program implementation, here are the five must-have capabilities of an effective RBM tool:

1. Ongoing and Continuous Performance Tracking
Trials never go exactly as planned. The best-intentioned RBM strategy implemented at the beginning of the trial is unlikely to remain optimal over the life of the trial. Some sites will not perform as well as expected, while others will surpass projections. Principal investigators and research staff churn, impacting site performance. A static, one-time RBM plan/strategy simply will not be able to account for the dynamics of a real-world clinical trial. An optimal risk-based tool will adjust by incorporating real-time site performance, site status changes, and sponsor profile changes.

2. Multiple Data Sources
Data in clinical trials are complex and dynamic. The tremendous amount of data collected in a trial offers valuable insight, if harnessed correctly. A solid RBM strategy includes feedback and input from multiple data sources. EDC, CTMS, IVR/IRT, and other study databases contain rich data that, when viewed correctly, provide valuable information about how a site is performing and can even predict problem areas. Performance metrics contained in these databases are critical to catching such problems early and preventing significant compliance issues.

3. Incorporation of Monitor Feedback
Without question, RBM tools must incorporate data from clinical trial databases such as EDC and CTMS. This data, however, paints only a partial picture of what's really going on in a trial. One of the most valuable, yet underutilized, resources for tracking and adapting an RBM plan is direct feedback from the monitors themselves. Who knows better what is really going on at a site than the monitors? The leading RBM tools include a mechanism to capture input from the monitors on the RBM strategy implementation at specific sites.

4. Flexibility and Customization
Sponsors have widely varying needs, strategies and risk tolerances, and no two clinical trials are the same. Each trial is designed from the ground up with specific endpoints and objectives. The tool you choose to implement your RBM strategy needs to accommodate the vast and varying needs of sponsors. During technology evaluation, keep in mind the importance of flexibility and customization unique to your organization's needs.
5. Actionable Reporting
Clinical trial teams are inundated with data from EDC, CTMS, in-house systems, e-mail, phone, and other sources. When implementing an RBM strategy, it's critical that the technology provide targeted, actionable information. RBM tools must provide sustained, meaningful reporting that enables trial teams to take action on the issues detected.

Theories rarely live up to their promises to deliver value or return on investment without diligent and careful thought as to implementation, and RBM is no different. RBM demands leading-edge tools to help integrate new monitoring theories, whether in new or existing clinical trials. RBM has the potential to deliver on the promise of more efficient use of the ever-shrinking pool of funds and resources dedicated to clinical trials while at the same time improving overall monitoring effectiveness. With the right RBM enabling technologies, clinical trial monitoring groups will, for the first time, be able to "do more with less".

http://www.bioclinica.com/blog/risk-based-monitoring-technology-5-essential-capabilities
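As a rough illustration of what "actionable reporting" drawn from multiple data sources can look like, consider the following sketch. The systems, metric names, thresholds and recommended actions are invented assumptions rather than features of any particular RBM product; the point is that raw metrics from several databases are reduced to a short, prioritized action list.

# Hypothetical sketch: metrics pulled from several study systems (names invented)
# are merged per site and turned into a short list of actions, not another raw dump.
edc_metrics  = {"site_201": {"open_queries": 42}, "site_202": {"open_queries": 3}}
ctms_metrics = {"site_201": {"days_since_visit": 71}, "site_202": {"days_since_visit": 20}}
irt_metrics  = {"site_201": {"screen_fail_rate": 0.55}, "site_202": {"screen_fail_rate": 0.18}}

RULES = [
    ("open_queries",     lambda v: v > 25,  "Schedule data-cleaning call with site"),
    ("days_since_visit", lambda v: v > 60,  "Prioritize on-site monitoring visit"),
    ("screen_fail_rate", lambda v: v > 0.4, "Review eligibility screening with PI"),
]

def actionable_report(*sources):
    merged = {}
    for source in sources:
        for site, metrics in source.items():
            merged.setdefault(site, {}).update(metrics)
    report = []
    for site, metrics in sorted(merged.items()):
        for metric, breached, action in RULES:
            if metric in metrics and breached(metrics[metric]):
                report.append((site, metric, metrics[metric], action))
    return report

for row in actionable_report(edc_metrics, ctms_metrics, irt_metrics):
    print(row)  # only site_201 generates action items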
http://www.appliedclinicaltrialsonline.com/rbm-guidance-document-ten-burning-questions-about-risk-based-study-management
RbM Guidance Document: Ten Burning Questions about Risk-Based Study Management
Jan 13, 2015, By Moe Alsumidaie, Beat Widler, PhD, Johanna Schenk, MD, PhD, Peter Schiemann, PhD, Artem Andrianov, PhD, María Proupín-Pérez, PhD, Applied Clinical Trials

The RbM Consortium is a transnational alliance of quality risk management industry experts, risk-based technology firms, data analysts, and biopharmaceutical business strategists. Members of this group include WSQMS, PPHplus, Cyntegrity, and Annex Clinical. The objective of this collaboration is to improve awareness of risk-based monitoring (RbM) in clinical trials, to advise biopharmaceutical enterprises, medical device companies, CROs, study sponsors, and study sites on best practices, and to communicate the best implementation strategies for sound quality risk management systems, technologies, and business methodologies in RbM. This guidance document will offer information about RbM strategies.

Quality Risk Management (QRM), Quality by Design (QbD), Risk-based Monitoring (RbM), data-driven monitoring and centralized monitoring have become interrelated terms of a hot topic, and there is hardly an organization that does not claim to already apply, or to be considering, such an approach for clinical development and pharmacovigilance. This raises the question: is it just hype or a shift in paradigm? We are convinced that this is more than just hype. However, the shift in paradigm has not yet happened either. We—the RbM Consortium, which includes recognized experts from this area—have observed many approaches to risk-based study management but have come to the conclusion that it is too early to celebrate the advent of a new era in clinical trials and development. We notice that the majority of approaches proposed lack fundamental elements of Quality by Design and Quality Risk Management strategies. We are observing five major flaws:

Lack of a uniform and broad understanding of underlying principles, methodology and approach: A shared understanding of terminology, scope of tasks, deliverables, as well as roles and responsibilities across the entire stakeholder community involved in the planning, conduct, analysis, reporting, and assessment of clinical trials—e.g., sponsors, contract research organizations (CROs), investigators (and, as a result, patients)—is important to the realization of a risk-based study management approach.

Decisions about risks are not based on soundly defined, objective decision criteria: Risk assessments and their resulting decisions today are mostly based on the evaluation and opinion of teams or individuals. What happens if those teams or individuals are replaced by others? Will their decisions be the same? This is highly doubtful, since individuals base their decisions on their experience and the situation they are currently in. As important as experience is for establishing a risk-based system, independent decision criteria or decision frameworks are needed to keep the human factor as small as possible once the system goes live.

Emphasis is not put on the foundation of risk-based monitoring: The foundation of a sound risk-based approach starts much earlier than the trial itself, with aspects such as protocol design, study start-up assessments, site qualification, and patient enrollment optimization, to name a few.
The common approach focuses only on a centralized review of incoming data or a strategy of reduced source document verification (SDV) and site monitoring, but protocol design aspects, which are the blueprint for the trial, are neglected. As a result, protocols of inadequate design (structure and content) are implemented with a study management approach that is likely to be unfit for the complexity of the protocol.

No integrated quality strategy is developed and implemented: Such a strategy must include study design, site selection (investigators who have the required therapeutic and Good Clinical Practice (GCP) experience as well as access to the patients to be enrolled), study management (centralized/data-driven monitoring) and general oversight aspects (quality management plan, third party oversight, audits, etc.).

Risk assessments are conducted in silos and not shared within and between sponsors: In order to save time and money, sharing assessments of “common” entities, such as CROs, central laboratories and others, can contribute to a leaner and more effective approach to managing clinical trials. In addition, networks of service providers should be created to serve as a one-stop shop for the sponsor of a clinical study, covering all elements from protocol design to reporting of trial results, including disclosure, submissions, training, qualifications, risk assessments, safety duties and many more.

Based on the above-mentioned deficiencies, we have put together a catalogue of ten key questions on how to navigate QbD, QRM and RbM and implement a sound risk-based study management strategy.

1. Is it true that only larger pharma or device companies can apply a risk-based approach?

Designing and implementing a risk-based approach does not depend on the size of a company. Smaller biotech, pharmaceutical and device companies as well as academia can do it, too. All will benefit as much as Big Pharma from a smart approach to monitoring. We all face the same challenges: lack of resources, time pressure and growing regulatory requirements. It is important that the designed RbM solutions meet the criteria defined above in our statement. There are many solutions to RbM, whether they are built using a spreadsheet or supported by a highly sophisticated IT system with automated data analysis. The more clinical studies there are, and the more complex they are (i.e., global, multiple arms, comparators, etc.), the greater the value of a solution that is technically sound and highly automated. However, this is not needed in all scenarios. We explain this further below.

2. Do I need sophisticated IT tools to implement a risk-based approach?

A risk-based approach starts with the identification of risks, followed by an analysis of their impact, likelihood and detectability, and of the possible actions to mitigate them. For such an assessment, no sophisticated IT tools are required. The tracking of risk profiles – at the sponsor, CRO or clinical trial center level – and of actions triggered by the risk assessments can be performed with off-the-shelf office tools (e.g., spreadsheets). If larger volumes of data need to be assessed, more sophisticated web-based tools may be more user-friendly. However, when clinical trial data are to be regularly reviewed for outliers, errors or other non-compliances, and large volumes of data need to be processed, the use of a validated and well-designed computerized system is required to minimize workload.
In addition, data need to be presented in reports that are easy to understand, classified into risk categories based on objective criteria, and that result in actionable mitigating actions.

3. Is it true that we need to change the way we write protocols and set up trials to successfully implement a risk-based approach?

There is consensus that the current approach to clinical development is not sustainable, for the following reasons:
About 50% of sites fail to reach their planned recruitment targets [1]. Milestones are routinely missed—more than 95% of clinical trials do not end on time and on budget as originally planned [1]. 90% of studies eventually meet their recruitment goals, but mostly at the expense of twice as much time as originally planned [1]. The costs attributable to these delays are gigantic.
Too many avoidable protocol amendments, with a first amendment often implemented even before the very first patient has been enrolled—most of them due to the rush towards “First-Patient-In.”
Compliance flaws causing delays and additional costs.
Bottom line: trials take much longer and cost much more than originally planned. A senior inspector recently summarized her reservations about the current approach as follows: “If sponsors and CROs do not learn to write better protocols, moving from the current trial monitoring approach to a risk-based or centralized monitoring approach, it is likely to end in a compliance disaster.” We are also absolutely convinced that the quality of protocols needs substantial improvement, for example, by implementing the following changes:
Focusing on one study rationale and one or two primary objectives.
Verifying with real-life data that inclusion criteria are fit for purpose and support the value proposition of the new product.
Verifying that the study set-up is aligned with patients’ and investigators’ priorities and capabilities through patient-centric models and simulations.
Verifying with real-life data that enrollment plans (recruitment goal and rate) are realistic and that the right sites are being involved.
We should never forget that the rationale and objectives of a protocol should target a clear value proposition, and that the key stakeholders of a protocol are not the editors of a scientific journal but the professionals performing reviews at health authorities and Ethics Committees/Institutional Review Boards (EC/IRB) prior to clinical trial approval, or in the context of Health Technology Assessments (HTA) for decisions on pricing and reimbursement.

4. What are KRIs and KPIs, and how can they support a risk-based approach?

KRIs are used to measure, in an objective manner, the risk imposed on a process or system. We distinguish leading and lagging KRIs. A leading KRI measures parameters that indicate a problem building up before it materializes, so there is still time to correct it. Lagging KRIs indicate deviations that have already happened; a single such event would not have a great impact but, when such events pile up, they impose a risk on the process or deliverables in question. Generally, leading KRIs are more effective but are also more difficult to measure than lagging ones. A proper suite of KRIs combining leading and lagging KRIs allows a study management team to plan and execute a risk-based strategy, and KRIs can be benchmarked via empirical data analyses. Delayed Monitoring Visits (compared to the timelines set in the Study Monitoring Plan) is an example of a leading KRI.
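To make this example concrete, here is a minimal sketch of how such a leading KRI might be computed and flagged against an interval taken from the Study Monitoring Plan. The 42-day interval, site identifiers and data layout are assumptions for illustration only.

# Minimal sketch of the leading KRI named above: flag monitoring visits that are
# overdue relative to the interval set in a (hypothetical) Study Monitoring Plan.
from datetime import date

MONITORING_INTERVAL_DAYS = 42  # assumed interval from the Study Monitoring Plan

def delayed_visit_kri(last_visit_dates, today=None):
    """Return the sites whose last monitoring visit is overdue (the KRI 'fires')."""
    today = today or date.today()
    fired = {}
    for site, last_visit in last_visit_dates.items():
        overdue_days = (today - last_visit).days - MONITORING_INTERVAL_DAYS
        if overdue_days > 0:
            fired[site] = overdue_days
    return fired

last_visits = {"site_301": date(2015, 1, 5), "site_302": date(2015, 3, 2)}
print(delayed_visit_kri(last_visits, today=date(2015, 3, 20)))
# -> {'site_301': 32}: 74 days since the last visit, 32 days beyond the interval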
A delay in on-site monitoring does not per se threaten patients' safety, integrity and rights, or data integrity, but it increases the risk of tardy detection of a GCP issue, and as such it is sensible from a study oversight perspective to address such a delay in a proactive manner. Protocol Deviations is an example of a lagging KRI: the GCP deviation has already occurred and, although the deviation cannot be corrected, its recurrence can be prevented. KRIs can be designed as “digital” triggers, i.e., the KRI is either on or off: when the threshold is reached, the KRI “fires” and generates a signal. This approach, rather than having “multi-level” KRI thresholds, avoids complexity in the calculation logic and also facilitates the “validation” of the threshold. Indeed, any threshold needs to be justified by means of objective criteria. Validation of a KRI threshold can in essence be achieved in three ways, with a fallback if none of them applies:
a) Thresholds taken from regulatory, SOP, industry, or similar standards; e.g., the KRI for processing a serious adverse event would be set at day 13 to avoid non-compliance with the 15-day reporting timeline.
b) Analyzing compliance data from past studies, i.e., analyzing the behavior of a KRI threshold in studies known for good compliance vs. studies with recognized GCP deficiencies.
c) Analyzing “real world” data, e.g., analyzing Electronic Health Records (EHR) to get clues about realistic expectations vs. protocol requirements. For instance, EHRs can be analyzed to determine how frequently and to what extent a patient's visit deviates from the scheduled visit date.
d) If none of the above approaches can be applied, an arbitrary threshold can be set, and its adequacy can be verified or fine-tuned later through on-site monitoring and auditing approaches.
When defining KRIs, study teams must take care to measure what has an impact on compliance and process effectiveness, not what can easily be measured. For instance, measuring compliance against timelines may induce team members to take shortcuts, which can be the root cause of new deficiencies. Reports aggregating KRI analysis data can be presented via tables, graphic mapping, odometers, and other analytical visualizations.

5. Will a risk-based approach make site selection/qualification and patient recruitment into my trials any easier, and will it help to get better data faster and cheaper?

Risk-based approaches apply not only to monitoring, but also to many facets of a clinical trial, such as site selection/qualification, protocol design and subject enrollment/retention. Sponsors can mitigate clinical trial risks in the following areas:

Predict Study Site Enrollment and Quality Potential Prior to Initiation: Initiating poorly performing study sites is common and costly. Up to one third of study sites never enroll a single patient, and it can cost up to 50,000 USD per site to seek, initiate, maintain and close poorly performing sites. With the availability of numerous data sources (e.g., patient populations, number of ongoing clinical trials at an investigative site, competitive trials, site resources, years of investigator experience, etc.), we advise sponsors/CROs to conduct empirical analyses and run predictive models against historical site performance to uncover factors that affect quality and enrollment outcomes. Quantifying site enrollment and quality potential prior to site initiation enables sponsors to minimize exposure to poorly performing sites, which saves costs and improves quality outcomes.
Protocol Design Optimization: According to FDA and EU inspectors, a well-designed protocol is the blueprint for quality clinical trials, and many studies experience timeline slippage and compliance issues because of poorly designed protocols, which result in numerous amendments, each costing more than 400,000 USD. In order to minimize timeline slippage, we advise sponsors/CROs to test protocols against real-time aggregated Electronic Medical Records data. This test enables sponsors/CROs to optimize protocol inclusion/exclusion criteria to better match real-world patient medical criteria, which can minimize enrollment slippage and the number of protocol amendments.

Retain and Engage Patients: Missing clinical trial data due to subject dropouts has a significant impact on clinical trial timelines, budgets and quality outcomes. Reasons for subject dropout vary widely: subject geo-demographics, protocol complexity, subject commitment, education, distance from the study site, employment status, physical activity and much more. Although there are technologies that focus on engaging patients, many are unproven in the realm of healthcare and clinical trials. We advise sponsors/CROs not only to test out feasible technologies, but also to engage patients through validated technologies, such as SMS (Short Messaging Service, also known as text messages), which is a proven and cost-effective way to improve medication adherence, improve attendance at medical appointments, and motivate patients to stay in a clinical trial.

6. Will a risk-based approach allow me to stop SDV (source document verification) and reduce the burden of on-site monitoring?

With RbM, source document verification (SDV) is not becoming obsolete, but the way it is applied needs to change, as outlined below. The FDA clearly states, “No single approach to monitoring is appropriate or necessary for every clinical trial. FDA recommends that each sponsor defines a monitoring plan that is tailored to the specific human subject protection and data integrity risks of the trial. Ordinarily, such a risk-based plan would include a mix of centralized and on-site monitoring practices.” [10] Therefore, through the systematic use of a centralized review of the study data, we can expect a reduction in the burden of on-site monitoring, improvements in clinical data quality and an increase in the clinical monitors' effectiveness. All sources of data should be mined and analyzed, such as data from the CRF, its metadata (e.g., audit trail data), the CTMS, the TMF, the safety database, etc. When such an approach to study management is applied, SDV will serve as a root cause analysis tool when needed, rather than as a data comparison tool. We should always keep in mind that the goal of any monitoring activity – regardless of the tools used – is to protect patients' safety, integrity and rights as well as to ensure data integrity. A proactive approach to study oversight increases efficiency by reducing the need for corrective actions.

7. How much money do I save by implementing a risk-based approach?

Depending on the complexity of a clinical study, the costs of monitoring (personnel, travel, expenses, etc.) can reach 25-30% of the clinical trial budget [11]. For quite some time this has been a target for cost-saving exercises at many pharmaceutical and biotech companies. However, none has actually achieved the goal of cost reduction in this budget line item.
With risk-based monitoring, however, many believe that the time has come, and that the tools are finally available, to reduce the monitoring budget. Although certain tasks can now be done centrally, there is still a need to visit the centers to ensure that patients' safety and the integrity of the data are maintained. Therefore, the current monitoring effort, which is distributed rather evenly, will be shifted towards sites in greater need, such as centers with many patients, centers not experienced in conducting GCP trials, or centers with potential problems indicated through KRIs and central data analysis/risk assessment. This means that we will see a shift in effort rather than a reduction of it. However, the positive result of this shift is that the sites in need now get the attention and support necessary to help them deliver good results. Savings in the long run will be realized when, through well-planned and well-implemented clinical studies, the timelines to database lock and ultimately to the launch of the product are shortened, which will ensure longer effective patent exclusivity. When we take a medicinal product with annual sales of 500 million USD and divide those annual sales by 365 days, we can see that real savings of roughly 1.4 million USD per day of earlier launch are waiting for those who set up their studies in the QbD spirit.

8. How will Health Authorities react if they discover a major or critical finding that was not detected or not addressed through your risk management approach?

We have discussed this aspect multiple times with Health Authorities' inspectors, and their feedback was clear and consistent with regulators' messages about RbM: “errors in clinical trials are acceptable to regulators as long as had we perfect data we would still make the same decision and come to the same conclusion.” In other words, inspectors will continue to detect and report non-compliances; however, if inspectors and reviewers come to the conclusion that those non-compliances did not compromise patients' safety, integrity and rights, or data integrity, a trial/system/process will still be considered compliant in the sense that the goal of GCP (patients' safety, rights, integrity, and data integrity) was reached. Obviously, even if such findings are not considered critical, the sponsor must still implement adequate CAPAs. Additionally, the risk management approach has the advantage of formalizing knowledge and incorporating critical findings into the future risk-monitoring process, which leads to continuous improvement.

9. How does a risk-based approach impact my organization, i.e., study site, Quality Assurance, Data Management, Clinical Operations, Drug Safety, Biometrics, etc.?

Our experience shows that change management activities around the development and rollout of an RbM/QRM strategy in study management and oversight are key to success. A “must do” is early, honest communication and proactive alignment of the entire organization with the plans, goals and – most importantly – the organizational and people implications. Unfortunately, still today, a silo approach to study oversight activities is the norm. Within a sponsor company, Clinical Operations develops monitoring plans, QA develops audit plans, and Clinical Science develops plans for benefit–risk assessments. Resources are allocated by function and not in an integrated manner.
For instance, the data management budget may allow for developing an EDC and data management solution but does not include resources for programming and maintaining KRIs. Similarly, the biostatisticians may be tasked with developing a Statistical Analysis Plan (SAP) but do not have resources to develop programs for the early detection of outliers or abnormalities in the data structure. When CROs are involved, the wealth of insight about process efficiencies and deficiencies is rarely shared across their clients to generate learnings and trigger process improvements. Therefore, when a company decides to adopt an RbM approach, a holistic, integrated risk-based study management plan needs to be developed. It follows that all stakeholders need to work together to identify the required tools, activities and minimum tolerable error levels. The latter is nothing other than the definition of the “design space” for a given study, justified with objective evidence. RbM requires a change in mindset and, therefore, change management introduced at the very beginning of an RbM project is key to its success.

10. What is the best implementation strategy for a risk-based approach?

Our experience has shown that a systematic, stepwise introduction of an RbM approach, rather than a big-bang launch, is the best implementation strategy for such a project. Indeed, a big-bang strategy is likely to overwhelm an organization, underestimate the technical and organizational challenges, and thus trigger resistance amongst concerned staff members. A four-step approach has proven multiple times to work best:

Step 1. Predictive Modeling and Empirical Analysis: In the predictive modeling and empirical analysis phase, study teams need to (a) determine clinical operations/business research objectives, and (b) conduct empirical research on data from numerous completed clinical trials to uncover factors that impact clinical trial risk and performance outcomes, for example, factors that predict clinical trial failure. Once teams identify risk factors, we advise them to conduct statistical analyses to determine analytical parameters, which can be used as benchmarks and to predict future outcomes. After developing a portfolio of statistically linked factors and expected analytical outcomes, study teams can use these parameters to proceed with the proof of concept phase (step 2) and examine the application of these predictive models in real time. (A simple illustration of such a predictive model is sketched after the references below.)

Step 2. Proof of Concept (PoC): In the PoC phase, the tools, methodologies and processes needed to implement a Quality by Design (QbD) and RbM approach are developed and applied in one to a maximum of three studies, with teams that are eager to move to a modern RbM methodology. Involving people enthusiastic about RbM is essential, as it allows hurdles and setbacks to be overcome without being forced to fight skepticism. This PoC generates important learnings on how to implement RbM in the sponsor organization: specifically, how to address gaps, conditions for success, and limitations to RbM imposed by external constraints. For instance, it can give an overview of limitations in current IT systems that cannot easily be fixed. The PoC also delivers “success stories” that can be shared within the sponsor organization as preparation for step 3. Typically, in the PoC there is little IT involvement and the tools developed and employed use simple technology, such as structured questionnaires built in spreadsheets.
Step 3. Pilot: In the pilot phase, the tools, approaches and processes developed during the PoC are applied to a larger number of trials or an entire development program, to fine-tune the methodology and to start automating some of the tools developed during the PoC. The pilot will also involve teams that may be more skeptical about an RbM approach, so as to also gain experience in managing the expectations of any type of team member. The pilot provides the roadmap for the full-fledged implementation of RbM in the sponsor's organization. In this phase, there is heavy involvement of IT, Data Management and Biometrics to develop the QbD assessment, the KRIs and the tools for verifying the plausibility of trial data. In parallel, the new processes are finalized based on the learnings from the pilot phase, and the supporting organization is designed and introduced.

Step 4. Full Deployment: In the full deployment phase, all trials managed by the sponsor will now use an RbM approach. Tools, processes and methodologies are continuously refined to integrate feedback from users. The new organization, or changes to the current one, are implemented to support the new procedures and to participate in the fine-tuning. The goal is to implement an integrated quality oversight strategy from study planning to regulatory submissions.

References:
1. Tufts Center for the Study of Drug Development Impact Report. “New Research from Tufts Characterizes Effectiveness and Variability of Patient Recruitment and Retention Practices.” 15 (1), January/February 2013.
2. S. Young. Non-Enrolling Sites Come at a Price. Medidata Solutions, blog.medsol.com, July 2012.
3. D. Handelsman. Optimizing Clinical Research Operations with Business Analytics. SAS Global Forum 2011, paper 204.
4. K. A. Getz et al. Measuring the Incidence, Causes, and Repercussions of Protocol Amendments. Drug Information Journal 2011, 45, 265-275.
5. J. S. K. B. Yadlapalli and I. G. Martin. Seeking Predictable Subject Characteristics That Influence Clinical Trial Discontinuation. Drug Information Journal 2012, 46, 313-319.
6. A. Siddiqi et al. Early Participant Attrition from Clinical Trials: Role of Trial Design and Logistics. Clin Trials 2008, 5, 328-335.
7. S. Khonsari et al. Effect of a Reminder System Using an Automated Short Messaging Service on Medication Adherence Following Acute Coronary Syndrome. Eur J Cardiovasc Nurs. Online 2 February 2014.
8. S. Prasad and R. Anand. Use of Mobile Telephone Short Message Service as a Reminder: the Effect on Patient Attendance. International Dental Journal 2012, 62, 21-26.
9. Medidata. Medidata and TransCelerate BioPharma Inc. Announce Findings of Joint Research Initiative on Clinical Trial Site Monitoring Methods. Online Press Release, 19 November 2014.
10. U.S. Department of Health and Human Services, Food and Drug Administration (FDA). Guidance for Industry: Oversight of Clinical Investigations – A Risk-Based Approach to Monitoring. August 2013.
11. E. L. Eisenstein. Reducing the Costs of Phase III Cardiovascular Clinical Trials. American Heart Journal 2005, 149, 482-488.
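As a simple, hypothetical illustration of the predictive modeling described in Step 1 (and of the site-enrollment prediction discussed in question 5), the sketch below fits a basic model on historical site outcomes and then scores candidate sites before initiation. The feature names, data and model choice are assumptions for illustration, not a recommended methodology.

# Hypothetical sketch of Step 1: fit a simple model on historical site outcomes,
# then score candidate sites before initiation. Features and data are invented.
from sklearn.linear_model import LogisticRegression

# Historical sites: [competing trials ongoing, years of PI experience,
#                    eligible patients seen per month]
X_hist = [
    [6, 2, 10],
    [1, 12, 60],
    [4, 5, 25],
    [0, 8, 80],
    [7, 1, 15],
    [2, 10, 50],
]
# Outcome: 1 = site failed to meet its enrollment target, 0 = met target
y_hist = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X_hist, y_hist)

candidates = {"site_A": [5, 3, 12], "site_B": [1, 9, 70]}
for name, features in candidates.items():
    risk = model.predict_proba([features])[0, 1]
    print(f"{name}: probability of enrollment failure ~ {risk:.2f}")

In practice, the benchmarks derived from such an analysis (the coefficients and the risk cut-offs used to accept or reject a candidate site) would be documented and carried into the proof-of-concept and pilot phases described in Steps 2 and 3.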
http://www.appliedclinicaltrialsonline.com/rbm-guidance-document-ten-burning-questions-about-risk-based-study-management
eSource: RbM Evolution is Coupled with SDV Elimination
Apr 03, 2015, By Lisa Henderson, Applied Clinical Trials

The emergence of risk-based monitoring (RbM) is creating a revolution in the way biopharmaceutical sponsors and CROs manage clinical trial quality. However, with the massive variability in the way study teams execute RbM, study sites are expressing concerns about how it is being carried out. Specifically, study sites are trying to differentiate between RbM and remote monitoring (RM), and they still experience financial burdens associated with such activities. While many clinical operations personnel are focusing on how RbM will result in a paradigm shift, few are concentrating on addressing the real pain point of RbM/monitoring: source document verification (SDV). This article will discuss the differences between traditional monitoring and remote monitoring, and how innovative technologies, such as eSource, fit into risk-based monitoring and will ultimately eliminate SDV.

The Evolution of Clinical Trial Monitoring

This section describes the evolution of clinical trial monitoring. Figure 1 illustrates a process flow for traditional monitoring. In this case, the study monitor arranges an appointment with the study site, conducts 100% SDV at the site (a visit that can last days), and generates a monitoring report. This resource-intensive process is associated with data cleaning challenges, as monitors sometimes uncover issues and errors with unverified source data long after a patient is gone, which places pressure on data cleaning efforts. To mitigate this outcome, monitors have to conduct more frequent site visits, which strains clinical trial budgets. RbM execution can be quite a challenge in this case, as the time delays associated with the data verification process create discrepancies in RbM reporting.

Figure 2 delineates an RM process flow. In this scenario, the remote monitor requests that the site provide source documentation, and the site, correspondingly, provides the necessary source documentation remotely. Unfortunately, the site is sometimes burdened with additional tasks, such as scanning and faxing documentation. Moreover, the lack of monitor presence can impact study site performance (i.e., enrollment, motivation, operational quality, etc.). Similar to the traditional model, the RM process also poses challenges for RbM execution due to data verification time delays.

Figure 3 outlines the eSource process and how it fits into RbM. eSource is a technology that enables direct data entry into an electronic tablet, and the technology is typically coupled with Electronic Data Capture (EDC). Subsequently, the electronic record becomes the source and is accessible via a web portal. Additionally, similar to EDC, the technology is capable of issuing automatic queries on the source if there are erroneous or questionable data points, which enhances data quality. As the research coordinator inputs source data into eSource, the centralized monitor can immediately access, visualize and verify source data, and can issue additional queries to study sites for any questionable data. This self-correcting process ultimately cleans data in real time as the study progresses. This technology offers many benefits compared to traditional monitoring models. Firstly, direct data input into eSource essentially eliminates SDV.
Secondly, the system's ability to offer real-time access to source data enables RbM and significantly enhances data quality repair efforts in real time, while the site still has access to clinical trial patients. Lastly, as a result, the process saves costs (for both the sponsor and the study site) and mitigates data cleaning timeline slippage.

Will eSource Eliminate SDV? The Polaris Perspective

Many question whether eSource will eliminate SDV, and whether the technology is compatible with RbM. Industry experts believe that eSource is an RbM-enabling technology. “eSource is not in itself risk-based, but it is a facilitating technology. When direct-to-EDC technology is used, there is no lag time between a subject visit and eCRF data entry. Since RbM depends on analyzing data, and doing so as early as possible, direct-to-EDC entry is highly desirable for RbM,” said Laurie Meehan from Polaris Compliance Consultants, Inc. Industry experts also see that SDV will eventually diminish. “A joint study conducted by Medidata and TransCelerate revealed that just over 1% of site-entered eCRF data are corrected due to SDV. If industry accepts this, we should see a dramatic reduction in SDV rates. Because direct-to-EDC technologies obviate the need for SDV, they render the whole debate moot, but widespread adoption of these technologies will take time. Interestingly, by the time direct-to-EDC technology is ubiquitous, SDV rates may be close to zero anyway,” added Meehan.

Ding Dong: SDV is Gone?

While the process of SDV is not technically gone, innovative processes eliminate the activity of verifying source documents against EDC data, a costly, inefficient, and arguably unnecessary task. Clinical Ink and Target Health, Inc. featured use cases on the benefits of leveraging eSource at CBI's Risk-Based Monitoring for Small to Midsized Organizations Conference in San Francisco. For updates on the evolution of eSource and RbM, don't miss CBI's upcoming events: eSource in Clinical Investigations, May 4-5, 2015, and Risk-Based Monitoring in Clinical Studies, November 5-6, 2015, in Philadelphia.

References: http://dij.sagepub.com/content/early/2014/10/09/2168479014554400.full.pdf+html?ijkey=H5UJ1zAOJkuq6keytype=refsiteid=spdij
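As a closing illustration of the automated query (edit check) capability described above for direct data entry, the following sketch flags erroneous or questionable values as a coordinator enters them, rather than waiting for an on-site SDV visit. The field names, plausibility ranges and query messages are invented for illustration; real eSource/EDC systems define these checks in their own configuration.

# Hypothetical sketch of automated edit checks on directly entered source data.
EDIT_CHECKS = [
    ("systolic_bp", lambda v: v is not None and 70 <= v <= 220,
     "Systolic BP outside plausible range (70-220 mmHg); please confirm."),
    ("weight_kg",   lambda v: v is not None and 30 <= v <= 250,
     "Weight outside plausible range (30-250 kg); please confirm."),
    ("visit_date",  lambda v: v is not None,
     "Visit date is missing."),
]

def auto_queries(record):
    """Return the queries raised on a directly entered source record."""
    queries = []
    for field, is_valid, message in EDIT_CHECKS:
        value = record.get(field)
        if not is_valid(value):
            queries.append({"field": field, "value": value, "query": message})
    return queries

entry = {"systolic_bp": 320, "weight_kg": 82, "visit_date": None}
for q in auto_queries(entry):
    print(q)  # two queries fire: implausible BP and missing visit date

Because the record is queried at the moment of entry, the centralized monitor sees clean, queried data immediately, which is the real-time quality loop the article describes.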