Tag: metrics

Related posts

Title | Board | Author | Posted | Replies/Views | Last post
Exploring network properties using data network metrics, graphs, and topology | Foreign Literature | 可人4 | 2022-3-8 | 0/263 | 三江鸿 2022-4-27 09:35:35
A latent causal socioeconomic health index | Foreign Literature | kedemingshi | 2022-4-13 | 0/291 | kedemingshi 2022-4-13 15:00:00
Canonical metrics of exchange maps | Foreign Literature | 可人4 | 2022-4-7 | 0/295 | 可人4 2022-4-7 08:55:00
100+ metrics for software startups: a multi-vocal literature review | Foreign Literature | kedemingshi | 2022-4-6 | 0/329 | kedemingshi 2022-4-6 19:00:00
High-dimensional metrics in R | Foreign Literature | 能者818 | 2022-3-2 | 0/499 | 能者818 2022-3-2 14:15:00
Full courseware for the Alibaba/Sina course "Big Data Real-Time Computing Systems in Practice" (attachment) | Data Analysis & Data Mining | jzm0109 | 2018-11-2 | 8/2266 | jzm0109 2018-11-26 15:28:48
Marketing Metrics (2nd Edition) (attachment) | Marketing | coopetition | 2009-12-7 | 13/5627 | wmwong 2017-2-22 06:39:26
Research on applying the Credit Metrics model to credit risk management of SME financing loans (attachment) | Papers | 夏朵1985 | 2009-8-22 | 12/4874 | 小学徒1号 2015-10-25 21:15:34
Data-Driven Marketing: The 15 Metrics Everyone in Marketing Should Know (attachment) | Marketing | mumu7656 | 2010-5-3 | 9/5643 | shuke0327 2015-1-15 11:03:52
Risk Metrics (JP Morgan) (attachment) | Finance (Theory) | lleyton | 2007-11-18 | 3/7747 | albertsun 2011-12-31 14:37:05
[Download] JP Morgan Risk Metrics technical document (attachment) | Finance (Theory) | nk_patrick | 2006-4-24 | 3/6223 | crystalding 2011-10-20 00:57:16
Collected Risk Metrics materials (7 items) (attachment) | Finance (Theory) | alex.wh | 2010-5-21 | 5/2657 | ryrongyu 2010-7-15 17:15:31
Thanks: Performance measures and metrics in logistics and supply chain management (attachment) | Literature Requests | logyquan | 2010-3-31 | 1/2320 | jianhongnet 2010-3-31 23:47:19
Role of risk metrics [foundation] | CFA/CVA/FRM Certification Forum | 静湖宜陶 | 2010-3-5 | 0/1520 | 静湖宜陶 2010-3-5 11:02:09
Question: How to use the Monte Carlo method to compute the VaR of the Credit Metrics model? | Finance (Theory) | kjship | 2007-3-26 | 3/3412 | yaorenjun0772 2010-2-18 22:29:49
Credit Metrics <JP.Morgan> (attachment) | Finance (Theory) | marcockc | 2010-2-2 | 0/1888 | marcockc 2010-2-2 21:03:12
Practitional Guide for Risk Metrics (attachment) | Economics & Finance Mathematics | dreamuc | 2009-8-10 | 0/2363 | dreamuc 2009-8-10 01:02:48
[Original] Selected risk measurement, the three Metrics: market risk, credit risk, corporate risk (attachment) | Finance (Theory) | thinkby | 2008-3-7 | 0/2704 | thinkby 2008-3-7 15:48:00

Related blog posts

Shared: The Next Generation of Clinical Trial Performance Measurement
slimdell 2015-4-6 18:28
http://www.appliedclinicaltrialsonline.com/next-generation-clinical-trial-performance-measurement

The Next Generation of Clinical Trial Performance Measurement
Feb 27, 2014 | By Michael J. Howley, PA-C, PhD | Applied Clinical Trials

Organizations involved in clinical trials have embraced the idea that they should collect performance measures to assess how well they conduct clinical trials. Both sponsors and CROs have invested considerable time and money to develop performance measures, but there is still uncertainty (and some frustration) about what should be assessed and how it should be measured.

Performance measurement is an important issue for clinical trials. Sponsors and vendors are under cost pressure to deliver trials more efficiently. Regulators are increasingly demanding that sponsors demonstrate appropriate governance and oversight of their clinical trials. Clinical trial managers operate in a complex environment where an oversight today can come back as a major problem months or years from now. Clinical trial providers want to know how they are performing relative to competitors and how they can invest to improve.

In this article, I review the current state of clinical trial performance measurement and suggest the next generation of clinical trial performance assessment. My perspective is that of an academic trained in the measurement of performance, service quality, and customer satisfaction who has been conducting clinical trial performance research for the past four years.

A Brief History of Clinical Trials Performance Measures

The various ways of measuring clinical trial performance can be organized into three distinct approaches: in-house assessments, customer evaluation surveys, and operational and financial metrics.

In-house assessments were developed by CRO executives to track customer satisfaction. Every firm used a different survey, and the questions tended to change to address the important issues in each trial. These surveys were useful because they were customized, but they frequently could not be used to track performance over time because the metrics kept changing.

Customer evaluation surveys are developed and administered by research organizations like Avoca or ISR.1 They typically measure the customer's psychological state (e.g., satisfaction) that results from the clinical trial performance. These internal states are important because they drive repurchase decisions and word-of-mouth. I find that executives sometimes (and mistakenly) believe these surveys are "qualitative." But these ratings of internal states are quantified by the respondent and assigned a number, so they are actually quantitative. A more accurate critique of customer evaluation surveys is that they have low reliability; that is, different subjects often give different responses to an identical service experience. Offsetting this reliability problem, customer evaluation surveys have high validity; that is, they are highly accurate in capturing the customer's response to a service.

Much of the recent activity in clinical trial performance measurement has emphasized operational metrics, often drawn from clinical trial management system (CTMS) data. This approach captures process indicators such as the number of protocol amendments or deviations (to assess quality), the elapsed time from site activation to first patient entered (to capture cost and efficiency), or the number of days from protocol approval to site activation (to evaluate timeliness).
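As a concrete illustration of the operational approach, here is a minimal Python sketch that derives the process indicators just listed from CTMS-style milestone records; the field names and dates are illustrative placeholders, not taken from any particular system.

```python
from datetime import date

# Minimal sketch: computing the process indicators described above from
# CTMS-style milestone records. Field names and values are illustrative.
site_record = {
    "protocol_approval": date(2013, 3, 1),
    "site_activation": date(2013, 5, 15),
    "first_patient_in": date(2013, 7, 2),
    "protocol_deviations": 4,
}

def days_between(start: date, end: date) -> int:
    """Elapsed calendar days between two milestones."""
    return (end - start).days

metrics = {
    # Timeliness: protocol approval to site activation
    "approval_to_activation_days": days_between(
        site_record["protocol_approval"], site_record["site_activation"]),
    # Cost/efficiency: site activation to first patient entered
    "activation_to_first_patient_days": days_between(
        site_record["site_activation"], site_record["first_patient_in"]),
    # Quality proxy: count of protocol deviations
    "protocol_deviations": site_record["protocol_deviations"],
}
print(metrics)
```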
The advantage of operational metrics is that they are highly reliable: if you ask how many days it took to recruit patients in a trial, you will get a fairly consistent response across multiple subjects. The disadvantage of operational metrics is that they have weak validity: counting the number of protocol deviations does not fully and accurately capture the construct of clinical trial quality. Figure 1 illustrates the trade-offs between customer evaluations and metrics.

Critique of Performance Measurement Practices

Looking across all of the approaches to measuring performance in clinical trials, I believe the industry is far ahead of other sectors in healthcare. Improvement is needed, however, to achieve scientific performance measurement that fundamentally improves clinical research. I see four limitations in current clinical trial performance measurement.

First, the industry does a great job of identifying potential items to measure, but there is little validation of the items. Identifying items is a necessary, but not sufficient, step. There must be statistical validation to completely understand what you are measuring. Validation can answer questions like: Are the items significantly related to the construct you are trying to measure? Are the items repetitive, or does each contribute additional measurement of the construct? Does an item really belong with this construct, or does it belong with another? Without statistical validation, in other words, it is very difficult to know exactly what your items are measuring.

Second, I see an artificial tension between validity and reliability (i.e., customer evaluations vs. operational metrics). There is no need to choose one over the other. Statistical models today can assess the validity and reliability of a measurement model, and managers can then select models that balance the two. For organizations that emphasize financial or operational metrics, these items can easily be included in performance measurement models.

Third, it is common practice in the industry to benchmark on averages. In this approach, organizations assess their performance by comparing it to a metric's average score. This can lead to the metrics trap, which occurs when the industry as a whole is under- or over-performing (Figure 2). Imagine that we are managing a study team that recruits subjects in 70 days. We get data showing an average recruitment time of 85 days. Since we performed better than the average, we claim that this was high-performing recruiting. Imagine, however, that high-quality recruiting only happens at 55 days. Now the 70 days to recruit subjects is a low-quality recruiting performance, even though it is better than average. In this age of big data and predictive models, it is simply below the standard of practice to make decisions based on simple descriptive statistics. Predictive models that link to performance quality are needed in order to avoid the metrics trap.
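A toy sketch of the metrics trap, using the 70/85/55-day figures from the example above; the quality threshold stands in for the standard that a validated predictive model would supply.

```python
# Toy illustration of the metrics trap described above. The 70/85/55-day
# figures come from the article's example; the "quality threshold" stands in
# for the standard a validated predictive model would supply.
our_recruitment_days = 70
industry_average_days = 85        # benchmark-on-averages view
high_quality_threshold_days = 55  # model-derived standard (illustrative)

better_than_average = our_recruitment_days < industry_average_days
meets_quality_standard = our_recruitment_days <= high_quality_threshold_days

print(f"Better than the industry average? {better_than_average}")    # True
print(f"Meets the quality standard?       {meets_quality_standard}") # False
# Benchmarking on the average alone would call 70 days "high performing",
# even though it falls short of what high-quality recruiting actually looks like.
```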
The fourth limitation is the problem of measurement-in-isolation. I see a tendency to assess performance on only a single variable and not account for the other drivers of trial performance. Clinical trial activities do not occur in isolation; a whole string of activities is necessary for a clinical trial to happen. The number of days it takes to recruit subjects, for example, depends on other factors like the project manager, investigator meetings, or the recruiting materials. If these factors are not integrated into the measurement, the metrics will be biased. Since clinical trials comprise multiple activities, our models must also include all of the performance activities.

Again, I emphasize that I believe the clinical trials sector does an above-average job of measuring performance. But we now have the tools available for the clinical trials industry to move to the next generation of clinical trial performance measurement.

The Next Generation of Clinical Trial Performance Measurement

You do not need to take a psychometrics course or register for advanced statistics to develop your own performance metrics. The key ingredient in any metrics project is the ability to think clearly about what you want to measure and the discipline to maintain that focus. In this section, I explain the current scientific process in enough detail that it could be used to manage a metrics team effectively. I will follow the steps illustrated in Figure 3 and use examples from our research on developing clinical trial performance measures.

Step 1: Define what you are measuring. Before you start collecting data, it is important to clearly identify what (exactly) you are measuring. This stage can be surprisingly difficult because disagreement commonly erupts over what you should measure. You will need to exhibit leadership at this point to achieve buy-in on a manageable scope for the project. In our case, we initially focused on full clinical trial performance but quickly found it was an enormous construct with many complex component parts. In discussions with experienced clinical trial managers, we found that they thought about clinical trials in stages (e.g., study startup, conduct, and closeout), so we were able to focus on each stage individually.

Step 2: Gather items to measure your constructs. Once you have clearly defined what you want to measure, you need to identify the key drivers and collect items (e.g., questions to assess performance, or financial/operational metrics) to assess the presence of those drivers. Don't rely on your own experience: get out and interview people with recognized expertise about both the key drivers and how they should be measured. This can get complex, because there are commonly multiple sequential steps involved in performing a complex service like a clinical trial. It often helps to create a process map (i.e., a box-and-arrow path model) to organize your thinking. Remember that you also need to collect items to assess the performance quality and satisfaction that result from the service, for use in the validation model. How many interviews do you need before you know you have a complete set of potential items? The general rule of thumb is that you have done enough interviews when you are no longer learning anything new from your subjects. For us, it was about three dozen interviews.

Step 3: Statistical validation. Once you have collected the measurement items, you are ready for validation. You will need to organize the items and create a measurement instrument using scaled items or fill-in-the-blank response boxes. This instrument can then be loaded onto an online form for data collection. The statistical analysis usually involves path modeling, e.g., structural equation modeling or PLS path analysis. Again, you don't need to do the statistics yourself, but you will need to talk to statisticians, and it helps to have a basic overview of their procedures. Here is the basic idea: we are trying to measure abstract concepts (e.g., investigator recruitment, project manager performance, or data management). We use items (e.g., a survey question or an operational/financial metric) to assess the degree to which the concept is present. The large circle on the left side of Figure 4, for example, represents a concept. On the right side of the figure, we see that each additional item contributes a bit more to our measurement of the concept. The statistical output can tell us how much measurement each item contributes and whether it is significant. The statistics also give us the reliability of the items and provide evidence of validity. Based on these assessments, we can then make scientific decisions about the efficiency, validity, and reliability of the instrument.
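As one small, hedged illustration of the kind of evidence validation produces, the sketch below computes an internal-consistency reliability (Cronbach's alpha) for a block of items using simulated responses. The article's actual validation relies on path modeling (SEM or PLS); this simpler statistic is shown only to make the idea of item reliability concrete.

```python
import numpy as np

# Minimal sketch of one piece of the validation step: internal-consistency
# reliability (Cronbach's alpha) for a block of items assumed to measure the
# same driver. The article's analysis uses SEM/PLS path modeling; this simpler
# statistic only illustrates the kind of evidence validation produces.
rng = np.random.default_rng(0)

# Simulated responses: 60 respondents x 4 scaled items driven by one latent score.
latent = rng.normal(size=(60, 1))
items = latent + 0.5 * rng.normal(size=(60, 4))  # shared signal + item noise

def cronbach_alpha(responses: np.ndarray) -> float:
    """responses: respondents x items matrix of scaled answers."""
    k = responses.shape[1]
    item_variances = responses.var(axis=0, ddof=1)
    total_variance = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print(f"alpha = {cronbach_alpha(items):.2f}")  # values near 1 indicate a reliable item block
```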
Let me illustrate validation with an example from our research on study closeout. In our initial interviews, the executives described two independent drivers related to data in closeout performance: processing the data and query resolution. When we validated the data, however, we found that all the data management issues combined into a single driver that included both data processing and query resolution. The new "data management" driver had high reliability (.97) and demonstrated improved validity. Recognizing these validation issues leads to unbiased metrics, greater efficiency in measurement, and opportunities to improve performance.

Step 4: Link to performance quality. The final step in the next generation of performance measurement is to include all of the validated measures in a predictive model. This is the step that helps you avoid the metrics trap. In predictive modeling, we focus on the relationships between the key drivers and the quality of the performance; in statistical terms, we assess the structural relationships. For managers, this step is typically the most intuitive and easiest to grasp.

Let me illustrate the measurement-in-isolation problem with an example from our research. In study closeout, we included items on the timeliness of closeout activities like drug reconciliation, drug returns, and the closeout visits. On its own, this timeliness variable (b = .54, t = 6.49, p < .001) was positively and significantly associated with closeout performance. If we include the additional closeout drivers (closeout visit activities, query resolution, data management, and collection of documents), however, the timeliness coefficient switches to negative and significant.2 A CRO that looked only at a single variable could be investing solely in closeout timeliness and degrading its performance! You can see why clinical trial managers should be involved in this process to help guide the development of the measurement model. In developing models, it is important to explore multiple models in order to understand how key drivers interact to produce improved performance quality. The statisticians may be able to understand the output, but it is important that the measures make sense to the subject matter experts in the field.
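The sign flip described above can be reproduced on simulated data: a driver that looks positive on its own reverses once a correlated driver enters the model, which is the partialled-coefficient effect noted in footnote 2. The numbers below are simulated and are not the article's estimates.

```python
import numpy as np

# Toy simulation of the sign flip described above: a driver that looks
# positive in isolation can reverse sign once correlated drivers enter the
# model (the partialled coefficients of footnote 2). Data are simulated and
# the coefficients are not the article's estimates.
rng = np.random.default_rng(1)
n = 500

data_mgmt = rng.normal(size=n)                     # a correlated closeout driver
timeliness = 0.8 * data_mgmt + rng.normal(size=n)  # timeliness rides along with it
performance = 1.0 * data_mgmt - 0.3 * timeliness + 0.5 * rng.normal(size=n)

def ols_coefs(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Least-squares coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

b_isolated = ols_coefs(timeliness.reshape(-1, 1), performance)[0]
b_adjusted = ols_coefs(np.column_stack([timeliness, data_mgmt]), performance)[0]

print(f"timeliness alone:        b = {b_isolated:+.2f}")  # positive
print(f"timeliness with drivers: b = {b_adjusted:+.2f}")  # negative
```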
Discussion

The clinical trials industry has recognized the importance of measuring performance and has enthusiastically embraced various measurement approaches. Present-day measurement practices, however, are not grounded in scientific performance measurement techniques. In this article, I have offered a critique and suggested steps toward the next generation of performance measurement.

These measurement approaches will not create additional work for you. In fact, this approach should be more efficient both for those who manage the performance measurement process and for the respondents who must provide the data. Without statistical validation, managers must resolve conflicts about what to measure through long discussions and compromise. They inevitably end up simply adding items, so the instrument becomes inflated and burdensome. With statistical validation, items can be tested to see whether they contribute to the predictive model. If so, they can be included in further data collection; if they don't load, there is no point in collecting the additional data. This also leads to a much more efficient data collection process for the respondents who must provide the data. In our research, it typically takes 3 to 5 minutes to complete an instrument.

This next generation of performance measurement also provides confidence. Clinical trial managers must track innumerable details and worry about what they are missing. Unless they are recognized and managed, these blind spots have the potential to corrupt a clinical trial. If the trial is outsourced to a CRO, these performance measures can provide a scientific basis for contract oversight. Regulators are increasingly emphasizing the importance of governance in clinical trials, and the scientific measurement described in this article provides a sound basis on which to demonstrate proper governance of a clinical trial. Finally, CROs are interested in this approach to performance measurement because it provides a scientific basis to guide investments in performance improvement. These predictive models can identify the key drivers that have the greatest impact on the quality of clinical trial performance, and CROs can then maximize their ROI on quality.

Footnotes

1. For any of the companies I use as exemplars, it is important to note that they do a variety of research, but emphasize these approaches in clinical trials performance measurement.
2. The technical explanation is that these are partialled regression coefficients. That is, the coefficients are estimated with redundant information removed.
Category: CT | 0 comments
Shared: Site-Driven Metrics
slimdell 2015-4-6 17:33
http://www.appliedclinicaltrialsonline.com/site-driven-metrics

Site-Driven Metrics
The clinical trials industry can benefit from site-driven metrics for process improvement and site scoring.
Mar 01, 2012 | By Laura Youngquist | Applied Clinical Trials, Volume 21, Issue 3

The last decade has been marked by an increased desire among sponsors and CROs to measure and track performance related to study conduct at all stages of the clinical trial lifecycle. Motivated to implement continuous improvement strategies for better efficiency and quality, sponsors and CROs have collected a growing amount of data and metrics related to operational performance. Metrics can be powerful tools for implementing data-driven process improvements. One organization in particular, the Metrics Champion Consortium (MCC), is actively encouraging their use within the clinical trial industry. The MCC is a nonprofit dedicated to the development and support of performance metrics and quality tools within the clinical trial industry. The MCC encourages sponsors and CROs to reduce cost and improve the success of their trials by investing their resources in doing things right at the beginning of a study—approving well-designed protocols and selecting sites that can enroll qualified patients and execute the protocols, rather than having to fix the protocols and add sites to the studies partway through the trial.1

While this focus on data-driven process improvement is good, there is still much left to be desired from the point of view of the sponsors and CROs. For one thing, sponsors and CROs are often frustrated by the time lag inherent in the process of collecting operational data and by the lack of insight into study conduct between monitoring visits. According to data collected by the Tufts Center for the Study of Drug Development, about 90% of all clinical trials are delayed due to over-ambitious timelines and difficulty enrolling patients. Further, for the average trial, about 20% of all principal investigators (PIs) fail to enroll a single patient and 30% under-enroll.2 The cost of performance like this to the entire industry in terms of time, resources, and productivity is unsustainable. The opportunities to implement performance metrics in order to support continuous improvement strategies at the site level remain numerous.

Top-down approach

Currently, many CROs and sponsors collect metrics around site performance for their own use, often requiring the sites to provide much of the information being collected, using a variety of tools and formats. However, many of these efforts are of little value because the metrics don't truly capture the operational realities of conducting clinical trials. The reason for this disconnect is that the metrics being collected, and the methods by which they are collected, are determined from a top-down perspective of what the sponsors and CROs want, and they have little benefit for the sites. Fortunately, the solution requires a relatively small shift in perspective. Shifting away from requiring sites to enter seemingly arbitrary operational data into myriad sponsor and CRO systems, and instead motivating sites to collect their own metrics for the purpose of measuring and improving their internal processes, will not only improve study conduct at the sites but also result in more complete and timely data for the sponsors and CROs.
Support metrics collection

The first step in collecting good operational metrics across all sites is for each site to implement a centralized database of its own. Sites that have implemented a clinical trial management system (CTMS) that helps organize daily tasks and minimize manual processes for site staff achieve good compliance with data entry, because use of the CTMS has immediate benefits for individual staff members. Once in place, a site-centric CTMS makes the collection of measurable performance data a natural part of the daily workflow for site staff because it happens automatically. For example, measuring the time it takes a site to open a trial is easy when the site is using a system like the Allegro CTMS@Site system to administer its portfolio of clinical trials. With this in place, the site just needs to count the days between the date that the protocol is sent for IRB review and the date that the first patient is enrolled—dates that are already recorded within the CTMS.

Sites, sponsors, and CROs benefit

Sites benefit from collecting operational metrics by gaining self-awareness of areas for process improvement and better control of clinical trial financials. An often overlooked side benefit of good site metrics is the ability to focus on areas of strength. For example, in today's environment, where only 30% of PIs deliver on their recruitment targets, sites can ensure they stand out by taking on studies that they know match their patient accrual capabilities. Combine a solid understanding of its current operational performance with a centralized database of its patients and volunteers, and a site's ability to select trials on which it will be successful is greatly improved. Sponsors and CROs benefit when these sites proactively promote their areas of strength when vying for study opportunities. A site-centric method for collecting performance metrics, combined with a CTMS, can also greatly reduce the time lag in the data collection process. Data that are collected into a central repository as part of every completed task are much easier to retrieve for reporting purposes between monitoring visits.

Conclusion

By nurturing an appreciation for data-driven process and quality improvement at the site level, players at all levels of the clinical trial industry benefit. At the sites, where the work is being conducted, staff gain self-awareness of strengths and weaknesses, tools for promoting themselves, and the ability to make informed choices about which studies to pursue. CROs and sponsors gain partners in the sites who are motivated to collect the desired data for measuring and improving clinical trial operations.

Laura Youngquist is Director, Allegro Product Management, Forte Research Systems, Inc., www.forteresearch.com/.

References

1. L. B. Sullivan, "Defining 'Quality That Matters' in Clinical Trial Study Startup Activities," The Monitor, 22-26, December 2011.
2. K. A. Getz, "State of the Clinical Trials Industry," keynote presentation at the 2011 Spring Onsemble Conference, Las Vegas, NV, March 23-25, 2011.
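As a rough sketch of the "take on studies that match your accrual capability" idea above: compare a proposed study's enrollment target against the site's historical enrollment rate and its pool of eligible patients. All names and figures here are illustrative assumptions, not from the article or any CTMS product.

```python
from dataclasses import dataclass

# Minimal sketch of the "take on studies that match your accrual capability"
# idea: compare a proposed study's target against the site's historical
# enrollment rate and eligible-patient pool. Names and figures are illustrative.
@dataclass
class SiteProfile:
    historical_patients_per_month: float  # from the site's own operational metrics
    eligible_patients_on_file: int        # from the site's patient database

@dataclass
class StudyOpportunity:
    enrollment_target: int
    enrollment_window_months: int

def can_deliver(site: SiteProfile, study: StudyOpportunity) -> bool:
    projected = site.historical_patients_per_month * study.enrollment_window_months
    has_pool = site.eligible_patients_on_file >= study.enrollment_target
    return projected >= study.enrollment_target and has_pool

site = SiteProfile(historical_patients_per_month=2.5, eligible_patients_on_file=40)
print(can_deliver(site, StudyOpportunity(enrollment_target=20, enrollment_window_months=12)))  # True
print(can_deliver(site, StudyOpportunity(enrollment_target=60, enrollment_window_months=12)))  # False
```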
Category: CT | 0 comments
Shared: Translating Clinical Data
slimdell 2015-4-6 17:30
http://www.appliedclinicaltrialsonline.com/translating-clinical-data

Translating Clinical Data
How to define information requirements to enhance the understanding of a clinical trial.
May 01, 2011 | By Nikki Dowlman | Applied Clinical Trials

With the growing complexity of study designs, the multiplying number of different trial technologies employed, and the ever-increasing globalization of trials, today's clinical trials create vast volumes of data. Given the fundamental importance of study management, data management, and logistics information to every trial, this article explores how to approach and define information requirements, drawing on theory and practice that dates back as far as the 17th century, in a manner that enhances understanding of trial performance, helps identify areas requiring action, and accurately measures the outcome of those actions.

Navigating a sea of data

Paul Allen, the cofounder of Microsoft, once said: "For me, goals and daily metrics are the key to keeping me focused. If I don't have access to the right stats, every day, it is so easy for me to move on mentally to the next thing. But if I have quick access to key metrics every day, my creativity stays within certain bounds—my ideas all center on how to achieve our goals."

It is more critical than ever to quickly assimilate mammoth amounts of data to identify the key information required to facilitate effective conduct of a study, identify action areas, and share best practices. As a consequence, many companies are creating data warehouses to minimize the hunt for data in disparate places. The solution, however, is not as simple as consolidating all data into a single location and expecting clarity to emerge. Simply presenting information in data reports at a low, granular level will not, by itself, provide meaningful clarity. Instead, we need to create visual stories that help isolate patterns or trends and provide links between seemingly disconnected information. The concept of visually presenting data is not a new one: Descartes invented the two-dimensional graph in the 17th century, and in the late 18th and early 19th centuries many of the charts in use today were either invented or improved upon by William Playfair. By presenting data visually, one is able to deliver metrics whose contextual insight and focus allow the user to decide which components to drill into for more concentrated, granular content. This is essential if a user is not to be washed away in a tidal wave of information. This article discusses the thought process that goes into identifying a good metric, and touches on some of the rules of visual display that ensure the story in the data is clearly and succinctly delivered.

What is a good metric?

There are a number of guiding principles that should be followed when identifying metrics. A metric should be simple. It should belong to a small set that is easily processed and understood; the manner in which the data is communicated is a key component. It should be measured by someone who can act on the results. And it should be used—why measure if no action will be taken as a result of having the data? A number of metrics are often taken for granted as being essential to understanding the performance of a clinical study.
Some examples include site initiation, time from site activation to the first shipment received at the site, first patient first visit (FPFV), recruitment rate, and the time taken to resolve queries. While these are important for helping management see that studies are getting up and running as expected, we need to question whether these metrics truly help those actually responsible for ensuring the successful conduct of the trial. Many of these commonly sought metrics indicate simple statuses regarding what is happening in the study now, so-called "lagging indicators," with limited or no reference to how the study got there or where it might go in the future. What is lacking is the contextual information that gives a metric its critical value. For example, FPFV identifies that a site has commenced recruitment, but it does not reveal whether the site is actively recruiting. More value would be obtained from the recruitment rate, or from the window between recruitment events, either of which yields further meaningful information to allow effective management of the site.

Defining industry standards is still in its infancy, and there is an argument that off-the-peg metrics may not be optimal metrics if used in isolation. However, some valuable information is available to help formulate the thinking, as outlined by Dave Zuckerman.1 He defined processes by which metrics can be identified and disseminated throughout an organization; to date it is the only book published on this topic within the pharma industry. More recently, the Metrics Champion Consortium (MCC)2 has been developing performance metrics across labs, ECG, imaging, and clinical trial conduct in a collaborative effort across biotechnology, pharmaceutical, medical device, and service provider organizations. Other sources of information include the CMR International 2010 Pharmaceutical R&D Factbook.3

At minimum, metrics should span four areas:

Cycle time—a measure of how long an activity takes to complete.
Timeliness—a measure of whether an activity was completed on time (i.e., a milestone was met).
Quality—a measure of the number of errors in a completed task.
Cost (or efficiency)—a measure of the resources required to complete the task.

Market research indicates that within this industry the majority of metrics quoted as being in common use appear to be focused on cycle time and timeliness. Without the balance of quality and cost, a true picture of performance cannot be obtained: anything can be done fast and on time if there is no regard for the cost or the extent of errors. It is also possible that over the duration of the study the importance and role of a metric may change; the following section addresses optimal ways to approach this evolution along the study life cycle.
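To make the "leading indicator" point above concrete, here is a small sketch that computes a site's recruitment rate and the windows between recruitment events from its enrollment dates; the dates are illustrative, not from the article.

```python
from datetime import date

# Sketch of the "leading" view preferred above over a bare FPFV status:
# the recruitment rate and the window between recruitment events, computed
# from a site's enrollment dates. Dates are illustrative.
enrollment_dates = sorted([
    date(2011, 1, 10), date(2011, 1, 24), date(2011, 2, 21), date(2011, 4, 18),
])

study_days_elapsed = (date(2011, 5, 1) - date(2011, 1, 1)).days
rate_per_30_days = len(enrollment_dates) / study_days_elapsed * 30

gaps = [(b - a).days for a, b in zip(enrollment_dates, enrollment_dates[1:])]

print(f"recruitment rate: {rate_per_30_days:.1f} subjects / 30 days")
print(f"days between recruitment events: {gaps}")  # a widening gap flags a stalling site
```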
Metrics in the workflow

Starting with the information available from the above sources is a good foundation. However, as the study life cycle moves across the stages of inception, design, start-up, subject recruitment, study conduct, close-out, and submission, the value and prominence of given information also changes over time. For example, once all countries and sites are initiated, focus may shift to recruitment; likewise, after all subjects are recruited, focus may shift to data management or retention. This is not to say that start-up information should not be available to the user once that activity is completed, but rather that the user is likely to be more interested in other information. Targeted information addressing the user's primary interest should be the first item presented to the user. Defining this process and matching the metric to the current activities within the study helps users see what is most important at that particular time.

The best method for identifying the most important metrics is to ask the users directly what information they need to do their job. There are a multitude of roles, or "user personas," involved in a clinical trial, including clinical study leader, clinical supply manager, statistician, country manager, CRA/monitor, and site personnel. Intimately understanding these roles and the activities each role is required to perform is key to accurately determining users' informational requirements. In this process, focusing on the underlying question of "what problems are you trying to manage?" rather than "what data do you need?" provides a framework for identifying pertinent and valuable metrics. Metrics for a given role or individual should be empowering: the user must be motivated to have a positive impact on the metrics by modifying their behavior in progress towards a defined goal.

Consider the different user personas that have an interest in aspects of the data management process. Site-based personnel, for example, are interested in outstanding queries raised by monitors as well as the number of incomplete or outstanding eCRFs, which dictate their activities. Although monitors have a vested interest in incomplete forms and unanswered queries for their sites, they are also interested in the trend of queries being raised, the number of answered queries, and completed forms, as this drives the amount of activity they have to perform (Figure 1). Similarly, data managers focus on answered queries and completed forms, as these impact their workload. Study managers are interested in query trends with upward and/or downward patterns that influence where they may place additional resources. Such trending information may also provide a visible outcome measure of actions already taken, such as training interventions (Figure 1). These examples highlight the real value of adding contextual trend information to a metric to help users rapidly identify whether or not performance is improving. The ongoing monitoring of the metric is therefore as important as the metric itself.

When considering "what is a good metric?" it is important to note that there are often a number of alternative ways of displaying the same data, each illuminating a different aspect of the story (Table 2). A further example regarding EDC queries is detailed below; it also allows us to explore the concepts of effective and ineffective work.1 Effective work materially contributes to the outcome, whereas ineffective work does not (e.g., waiting for something to happen). Displaying the number of queries at a site indicates how much work needs to be done at that site, helping determine visit frequency or the resources required. Displaying the number of queries at a site as a proportion of the number of subjects at that site provides an indicator of how "dirty" the site is: a high number of queries with a high number of patients is perhaps more acceptable than a high number of queries with a low number of patients. The latter may indicate a need for further investigation at the site as to why this is the case; for example, does the site require additional training?

In the above scenario, the number of queries is an example of ineffective work. As it is a measure of rework, it would be better never to have the queries in the first place. Much attention is given to how long it takes to resolve queries. If that effort were instead focused on ensuring queries did not happen in the first place (i.e., efforts to ensure a downward trend in the number of queries raised over time, as a proportion of the number of subjects), the rework would not be needed. In this case, a quality metric rather than a cycle time metric would focus efforts on effecting a change in effective work, resulting in a greater benefit for the organization. The total amount of time spent resolving queries, or doing ineffective work, would also decrease, since fewer queries were raised in the first place.
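A minimal sketch of the two query views discussed above, the raw open-query count (workload) and queries per subject (how "dirty" a site is); site names and counts are illustrative.

```python
# Minimal sketch of the two query views discussed above: the raw query count
# per site (how much work is waiting) and queries per subject (how "dirty"
# the site is). Site names and counts are illustrative.
sites = {
    "Site 101": {"open_queries": 40, "subjects": 50},
    "Site 102": {"open_queries": 35, "subjects": 7},
}

for name, s in sites.items():
    per_subject = s["open_queries"] / s["subjects"]
    print(f"{name}: {s['open_queries']} open queries, "
          f"{per_subject:.1f} per subject")

# Site 101 carries more total work, but Site 102's much higher queries-per-subject
# ratio is the signal that the site may need further investigation or training.
```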
Value of consolidated metrics

By consolidating data from multiple sources and presenting it in a single location, valuable new metrics can surface. For example, using real-time information from a randomization and trial supply management system about when subject visits have occurred makes it possible to compare the number of expected eCRFs against the number of completed or partial eCRFs (Figure 2). This offers an additional metric to facilitate user workflow while pointing to the data visibility gap, namely the lag between actual visits and the entry of visit data into the eCRFs. Providing this information to monitors gives them additional insight with which to manage the performance of the site before monitoring visits.
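A sketch of that expected-versus-completed eCRF comparison, assuming visit records exported from a randomization/trial-supply system and eCRF statuses from an EDC system; the records and counts are illustrative.

```python
# Sketch of the expected-vs-completed eCRF comparison described above,
# assuming visit records from a randomization/trial-supply system and eCRF
# statuses from an EDC system. The records and counts are illustrative.
visits_occurred = [          # from the randomization and trial supply system
    {"subject": "001", "visit": "V1"}, {"subject": "001", "visit": "V2"},
    {"subject": "002", "visit": "V1"}, {"subject": "003", "visit": "V1"},
]
ecrfs_entered = {            # from the EDC system: (subject, visit) -> status
    ("001", "V1"): "complete",
    ("002", "V1"): "partial",
}

expected = {(v["subject"], v["visit"]) for v in visits_occurred}
missing = expected - set(ecrfs_entered)

print(f"expected eCRFs: {len(expected)}, entered: {len(ecrfs_entered)}, "
      f"visibility gap: {len(missing)}")
print("awaiting data entry:", sorted(missing))
```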
Role of technology

Having defined the content of your metrics and data reports, it is important to display the information appropriately. There are many best-practice methodologies that can be employed to ensure an intuitive and informative display of data. The Gestalt principles of visual perception, together with sound user interface design for data display, are instrumental in ensuring a user can absorb as much information as possible as quickly as possible. Seminal papers published as far back as 1956 show that we tend to remember seven (plus or minus two) pieces of information at any one time.4 This remains one of the key data presentation principles for delivering meaningful data to users (although more recent work suggests the number may be as low as three pieces of information). Pre-attentive processing adds value to the data display: the use of color, grounding, closure, size, and contrast can aid the interpretation of data. These factors serve important substantive purposes beyond simply creating a visually attractive appearance.

Edward Tufte, who began his work in the 1970s, is considered one of the fathers of information design. The field has provenance back to the 1800s, however, with John Snow's spot maps of a cholera outbreak in London in the 1850s,5 which led to the discovery that the disease is spread by contaminated water, and Charles Joseph Minard's 1869 flow chart of Napoleon's Russian campaign of 1812.6 One concludes that the concept of metrics has been around for a long time, but it has not been applied to the clinical trial industry with significant rigor. In fact, in many ways the pharma industry is behind other areas, as there are examples of the use of metrics in any number of industries (e.g., the military,7 supply chain,8 and policing9).

It is important to select the correct visual display mechanism in order to best display the information. There are rules to determine whether a chart or a table is the most appropriate way to display a particular piece of information.10 The value of tables should not be overlooked when deciding how to display data; not everything has to be displayed in a chart, despite the current trend toward overly graphical dashboard displays. Tables should be used when the user persona is interested in looking up individual values or comparing pairs of related values. Examples include the number of subjects in treatment, the number of sites activated, and the number of incomplete CRFs. Tables should also be used when the user requires a level of precision that a graph cannot easily provide (e.g., data to two decimal places). Charts convey a significant amount of data within a small space.11 They are not good for reporting absolute values; rather, charts reveal the shape of the data, allowing identification of trends, comparisons, exceptions, similarities, and differences. If the user persona uses these key words, a chart should be considered for displaying the data. It is also useful to note that charts and tables can be used together to complete a picture: tables showing the current status of variables and charts showing more about the relationships within the data can together provide a holistic snapshot. If information is displayed using a chart, the user should be provided with a mechanism for accessing the underlying data. Such mechanisms include downloading Excel sheets with a tabular display of the charted data, linking to a data report, or toggling between chart and table where both are not displayed on the same screen.

It is easy to be overwhelmed by the functionality available within many of the business intelligence tools on the market (e.g., animation and the large number of colors and chart types available). As a result, it is easy to fall into a number of traps of poor chart design, such as the use of 3D, sheen, the wrong chart type, or simply being inconsistent across charting implementations12 (Figure 3). Understanding the various chart types and the relationships they are best suited to displaying is critical to successful design. The trick is to be selective about what functionality to use and not let the technology overwhelm the message. Creating design standards to enhance consistency and guide design choices can help control the technology rather than allowing it to run amok.

Summary

To derive salient measures that truly inform decision making, it is essential to re-evaluate and challenge common industry metrics. Providing information about trends in addition to the current position is powerful in decision making: it shows not only how you got here but also where you may end up. One set of metrics does not fit all, since different user personas have different needs; although core similarities exist, understanding the differences between individual personas is even more critical. Consolidating data from multiple technologies provides value that cannot be obtained from individual databases, and information design is key to ensuring users consume information efficiently and effectively. The consolidation of data and the creation of new and powerful metrics for a user as outlined in this article are not possible without the underlying technology infrastructure.
Well-defined data architecture strategies, coupled with enabling integration and convergent technologies, are essential to ensure ease of access to data, as is a robust enterprise reporting tool to display the information effectively. Blending cutting-edge technologies while helping users achieve their business objectives through intelligent use of data and metrics is the key to a successful reporting strategy.

Nikki Dowlman, PhD, is Director of Product Management at Perceptive Informatics, Nottingham, UK, e-mail: nikki.dowlman@perceptive.com.

References

1. Dave Zuckerman, Pharmaceutical Metrics: Measuring and Improving R&D Performance (Gower Press, 1996).
2. Metrics Champion Consortium (MCC), www.metricschampion.org/default.aspx/.
3. CMR International, CMR International 2010 Pharmaceutical R&D Factbook (Thomson Reuters, 2010).
4. G. A. Miller, "The Magical Number Seven, Plus or Minus Two," Psychological Review, 63, 81-97 (1956).
5. Broad Street Cholera Outbreak, 1855 Study by John Snow, http://en.wikipedia.org/wiki/1854_Broad_Street_cholera_outbreak/.
6. Charles Joseph Minard's 1869 Flow Chart of Napoleon's Russian Campaign of 1812, http://en.wikipedia.org/wiki/Charles_Joseph_Minard/.
7. Molly Merrill, "Decision Support System Aids Military During Patient Transfers," Healthcare IT News, April 15, 2009, http://www.healthcareitnews.com/news/decision-support-system-aids-military-during-patient-transfers/.
8. Powersim Solutions, "Case Study: Nestle," http://www.powersimsolutions.com/Nestle.aspx/.
9. Jane's Police Review, August 6, 2010, p. 42, "Viewpoint – Intelligent Policing."
10. Stephen Few, Information Dashboard Design: The Effective Visual Communication of Data (O'Reilly, 2006).
11. Stephen Few, "Data Visualization for Human Perception" (2010), http://www.interaction-design.org/encyclopedia/data_visualization_for_human_perception.html.
12. Perceptual Edge, http://www.perceptualedge.com/examples.php/.
Category: CT | 0 comments