Using Health Information To Make A Difference in the Clinical Domain

Learning Objectives

  • The main objective of this section is to show how the material covered in previous sections can be translated into real clinical outcomes and real improvements in quality and safety. Each of the topics discussed is part of the bigger picture of how Health IT can add value and improve the quality of what is done for patients.
  • At the end of this section the reader should be aware of the broad and positive potential of Health IT if implemented appropriately and with sensible and realistic goals.
  • The second part of this section reviews the approaches to both the generation of benefits and the measurement and assessment of those benefits.

Benefit Producing Application Areas

Clinical Decision Support

  • If there is one area of Health Information Technology (HIT) that is hoped to make a major difference and deliver real benefits, it is the deployment and use of Clinical Decision Support (CDS).

The concept is really very simple. All CDS aims to do is provide the user of a clinical system with information at the ‘point of care’ that can guide the user toward the best possible clinical decision. Essentially there are two types. The first is where relevant information and clinical guidelines can be accessed during clinical decision making - as the user prescribes, orders investigations or arranges other treatment - to help ensure the best possible decision is made. The user has the choice to use the system or not. As such the system is passive, but if well implemented - with easy to use searching - it can make decision making more confident, consistent and accurate.

  • Various extra attributes can be added to such systems, and they can also be used to support best practice treatment with evidence based order sets that define, for example, what needs to be ordered when a patient is admitted for a routine hip operation. The order set may cover all steps from routine pre-operative investigations and x-rays through clot prevention and pain relief to post-operative physiotherapy and so on.

You can read about the services offered by a major provider of such capabilities commercially here

  • A subset of this sort of CDS provides access not to a clinical guideline or order set but to a diagnostic decision support system, where the clinician can enter a range of clinical information - symptoms, diagnostic test results and so on - and receive a range of diagnostic suggestions together with ideas on how to confirm suspicions. Such systems have the advantage of prompting consideration of more unusual possibilities which might otherwise be overlooked.

Here is the web-site from the market leader in this area.

  • The second type of decision support runs as a background process while clinical decisions are being actioned, checking what is being done and providing an automatic alert if the system detects a variation from ideal care. Examples include an attempted prescription of a medication to which the patient has said they are allergic, or of a medication which can interact badly with present treatment, and so on.

To have this happen the system takes what is already known from the patient EHR, together with the intended action, and checks to see if any of the pre-programmed rules have been violated. To be able to do this the CDS typically has a database holding relevant information on each medication and its potential risks in combination with other medications, food and so on. Most electronic prescribing systems contained in physician EHRs have such capabilities. A minimal sketch of this checking logic follows below.
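
To make the mechanics concrete, here is a minimal sketch in Python of this style of background rule checking. The rule table, function name and simplified medication model are illustrative assumptions only, not any particular vendor's implementation.

  # Minimal sketch of background CDS checking: an intended prescription is
  # compared with what is already known from the patient's EHR.
  # The tiny rule "knowledge base" below is illustrative only.
  INTERACTION_RULES = {
      # (drug_a, drug_b) sorted alphabetically: warning text
      ("aspirin", "warfarin"): "Increased bleeding risk",
      ("clarithromycin", "simvastatin"): "Raised statin levels - myopathy risk",
  }

  def check_prescription(intended_drug, patient_allergies, current_medications):
      """Return a list of alert strings; an empty list means no rule fired."""
      alerts = []
      # Rule 1: allergy check against what the patient has reported
      if intended_drug in patient_allergies:
          alerts.append(f"ALLERGY ALERT: patient is recorded as allergic to {intended_drug}")
      # Rule 2: drug-drug interaction check against current treatment
      for existing in current_medications:
          pair = tuple(sorted((intended_drug, existing)))
          if pair in INTERACTION_RULES:
              alerts.append(f"INTERACTION ALERT: {intended_drug} + {existing}: "
                            f"{INTERACTION_RULES[pair]}")
      return alerts

  # Example: attempting to prescribe aspirin to a patient already on warfarin
  print(check_prescription("aspirin", {"penicillin"}, ["warfarin", "metformin"]))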

  • More recently we have also seen a range of decision support capabilities appearing on mobile devices, and increasing use of such systems in the clinical training environment.

You can explore the capabilities provided by the market leader in this sector here: http://www.epocrates.com/ All these systems recognise that the ‘knowledge management’ task facing clinicians in 2013 is essentially just ‘too hard’: the complexity of modern high-tech medicine and the effort of remaining fully current with optimal care have become overwhelming.

  • What is needed for the optimal practice of clinical medicine is:
    • avoidance of errors of both omission and commission. In practice this means not forgetting required interventions - e.g. screening - while not doing things which evidence suggests are either of no value or even harmful.
    • a high degree of consistency in the care provided for patients with the same diagnoses. If this can be achieved there is good evidence that quality and safety of care are improved and costs are also reduced.
  • A recent systematic review has shown good positive effects from the use of CDS.

Ann Intern Med. 2012 Jul 3;157(1):29-43. doi: 10.7326/0003-4819-157-1-201207030-00450. Effect of clinical decision-support systems: a systematic review. Bright TJ, Wong A, Dhurjati R, Bristow E, Bastian L, Coeytaux RR, Samsa G, Hasselblad V, Williams JW, Musty MD, Wing L, Kendrick AS, Sanders GD, Lobach D. Duke Evidence-based Practice Center, Duke Clinical Research Institute, Duke University School of Medicine, Durham, North Carolina, USA.

  • Abstract
    • BACKGROUND:

Despite increasing emphasis on the role of clinical decision-support systems (CDSSs) for improving care and reducing costs, evidence to support widespread use is lacking.

    • PURPOSE:

To evaluate the effect of CDSSs on clinical outcomes, health care processes, workload and efficiency, patient satisfaction, cost, and provider use and implementation.

    • DATA SOURCES:

MEDLINE, CINAHL, PsycINFO, and Web of Science through January 2011.

    • STUDY SELECTION:

Investigators independently screened reports to identify randomized trials published in English of electronic CDSSs that were implemented in clinical settings; used by providers to aid decision making at the point of care; and reported clinical, health care process, workload, relationship-centered, economic, or provider use outcomes.

    • DATA EXTRACTION:

Investigators extracted data about study design, participant characteristics, interventions, outcomes, and quality.

    • DATA SYNTHESIS:

148 randomized, controlled trials were included. A total of 128 (86%) assessed health care process measures, 29 (20%) assessed clinical outcomes, and 22 (15%) measured costs. Both commercially and locally developed CDSSs improved health care process measures related to performing preventive services (n= 25; odds ratio [OR], 1.42 [95% CI, 1.27 to 1.58]), ordering clinical studies (n= 20; OR, 1.72 [CI, 1.47 to 2.00]), and prescribing therapies (n= 46; OR, 1.57 [CI, 1.35 to 1.82]). Few studies measured potential unintended consequences or adverse effects.

    • LIMITATIONS:

Studies were heterogeneous in interventions, populations, settings, and outcomes. Publication bias and selective reporting cannot be excluded.

    • CONCLUSION:

Both commercially and locally developed CDSSs are effective at improving health care process measures across diverse settings, but evidence for clinical, economic, workload, and efficiency outcomes remains sparse. This review expands knowledge in the field by demonstrating the benefits of CDSSs outside of experienced academic centers.

    • PRIMARY FUNDING SOURCE:

Agency for Healthcare Research and Quality.


End Abstract.
  • It is clear that building an evidence base that reflects outcomes of the use of CDS in the real world is still a work in progress. Intuitively it seems obvious that such capabilities can make a major difference, but to date it has been harder than might have been imagined to collect firm evidence.

Also needing to be considered - and to date not well handled - is the issue of user alert fatigue, where users of CDS systems become so annoyed with frequent alerts that they start ignoring or overriding them.

  • The following recent article makes the potential scale of the problem very clear.

Study: Half of CDS prescription alert overrides are inappropriate October 31, 2013 | By Julie Bird

  • Providers override about half of the alerts they receive when using electronic prescribing systems, according to a new study that also finds only about half of those overrides are medically appropriate.
  • Researchers reviewed more than 150,000 clinical decision support (CDS) alerts on 2 million outpatient medication orders for the study, published online this week by the Journal of the American Medical Informatics Association (JAMIA).
  • The most common CDS alerts were duplicate drug (33 percent), patient allergy (17 percent) and drug interactions (16 percent). Alerts most likely to be overridden, however, were formulary substitutions (85 percent), age-based recommendations (79 percent), renal recommendations (78 percent) and patient allergies (77 percent).
  • On average, 53 percent of alert overrides were considered appropriate, according to the study abstract. Only 12 percent of renal recommendation alert overrides were deemed appropriate, compared with 92 percent for patient allergies.

The researchers concluded that refining the alerts could improve relevance and reduce alert fatigue.
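
To make the arithmetic behind such figures concrete, the following sketch shows how override and appropriateness rates per alert type might be tabulated from a prescribing alert log. The counts are invented for illustration and are not the study's data.

  # Sketch: summarising CDS alert logs to find which alert types are most often
  # overridden and how often those overrides are judged appropriate.
  # The log entries below are invented; they are not the JAMIA study data.
  from collections import defaultdict

  alert_log = [
      # (alert_type, overridden, override_judged_appropriate)
      ("renal recommendation", True, False),
      ("renal recommendation", True, False),
      ("patient allergy", True, True),
      ("duplicate drug", False, None),
      ("formulary substitution", True, True),
  ]

  stats = defaultdict(lambda: {"alerts": 0, "overrides": 0, "appropriate": 0})
  for alert_type, overridden, appropriate in alert_log:
      s = stats[alert_type]
      s["alerts"] += 1
      if overridden:
          s["overrides"] += 1
          if appropriate:
              s["appropriate"] += 1

  for alert_type, s in stats.items():
      override_rate = s["overrides"] / s["alerts"]
      appropriate_rate = s["appropriate"] / s["overrides"] if s["overrides"] else 0.0
      print(f"{alert_type}: {override_rate:.0%} overridden, "
            f"{appropriate_rate:.0%} of overrides appropriate")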

  • The full article and references are here
  • In summary I believe CDS can make a major difference to the quality and safety of patient care, but that more work is needed to optimise the way it works.

Outcome Monitoring / Quality Improvement

These two areas are closely linked and, as a pair, are key to many approaches to continuous improvement within the health sector enabled by Health IT.

Data Linkage

Monitoring the outcome of treatments and interventions of all kinds (including the introduction of Health IT) is a key step in ensuring care is both effective and safe. There are two broad approaches to monitoring outcomes. The first is to use various already existing databases and analytic software to ask relevant questions. The databases may be EHRs held for patient care management, payment databases (the Medicare and PBS databases for example) and potentially a range of others. A classic example is the use of the EHR databases held by Kaiser Permanente to identify the excess incidence of heart disease in patients who were prescribed Vioxx compared with matched patients who had not received the drug. This work permitted the prompt withdrawal of the drug from use, with the attendant saving of lives. A simplified sketch of this kind of comparison follows below.
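
A much simplified sketch of the kind of comparison involved - not Kaiser Permanente's actual analysis, and with invented figures - is to compare event rates between the exposed and the matched unexposed cohorts:

  # Sketch: comparing the incidence of an adverse event (e.g. heart disease)
  # between patients exposed to a drug and matched unexposed patients.
  # All figures are invented for illustration only.
  def incidence_rate_ratio(exposed_events, exposed_person_years,
                           unexposed_events, unexposed_person_years):
      exposed_rate = exposed_events / exposed_person_years
      unexposed_rate = unexposed_events / unexposed_person_years
      return exposed_rate / unexposed_rate

  # e.g. 90 events over 30,000 person-years vs 45 events over 30,000 person-years
  irr = incidence_rate_ratio(90, 30_000, 45, 30_000)
  print(f"Incidence rate ratio: {irr:.2f}")  # 2.00 -> twice the event rate in the exposed group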

  • In Australia it is possible for genuine researchers to obtain the use of anonymised information from the Medicare and PBS databases and many others, and additionally to request that ethics-committee-approved research be undertaken linking these Commonwealth databases. The National Health and Medical Research Council has produced guidelines covering data linkage and the ethics and privacy issues involved.
  • This page provides an excellent range of links and pointers to the various entities conducting such research.

The range of entities involved is made clear here: Australian Data Linkage Units

Australian Data Linkage Organisation

  • A data-linkage workshop was held in mid 2013 and a number of useful presentations are available from this link, illustrating the range of issues being addressed by such work and the administrative and ethical hoops that need to be navigated. There is certainly some concern that the processes to authorise such work should be streamlined. See here

There is a discussion paper arising from the workshop available here

Disease Registries

Also important in this area are a range of disease registries which are targeted at particular groups or subsets of patients and which typically capture considerable detailed information on those groups. Well known registries cover cardiac surgery outcomes, joint replacement outcomes and intensive care patient outcomes. These registries are typically managed by specialist clinical societies, are voluntary and provide professional and public feedback. Such registries have been very useful in identifying underperforming equipment and clinicians and in lifting the overall quality of care delivered. Some have been so valuable that Government is now assisting in funding their continuing operation. Additionally, individual practitioners using EHRs can use their own patient databases to conduct audits of their own performance, and there are now data-extraction tools through which information can be submitted and reports created comparing a range of clinical quality measures with those of peer practitioners. Such reporting can have a valuable effect in assisting practitioners to identify weak areas where improvement is possible. A minimal sketch of such an audit follows below.
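
The following is a minimal sketch of the kind of audit such a data-extraction tool might run over a practice's records. The measure, field names and peer figure are assumptions for illustration only.

  # Sketch: auditing one practice's EHR extract against a clinical quality
  # measure and comparing the result with a peer benchmark.
  # Patient records, field names and the peer average are illustrative only.
  patients = [
      {"id": 1, "diabetic": True,  "hba1c_recorded_last_12m": True},
      {"id": 2, "diabetic": True,  "hba1c_recorded_last_12m": False},
      {"id": 3, "diabetic": False, "hba1c_recorded_last_12m": False},
      {"id": 4, "diabetic": True,  "hba1c_recorded_last_12m": True},
  ]

  eligible = [p for p in patients if p["diabetic"]]
  met = [p for p in eligible if p["hba1c_recorded_last_12m"]]
  practice_rate = len(met) / len(eligible)

  peer_average = 0.78  # supplied by the benchmarking service (illustrative figure)
  print(f"Practice: {practice_rate:.0%} of diabetic patients had an HbA1c in the last 12 months "
        f"(peer average {peer_average:.0%})")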

Quality Improvement

  • Most would agree it is very hard to make substantive improvement in any area of endeavour unless there is some form of reliable relevant measure that can be tracked over time to measure quality and performance.
  • Just as this can be done within a single GP practice, as mentioned a paragraph or two above, it can also be applied to hospitals, area health services and whole countries, among others. In all these situations it is important to measure things that actually reflect what is going on, rather than measuring an indicator simply because it is easy to measure. It is thus important - when seeking quality and performance improvement - to identify indicators that are reliable, easy to understand, robust, clearly linked to sensible objectives and not easily ‘gamed’.
  • In the US, hospitals are accredited and assessed by the so-called ‘Joint Commission’. Their most recent report explains just how measurements are made and the strongly evidence based approach to the selection of the criteria used to rank hospitals.
  • When the various requirements (e.g. all heart attack victims receive a statin on discharge) are reviewed it becomes very clear just how important electronic medical records are in allowing the information to be collected and reported accurately. Clearly extraction of data from actual patient records allows an accurate assessment of just how patients are being managed against the criteria with minimal levels of error and cost.
  • It is worth pointing out that as the information is collected and reported it can obviously be used for internal quality control, education etc.

Clearly the value of a hospital EMR system is at least in part due to the capabilities it offers in this area for quality management and improvement. A minimal sketch of how such a measure might be extracted from discharge records follows below.
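
As a minimal illustration, the following sketch computes one such process measure (‘statin on discharge for AMI patients’) from structured discharge records and lists the exceptions for internal review. The record structure and data are assumptions, not any accreditor's specification.

  # Sketch: extracting a Joint-Commission-style process measure from discharge
  # records, e.g. "AMI patients prescribed a statin at discharge".
  # Record structure and data are assumptions for illustration.
  discharges = [
      {"mrn": "A001", "diagnosis": "AMI", "statin_on_discharge": True},
      {"mrn": "A002", "diagnosis": "AMI", "statin_on_discharge": False},
      {"mrn": "A003", "diagnosis": "CHF", "statin_on_discharge": False},
  ]

  ami_cases = [d for d in discharges if d["diagnosis"] == "AMI"]
  compliant = [d for d in ami_cases if d["statin_on_discharge"]]
  exceptions = [d["mrn"] for d in ami_cases if not d["statin_on_discharge"]]

  print(f"Statin on discharge: {len(compliant)}/{len(ami_cases)} AMI discharges compliant")
  print("Cases for internal review:", exceptions)  # feeds quality control and education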

Pay for Performance

  • Another area of health policy that has a significant need for technology support is so-called ‘Pay for Performance’ or P4P. The concept is intrinsically very simple - pay an incremental amount to clinicians for the delivery of demonstrably better care. To implement such approaches at minimum cost it is clearly important to be able to accurately monitor the levels of desired behaviours, and this is made a great deal easier through the use of electronic patient records (see the sketch after this list).
  • It should be noted that decisive evidence of the success of these programs - especially when linked to population health outcomes - has been quite hard to establish, and commentators in the area are rather mixed in their overall views.
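
As a minimal illustration of the monitoring and payment logic - the target, bonus rate and scores are invented, not any actual scheme's parameters - a P4P calculation might look like this:

  # Sketch: a very simple pay-for-performance calculation - an incremental
  # payment is made when a practice's measured quality score exceeds a target.
  # Targets, rates and scores are invented for illustration.
  def p4p_payment(base_payment, quality_score, target=0.80, bonus_rate=0.05):
      """Return total payment; a bonus applies only if the target is met."""
      bonus = base_payment * bonus_rate if quality_score >= target else 0.0
      return base_payment + bonus

  print(p4p_payment(100_000, 0.84))  # 105000.0 - target met, bonus paid
  print(p4p_payment(100_000, 0.72))  # 100000.0 - target missed, no bonus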

This diagram from a recent article on the topic shows just how difficult things can be and some of the associated issues.

  • The full article is found here

Doubts About Pay-for-Performance in Health Care by Andrew M. Ryan and Rachel M. Werner | 12:00 PM October 9, 2013

  • The first three paragraphs of the article put it well.
  • “While health spending in the United States far surpasses that in other industrialized nations, the quality of care in the US is no better overall, and on several measures it is worse. This stark fact has led to a wave of payment reforms that shift from rewarding volume (as fee for service does) to rewarding quality and efficiency. Such pay-for-performance schemes seem to be common sense and are now widely used by private payers and Medicare. But astonishingly, there’s little evidence that they actually improve quality.
  • What do we really know about the effectiveness of using financial incentives to improve quality and reduce costs in health care? There is robust evidence that health care providers respond to certain financial incentives: medical students have a higher demand for residencies in more lucrative specialties, physicians are more likely to order tests when they own the equipment, and hospitals seek to expand care for profitable services at the expense of unprofitable services. It would seem that increasing payment for high-quality care (and, conversely, lowering payment for low-quality care) is an obvious way to improve value in health care. But evidence suggests that health care is no different from other settings where similar payment incentives have been tried, such as education and private industry. Not only do these payment policies often fail to motivate the desired behaviors, they may also encourage cheating or other unintended responses.
  • Overall, evidence of the effectiveness of pay-for-performance in improving health care quality is mixed, without conclusive proof that these programs either succeed or fail. Some evaluations of pay-for-performance programs have found that they can modestly improve adherence to evidence-based practice.”
  • With all this said, it seems clear that whatever value can be achieved by P4P programs is most certainly only possible with a degree of IT enablement.

Rapid Learning

  • The best explanation of just what Rapid Learning is about comes from those who invented the term.

The Rapid Learning Project “The Rapid Learning Project explores national strategies to accelerate the pace of learning about best uses of new biomedical technologies, products, and treatments. Despite tremendous advances in medical science and technology, too often we don't know which treatments are most effective for improving patients' health.

  • According to Project Director Lynn Etheredge, who coined the term Rapid Learning System, "we are developing treatments and technologies faster than we know how to use them." The Rapid Learning Project seeks to create a nationwide system of databases, providing access to millions of patients' clinical experiences. Using this network of medical databases, researchers are able to access vast amounts of (de-identified) patient data that hold enormous potential for advancing collaborative health policy and clinical research.
  • The Rapid Learning Project aims to take advantage of two trends — (1) the increasing availability of medical data in electronic health records; and (2) the explosion of Web-based computing capacity — to swiftly gather information on new treatments, drugs, and medical technologies so physicians can immediately apply the findings in medical practice and better tailor care to individual patients.”
  • This link provides access to a great deal of information on the topic
  • The idea is important as it aims to use real time information to assist in understanding which interventions are working and which are not. With the pace of clinical change - and the impact of some changes on the overall costs and safety of the health system - it is important to ensure that innovations are properly evaluated and that unexpected or unanticipated consequences are recognised early.
  • This approach feeds rather naturally into the next section.

Support of Comparative Effectiveness Research

  • The concept of Comparative Effectiveness Research (CER) is based on the idea that much of medical and nursing care has grown up via tradition rather than through an evidence based process. If the quality, cost efficiency and safety of care are to be optimised then there needs to be a careful review of all activities and treatments to ensure we are doing all the things we need to be doing and not doing those things we shouldn’t.
  • Wikipedia defines the core of CER well:

“The core question of comparative effectiveness research is which treatment works best, for whom, and under what circumstances.”

  • The Obama administration has provided a very considerable budget allocation (billions) to this research as part of its attempts to rein in health care costs.

There is a detailed discussion of a lot of what is happening in the US found here

  • A crucial issue with research of this nature is just how the evidence-based findings are to be applied to those who have more than one illness and those who have variants of a particular illness. It is in this dilemma that clinical professional skill - to mix and match and to interpret the available evidence - is needed. Slavish adherence to the various guidelines - which may have been written by individuals with some vested interests - is never a wise strategy, even in the simplest case.
  • Another issue that also needs to be considered - and which impacts the need for large scale technology support - is the potential difference between clinical results found in clinical trials and the results of the same treatment when applied in the field on a relevant population.
  • Here is a very useful blog entry from Health Affairs which discusses all this in considerable depth.

Applying Comparative Effectiveness Research To Individuals: Problems And Approaches Posted By Joel Kupersmith On October 29, 2013

  • A Comparative Effectiveness Research (CER) study shows that surgery is better than medical treatment for a particular cardiac condition. My patient is 78 years old and has complicated diabetes - does the study apply? Another patient is 48 years old and otherwise healthy. Does it apply here?

Can the overall results of a CER study be applied to all patients in the target population? Are there substantial, undetected variations among patients in the results of CER? What is the extent of exceptions? These are important policy questions in applying results of CER to day-to-day decisions, clinical guidelines, performance measures and other facets of the modern healthcare system.

  • The “gold standard” approach to CER is the randomized controlled trial (RCT), a scientific comparison of two or more clinical strategies, with the downsides that it is generally conducted in a special environment and usually has a rather narrow (and possibly unrepresentative) population spectrum. Two variants, the Practical (or Pragmatic) Clinical Trial [1] (PCT) and the Large Simple Trial [2] (LST), are inclusive of a wider spectrum of patients and more diverse clinical settings.

These approaches provide “average” results and for the most part it is thought that averages do apply to a large segment of the population at large for which they are intended. However, there are clearly differences in effect (heterogeneities of treatment effect – HTE’s) that manifest among CER study subjects and presumably to a greater extent in the intended population outside the study. Two approaches may be equivalent on the average but one may be better in a particular group, and differences may be less apparent when the study’s population base is narrow. A long list of factors contribute to these HTEs for CER and other trials – comorbidities, severity of illness, genetics, age, medication adherence, susceptibility to adverse events, ethnicity, site, economics and others [3].

  • Important metrics for HTEs are variation in risk [4] and the benefit/risk ratio. Variations in baseline risk can be substantial among risk quartiles within studies, e.g. over 10 fold in a group of heart attack studies and 70 fold in studies of kidney disease progression [5].

There is a balance between risk and benefits. In general the main (at times the only) benefit of an intervention is for those in the highest risk quartile. At the lowest risk quartile or in those with diminished benefit (such as later rather than earlier treatment of stroke [6] with clot dissolving tPA therapy), the benefit/risk may be modest, absent or possibly, if there are significant adverse effects, opposite. In reviews of trials, primary angioplasty for heart attack compared to medical tPA benefited the 26 percent at highest risk [7] and was estimated to have a likely benefit threshold of about a 2.0-2.6 percent 30 day mortality rate [8]. Between the highest and lowest risk quartiles, there is a large middle risk zone, influenced by many elements.
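
A small worked example (all figures invented) shows why baseline risk matters so much: the same relative risk reduction translates into very different absolute benefit, and hence a very different benefit/risk balance, across risk quartiles.

  # Sketch: the same relative risk reduction gives very different absolute
  # benefit depending on baseline risk - the essence of heterogeneity of
  # treatment effect. All figures are illustrative only.
  relative_risk_reduction = 0.25        # assumed constant across quartiles
  harm_rate = 0.010                     # assumed treatment-related adverse event rate

  baseline_risk_by_quartile = {         # 30-day event risk without treatment
      "lowest risk quartile": 0.02,
      "middle risk zone": 0.06,
      "highest risk quartile": 0.20,
  }

  for group, baseline in baseline_risk_by_quartile.items():
      absolute_risk_reduction = baseline * relative_risk_reduction
      nnt = 1 / absolute_risk_reduction  # number needed to treat
      net_benefit = absolute_risk_reduction - harm_rate
      print(f"{group}: ARR {absolute_risk_reduction:.3f}, NNT {nnt:.0f}, "
            f"net benefit after harm {net_benefit:+.3f}")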

  • On the other hand, benefits are unlikely at extreme risk or if the “pay off time [9]” of a beneficial strategy is likely to be longer than the expected harm. Neither colorectal screening nor coronary prevention benefits persons with short life expectancy. Also, risk may be greater than benefit in other situations. In diabetic patients with complications, intensive glucose lowering therapy increased mortality and hypoglycemia [10] but was beneficial in others [11]. Another qualification is that the numbers at the high risk levels are small and most events in fact occur in modest and low risk patients (the “prevention paradox [12],” as it has been called). If the only focus is on the highest risk individuals, many or most targets for treatment will be missed.
  • Site differences also occur in trials as they do in application of care. While some statistical variation is inevitable and there may be protocol variations in trials, causes also include administrative effectiveness, e.g. in creating collaborative efforts and technical support; skill and experience of providers and teams; adequate hospital capacity for intervention; geographic availabilities; communication problems; and, economics.

Full blog is here

  • These issues are also well explored here
  • Finally it is useful to be aware that population-based CER is rapidly becoming a major application of the various data-mining techniques as useful databases of treatments and outcomes evolve (a minimal sketch of one such approach follows below).

This link explains how one group is doing such research.
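
One common style of approach in such data-mining work is to match treated and comparison patients on recorded characteristics before comparing outcomes. The following is a deliberately crude sketch with invented data; it is not any particular group's method, and real analyses use far more sophisticated techniques such as propensity scores.

  # Sketch: a crude matched comparison of two treatments from an observational
  # database - match each patient on treatment A to a patient on treatment B
  # with similar age and severity, then compare outcomes in the matched pairs.
  # Data and matching rule are illustrative only.
  treated = [
      {"age": 64, "severity": 2, "good_outcome": True},
      {"age": 71, "severity": 3, "good_outcome": False},
  ]
  comparison = [
      {"age": 63, "severity": 2, "good_outcome": True},
      {"age": 72, "severity": 3, "good_outcome": True},
      {"age": 55, "severity": 1, "good_outcome": True},
  ]

  def find_match(patient, pool, max_age_gap=3):
      for candidate in pool:
          if (candidate["severity"] == patient["severity"]
                  and abs(candidate["age"] - patient["age"]) <= max_age_gap):
              pool.remove(candidate)  # match without replacement
              return candidate
      return None

  pairs = [(p, find_match(p, comparison)) for p in treated]
  pairs = [(a, b) for a, b in pairs if b is not None]

  rate_a = sum(a["good_outcome"] for a, _ in pairs) / len(pairs)
  rate_b = sum(b["good_outcome"] for _, b in pairs) / len(pairs)
  print(f"Good outcomes - treatment A: {rate_a:.0%}, matched comparison: {rate_b:.0%}")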

Telemedicine / Tele-radiology / Tele-monitoring

  • The use of technology at a distance to provide support and care to remote individuals has developed quite rapidly since the emergence of reliable wide-scale networking and, most recently, the global Internet.
  • In the application of networking technology we find two essential approaches.
    • The first is where a link is established for a specific patient-related event, be it a single consultation or part of a continuing relationship for follow-up and so on. These days some varieties of telehealth involve not a clinician but a web-site that may offer advice and in some instances treatment - as with some telepsychiatry services.
    • The second is where technology provides an intermediate (sensor or similar) link to a patient for the purpose of diagnosis and monitoring, and then provides remote access to that information. This approach can deploy a very wide range of different sensors, either singly or as a total physiological monitoring package (a minimal monitoring sketch follows after this list).
  • The key issue from a clinical and management perspective with the various tele-technologies is whether they actually make a clinical difference and can be justified on a reasonable cost benefit basis. The factors to consider might include answers to questions such as:
    • 1. Can the same results be achieved as conveniently and at the same or less cost?
    • 2. Is the technology being used sufficiently good to provide information which is actionable?
    • 3. Are there benefits which can’t be realised without the use of the technology?
    • 4. Are there any possible risks in a particular application which need to be considered / ameliorated?
    • 5. Are there particular circumstances that make the approach crucial - e.g. providing care in space or Antarctica!
  • To explore the issue further and identify those approaches that seem justified let us consider a few specific examples.
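
A minimal sketch of the second approach - checking incoming sensor readings against thresholds and raising an alert for the care team - is shown below. The metrics, thresholds and readings are illustrative assumptions only.

  # Sketch: a remote tele-monitoring loop - sensor readings from a patient at
  # home are checked against thresholds and an alert is raised for the care
  # team when a reading is actionable. Thresholds and readings are illustrative.
  THRESHOLDS = {
      "spo2_percent": {"min": 92},
      "systolic_bp": {"min": 90, "max": 180},
      "weight_kg_daily_gain": {"max": 2.0},  # e.g. fluid retention in heart failure
  }

  def check_reading(metric, value):
      limits = THRESHOLDS.get(metric, {})
      if "min" in limits and value < limits["min"]:
          return f"ALERT: {metric} = {value} below {limits['min']}"
      if "max" in limits and value > limits["max"]:
          return f"ALERT: {metric} = {value} above {limits['max']}"
      return None

  readings = [("spo2_percent", 95), ("systolic_bp", 186), ("weight_kg_daily_gain", 0.4)]
  for metric, value in readings:
      alert = check_reading(metric, value)
      if alert:
          print(alert)  # in practice this would be routed to the monitoring service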

As a precursor it is worth reviewing the Wikipedia entry on telemedicine, which covers the field from a very US-centric perspective but does mention a really early example - the use of the pedal wireless for obtaining advice from, and summoning, the Royal Flying Doctor Service. See here

  • For the purposes of the discussion that follows it seems reasonable to describe all the various modalities being discussed as ‘telehealth’ - although strict definitions are not really possible and there is a lot of crossover in the terminology used.

Australian Use Of Tele-health

  • Australia is - along with parts of the United States - ideally positioned geographically to use remote tele-health approaches to try to ameliorate the consistently observed differences between city, rural and remote health outcomes. It is no surprise that most tele-health activity has been focussed on Qld, SA, WA and the NT, where the distances and the need are most obvious.
  • The Labor Government, which was defeated in September 2013, was a very strong advocate of the use of tele-health believing it was useful and it fitted well with their National Broadband Agenda. At the time of writing it is not yet clear what the new Government’s view is.
  • There have been considerable financial incentives provided to encourage use - especially in remote and rural areas.

The most widely used applications provide remote specialist consultation in relevant disciplines and real-time advice to remote practitioners.

  • Additionally there are a number of websites providing remote care for anxiety and depression which have been able to demonstrate high effectiveness at very reasonable cost.

In 2012 there were some large grants made - amounting to over $20 million - to assess various aspects of care provision in distance, remote and aged care. This link provides a listing of the projects covered and illustrates the types of activities being researched.

  • NBN Enabled Telehealth Pilots Program

The National Broadband Network (NBN) Enabled Telehealth Pilots Program aims to demonstrate how the NBN infrastructure enables better access to high quality healthcare services, particularly aged care, palliative care and cancer care, using telehealth services in the home. The University of Queensland has an active research program in the area. Details are available here

  • There is some recent evidence of good outcomes being obtained using telehealth.
  • This blog provides links to a number of articles in the domain.
  • Overall, remote technologies appear to be gradually demonstrating their value - despite many previous studies which provided equivocal results.

Benefit Assessment and Benefits Realisation

In this section there are two areas to be addressed.

  • The first is to develop an understanding of how the benefits of an implementation / deployment may be assessed.
  • The second is to discuss what factors need to be considered during implementation to ensure the maximum value is obtained for the investment made.

Benefits Assessment

  • Benefits assessments are usually done as part of developing a business case for a new system implementation or extension. Up front it must be said that this is a remarkably unreliable science and that the likely benefits are often hard to estimate with any confidence or to justify. In business case development it is usually possible to develop reasonably accurate costs for system procurement and implementation, but it is usually considerably harder to estimate the overall impact, both at a system level and then at the larger organisational level.
  • While there are many ways to approach understanding possible benefits, a sensible approach can be to divide the benefits into a number of categories. Typically benefits will be divided into hard financial benefits (e.g. staff, time or resource savings) and non-financial benefits (to which some attempt is typically made to ascribe a dollar value). The latter can include such things as improvements in quality, safety and patient satisfaction, which have indirect but often significant value at an organisational level.
  • Another way to think about the issues is at a functional level.

In this approach you might

    • First, consider those things that are simply not possible without technology support, for example computerised order entry with clinical decision support or closed loop medication management.
    • Second, consideration can be given to processes which can be reshaped or redesigned as part of a system implementation, offering improvements in quality, speed, accuracy and so on.
    • Third, consideration can be given to improved outcomes which come from the capacity to compare information over time and detect trends, as a result of electronic storage and manipulation of information.

The impacts of each of these can be modelled against the predicted change resulting from the planned system.
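
A minimal sketch of how such categorised estimates might be pulled together into a simple benefits model is shown below. The benefit areas, dollar values and confidence weightings are all invented for illustration.

  # Sketch: a simple benefits model - each benefit area has an estimated annual
  # value and a confidence weighting, and the model sums the risk-adjusted total
  # to compare against projected costs. All figures are invented.
  benefit_estimates = [
      # (benefit area, estimated annual value $, confidence that it will be realised)
      ("Reduced duplicate test ordering", 400_000, 0.7),
      ("Fewer adverse drug events", 650_000, 0.5),
      ("Clinician time saved on documentation", 300_000, 0.6),
  ]

  annual_cost = 900_000  # projected operating cost of the system (illustrative)

  risk_adjusted_benefit = sum(value * confidence for _, value, confidence in benefit_estimates)
  print(f"Risk-adjusted annual benefit: ${risk_adjusted_benefit:,.0f}")
  print(f"Net annual position vs cost:  ${risk_adjusted_benefit - annual_cost:,.0f}")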

  • As an example NEHTA modelled the impact of a Shared EHR system at a national level. The following illustrates the findings.

While the actual numbers need not be taken too seriously, this does provide a snapshot of the areas of relative interest in terms of building a benefits case, and an estimate of their relative proportions. There is, of course, a degree of ‘rubberiness’ in all figures of this sort, as well as an important need, when projecting forward, to consider changes in the external environment which can potentially amplify or diminish the actual realised outcomes. Organisations are not stable entities on which to base projections as they are always evolving and changing.

  • Additionally, when considering a business case and its supporting evidence, it is vital to consider:
    • 1. Theoretical vs. actually demonstrated benefits
    • 2. Indicated vs. actual benefit found
    • 3. Specific research studies vs. ‘in the wild’ (real world) estimates of benefit.
  • Lastly in this section it is essential to note that the number of Business Cases vs. the number of Post Implementation Reviews is very, very skewed toward the former, and so in many situations the actual level of cost, efficiency and quality / safety improvement can be very tricky to estimate. The lesson from all this is that, in management, one should preserve a healthy level of scepticism with benefit and cost saving claims.

A good example is the PWC modelling (BEP Study) found in the files section for this section.

Benefits Realisation

  • The key issue here is to keep a clear perspective that Health IT is a tool which is used to enable operational, clinical and financial systems to work at their best, and thus provide benefits which otherwise may not be possible. Also important is to keep in mind that there has to be a clear intent, as well as resources, focussed on actually obtaining the benefits that are able to be obtained.
  • Key to this discussion is often consideration of the Return on Investment (ROI) that can be achieved with a particular implementation.

Achievement of a particular planned ROI often depends as much on change management and organisational leadership skills as it does on the actual technology, and the projected returns, as well as the resources necessary to obtain them, should all be built into the business case and thus considered in the ROI expectation.
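
A basic ROI calculation of the kind used in such business cases might look like the following sketch. All figures are invented, and a real case would also discount future cash flows and include the cost of the benefits realisation effort itself.

  # Sketch: a basic ROI calculation for a Health IT investment over a planning
  # horizon. Figures are illustrative only.
  implementation_cost = 2_000_000
  annual_operating_cost = 300_000
  annual_realised_benefit = 1_000_000  # depends heavily on change management
  years = 5

  total_cost = implementation_cost + annual_operating_cost * years
  total_benefit = annual_realised_benefit * years
  roi = (total_benefit - total_cost) / total_cost

  print(f"ROI over {years} years: {roi:.0%}")  # ~43% on these assumed figures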

  • Additionally, before any work is done on the actual implementation, it is important to ensure that the implementation plan includes a deliberate, properly planned and resourced benefits realisation work stream.
  • The steps to be considered in a generic benefits realisation (BR) approach are covered here
  • This quote from the article gets to the core of what BR is.

“The idea behind BR is that an investment is only successful if the benefits stakeholders were hoping to get are actually realised (actually happen).” Usefully the article also emphasises that measurement against well considered goals and objectives is important before, during and after system implementation.

  • The following diagram provides a useful summary of one generic approach to BR.


  • The NHS has made available an interesting pro-forma to illustrate just how a BR plan might look. See here
  • Lastly, the Victorian Health Department has recently released a very useful and well researched report and associated documents on BR in the Health Sector, with local, national and international information and frameworks. These are essential reading for all managers who have an interest in how Health IT can be properly exploited.
  • In summary, obtaining benefits from Health IT requires active and early planning and ongoing review and modification as necessary. Section 11 on practicalities will explore more issues in this area.

Note: Chapter 15 of the Recommended Textbook provides additional insight in this area and should be read.

Review Questions

  • 1. In which Health IT applications do you believe there are the largest possibilities for improving the quality and safety of patient care and reducing the cost of healthcare to the individual and the community?
  • 2. What do you think will be the major impacts of the National Broadband Network (NBN) initiative on the health sector?
  • 3. How effective do you think BR activities around the PCEHR have been? Slide presentation on the planned approach here

