Ways to Improve Electronic Health Record Safety

Rigorous testing and establishment of voluntary criteria can protect patients

Report | August 28, 2018

Overview

Electronic health records have transformed modern medicine, giving doctors and nurses better data to guide care, supporting enhanced patient safety through new automated tools, and creating more efficient processes by connecting different health systems.

However, the design, customization, and use of electronic health records (EHRs) by doctors, nurses, and other clinicians can also lead to inefficiencies or workflow challenges and can fail to prevent—or even contribute to—patient harm. For example, an unclear medication list could result in a clinician ordering the wrong drug for a patient. Laboratory tests that are displayed without the date and time of the results could lead to clinical decisions based on outdated information. And failures of systems to issue alerts about harmful medication interactions—situations that can stem from changes made by facilities, how clinicians enter data, or EHR design—could lead to medical errors.

These safety hazards can be associated with EHR usability, which refers to the design and use of the technology and how individuals interact with it. Usability challenges can frustrate clinicians because they make simple tasks take longer, lead to workarounds, or even contribute to patient safety concerns. These challenges can stem not only from the layout of EHRs, but also from how the technology is implemented and operated in health care facilities; how clinicians are trained to use it; and how the EHR is maintained, updated, and customized. Each stage of EHR development and use—the software life cycle from development through implementation and use in a health care environment—can affect the usability and safety of the technology.

While usability and patient safety are related, not every usability challenge will represent a risk to patients, and not every risk to patients stems from an EHR usability problem. In fact, some changes to EHRs might improve safety but result in less-efficient workflows—for example, if clinicians were prompted to enter “lbs.” or “kg.” every time they entered a patient’s weight. But when a system is challenging to use or patient information is difficult for a clinician to find, safety risks could occur.

As part of federal criteria that provide the certification standards for EHRs, technology developers must state that they engage end users and conduct usability testing during design and development. However, the certification requirements can fall short in two ways when it comes to assessing whether the use of products contributes to patient harm.

First, current federal testing criteria do not address circumstances in which customized changes are made to an EHR as part of the implementation process or after the system goes live. Instead, current rules focus only on the design and development stage of the EHR. While federal regulations mandate the testing of certain safety-related features—such as medication-allergy checks—the requirements do not focus on whether those functions operate in a safe way.

The second key challenge is the absence of requirements and guidance on how to test clinician interaction with the EHR for safety issues. Clinical test cases, which are scenarios that reflect realistic patient conditions and how health care providers treat individuals, can help detect hazards. However, there are no clear criteria for what constitutes a rigorous test scenario. Similarly, some of the scenarios for certification, while testing that certain functions work, may not effectively evaluate the EHR for usability or safety. Current certification test cases can be too specific, lack relevant details, or may not test aspects of the EHR that are recognized as posing safety risks.

Unlike many other high-risk sectors, such as the airline and medical device industries, health care has no standard for routinely testing its software for safety issues and concerns.

To address these two challenges, The Pew Charitable Trusts, MedStar Health’s National Center for Human Factors in Healthcare, and the American Medical Association conducted a literature review and convened a multidisciplinary expert panel composed of physicians, nurses, pharmacists, EHR vendors, patients, and health information technology experts. This information led to the development of voluntary certification criteria for EHR developers and health care providers, along with criteria for rigorous test case scenarios and sample test cases.

Use of the voluntary certification tenets and test cases by health care facilities and technology developers can improve the usability and safety of EHRs. They also allow for the proactive identification of potential harm associated with the implementation and customization of EHRs.

Existing usability tests for certification fall short

EHR software must meet minimum certification criteria established by the federal government to ensure that it can share data, provide key capabilities to clinicians, and protect patient privacy. The current EHR certification program—implemented by the federal Office of the National Coordinator for Health Information Technology (ONC)—is intended to set the baseline standards that EHRs must meet so that hospitals and health care providers can confidently adopt and use the technology to meet requirements in certain federal programs.

Under current certification requirements for EHR usability, which were released in 2015, developers must state that they engage end users during design and development and must conduct usability testing, using either test cases developed by the National Institute of Standards and Technology (NIST) or scenarios of their own creation.

Regardless of which test cases are used, ONC’s usability criteria require the testing of certain EHR functions, such as the ability to order medications electronically and receive medication alerts. The NIST-developed test cases do not explicitly address each of the required functions laid out in ONC’s certification regulations. 6 Because these cases do not overlap with high-prevalence safety hazards, some important EHR features may not be sufficiently evaluated.

EHR Certification Program

The Health Information Technology for Economic and Clinical Health (HITECH) Act, passed in 2009, established the Medicare and Medicaid EHR Incentive Program—known as Meaningful Use—to provide financial incentives for hospitals and doctors’ offices to purchase and utilize electronic health records. The program has provided more than $37 billion 7 for the adoption of EHRs.

To obtain those incentive payments, hospitals and doctors had to demonstrate that they used EHRs in certain ways—such as by sending prescriptions to pharmacies electronically instead of on paper.

To provide assurances to hospitals and doctors that the EHRs they use can perform those functions, HITECH also created the EHR certification program administered by ONC. This establishes technological requirements for EHRs, including their functions, how to record information, and security features. ONC has periodically updated the requirements. The initial certification requirements were published in 2010 (2011 Edition), with updates in 2012 (2014 Edition) and 2015 (2015 Edition).

To obtain certification, EHR developers submit data on their products to accredited testing labs for review. The labs forward their findings to an ONC-authorized certification body, which issues a certification based on those findings.

In September 2017, ONC modified its testing requirements to require health IT developers to state that they meet 30 certification criteria (approximately half of the total) instead of having them reviewed by an accredited testing lab. 8 This reflected a major change in the approach to vendor certification.

In 2016, ONC issued regulations giving it direct authority to enter health care facilities and review and test EHRs that posed serious risks to patients. If a risk was found, ONC could require the health IT developer to fix the problem or suspend certification of products with unresolved problems. 9 In 2017, ONC announced it would no longer require testing labs to perform random surveillance to allow labs to focus on complaint-based investigations. 10

ONC has also indicated that it will not update the 2015 Edition. 11 However, the 21st Century Cures Act requires health IT products to meet new criteria for EHRs used in the care of children, which affords another opportunity to improve safety. 12

Best practices already adopted to improve safety

Many EHR developers and health care providers have undertaken additional efforts to improve usability and safety throughout the product life cycle.

These practices can serve as the foundation for a more comprehensive certification program during product development and after implementation to improve EHR safety.

Opportunities to improve usability certification test cases

Several factors throughout the EHR life cycle affect usability and safety. Current certification tests focus on evaluating the usability of key system requirements. According to published articles and the experts consulted, a number of best practices for testing exist that go beyond what is currently required.

Given the gaps in current regulations and in the practices implemented by health care facilities and technology developers, the literature review and expert panel discussions identified additional initiatives that could improve EHR safety.

Criteria to support usability and safety throughout the EHR life cycle

Several additional best practices, criteria, and factors emerged from the literature review and expert panel discussions that could help EHR developers and health care facilities improve product usability and safety. These criteria could provide a foundation for a voluntary certification program. Given that both EHR developers and health care providers have roles in ensuring the safe use of products, specific criteria were established for each. Additional standards for improving safety are also under development by the Association for the Advancement of Medical Instrumentation.

Several discrete actions that developers and health care providers should undertake, along with associated criteria, were also identified. A voluntary certification program that encompasses these components could ask developers and providers to consider each criterion and, where appropriate, to adopt and implement these methods and processes. While the criteria provide a framework for factors that can be included in voluntary certification programs, each institution creating such a program would have to tailor it to its specific goals and mission.

By focusing on the entire EHR life cycle and having specific criteria in place to improve usability and safety, the voluntary certification framework can augment the current certification process. Adherence to these recommendations by EHR developers and health care providers can reduce the likelihood of unintended patient harm from clinician use of this technology.

Establishing rigorous, safety-focused test case scenarios

To identify and address usability and safety challenges with EHRs before health care facilities use them in patient care, developers typically evaluate their products with clinical test cases. 27 These cases are scenarios that reflect realistic patient conditions as well as the clinician tasks that would occur in caring for an individual. The scenarios allow for the observation of clinicians interacting with the EHR so that specific usability and safety challenges can be identified and addressed.

Given the importance of detecting safety challenges early, test cases should focus on EHR interactions that have the potential for serious harm in addition to low-risk but frequent interactions that are unlikely to adversely affect the patient. These test cases should also reflect real-world clinical interactions so that unique workflows and the opportunity for clinicians to make mistakes can be factored in.

Challenges with current certification usability test case scenarios

Despite the importance of test case scenarios for evaluating and improving EHR usability and safety, the usability scenarios submitted for certification can lack rigor. Many were simple, did not reflect realistic clinical conditions, or included prescriptive instructions that may not be present in a clinical setting, making it more difficult to identify challenges that may arise when using the technology for actual care. 28 Here is an example of a test case scenario that lacks rigor: 29

“Looking at patient John Leeroy’s record, enter a new lab order for the patient.”

This type of test case falls short in several ways: it provides little clinical context, prescribes the exact task to perform rather than letting the clinician decide what care is needed, and does not reflect the realistic conditions under which orders are placed.

Developing test case criteria and relevant cases

To address the need for enhanced safety-focused test cases, specific criteria were developed to help guide the creation of rigorous scenarios. Fourteen test cases, two for each of seven identified EHR usability and safety challenges, were then developed using these criteria.

Making these criteria and test cases available to both EHR developers and health care providers can support more effective testing of clinician interaction with EHRs, helping to identify usability and safety challenges before patients are harmed. These test case scenarios can be used in conjunction with other tools, such as the tests from Leapfrog or safety-related guides from ONC, to evaluate safety.

Criteria development for rigorous test cases

The insights from the EHR developers, clinicians (including physicians and nurses), researchers, and other stakeholders on the expert panel were integrated with the existing literature to define four general features of a rigorous test case.

Each of these feature categories was further divided into subfeatures.

Developing and using test cases that adhere to these criteria will bring greater rigor to the evaluation of clinician interaction with EHRs and can better highlight specific usability and safety challenges in the design, customization, or use of products before patients are harmed.

Test cases for prevalent usability and safety challenges

The criteria were used to develop 14 test use cases to demonstrate how scenarios can adhere to the identified principles and address prevalent usability and safety challenges. This includes two use cases each for seven prevalent patient safety hazards. The safety challenges were previously identified through an analysis of 557 patient safety event reports—free-text descriptions of potential patient safety hazards submitted by health care facilities—related to EHRs. 30

For each of these usability and safety challenges, we developed both a basic and an advanced test. The basic scenarios are narrowly scoped tasks that represent a single aspect of the clinical workflow, focus on one clinical process, and generally do not involve interaction with other EHR components or clinical processes. The advanced cases represent a broader portion of the clinician’s workflow, including factors such as teamwork and communication with other clinicians.

Using both basic and advanced cases helps test the range of EHR capabilities and supports evaluation of a product early in design. Basic cases can help evaluate single EHR features but should be used in combination with advanced ones throughout development and implementation; the advanced cases test broader workflows that involve several system features and interactions among multiple clinicians. Applying both types of test cases during development and after implementation can help detect problems.

The test cases include both inpatient and outpatient clinical settings. Each test case is provided on a template to support use by EHR developers, health care providers, or other stakeholders. The template provides a standardized format for each test case, detailing elements such as the clinical background, the tasks to be performed, the participants needed, the estimated completion time, and a clear definition of what constitutes passing or failing.
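As a rough illustration, the sketch below shows one hypothetical way such a template could be captured as structured data so that cases can be reused and compared across institutions. The field names, the Python representation, and the example values are assumptions for illustration; they do not reproduce the published template.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EHRTestCase:
    """Illustrative structure for a usability/safety test case template.

    Field names are assumptions based on the elements described in this
    report (clinical background, tasks, participants, estimated completion
    time, and a clear pass/fail definition); they do not reproduce the
    actual published template.
    """
    title: str
    case_type: str                  # "basic" or "advanced"
    setting: str                    # "inpatient" or "outpatient"
    clinical_background: str        # realistic patient context for the moderator
    tasks: List[str] = field(default_factory=list)         # steps the clinician performs
    participants: List[str] = field(default_factory=list)  # e.g., physician, nurse
    estimated_completion_minutes: int = 0
    pass_definition: str = ""       # observable outcome that counts as a pass
    fail_definition: str = ""       # observable outcome that counts as a fail

# Example: a basic allergy-alert scenario expressed with the template
# (patient details and drug names are invented for illustration).
allergy_case = EHRTestCase(
    title="Free-text allergy entry triggers medication alert",
    case_type="basic",
    setting="outpatient",
    clinical_background=(
        "Patient reports a penicillin allergy that was recorded as free text "
        "rather than as a coded allergy entry."
    ),
    tasks=["Review the allergy list", "Order amoxicillin for the patient"],
    participants=["Prescribing clinician"],
    estimated_completion_minutes=10,
    pass_definition="The EHR displays an allergy alert before the order is signed.",
    fail_definition="No alert is displayed and the order can be signed.",
)
```

A structured form like this makes it easier to hand the same scenario to different testing teams while keeping the pass/fail definition explicit.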

Sample test case scenario

Below is an example of a basic scenario. The remaining scenarios are listed in the appendix.

This example describes challenges with how clinicians may enter allergy information. Prescribing a medication to which the patient is allergic should trigger an alert, but whether it does can depend on how clinicians enter data and how systems are designed. This basic scenario tests the usability and safety of the allergy alerting function when allergies are entered in the EHR using the free-text option. Many allergies are entered as structured data, selected from predefined allergy entries already in the system; however, clinicians sometimes need to enter free-text descriptions when structured options are not available or are hard to find.

As a basic scenario, this test case aims to represent a single aspect of the clinical workflow and identify whether the EHR supports the provider’s expectation that an allergy alert will be triggered if there are relevant known allergies. This basic test case reflects the mental process a clinician would use with the EHR and the complexities of clinical care in a way that can be clearly evaluated.

The background information provided in the testing and robustness sections enables nonclinical moderators to understand the nature of the test. The clear scoring definition helps testers establish whether the system passed or failed the assessment.

Inclusion of EHR-agnostic terminology, the necessary participants, and the estimated completion time supports use of this test case across institutions and EHR systems. This type of test case is one example of a more rigorous case that better represents how EHRs might be used in the actual clinical environment.
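To show what this kind of basic scenario probes, the following sketch models a deliberately simplified, hypothetical allergy-interaction check that only matches coded allergy entries. The data structures and logic are assumptions, not a description of any real EHR; they illustrate why an allergy recorded as free text can silently bypass alerting, which is exactly the failure the test case is designed to surface.

```python
# Hypothetical, simplified model of an allergy-interaction check. Real EHR
# alerting logic is far more sophisticated; this only illustrates the gap
# that the free-text allergy test scenario is meant to expose.

# Coded allergen -> drugs that should trigger an alert (illustrative values).
CODED_ALLERGY_CLASSES = {
    "PENICILLIN": {"penicillin", "amoxicillin", "ampicillin"},
}

def allergy_alert_fires(allergy_entries, ordered_drug):
    """Return True if any *coded* allergy entry maps to the ordered drug.

    Free-text entries are skipped because they are not mapped to the coded
    allergen table -- the simplification this test case checks for.
    """
    for entry in allergy_entries:
        if entry["type"] != "coded":
            continue  # free-text allergies never reach the interaction check
        drugs = CODED_ALLERGY_CLASSES.get(entry["allergen"], set())
        if ordered_drug.lower() in drugs:
            return True
    return False

# Scenario: the allergy was captured as free text, then amoxicillin is ordered.
patient_allergies = [{"type": "free_text", "allergen": "penicillin"}]
alert = allergy_alert_fires(patient_allergies, "amoxicillin")

# Scoring per the test case's pass/fail definition: an alert is expected.
print("PASS" if alert else "FAIL (no alert for free-text allergy)")  # -> FAIL
```

Running the check against this scenario yields a failure, which is the kind of finding a moderator would record against the test case's scoring definition.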

The test scenarios are based on actual patient safety reports that involve technology and potential harm. Having these test cases available as examples of more rigorous scenarios will allow both developers and providers to create their own usability and safety assessments.

Conclusion and next steps

Usability challenges associated with EHRs frustrate clinicians and can pose safety risks that contribute to patient harm. 31 These challenges stem from the design of EHRs, decisions of health care facilities that implement the technology, and how clinicians use the systems. While the government, EHR developers, and health care providers have initiatives focused on improving the usability and safety of EHRs, gaps still exist with the scope and depth of federal requirements and in the test cases used to evaluate systems.

We sought to fill these gaps through the development of a more comprehensive certification framework focused on the entire EHR life cycle that engages both developers and providers, and by developing test case criteria with examples based on prevalent usability and safety challenges. Achieving the benefits of both the certification framework and test cases requires their adoption by EHR developers and health care providers. Once adopted, the test cases should be evaluated for their ability to detect safety events, assessed for challenges that arise in their use, and adjusted accordingly.

Adoption of the voluntary criteria

Some EHR developers and health care providers may choose to adopt the criteria to improve the safety and usability of systems. However, adoption by other organizations may require financial or nonmonetary incentives, since adhering to the recommendations will take resources. We examined four potential approaches for adoption:

  1. ONC could recognize elements of this certification as alternatives to its current requirements. However, given that these voluntary certification criteria include provisions that surpass those in federal regulations and address the entire life cycle and provider roles, such recognition by ONC is unlikely, though the agency could highlight private sector efforts.
  2. EHR developers and health care organizations could voluntarily adopt the criteria as an indication that they prioritize safety. EHR developer adoption would provide greater transparency to purchasers, such as hospitals, on the actions that vendors take to enhance safety. Meanwhile, health care facilities adopting these requirements would communicate to patients that safety is an institutional priority and help mitigate hazards that could become liabilities, such as high-risk customizations made despite EHR developer concerns. To help organizations know what to evaluate in their systems and processes and how to do so, third parties could create voluntary certification programs that offer guidance and certificates upon meeting the expectations.
  3. Organizations that are already prioritizing health IT safety could embed these recommendations into their programs. For example, The Leapfrog Group has encouraged hospitals to use its computerized physician order entry evaluation tool, which assesses whether clinicians can safely prescribe medications. 32 Hospitals take the test and strive for good scores because the results are made public, which has fueled adoption throughout the industry. Similarly, the Association for the Advancement of Medical Instrumentation is publishing several standards associated with health IT safety and encourages their adoption. Adherence to the standards can be used to show that health IT safety is a priority. 33
  4. Organizations that have a role in overseeing health care facilities, including the Joint Commission (which accredits health care providers to promote safe and effective care), may be able to drive health care providers to incorporate these recommendations and pressure EHR vendors to adopt best practices as well. The Joint Commission could incorporate these criteria into its requirements so that its inspectors seek evidence that health care facilities, and perhaps the technology they use, adhere to best practices. While not all health care organizations receive Joint Commission accreditation, its program is influential and provides guidance for all organizations on how to improve safety.

Use of the test case criteria and sample test cases

Adoption of the test case criteria and sample test cases by EHR developers and health care facilities can enhance safety by better evaluating products throughout the system life cycle.

For developers, these test cases can be used early in design and development and as the product matures. Because federal regulations do not stipulate the rigor needed in these scenarios, the thoroughness and depth of the test cases in use vary widely. The test scenario criteria can serve as a potential standard for the EHR accrediting bodies and as a resource for developers. Adoption of these criteria by the accrediting bodies as the benchmark for test case rigor could immediately improve the current certification process and help identify safety risks before a product is put into use. Similarly, these criteria could be incorporated into future updates to ONC’s EHR certification requirements for usability scenarios or adopted by other organizations that develop their own criteria and test products.

Health care organizations can use the test criteria and sample cases to evaluate the usability and safety of their products during implementation, after changes are made, and to inform customization decisions. The criteria can also be used to develop test cases for areas that a health care provider recognizes as posing potential risk. Organizations can immediately apply the example test cases to evaluate system safety, identify challenges, and prevent harm.

The future of health IT

EHRs have revolutionized health care delivery by giving clinicians and patients better tools to foster safe, higher-quality care. Despite these benefits, system design, health care organizations’ implementation decisions, and clinicians’ use of the technology can contribute to unintended safety challenges. The adoption of best practices, including the tenets of the safety-focused certification criteria and more robust testing scenarios, can give EHR developers and health care facilities better information to detect challenges and reduce the potential for avoidable patient harm.

Endnotes

  1. Office of the National Coordinator for Health Information Technology, “2015 Edition Health Information Technology Certification Criteria, 2015 Edition Base Electronic Health Record (EHR) Definition, and ONC Health IT Certification Program,” 80 Fed. Reg. 62602 (Oct. 16, 2015), https://www.gpo.gov/fdsys/pkg/FR-2015-10-16/pdf/2015-25597.pdf.
  2. Ibid.
  3. Svetlana Z. Lowry et al., “NISTIR 7804: Technical Evaluation, Testing, and Validation of the Usability of Electronic Health Records,” National Institute of Standards and Technology (2012), https://nvlpubs.nist.gov/nistpubs/ir/2012/NIST.IR.7804.pdf.
  4. Svetlana Z. Lowry et al., “NISTIR 7804-1: Technical Evaluation, Testing and Evaluation of the Usability of Electronic Health Records: Empirically-Based Use Cases for Validating Safety-enhanced Usability and Guidelines for Standardization,” National Institute of Standards and Technology (2015), http://nvlpubs.nist.gov/nistpubs/ir/2015/NIST.IR.7804-1.pdf.
  5. Office of the National Coordinator for Health Information Technology, “2015 Edition.”
  6. Ibid.
  7. Centers for Medicare and Medicaid Services, “April 2018 – EHR Incentive Program Active Registrations” (2018), https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/Downloads/April2018_SummaryReport.pdf.
  8. Office of the National Coordinator for Health Information Technology, “Self-Declaration Approach for ONC-Approved Test Procedures” (2017), https://www.healthit.gov/sites/default/files/policy/selfdeclarationapproachprogramguidance17-04.pdf.
  9. The Pew Charitable Trusts, “Improving Patient Care Through Safe Health IT” (December 2017), http://www.pewtrusts.org/-/media/assets/2017/12/hit_improving_patient_care_through_safe_health_it.pdf.
  10. Elise Sweeney Anthony and Steven Posnack, “Certification Program Updates to Support Efficiency & Reduce Burden,” Health IT Buzz (blog), Office of the National Coordinator for Health Information Technology, Sept. 21, 2017, https://www.healthit.gov/buzz-blog/healthit-certification/certification-program-updates-support-efficiency-reduce-burden/.
  11. Ibid.
  12. Pub. L. 114-255, 21st Century Cures Act (2016), https://www.congress.gov/114/plaws/publ255/PLAW-114publ255.pdf.
  13. HIMSS Electronic Health Record Association, “EHR Code of Conduct” (2016), http://www.ehra.org/resource-library/ehr-code-conduct.
  14. Cheryl McDonnell, Kristen Werner, and Lauren Wendel, “Electronic Health Record Usability: Vendor Practices and Perspectives” (2010), Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services, https://healthit.ahrq.gov/sites/default/files/docs/citation/EHRVendorPracticesPerspectives.pdf; eClinicalWorks, “Patient Safety and the Use of eCW’s Electronic Health Records Software,” news release, Dec. 6, 2016, https://www.eclinicalworks.com/eclinicalworks-patient-safety-use-of-electronic-health-records-software; Tarah Hirschey, “The Role EHR Vendors Play in Patient Safety” (2014), Athenahealth, https://www.athenahealth.com/blog/2014/03/04/the-role-ehr-vendors-play-in-patient-safety.
  15. James Walker et al., “EHR Safety: The Way Forward to Safe and Effective Systems,” Journal of the American Medical Informatics Association 15, no. 3 (2008): 272–77, http://dx.doi.org/10.1197/jamia.M2618.
  16. Ibid.; Shailaja Menon et al., “Safety Huddles to Proactively Identify and Address Electronic Health Record Safety,” Journal of the American Medical Informatics Association 24, no. 2 (2017): 261–67, http://dx.doi.org/10.1093/jamia/ocw153.
  17. Vishnu Mohan et al., “Using Simulations to Improve Electronic Health Record Use, Clinician Training and Patient Safety: Recommendations From a Consensus Conference,” AMIA Annual Symposium Proceedings (2016): 904–13, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5333305.
  18. Jane Metzger et al., “Mixed Results in the Safety Performance of Computerized Physician Order Entry,” Health Affairs 29, no. 4 (2010), https://doi.org/10.1377/hlthaff.2010.0160.
  19. Blackford Middleton et al., “Enhancing Patient Safety and Quality of Care by Improving the Usability of Electronic Health Record Systems: Recommendations From AMIA,” Journal of the American Medical Informatics Association 20, no. e1 (2013): e2–e8, http://dx.doi.org/10.1136/amiajnl-2012-001458.
  20. Raj M. Ratwani et al., “Electronic Health Record Usability: Analysis of the User-Centered Design Processes of Eleven Electronic Health Record Vendors,” Journal of the American Medical Informatics Association 22, no. 6 (2015): 1179–82, https://academic.oup.com/jamia/article/22/6/1179/2357601; National Center for Human Factors in Healthcare, MedStar Health, “EHR User-Centered Design Evaluation Framework,” https://www.medicalhumanfactors.net/ehr-vendor-framework.
  21. McDonnell, Werner, and Wendel, “Electronic Health Record Usability.”
  22. Raj M. Ratwani et al., “Electronic Health Record Vendor Adherence to Usability Certification Requirements and Testing Standards,” Journal of the American Medical Association 314, no. 10 (2015): 1070–71, http://dx.doi.org/10.1001/jama.2015.8372; Office of the National Coordinator for Health Information Technology, “2015 Edition”; Raj M. Ratwani et al., “A Framework for Evaluating Electronic Health Record Vendor User-Centered Design and Usability Testing Processes,” Journal of the American Medical Informatics Association 24, no. e1 (2017): e35–e39, http://dx.doi.org/10.1093/jamia/ocw092.
  23. Middleton et al., “Enhancing Patient Safety”; Elizabeth M. Borycki and Andre W. Kushniruk, “Towards an Integrative Cognitive-Socio- Technical Approach in Health Informatics: Analyzing Technology-Induced Error Involving Health Information Systems to Improve Patient Safety,” Open Medical Informatics Journal 4 (2010): 181–87, http://dx.doi.org/10.2174/1874431101004010181.
  24. Derek W. Meeks et al., “An Analysis of Electronic Health Record-Related Patient Safety Concerns,” Journal of the American Medical Informatics Association 21, no. 6 (2014): 1053-59, http://dx.doi.org/10.1136/amiajnl-2013-002578.
  25. Raj M. Ratwani et al., “Mind the Gap: A Systematic Review to Identify Usability and Safety Challenges and Practices During Electronic Health Record Implementation,” Applied Clinical Informatics 7, no. 4 (2016): 1069–87, https://www.ncbi.nlm.nih.gov/pubmed/27847961.
  26. Min Soon Kim et al., “Usability Challenges and Barriers in EHR Training of Primary Care Resident Physicians” (2014), https://link.springer.com/chapter/10.1007/978-3-319-07725-3_39.
  27. Office of the National Coordinator for Health Information Technology, “2015 Edition.”
  28. Ratwani et al., “A Framework for Evaluating.”
  29. Drummond Group, “EHR Usability Test Report of Amazing Charts Version 7.0” (2014), 45.
  30. Jessica L. Howe et al., “Electronic Health Record Usability Issues and Potential Contribution to Patient Harm,” Journal of the American Medical Association 319, no. 12 (2018): 1276–78, http://dx.doi.org/10.1001/jama.2018.1171.
  31. Middleton et al., “Enhancing Patient Safety”; Maryam Zahabi, David B. Kaber, and Manida Swangnetr, “Usability and Safety in Electronic Medical Records Interface Design: A Review of Recent Literature and Guideline Formulation,” Human Factors 57, no. 5 (2015): 805–34, http://dx.doi.org/10.1177/0018720815576827.
  32. The Leapfrog Group and Castlight Health, “Preventing Medication Errors in Hospitals: Data by Hospital on Nationally Standardized Metrics” (2016), http://www.leapfroggroup.org/sites/default/files/Files/Leapfrog-Castlight%20Medication%20Safety%20Report.pdf.
  33. Association for the Advancement of Medical Instrumentation, “AAMI Launches Health IT Standards Initiative,” AAMI News, August 2015, http://www.aami.org/productspublications/articledetail.aspx?ItemNumber=2663.