Information

Click on a row to explore the results for that model. To explore a different model, select its result row and the tabs will update.

Demo Video

Can we trust the prediction model? Demonstrating the importance of external validation by investigating the COVID-19 Vulnerability (C-19) Index across an international network of observational healthcare datasets

Development Status: Completed

Information

This Shiny application contains the results of the external validations of a model developed to predict the risk of hospitalization with pneumonia in patients with influenza or COVID-19.

During manuscript development and the subsequent review period, these results are considered under embargo and should not be disclosed without explicit permission and consent from the authors.

Below are links for study-related artifacts that have been made available as part of this study:

Protocol: link

Abstract

Below is the abstract of the manuscript that summarizes the findings:

Background: SARS-CoV-2 is straining healthcare systems globally. The burden on hospitals during the pandemic could be reduced by implementing prediction models that can discriminate between patients who require hospitalization and those who do not. The COVID-19 vulnerability (C-19) index, a model that predicts which patients will be admitted to hospital for treatment of pneumonia or pneumonia proxies, has been developed and proposed as a valuable tool for decision making during the pandemic. However, the model is at high risk of bias according to the Prediction model Risk Of Bias ASsessment Tool (PROBAST) and has not been externally validated.

Methods: We followed the OHDSI framework for external validation to assess the reliability of the C-19 model. We evaluated the model on two different target populations: i) patients who have SARS-CoV-2 at an outpatient or emergency room visit and ii) patients who have influenza or related symptoms during an outpatient or emergency room visit, to predict their risk of hospitalization with pneumonia during the following 0 to 30 days. In total, we validated the model across a network of 14 databases spanning the US, Europe, Australia and Asia.
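For illustration, external validation in this sense means applying the previously developed model, with its coefficients fixed, to patients in a new database and scoring their risk without any refitting. The sketch below is a minimal, hypothetical example: the covariate names (age_65_plus, copd, diabetes) and coefficient values are assumptions for illustration only, not the actual C-19 index.

```python
# Minimal sketch of "transporting" a fixed risk model to a new database for
# external validation: the coefficients come from the original development
# work and are NOT refitted on the validation data.
# All covariate names and coefficient values here are hypothetical.
import numpy as np
import pandas as pd

# Hypothetical covariates for four patients in the validation target population.
cohort = pd.DataFrame({
    "age_65_plus": [1, 0, 1, 0],
    "copd":        [0, 0, 1, 0],
    "diabetes":    [1, 1, 0, 0],
})

# Hypothetical fixed coefficients of a logistic regression model.
intercept = -3.0
coefficients = {"age_65_plus": 1.2, "copd": 0.9, "diabetes": 0.4}

def predict_risk(df):
    """Apply the fixed model to new patients (no refitting)."""
    linear = intercept + sum(coef * df[name] for name, coef in coefficients.items())
    return 1.0 / (1.0 + np.exp(-linear))

predicted_risk = predict_risk(cohort)
print(predicted_risk.round(3))
```

These predicted risks, together with the observed hospitalization outcomes over the 0 to 30 day window, are what the performance measures reported below are computed from.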

Findings: The internal validation performance of the C-19 index was a c-statistic of 0.73, and calibration was not reported by the authors. When we externally validated it by transporting it to SARS-CoV-2 data, the model obtained c-statistics of 0.36, 0.53 and 0.56 on Spanish, US and South Korean datasets respectively. Calibration was poor, with the model under-estimating risk. When validated on 12 datasets containing influenza patients across the OHDSI network, the c-statistics ranged from 0.40 to 0.68.

Interpretation: The results show that the discriminative performance of the C-19 model across influenza cohorts was lower than the reported internal validation performance. More importantly, we report very poor performance in the first-ever validation of C-19 amongst COVID-19 patients in the US, Spain and South Korea. These results suggest that C-19 should not be used to aid decision making during the COVID-19 pandemic. Our findings highlight the importance of performing external validation to determine a prediction model’s reliability. In the field of prediction, extensive validation is required to create appropriate trust in a model.
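To make the two performance measures quoted above concrete, the sketch below computes a c-statistic (discrimination) and a simple expected/observed calibration ratio from predicted risks and observed outcomes. The data are simulated for illustration; in the study these values come from applying the fixed C-19 model to each database's validation cohort.

```python
# Hedged sketch of the two headline metrics: the c-statistic (AUC) and
# calibration-in-the-large (mean predicted risk vs. observed event rate).
# The predictions and outcomes below are simulated, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
predicted_risk = rng.uniform(0.01, 0.30, size=1000)   # model's predicted risks
true_risk = np.clip(predicted_risk * 2.0, 0.0, 1.0)   # simulate under-estimation of risk
observed = rng.binomial(1, true_risk)                  # observed 0/1 outcomes

# Discrimination: probability that a random case is ranked above a random non-case.
c_statistic = roc_auc_score(observed, predicted_risk)

# Calibration-in-the-large: a value well below 1 means the model under-estimates risk.
expected_over_observed = predicted_risk.mean() / observed.mean()

print(f"c-statistic: {c_statistic:.2f}")
print(f"expected/observed ratio: {expected_over_observed:.2f}")
```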

Study Packages

  • Model validation: link

The Observational Health Data Sciences and Informatics (OHDSI) international community is hosting a COVID-19 virtual study-a-thon this week (March 26-29) to inform healthcare decision-making in response to the current global pandemic. The preliminary research results on this web-based application are from a retrospective, real-world, observational study in support of this activity and will subsequently be submitted to a peer-reviewed, scientific journal.

Data Information

The following databases were used in this study:

Database | Name | Country | Type | Years
Clinformatics | Optum® De-Identified Clinformatics® Data Mart Database – Date of Death (DOD) | USA | Claims | 2000-2019
AU_ePBRN | Australian Electronic Practice Based Research Network | Australia | Linked EHR (GP + Hospital) | 2012-2019
AUSOM | Ajou University School of Medicine Database | South Korea | EHR | 1999-2018
CCAE | IBM MarketScan® Commercial Database | USA | Claims | 2000-2019
CUIMC | Columbia University Irving Medical Center Data Warehouse | USA | EMR | 1990-2020
VA-OMOP | Department of Veterans Affairs | USA | EMR | 2009-2010, 2014-2020
HIRA | Health Insurance Review and Assessment Service | South Korea | Claims | 2013-2020
IPCI | Integrated Primary Care Information | Netherlands | GP | 2006-2020
JMDC | Japan Medical Data Center | Japan | Claims | 2000-2019
MDCD | IBM MarketScan® Multi-State Medicaid Database | USA | Claims | 2006-2019
MDCR | IBM MarketScan® Medicare Supplemental Database | USA | Claims | 2000-2019
optumEhr | Optum® de-identified Electronic Health Record Dataset | USA | EHR | 2006-2019
SIDIAP_FLU | The Information System for Research in Primary Care (SIDIAP) | Spain | GP | 2006-2020
SIDIAP_COVID | The Information System for Research in Primary Care (SIDIAP) COVID-19 patients and their medical histories | Spain | GP | 2020
TRDW | Tufts Research Data Warehouse | USA | EHR | 2006-2020

All databases obtained IRB approval or used de-identified data that were considered exempt from IRB approval.
