Description
Background. Prognosis of kidney transplant outcomes, while clinically important, remains a challenging problem. Existing prediction models rely on predictors that become available only in the post-transplant period. However, the real value of a model lies in predicting outcomes before transplantation, yet there is little experience in predicting graft survival using pre-transplant variables. Furthermore, prediction models developed from national registry data in previous studies have not been validated in the local clinical environment. This study is one of the first to apply models derived from an aggregate national dataset to a local dataset for validation.

Methods. Five classification tree models predicting graft survival at 1, 3, 5, 7, and 10 years post-transplant were derived from United States Renal Data System (USRDS) data in our previous studies. The models included only predictors that are available prior to kidney transplantation. In this study, local clinical data were used to validate the models. The local data sources included the Enterprise Data Warehouse (DW), the Solid Organ Transplant Program, and the Dialysis Program at the University of Utah Health Science Center (UUHSC). A comparative analysis of transplant recipient characteristics was conducted between the national (USRDS) and local (UUHSC) datasets. The performance of the prediction models was evaluated on both the national and local data.

Results. In the USRDS dataset, the numbers of patients with sufficient follow-up to reach a graft outcome ("fail" or "survive") at 1, 3, 5, 7, and 10 years post-transplant were 92,844, 73,672, 58,005, 46,791, and 35,279, respectively. In the UUHSC dataset, the numbers of patients with known graft outcomes at 1, 3, 5, 7, and 10 years post-transplant were 854, 635, 462, 325, and 213, respectively. Graft survival rates among transplant recipients in the UUHSC dataset were significantly higher than in the USRDS dataset (94% vs. 86%; 87% vs. 72%; 77% vs. 54%; 61% vs. 36%; and 33% vs. 8% at 1, 3, 5, 7, and 10 years post-transplant, respectively, all p<0.001). The majority of recipients (>93%) and donors (>95%) in the UUHSC dataset were white. In addition, the UUHSC dataset showed a significantly higher proportion of living donors than the USRDS dataset at 1 year (42% vs. 21%, p<0.001), 3 years (42% vs. 24%, p<0.001), 5 years (41% vs. 22%, p<0.001), 7 years (40% vs. 20%, p<0.001), and 10 years (39% vs. 18%, p<0.001). Discrimination of the prediction models was measured by the area under the ROC curve (AUC). The AUC values of the models predicting 1-, 3-, 5-, 7-, and 10-year graft survival on the USRDS data were 0.59, 0.63, 0.76, 0.91, and 0.97, respectively. In contrast, the AUC values of the models predicting 1-, 3-, 5-, 7-, and 10-year graft survival on the UUHSC data were 0.54, 0.58, 0.58, 0.61, and 0.70, respectively.

Conclusion. The prediction models performed better on the national data than on the local data. This is almost certainly due to differences in the characteristics of the two datasets. Researchers routinely extrapolate results from national studies to local circumstances, but this study is one of the first to demonstrate, with real-world data, the potential dangers of doing so. Wholesale adoption of a prediction model developed on a large national dataset for local use should be approached with caution.
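
The validation step described in Methods amounts to scoring previously derived classification trees on the local cohort and measuring discrimination by AUC. The sketch below is a minimal illustration of that general pattern, not the authors' implementation: it assumes scikit-learn, hypothetical variable names (X_usrds, y_usrds, X_uuhsc, y_uuhsc), and placeholder hyperparameters.

```python
# Minimal sketch, not the authors' code: derive a classification tree from
# pre-transplant predictors on national data, then validate it on a local
# (external) dataset. Outcome vectors are assumed binary, 1 = graft failure
# at the chosen horizon (e.g., 5 years post-transplant).
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score


def derive_and_validate(X_usrds, y_usrds, X_uuhsc, y_uuhsc):
    """Fit a tree on national data; return AUC on national and local data."""
    # Hyperparameters here are placeholders, not the values used in the study.
    tree = DecisionTreeClassifier(max_depth=5, min_samples_leaf=50, random_state=0)
    tree.fit(X_usrds, y_usrds)

    # Discrimination measured by the area under the ROC curve (AUC),
    # using the predicted probability of graft failure.
    auc_national = roc_auc_score(y_usrds, tree.predict_proba(X_usrds)[:, 1])
    auc_local = roc_auc_score(y_uuhsc, tree.predict_proba(X_uuhsc)[:, 1])
    return auc_national, auc_local
```

In the study itself the trees were already fixed from prior work on USRDS data, so only the scoring half of this sketch applies to the local validation; the fitting step is shown merely to make the example self-contained.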
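
The comparison of graft survival rates between the two cohorts can likewise be illustrated with a standard chi-square test on a 2x2 table. This is an illustrative sketch, not the authors' analysis code; the counts are approximate reconstructions from the 1-year percentages and sample sizes reported above.

```python
# Illustrative sketch: chi-square comparison of 1-year graft survival between
# the USRDS and UUHSC cohorts. Counts are approximated from the reported
# percentages (86% vs. 94%) and sample sizes (92,844 vs. 854).
from scipy.stats import chi2_contingency

n_usrds, n_uuhsc = 92_844, 854
surv_usrds = round(0.86 * n_usrds)  # approximate surviving grafts, USRDS
surv_uuhsc = round(0.94 * n_uuhsc)  # approximate surviving grafts, UUHSC

contingency = [
    [surv_usrds, n_usrds - surv_usrds],  # USRDS: survived, failed
    [surv_uuhsc, n_uuhsc - surv_uuhsc],  # UUHSC: survived, failed
]
chi2, p_value, dof, _ = chi2_contingency(contingency)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_value:.3g}")
```

With a difference of roughly eight percentage points and a local sample of 854 patients, the resulting p-value is far below 0.001, consistent with the significance levels reported in the Results.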