Background & Aims
One of the most vexing problems facing pain clinicians and researchers is how to predict treatment outcomes based on the presenting characteristics of the patient. Advances in machine learning have allowed the analysis of detailed pain registry data to fill this gap and provide this type of decision support. The University of Pittsburgh’s Patient Outcomes Repository for Treatment (PORT) combines electronic medical record (EMR) data (such as diagnosis, past medical history, medications prescribed, or procedures performed) with patient-reported outcomes (PROs) to form a registry linking PROs with EMR data at every visit. Collecting PROs during routine clinical care in a high-quality, detailed fashion and at regular frequency is necessary for valid comparative effectiveness determinations using observational data.(1) There has been limited analysis of pain registries to predict which new pain treatments are most likely to work based on a patient’s individual pain syndrome characteristics.
Methods
PORT collects PROs electronically from every patient at every clinic visit. Baseline assessments comprised each visit at which a new treatment class was prescribed or performed, such as medications, injections, physical therapy, and/or psychology. “Responders” were defined as patients meeting clinically important difference criteria, per IMMPACT recommendations,(2) for changes in pain, function, or impression of change at 3-month follow-up.
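For illustration only, the sketch below shows one way responder labeling of this kind could be coded, assuming commonly cited IMMPACT benchmarks (e.g., a ≥30% reduction in pain intensity, or a global impression of change of “much improved” or better); the parameter names are hypothetical, and the registry’s exact cutoffs and measures may differ.

```python
# Minimal sketch of responder labeling at 3-month follow-up.
# Cutoffs and parameter names are illustrative assumptions, not the registry's
# actual operationalization of the IMMPACT criteria.

def is_responder(baseline_pain: float,
                 followup_pain: float,
                 function_improved: bool,
                 pgic_much_improved: bool) -> bool:
    """True if any one responder criterion (pain, function, or impression of change) is met."""
    pain_reduction = (
        (baseline_pain - followup_pain) / baseline_pain if baseline_pain > 0 else 0.0
    )
    # Responder if pain improved >=30%, OR function met its clinically important
    # difference, OR the patient rated themselves "much improved" or better.
    return pain_reduction >= 0.30 or function_improved or pgic_much_improved
```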
We used random forest (RF) models trained on random sub-samples of the training data. RFs run efficiently on large datasets with many variables and are less prone to overfitting. Using the Scikit-learn package in Python, we split the data into 80%/20% training/test sets and tuned hyperparameters with 5-fold cross-validation on a development set. We calculated response rates, probabilities of treatment response, area under the receiver operating characteristic curve (AUROC), and confidence intervals for 19 different treatments.
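A minimal sketch of this modeling approach (Scikit-learn in Python, 80/20 train/test split, 5-fold cross-validation for tuning, AUROC with a confidence interval on the held-out test set) is given below; the hyperparameter grid, variable names, and bootstrap confidence interval are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import roc_auc_score

def fit_treatment_model(X, y, random_state=0):
    """Fit one per-treatment responder model; report test-set AUROC with a bootstrap 95% CI."""
    # 80%/20% training/test split, stratified on the responder label
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.20, stratify=y, random_state=random_state)

    # 5-fold cross-validated search over a small, illustrative hyperparameter grid
    search = GridSearchCV(
        RandomForestClassifier(random_state=random_state),
        param_grid={"n_estimators": [200, 500], "max_depth": [None, 10, 20]},
        scoring="roc_auc", cv=5)
    search.fit(X_train, y_train)

    # Predicted probability of treatment response on the held-out test set
    probs = search.predict_proba(X_test)[:, 1]
    auroc = roc_auc_score(y_test, probs)

    # Bootstrap 95% CI for the AUROC (one of several reasonable approaches)
    rng = np.random.default_rng(random_state)
    y_arr = np.asarray(y_test)
    boot = []
    for _ in range(1000):
        idx = rng.integers(0, len(y_arr), len(y_arr))
        if len(np.unique(y_arr[idx])) < 2:
            continue  # skip resamples with a single class
        boot.append(roc_auc_score(y_arr[idx], probs[idx]))
    ci_low, ci_high = np.percentile(boot, [2.5, 97.5])

    return search.best_estimator_, auroc, (ci_low, ci_high)
```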
Results
46,080 patients met criteria for analysis, with 60% of baseline assessments drawn from initial evaluations and 40% from follow-up visits. Average baseline pain level was 6.4/10, and 65% had chronic back or neck pain, followed in frequency by arthritis pain, neuropathic pain, and fibromyalgia. 40% of patients met responder criteria at 3 months. Of the 10 most important feature variables selected by the RF models, none were from the EMR and all were from PROs. AUROCs ranged from 0.70 to 0.77, with 95% CIs ranging from 0.68 to 0.84.
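The abstract does not state which importance measure was used to rank feature variables; as one assumed, common choice, the sketch below ranks predictors of a fitted RF by Scikit-learn’s impurity-based importances.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def top_features(fitted_rf: RandomForestClassifier, feature_names, k: int = 10) -> pd.Series:
    """Rank predictors of a fitted RF by impurity-based importance and return the top k."""
    importances = pd.Series(fitted_rf.feature_importances_, index=list(feature_names))
    return importances.sort_values(ascending=False).head(k)
```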
Conclusions
We used high-quality registry data suitable for generating valid practice-based evidence, together with machine learning methods, to create random forest models that accurately predict an individual patient’s response to a specific treatment for chronic pain. Recent reports indicate that for predicting treatment responses, AUROCs >0.70 are considered highly accurate and clinically reliable (versus >0.80 for a diagnostic test).(3) These findings provide a basis for using the individual phenotype gathered at a routine clinic visit to make personalized treatment recommendations, so that patients can be directed with high confidence to the treatments most likely to be effective. These models can serve as a shared decision-making tool to illustrate to patients which treatments are most likely to be effective for them, using a personalized medicine approach.
References
1. Vollert, J., B.A. Kleykamp, J.T. Farrar, I. Gilron, D. Hohenschurz-Schmidt, R.D. Kerns, S. Mackey, J.D. Markman, M.P. McDermott, A.S.C. Rice, D.C. Turk, A.D. Wasan, and R.H. Dworkin, Real-world data and evidence in pain research: a qualitative systematic review of methods in current practice. Pain Rep, 2023. 8(2): p. e1057.
2. Dworkin, R.H., D.C. Turk, K.W. Wyrwich, D. Beaton, C.S. Cleeland, J.T. Farrar, J.A. Haythornthwaite, M.P. Jensen, R.D. Kerns, D.N. Ader, N. Brandenburg, L.B. Burke, D. Cella, J. Chandler, P. Cowan, R. Dimitrova, R. Dionne, S. Hertz, A.R. Jadad, N.P. Katz, H. Kehlet, L.D. Kramer, D.C. Manning, C. McCormick, M.P. McDermott, H.J. McQuay, S. Patel, L. Porter, S. Quessy, B.A. Rappaport, C. Rauschkolb, D.A. Revicki, M. Rothman, K.E. Schmader, B.R. Stacey, J.W. Stauffer, T. von Stein, R.E. White, J. Witter, and S. Zavisic, Interpreting the clinical importance of treatment outcomes in chronic pain clinical trials: IMMPACT recommendations. J Pain, 2008. 9(2): p. 105-21.
3. Kroenke, K., E.E. Krebs, D. Turk, et al., Core Outcome Measures for Chronic Musculoskeletal Pain Research: Recommendations from a Veterans Health Administration Work Group. Pain Med, 2019. 20(8): p. 1500-1508. doi:10.1093/pm/pny279.
Presenting Author
Ajay Wasan
Poster Authors
Ajay D. Wasan
MD, MSc
University of Pittsburgh, Department of Anesthesiology & Perioperative Medicine, and Psychiatry
Lead Author
Brian O'Connell
MS
University of Pittsburgh, Department of Anesthesiology & Perioperative Medicine
Lead Author
Rebecca Desensi
MS
University of Pittsburgh, Department of Anesthesiology & Perioperative Medicine
Lead Author
Dan Sokolowski
MS
University of Pittsburgh, Department of Biomedical Informatics
Lead Author
Sean McDermott
MD
University of Pittsburgh, Department of Anesthesiology & Perioperative Medicine
Lead Author
Greg Cooper
MD, PhD
University of Pittsburgh, Department of Biomedical Informatics
Lead Author
Topics
- Informatics, Coding, and Pain Registries