Abstract:
Objective: The purpose of this study was to establish the applicability of a predictive model based on spectral computed tomography (CT) parameters and radiomics features, built with machine learning, for differentiating between benign and malignant thyroid nodules.
Methods: Imaging and clinical data from 118 patients with thyroid nodules who underwent contrast-enhanced spectral CT were retrospectively analyzed (46 benign and 97 malignant nodules). The nodules were randomly divided into a training set (n=100) and a validation set (n=43) at a 7:3 ratio. Discriminative testing, the intraclass correlation coefficient (ICC), and the Least Absolute Shrinkage and Selection Operator (LASSO) were used to select features and compute a radiomics score (radscore). Six machine learning algorithms were used to develop models: decision tree (DT), random forest (RF), extreme gradient boosting (XGBoost), support vector machine (SVM), K-nearest neighbors (KNN), and logistic regression (LR). The optimal model was selected to construct nomograms.
Results: The XGBoost model demonstrated the best performance in the validation set (AUC: 0.938; accuracy: 86.05%; sensitivity: 89.29%; specificity: 80.00%), with normalized iodine concentration (NIC), radscore, and age identified as significant predictive factors. The resulting nomograms exhibited robust performance.
Conclusion: The machine learning model combining spectral CT parameters and radiomics features, together with the nomograms, provides a highly accurate reference for non-invasive prediction of the benignity or malignancy of thyroid nodules.
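The workflow summarized in the Methods (LASSO-derived radscore combined with spectral CT parameters and age, then classified with XGBoost) can be illustrated with a brief sketch. The code below is a minimal, hypothetical illustration rather than the authors' implementation: it assumes radiomics features have already been extracted into a feature matrix, substitutes synthetic data for the real cohort, and uses scikit-learn's LassoCV and the xgboost package's XGBClassifier; all variable names and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the described pipeline: LASSO-based radscore + XGBoost classifier.
# Synthetic data stands in for extracted radiomics features, NIC, and age;
# all names and hyperparameters here are illustrative assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score, accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_nodules, n_radiomics = 143, 50                   # 143 nodules, as in the abstract
X_rad = rng.normal(size=(n_nodules, n_radiomics))  # placeholder radiomics features
nic = rng.normal(size=n_nodules)                   # placeholder normalized iodine concentration
age = rng.integers(20, 80, size=n_nodules).astype(float)
y = rng.integers(0, 2, size=n_nodules)             # 0 = benign, 1 = malignant (random here)

# 7:3 split of nodules into training and validation sets.
idx_train, idx_val = train_test_split(np.arange(n_nodules), test_size=0.3,
                                      stratify=y, random_state=42)

# LASSO on standardized radiomics features; the radscore is the linear
# combination of the retained (non-zero) coefficients.
scaler = StandardScaler().fit(X_rad[idx_train])
lasso = LassoCV(cv=5, random_state=42).fit(scaler.transform(X_rad[idx_train]), y[idx_train])

def radscore(X):
    return scaler.transform(X) @ lasso.coef_ + lasso.intercept_

# Combine the radscore with the spectral CT parameter (NIC) and age, then fit XGBoost.
def build_features(idx):
    return np.column_stack([radscore(X_rad[idx]), nic[idx], age[idx]])

model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                      eval_metric="logloss")
model.fit(build_features(idx_train), y[idx_train])

prob_val = model.predict_proba(build_features(idx_val))[:, 1]
print("Validation AUC:", roc_auc_score(y[idx_val], prob_val))
print("Validation accuracy:", accuracy_score(y[idx_val], prob_val > 0.5))
```

In this sketch the radscore, NIC, and age enter the classifier as three predictors, mirroring the three significant factors reported in the Results; the nomogram itself would typically be built from a logistic model on those same predictors and is not reproduced here.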