Abstract: Objective: To establish classification and grading models for gastric cancer pathological sections based on deep learning and to evaluate their performance. Methods: Datasets of gastric cancer and non-cancerous tissues for classification and grading were collected from publicly available online resources. After data augmentation, the datasets were divided into training, validation, and test sets. In the initial stage, 17 convolutional neural network (CNN) architectures were constructed and trained with uniform initial parameters to classify gastric cancer versus non-cancerous tissue. After training, recognition accuracy on the test set and training time served as evaluation indicators to comprehensively assess the different architectures, and the optimal architecture was selected for further optimization and training to build the gastric cancer classification model. The gastric cancer grading model was then built on the foundation of the classification model: 17 grading networks were trained, and suitable base models were selected according to performance indicators. Once the base models were determined, Voting and Stacking ensemble methods were applied and compared with single models to examine the effect of ensemble learning on performance and to construct the gastric cancer grading model. Results: For the classification model, the Xception network was selected as the final architecture after comparison. After parameter tuning and training, the final gastric cancer classification model achieved, on the test set, an accuracy of 98.13%, sensitivity of 98.11%, specificity of 98.11%, F1 score of 98.12%, and AUC (area under the receiver operating characteristic curve) of 0.9983.
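The five metrics reported for the classification model can be computed from a model's test-set predictions as in the following sketch; the labels and scores here are synthetic illustrations, not the study's data.

```python
# Sketch: computing accuracy, sensitivity, specificity, F1, and AUC
# for a binary cancer / non-cancer classifier with scikit-learn.
# y_true and y_score below are made-up example values.
import numpy as np
from sklearn.metrics import (accuracy_score, recall_score, f1_score,
                             roc_auc_score, confusion_matrix)

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])          # 1 = cancer
y_score = np.array([0.9, 0.8, 0.4, 0.2, 0.1, 0.3, 0.7, 0.6])
y_pred = (y_score >= 0.5).astype(int)                # threshold at 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = accuracy_score(y_true, y_pred)
sensitivity = recall_score(y_true, y_pred)           # TP / (TP + FN)
specificity = tn / (tn + fp)                         # TN / (TN + FP)
f1 = f1_score(y_true, y_pred)
auc = roc_auc_score(y_true, y_score)                 # uses raw scores
print(accuracy, sensitivity, specificity, f1, auc)
```

Note that AUC is computed from the continuous scores rather than the thresholded predictions, which is why it can remain high even when the thresholded metrics dip.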
For the grading model, the Stacking ensemble with a random forest meta-learner showed a marked improvement over the hard-voting ensemble and was selected as the final grading model, achieving an accuracy of 95.06%, sensitivity of 94.77%, specificity of 98.36%, and F1 score of 94.82%. The AUC values were 0.9994 for benign tissue, 0.9811 for poorly differentiated tubular adenocarcinoma, 0.9896 for moderately differentiated tubular adenocarcinoma, and 0.9951 for well-differentiated tubular adenocarcinoma. Conclusion: Both models demonstrated excellent recognition performance, showing that convolutional neural networks can achieve high-precision classification and grading of gastric tumor pathological images. The transfer-learning and ensemble-learning framework was successfully applied to the grading of gastric tumor images and holds promise for integration into hospital intelligent diagnostic assistance systems.
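The Voting-versus-Stacking comparison described above can be sketched with scikit-learn as follows. The base learners and synthetic four-class data are illustrative assumptions; in the study, the base models are trained CNNs rather than the classical classifiers used here.

```python
# Sketch: hard-voting ensemble vs. Stacking with a random-forest
# meta-learner on a synthetic 4-class problem (mirroring the four
# grading categories). All estimators and data are stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import (RandomForestClassifier, StackingClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

base = [("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier()),
        ("tree", DecisionTreeClassifier(random_state=0))]

# Hard voting: each base model casts one vote per sample.
voter = VotingClassifier(estimators=base, voting="hard").fit(X_tr, y_tr)

# Stacking: base-model predictions become features for a random forest.
stacker = StackingClassifier(
    estimators=base,
    final_estimator=RandomForestClassifier(random_state=0),
    cv=5).fit(X_tr, y_tr)

print("hard voting accuracy:", voter.score(X_te, y_te))
print("stacking (RF) accuracy:", stacker.score(X_te, y_te))
```

The key design difference is that hard voting weights every base model equally, whereas the stacking meta-learner can learn which base model to trust for which class, which is consistent with the improvement the study reports.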