Robert S. Ohgami, MD, PhD, Professor of Pathology, University of Utah, and Founding Vice President and Chief Medical Director, ARUP Institute for Research and Innovation, discusses the use of artificial intelligence (AI) to analyze Castleman disease (CD) histopathology.
CD is a heterogeneous group of rare lymphoproliferative disorders affecting the lymph nodes and related tissues. There are two main forms: unicentric CD and multicentric CD. Unicentric CD is a localized condition generally confined to a single set of lymph nodes, while multicentric CD is a systemic disease affecting multiple sets of lymph nodes and other tissues throughout the body. The exact underlying cause of CD is currently unknown.
Diagnosis of CD requires histopathologic interpretation of lymph node biopsies, in which key histologic features (atretic germinal centers, follicular dendritic cell prominence, vascularity, hyperplastic germinal centers, and plasmacytosis) are graded on an ordinal scale from 0 to 3. This process is subjective and often results in variability between graders. A recent analysis, presented at the 2025 American Society of Hematology meeting, evaluated whether an AI computational pathology technique (attention-based multiple instance learning; ABMIL) could automate CD grading with reliability and accuracy comparable to hematopathology experts.
A proof-of-concept ABMIL model was developed to predict slide-level CD histology scores from whole-slide images of H&E-stained lymph node tissue. Each whole-slide image was divided into tiles, and a pre-trained foundation model was used to extract tile-level embeddings, which the ABMIL model then aggregated into slide-level predictions across the five established histologic features and follicular twinning.
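The core of the approach described above, attention-based pooling of tile embeddings into a single slide-level representation, can be sketched as follows. This is a minimal illustration of the general ABMIL technique (in the style of Ilse et al.), not the study's actual model; the embedding dimensions, weight matrices, and random inputs are hypothetical placeholders.

```python
import numpy as np

def abmil_pool(tile_embeddings, V, w):
    """Attention-based MIL pooling: weight each tile by a learned
    attention score, then sum to a slide-level embedding.

    tile_embeddings: (n_tiles, d) array of foundation-model embeddings
    V: (d, h) and w: (h,) attention parameters (here random placeholders;
    in practice they are learned from slide-level labels only).
    """
    scores = np.tanh(tile_embeddings @ V) @ w      # one score per tile
    a = np.exp(scores - scores.max())
    a /= a.sum()                                   # softmax attention weights
    slide_embedding = a @ tile_embeddings          # attention-weighted sum
    return slide_embedding, a

# Hypothetical example: 12 tiles with 8-dimensional embeddings.
rng = np.random.default_rng(0)
tiles = rng.normal(size=(12, 8))
V = rng.normal(size=(8, 4))
w = rng.normal(size=4)
slide_vec, attn = abmil_pool(tiles, V, w)
```

Because supervision is only at the slide level, the attention weights `attn` indicate which tiles the model considers most relevant, which is what enables the tile-level attention visualizations discussed below.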
The dataset consisted of 154 whole-slide images featuring CD or CD-like histology that were annotated by eight hematopathologists for each feature. To evaluate model performance and interpretability, model predictions were compared to expert consensus and against the range of interobserver agreement among experts who were not CD specialists.
Significant inter-rater variability was confirmed with leave-one-out analysis of the hematopathologist graders. Model disagreement was typically less than or equal to the average hematopathologist inter-rater spread, and model predictions showed moderate concordance with expert ground-truth annotations, falling within the range of inter-rater variability seen among hematopathologists.
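The leave-one-out comparison described above can be sketched as follows: each rater's grades are compared against the consensus of the remaining raters, and the model's deviation from the full consensus is compared to that spread. The grade matrix and model predictions here are made-up toy values, not the study's data.

```python
import numpy as np

def leave_one_out_spread(grades):
    """grades: (n_raters, n_slides) array of ordinal scores (0-3).
    For each rater, compute the mean absolute difference between that
    rater's grades and the consensus (mean) of the remaining raters."""
    spreads = []
    for i in range(grades.shape[0]):
        others = np.delete(grades, i, axis=0)      # drop rater i
        consensus = others.mean(axis=0)            # leave-one-out consensus
        spreads.append(np.abs(grades[i] - consensus).mean())
    return np.array(spreads)

# Toy example: 3 raters grading 4 slides on the 0-3 ordinal scale.
grades = np.array([[2, 1, 3, 0],
                   [2, 2, 3, 1],
                   [1, 1, 2, 0]])
rater_spread = leave_one_out_spread(grades)

# Model deviation from the all-rater consensus, for comparison.
model_pred = np.array([2, 1, 3, 0])
model_dev = np.abs(model_pred - grades.mean(axis=0)).mean()
```

A model whose `model_dev` falls at or below the typical values in `rater_spread` is, by this measure, disagreeing with the consensus no more than the human graders disagree with one another.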
Visualizations of tile-level attention weights confirmed that the model attends to diagnostically relevant regions and ignores irrelevant ones, supporting biological interpretability despite weak supervision. Dr. Ohgami explains that training such AI models enables diagnostic histopathology that is reproducible, efficient, and consistent, and that gets patients to the right treatment more quickly.
For more information on CD and other rare hematologic conditions, visit https://checkrare.com/diseases/hematologic-disorders/
