Study Highlights Importance of Image Quality for Deep Learning Model Diagnostic Success
These data resulted from an investigation of the effect of varying image qualities on the diagnostic abilities of a convolutional neural network (CNN) model and clinicians.
High-quality dermoscopic images may improve the diagnostic abilities of convolutional neural network (CNN) models and clinicians, according to new findings, although high dynamic range (HDR) images simulating HDR smartphone technology led to diminished diagnostic performance in both.1
In this analysis, investigators sought to assess the impact of varying image qualities, including HDR-enhanced dermoscopic images, on the diagnostic capabilities of physicians and of a convolutional neural network model.
The research was led in part by A.I. Oloruntoba, from the School of Public Health and Preventive Medicine at Monash University in Melbourne, Australia. Oloruntoba et al. highlighted that convolutional neural networks are a deep learning model designed for visual data analysis, and that these networks have matched the diagnostic skill of dermatologists in evaluating benign and malignant skin lesions.2
“This study evaluates the potential effects of varying image qualities, including HDR-enhanced dermoscopic images on the diagnostic capabilities of [convolutional neural network] models and clinicians,” Oloruntoba and colleagues wrote.1
Background and Design Details
The study comprised 2 parts. Using a cross-sectional comparative design, investigators evaluated the diagnostic accuracy of a convolutional neural network, as well as the diagnostic accuracy, confidence levels, and clinical management decisions of dermatology professionals reviewing images of varying quality.
Between May and August 2023, the investigative team invited 42 dermatology consultants, registrars, and residents to take part in the research. Participants reviewed 101 lesions, each represented by 3 images of differing quality.
Participants reviewed approximately 150 images per session, with the 2 sessions spaced at least 6 weeks apart to minimize the likelihood of recognizing lesions based on image quality distinctions alone.
The images used in the analysis originated from the Department of Dermatology and Allergy Centre at Odense University Hospital. They had been collected between January and October 2018 as part of a teledermatology study and were predominantly of patients with Fitzpatrick skin types II and III.
All of the research team’s data, including dermoscopic images and participant information such as years of dermoscopy experience and devices used, were managed through REDCap. After reviewing each dermoscopic image, participants answered multiple-choice questions on lesion classification.
Specifically, participants assigned each lesion to 1 of 8 predefined disease categories, determined whether it was benign or malignant, rated their confidence on a 5-point scale, and selected an appropriate management strategy.
Notable Findings
The investigators found that in binary classification tasks, clinicians achieved their best diagnostic performance with high-quality images, reaching a sensitivity of 77.3%, a specificity of 63.1%, and an accuracy of 70.2%.
Similarly, the research team determined that higher-quality images yielded the best results in multi-class classification, producing the highest specificity (91.9%) and accuracy (51.5%).
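For readers unfamiliar with these metrics, the following minimal sketch shows how sensitivity, specificity, and accuracy are derived from a binary confusion matrix. The counts below are hypothetical and illustrative only; they are not the study's data.

```python
# Illustrative sketch: standard definitions of the reported metrics,
# computed from hypothetical confusion-matrix counts (NOT the study's data).
def binary_metrics(tp, fn, tn, fp):
    """Return (sensitivity, specificity, accuracy) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)               # true-positive rate: malignant lesions correctly flagged
    specificity = tn / (tn + fp)               # true-negative rate: benign lesions correctly cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp) # overall proportion of correct calls
    return sensitivity, specificity, accuracy

# Hypothetical counts for illustration.
sens, spec, acc = binary_metrics(tp=80, fn=20, tn=60, fp=40)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, accuracy={acc:.1%}")
# → sensitivity=80.0%, specificity=60.0%, accuracy=70.0%
```

Note that accuracy blends both error types, which is why the study reports sensitivity and specificity alongside it: a classifier can score a respectable accuracy while still missing a clinically unacceptable share of malignant lesions.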
Notably, while clinicians outperformed the model in binary classification tasks on lower- and medium-quality images, their performance on higher-quality images was comparable to that of the model.
However, the team added that in multi-class classification, the model had significantly outperformed physicians when assessing higher quality images (P < .01).
“Given our study’s finding that AI performance decreases with lower image quality, these models can alert clinicians when image quality may compromise diagnostic accuracy,” they wrote. “This can be particularly useful in addressing clinician confidence and reliance on image quality, ensuring more accurate assessments.”1
References
- Oloruntoba AI, Asghari-Jafarabadi M, Mar V, et al. Assessment of image quality on the diagnostic performance of clinicians and deep learning models: Cross-sectional comparative reader study. J Eur Acad Dermatol Venereol. 2024 Dec 10. doi: 10.1111/jdv.20462. Epub ahead of print. PMID: 39655640.
- Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017; 542(7639): 115–118.