Abstract
Skin cancer image classification across skin tones is challenging because skin cancer can present differently on different skin tones. This study evaluates the performance of image-only models and fusion models in skin malignancy classification. The fusion models we consider take additional patient data, such as an indicator of skin tone, and merge this information with the features produced by the image-only model. Our experiments show that fusion models perform substantially better than image-only models; in particular, we find that a form of multiplicative fusion yields the best-performing models. This finding suggests that skin tone adds predictive value in skin malignancy prediction. We further demonstrate that feature fusion reduces, but does not entirely eliminate, the disparity in model performance across patients with different skin tones.
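To make the idea concrete, multiplicative fusion can be sketched as gating the image features with a projection of the patient metadata. This is a minimal illustrative sketch, not the paper's architecture: the function name, the sigmoid gate, and the toy dimensions are all assumptions introduced here.

```python
import numpy as np

def multiplicative_fusion(image_feats, meta_feats, w_meta, b_meta):
    """Fuse image features with patient metadata multiplicatively.

    Hypothetical sketch: project the metadata vector to the image-feature
    dimension, squash it with a sigmoid, and scale the image features
    element-wise. The paper's exact fusion layer is not specified here.
    """
    gate = 1.0 / (1.0 + np.exp(-(meta_feats @ w_meta + b_meta)))  # sigmoid gate in (0, 1)
    return image_feats * gate  # element-wise (multiplicative) fusion

# Toy example: 4-dim image features, 2-dim metadata (e.g. a skin-tone indicator).
rng = np.random.default_rng(0)
img = rng.standard_normal(4)          # features from the image-only backbone
meta = np.array([1.0, 0.0])           # e.g. one-hot skin-tone group
W = rng.standard_normal((2, 4))       # metadata-to-feature projection
b = np.zeros(4)
fused = multiplicative_fusion(img, meta, W, b)
```

In practice the fused vector would feed a malignancy classification head; an additive variant would add the projected metadata instead of multiplying.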
Original language | English |
---|---|
Pages | 1-17 |
Number of pages | 17 |
Publication status | Published - 3 Jul 2024 |
Event | The 7th Medical Imaging with Deep Learning Conference, Sorbonne University Pierre and Marie Curie Campus, Paris, France. Duration: 3 Jul 2024 → 5 Jul 2024. Conference number: 7. https://2024.midl.io/ |
Conference
Conference | The 7th Medical Imaging with Deep Learning Conference |
---|---|
Abbreviated title | MIDL 2024 |
Country/Territory | France |
City | Paris |
Period | 3/07/24 → 5/07/24 |
Internet address | https://2024.midl.io/ |
Keywords
- bias reduction
- fairness evaluation
- fusion models
- malignancy classification
- multimodal learning
- patient data integration
- skin cancer