I'm a researcher in machine learning and computer vision, currently working on interpretable AI and robust adaptation of foundation models. I previously conducted research at several institutions, working with collaborators including Prof. Jaegul Choo, Dr. Steffen Schneider, Dr. Dongyoon Han, and Dr. Sungha Choi.
My work focuses on understanding how neural networks adapt to new domains and on making AI systems more reliable and interpretable. I believe AI systems should maintain robust performance under subtle distribution shifts and adapt seamlessly to new environments without exhibiting unexpected behavior.
I've developed novel interpretability tools, including PatchSAE (Patch-level Sparse Autoencoder), which extracts visual concepts from the CLIP visual encoder and presents them in a human-interpretable way, and CytoSAE (Cytology Sparse Autoencoder), which applies the PatchSAE approach to medical imaging. To address domain shift, I worked on test-time adaptation methods, including TTN (Test-Time Normalization), which adapts batch normalization statistics at test time. I have also studied robust fine-tuning and calibration of foundation models, including CaRot (Calibrated Robust Fine-Tuning).
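To give a flavor of the patch-level sparse autoencoder idea, here is a minimal PyTorch sketch. The class name, dimensions, and TopK sparsity rule are illustrative assumptions on my part, not the actual PatchSAE implementation: the core idea is to encode each patch token from a CLIP ViT layer into a large, sparse latent space whose individual dimensions tend to align with human-interpretable visual concepts.

```python
# Minimal sketch of a patch-level sparse autoencoder (illustrative only;
# dimensions, TopK sparsity, and naming are assumptions, not PatchSAE itself).
import torch
import torch.nn as nn

class PatchSparseAutoencoder(nn.Module):
    def __init__(self, d_model=768, n_latents=16384, k=32):
        super().__init__()
        self.k = k  # number of active latents kept per patch token
        self.encoder = nn.Linear(d_model, n_latents)
        self.decoder = nn.Linear(n_latents, d_model, bias=False)
        self.pre_bias = nn.Parameter(torch.zeros(d_model))

    def forward(self, patch_tokens):
        # patch_tokens: (batch, n_patches, d_model) activations taken from
        # a CLIP ViT layer; each patch token is encoded independently.
        z = self.encoder(patch_tokens - self.pre_bias)
        # TopK sparsity: keep only the k strongest latents per token,
        # zeroing the rest so each patch is explained by few concepts.
        topk = torch.topk(z, self.k, dim=-1)
        z_sparse = torch.zeros_like(z).scatter(
            -1, topk.indices, topk.values.relu()
        )
        recon = self.decoder(z_sparse) + self.pre_bias
        return recon, z_sparse

sae = PatchSparseAutoencoder()
tokens = torch.randn(2, 196, 768)       # e.g. a 14x14 patch grid from ViT-B
recon, codes = sae(tokens)
loss = (recon - tokens).pow(2).mean()   # reconstruction objective
```

Trained this way, each latent dimension can be inspected by collecting the image patches that activate it most strongly, which is what makes the extracted concepts human-interpretable.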
I'm particularly interested in bridging the gap between model performance and interpretability, ensuring that AI systems are not only accurate but also trustworthy and understandable.