Jaeyoung Lee


I aim to elicit model behavior through the lens of interpretability, which I refer to as Interpretability + X, where X includes multimodality, agents, training, and alignment. Previously, I worked on multimodal language models, with a focus on reasoning, fact-checking, and retrieval.

I received my B.S. in Computer Science and Engineering from Seoul National University. I also worked as a software engineer for three years at Hyperconnect (acquired by Match Group), where I developed a human-in-the-loop AI content moderation system.

Email / Scholar / X / LinkedIn


Publications

  1. AAAI (Oral)
    Do Language Models Associate Sound with Meaning? A Multimodal Study of Sound Symbolism
    Jinhong Jeong*, Sunghyun Lee*, Jaeyoung Lee, Seonah Han, and Youngjae Yu
    AAAI (Oral), 2025
  2. Preprint
    Learning to Point Visual Tokens for Multimodal Grounded Reasoning
    Jiwan Chung, Junhyeok Kim, Siyeol Kim, Jaeyoung Lee, Minsoo Kim, and Youngjae Yu
    arXiv, 2025
  3. NeurIPS
    AI Debate Aids Assessment of Controversial Claims
    Salman Rahman, Sheriff Issaka, Ashima Suvarna, Genglin Liu, James Shiffer, Jaeyoung Lee, Md Rizwan Parvez, Hamid Palangi, Shi Feng, Nanyun Peng, Yejin Choi, Julian Michael, Liwei Jiang, and Saadia Gabriel
    NeurIPS, 2025
  4. EMNLP
    How to Train Your Fact Verifier: Knowledge Transfer with Multimodal Open Models
    Jaeyoung Lee, Ximing Lu, Jack Hessel, Faeze Brahman, Youngjae Yu, Yonatan Bisk, Yejin Choi, and Saadia Gabriel
    EMNLP Findings, 2024
  5. CVPR Workshops
    FS-NCSR: Increasing Diversity of the Super-Resolution Space via Frequency Separation and Noise-Conditioned Normalizing Flow
    Kiung Song, Dongseok Shim, Kangwook Kim, Jaeyoung Lee, and Younggeun Kim
    CVPR Workshops, 2022