publications
This page lists only papers published at international conferences.
2025
- [ACL 2025] Enabling Chatbots with Eyes and Ears: An Immersive Multimodal Conversation System for Dynamic Interactions. Jihyoung Jang, Minwook Bae, Minji Kim, Dilek Hakkani-Tur, and Hyounghun Kim. arXiv preprint arXiv:2506.00421, May 2025.
As chatbots continue to evolve toward human-like, real-world interactions, multimodality remains an active area of research and exploration. So far, efforts to integrate multimodality into chatbots have primarily focused on image-centric tasks, such as visual dialogue and image-based instructions, placing emphasis on the "eyes" of human perception while neglecting the "ears", namely auditory aspects. Moreover, these studies often center around static interactions that focus on discussing the modality rather than naturally incorporating it into the conversation, which limits the richness of simultaneous, dynamic engagement. Furthermore, while multimodality has been explored in multi-party and multi-session conversations, task-specific constraints have hindered its seamless integration into dynamic, natural conversations. To address these challenges, this study aims to equip chatbots with "eyes and ears" capable of more immersive interactions with humans. As part of this effort, we introduce a new multimodal conversation dataset, Multimodal Multi-Session Multi-Party Conversation (M^3C), and propose a novel multimodal conversation model featuring multimodal memory retrieval. Our model, trained on the M^3C, demonstrates the ability to seamlessly engage in long-term conversations with multiple speakers in complex, real-world-like settings, effectively processing visual and auditory inputs to understand and respond appropriately. Human evaluations highlight the model’s strong performance in maintaining coherent and dynamic interactions, demonstrating its potential for advanced multimodal conversational agents.
@article{jang2025enabling,
  title         = {Enabling Chatbots with Eyes and Ears: An Immersive Multimodal Conversation System for Dynamic Interactions},
  author        = {Jang, Jihyoung and Bae, Minwook and Kim, Minji and Hakkani-Tur, Dilek and Kim, Hyounghun},
  journal       = {arXiv preprint arXiv:2506.00421},
  year          = {2025},
  month         = may,
  eprint        = {2506.00421},
  archiveprefix = {arXiv},
  primaryclass  = {cs.CL},
  note          = {ACL 2025},
}
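For illustration only: the abstract describes a conversation model with multimodal memory retrieval over what the chatbot has seen and heard. The toy Python sketch below shows one generic way such retrieval could work (store unit-norm embeddings of observed images and audio, then retrieve the most similar memory for the current dialogue context by cosine similarity). The class names, placeholder embeddings, and overall design are assumptions made for this sketch; it is not the authors' M^3C model or code.

```python
"""Toy sketch of multimodal memory retrieval (illustration only).

Each image or audio clip observed during a conversation is stored as an
embedding in a shared space; at response time, the dialogue context is
embedded and the closest memories are retrieved by cosine similarity.
A real system would use pretrained vision/audio/text encoders; here the
embeddings are hand-written placeholder vectors.
"""
from dataclasses import dataclass, field
import numpy as np


@dataclass
class MemoryItem:
    modality: str          # "image" or "audio"
    description: str       # human-readable label, for the demo output only
    embedding: np.ndarray  # unit-norm vector in the shared embedding space


@dataclass
class MultimodalMemory:
    items: list = field(default_factory=list)

    def add(self, item: MemoryItem) -> None:
        self.items.append(item)

    def retrieve(self, query_embedding: np.ndarray, top_k: int = 1):
        """Return the top_k stored items most similar to the query (cosine similarity)."""
        scores = [float(query_embedding @ it.embedding) for it in self.items]
        order = np.argsort(scores)[::-1][:top_k]
        return [(self.items[i], scores[i]) for i in order]


def unit(v):
    """Normalize a vector to unit length so dot product equals cosine similarity."""
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)


if __name__ == "__main__":
    memory = MultimodalMemory()
    # Placeholder embeddings; a real encoder would compute these from pixels / waveforms.
    memory.add(MemoryItem("image", "photo of a golden retriever", unit([0.9, 0.1, 0.0])))
    memory.add(MemoryItem("audio", "clip of rain on a window", unit([0.0, 0.2, 0.9])))

    # Dialogue context: "What breed was that dog you showed me?" (placeholder embedding)
    query = unit([0.8, 0.2, 0.1])
    best, score = memory.retrieve(query, top_k=1)[0]
    print(f"retrieved {best.modality}: {best.description} (score={score:.2f})")
```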
2024
- [EMNLP 2025 (Findings)] Revealing the Inherent Instructability of Pre-Trained Language Models. Seokhyun An, Minji Kim, and Hyounghun Kim. arXiv preprint arXiv:2410.02465, Oct 2024.
Instruction tuning – supervised fine-tuning using instruction-response pairs – is a key step in making pre-trained large language models (LLMs) instructable. Meanwhile, LLMs perform multitask learning during their pre-training, acquiring extensive knowledge and capabilities. We hypothesize that the pre-training stage can enable them to develop the ability to comprehend and address instructions. To verify this, we propose Response Tuning (RT), which removes the instruction and its corresponding mapping to the response from instruction tuning. Instead, it focuses solely on establishing a response distribution. Our experiments demonstrate that RT models, trained only on responses, can effectively respond to a wide range of instructions akin to their instruction-tuned counterparts. In addition, we observe that the models can recognize and reject unsafe queries after learning a safety policy only from the response data. Furthermore, we find that these observations extend to an in-context learning setting. These findings support our hypothesis, highlighting the extensive inherent capabilities of pre-trained LLMs.
@article{an2024revealing,
  title         = {Revealing the Inherent Instructability of Pre-Trained Language Models},
  author        = {An, Seokhyun and Kim, Minji and Kim, Hyounghun},
  journal       = {arXiv preprint arXiv:2410.02465},
  year          = {2024},
  month         = oct,
  eprint        = {2410.02465},
  archiveprefix = {arXiv},
  primaryclass  = {cs.CL},
  note          = {EMNLP 2025 (Findings)},
}
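For illustration only: per the abstract, Response Tuning (RT) removes the instruction and its mapping to the response, training only on the response distribution. The minimal sketch below contrasts how a standard instruction-tuning example and an RT example might be formatted. The Alpaca-style "### Instruction:" / "### Response:" template and the function names are assumptions for this sketch, not the authors' actual data pipeline.

```python
"""Minimal sketch contrasting instruction-tuning vs. Response Tuning (RT) examples,
based only on the abstract above; not the authors' code."""


def build_instruction_tuning_example(instruction: str, response: str) -> dict:
    """Standard supervised fine-tuning: the model conditions on the instruction and
    learns the instruction -> response mapping (loss typically on response tokens)."""
    return {
        "prompt": f"### Instruction:\n{instruction}\n\n### Response:\n",
        "completion": response,
    }


def build_response_tuning_example(response: str) -> dict:
    """Response Tuning as described in the abstract: the instruction is dropped,
    so training only establishes a distribution over responses."""
    return {"prompt": "### Response:\n", "completion": response}


if __name__ == "__main__":
    instruction = "Summarize the water cycle in one sentence."
    response = "Water evaporates, condenses into clouds, and returns as precipitation."
    print(build_instruction_tuning_example(instruction, response))
    print(build_response_tuning_example(response))
```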