
Jinsu Yoo

I'm a Ph.D. student at The Ohio State University, advised by Wei-Lun (Harry) Chao.

My research develops affordable and robust models for 3D spatial intelligence, with a focus on perception, enabling reliable intelligent systems under real-world constraints such as limited sensor data, diverse environments, and safety-critical requirements.

I'm currently seeking a research internship; any opportunities or referrals would be greatly appreciated. Please feel free to reach out!


📰 News


  • 2025.07: GRAFT-Stereo is available on arXiv.
  • 2025.07: R&B-POP is accepted to ICCV DriveX Workshop as an oral presentation.
  • 2025.04: SST is accepted to CVPR CV4Animals Workshop as an oral presentation.
  • 2025.02: TYP is accepted to CVPR 2025.
  • 2025.01: R&B-POP is accepted to ICLR 2025.
  • 2024.09: Nominated as an Outstanding Reviewer in ECCV 2024.
  • 2024.05: One paper on video super-resolution accepted to Pattern Recognition.
  • 2023.07: One paper on video inpainting accepted to ICCV 2023.
  • 2023.06: Awarded a Study Abroad Scholarship by my alma mater.
  • 2023.04: I'll be joining Harry Chao's group at OSU this fall as a PhD student.
  • 2022.11: Nominated as an Outstanding Reviewer in ECCV 2022.
  • 2022.10: One paper on super-resolution accepted to WACV 2023.
  • 2022.06: One paper on vision transformers accepted to IROS 2022.
  • 2021.07: Joined LG AI Research as a research intern.
  • 2020.07: One paper on super-resolution accepted to ECCV 2020.


🧐 Research


continual-unlearning
Justin Lee*, Zheda Mai*, Jinsu Yoo, Chongyu Fan, Cheng Zhang, Wei-Lun Chao
arXiv 2025
Study of continual unlearning for text-to-image diffusion models with regularizers that prevent drift and maintain semantic fidelity.
graftstereo
Jinsu Yoo, Sooyoung Jeon, Zanming Huang, Tai-Yu Pan, Wei-Lun Chao
arXiv 2025
Through an analysis of RAFT-Stereo's internal mechanism, we show that it can be effectively adapted for LiDAR-guided stereo with minimal architectural changes.
sst
Zhenyang Feng, ..., Jinsu Yoo, ..., Wei-Lun Chao (25 authors)
CV4Animals@CVPR 2025 (Oral)
Label-efficient fine-grained segmentation for biological specimen images.
typ
Tai-Yu Pan, Sooyoung Jeon, Mengdi Fan, Jinsu Yoo, Zhenyang Feng, Mark Campbell, Kilian Q Weinberger, Bharath Hariharan, Wei-Lun Chao
CVPR 2025
Diffusion-based point cloud generation to synthesize collaborative driving data.
rnb-pop
Jinsu Yoo, Zhenyang Feng, Tai-Yu Pan, Yihong Sun, Cheng Perng Phoo, Xiangyu Chen, Mark Campbell, Kilian Q Weinberger, Bharath Hariharan, Wei-Lun Chao
ICLR 2025; DriveX@ICCV 2025 (Oral); X-Sense@ICCV 2025
A new way to build ego-vehicle 3D object detectors: learning from the predictions of nearby expert agents.
ssa
Jinsu Yoo, Jihoon Nam, Sungyong Baik, Tae Hyun Kim
Pattern Recognition 2024
Restored video frames can be used as pseudo-labels at test time.
savit
Eunhye Lee*, Jinsu Yoo*, Yunjeong Yang, Sungyong Baik, Tae Hyun Kim
ICCV 2023
Combining semantic maps with video inpainting helps produce better results.
Jinsu Yoo, Taehoon Kim, Sihaeng Lee, Seung Hwan Kim, Honglak Lee, Tae Hyun Kim
WACV 2023
Combining CNN and ViT features improves image restoration.
lga
Eojindl Lee*, Sihaeng Lee*, Janghyeon Lee, Jinsu Yoo, Honglak Lee, Seung Hwan Kim
IROS 2022
Flexible and versatile attention mechanism for dense prediction.
Seobin Park*, Jinsu Yoo*, Donghyeon Cho, Jiwon Kim, Tae Hyun Kim
ECCV 2020
Meta-learning SR networks allows the model to adapt efficiently to each test image.


📝 Service
Conference Reviewer: CVPR, ICCV, ECCV, NeurIPS, WACV, ACCV

🏆 Outstanding Reviewer in ECCV 2022, 2024



😎 Misc

🏃 I enjoy running. In my free time I mostly run on a treadmill, and I have occasionally participated in local road races. Someday I hope to complete all six World Marathon Majors (Tokyo, Boston, London, Berlin, Chicago, and NYC) and earn the Six Star medal! Here are my (selected) records so far:

Half 1:59:37 (Seoul, 2016), 10km 54:58 (Seoul, 2023), 10km 57:20 (Hot Chocolate Run - Columbus, 2023)


Template inspired by Jon Barron and Chris Agia. This page has been visited several times since March 10, 2023! 🥂