
Jinsu Yoo

I'm a Ph.D. student at The Ohio State University, where I'm fortunate to be advised by Wei-Lun (Harry) Chao.

My goal is to build autonomous robots with minimal human supervision. My current research lies at the intersection of computer vision, machine learning, and autonomous driving. I develop label-efficient, scalable, and robust 3D models, spanning perception to planning, that operate reliably under real-world constraints such as limited sensor data and diverse environments.

CV (2025.07)

📰 News


  • 2025.07: GRAFT-Stereo is available on arXiv.
  • 2025.07: R&B-POP is accepted to ICCV DriveX Workshop as an oral presentation.
  • 2025.04: SST is accepted to CVPR CV4Animals Workshop as an oral presentation.
  • 2025.02: TYP is accepted to CVPR 2025.
  • 2025.01: R&B-POP is accepted to ICLR 2025.
  • 2024.09: Recognized as an Outstanding Reviewer at ECCV 2024.
  • 2024.05: One paper on video super-resolution accepted to Pattern Recognition.
  • 2023.07: One paper on video inpainting accepted to ICCV 2023.
  • 2023.06: Awarded a Study Abroad Scholarship from my alma mater.
  • 2023.04: I'll be joining Harry Chao's group at OSU this fall as a Ph.D. student.
  • 2022.11: Recognized as an Outstanding Reviewer at ECCV 2022.
  • 2022.10: One paper on super-resolution accepted to WACV 2023.
  • 2022.06: One paper on vision transformers accepted to IROS 2022.
  • 2021.07: Joined LG AI Research as a research intern.
  • 2020.07: One paper on super-resolution accepted to ECCV 2020.


🧐 Research


graftstereo
Leveraging Sparse LiDAR for RAFT-Stereo: A Depth Pre-Fill Perspective
Jinsu Yoo, Sooyoung Jeon, Zanming Huang, Tai-Yu Pan, Wei-Lun Chao
arXiv preprint
arXiv / project page
tl;dr: Through an analysis of RAFT-Stereo's internal mechanism and minimal architectural changes, we show that it can be effectively adapted for LiDAR-guided stereo matching.

sst
Static Segmentation by Tracking: A Frustratingly Label-Efficient Approach to Fine-Grained Segmentation
Zhenyang Feng, Zihe Wang, Saul Ibaven Bueno, Tomasz Frelek, Advikaa Ramesh, Jingyan Bai, Lemeng Wang, Zanming Huang, Jianyang Gu, Jinsu Yoo, Tai-Yu Pan, Arpita Chowdhury, Michelle Ramirez, Elizabeth G Campolongo, Matthew J Thompson, Christopher G Lawrence, Sydne Record, Neil Rosser, Anuj Karpatne, Daniel Rubenstein, Hilmar Lapp, Charles V Stewart, Tanya Berger-Wolf, Yu Su, Wei-Lun Chao
CVPR 2025 CV4Animals Workshop (Oral)
arXiv / code
tl;dr: A label-efficient approach to fine-grained image segmentation in the biological domain.

typ
Transfer Your Perspective: Controllable 3D Generation from Any Viewpoint in a Driving Scene
Tai-Yu Pan, Sooyoung Jeon, Mengdi Fan, Jinsu Yoo, Zhenyang Feng, Mark Campbell, Kilian Q Weinberger, Bharath Hariharan, Wei-Lun Chao
CVPR 2025
arXiv
tl;dr: Diffusion-based point cloud generation to synthesize collaborative driving data.

rnb-pop
Learning 3D Perception from Others' Predictions
Jinsu Yoo, Zhenyang Feng, Tai-Yu Pan, Yihong Sun, Cheng Perng Phoo, Xiangyu Chen, Mark Campbell, Kilian Q Weinberger, Bharath Hariharan, Wei-Lun Chao
ICLR 2025; ICCV 2025 DriveX Workshop (Oral)
arXiv / project page / code
tl;dr: A new way to build 3D object detectors: learning from the predictions of nearby agents.

ssa
Looking Beyond Input Frames: Self-Supervised Adaptation for Video Super-Resolution
Jinsu Yoo, Jihoon Nam, Sungyong Baik, Tae Hyun Kim
Pattern Recognition 2024
paper / code
tl;dr: Restored video frames can be used as pseudo-labels during test time.

savit
Semantic-Aware Dynamic Parameter for Video Inpainting Transformer
Eunhye Lee*, Jinsu Yoo*, Yunjeong Yang, Sungyong Baik, Tae Hyun Kim
ICCV 2023
arXiv / open access
tl;dr: Combining semantic maps with video inpainting helps produce better results.

Enriched CNN-Transformer Feature Aggregation Networks for Super-Resolution
Jinsu Yoo, Taehoon Kim, Sihaeng Lee, Seung Hwan Kim, Honglak Lee, Tae Hyun Kim
WACV 2023
arXiv / open access / code
tl;dr: Leveraging CNN and ViT features together gives better results for image restoration.

lga
Fully Convolutional Transformer with Local-Global Attention
Eojindl Lee*, Sihaeng Lee*, Janghyeon Lee, Jinsu Yoo, Honglak Lee, Seung Hwan Kim
IROS 2022
paper
tl;dr: A flexible and versatile attention mechanism for dense prediction tasks.

Fast Adaptation to Super-Resolution Networks via Meta-Learning
Seobin Park*, Jinsu Yoo*, Donghyeon Cho, Jiwon Kim, Tae Hyun Kim
ECCV 2020
tl;dr: Meta-learning the SR networks allows the model to adapt efficiently to each test image.



📝 Service
Conference Reviewer: CVPR, ICCV, ECCV, NeurIPS, WACV, ACCV

🏆 Outstanding Reviewer at ECCV 2022 and 2024



😎 Misc

🏃 I enjoy running. In my free time, I tend to run on a treadmill, and I have occasionally participated in local marathons. Someday I hope to earn all six World Marathon Majors stars (Tokyo, Boston, London, Berlin, Chicago, and NYC)! Here are my (selected) records so far:

Half 1:59:37 (Seoul, 2016), 10km 54:58 (Seoul, 2023), 10km 57:20 (Hot Chocolate Run - Columbus, 2023)


Template inspired by Jon Barron and Chris Agia. This page has been visited several times since March 10, 2023! 🥂