
Jinsu Yoo

I'm a second-year Ph.D. student at The Ohio State University, fortunately advised by Wei-Lun (Harry) Chao. I'm broadly interested in computer vision and machine learning, and their applications to autonomous driving.

Previously, I received my M.S. and B.S. degrees from Hanyang University, advised by Tae Hyun Kim.

I have interned at LG AI Research.

CV

📰 News


  • 10/2024: R&B-POP, a new way to learn 3D detectors, is on arXiv.
  • 09/2024: Nominated as an Outstanding Reviewer in ECCV 2024.
  • 05/2024: One paper on video super-resolution accepted to Pattern Recognition! 🎉
  • 07/2023: One paper on video inpainting accepted to ICCV 2023! 🎉
  • 06/2023: Awarded Study Abroad Scholarship from my alma mater! 🎊
  • 04/2023: I'll be joining Harry Chao's group at OSU this fall as a Ph.D. student! 🛫
  • 11/2022: Nominated as an Outstanding Reviewer in ECCV 2022.
  • 10/2022: One paper on super-resolution accepted to WACV 2023.
  • 06/2022: One paper on vision transformers accepted to IROS 2022.
  • 07/2021: Joined LG AI Research as a research intern.
  • 07/2020: One paper on super-resolution accepted to ECCV 2020.


🧐 Research


rnb-pop
Learning 3D Perception from Others' Predictions
Jinsu Yoo, Zhenyang Feng, Tai-Yu Pan, Yihong Sun, Cheng Perng Phoo, Xiangyu Chen, Mark Campbell, Kilian Q. Weinberger, Bharath Hariharan, Wei-Lun Chao
preprint / arXiv / project website
tl;dr: A new scenario for building 3D object detectors: learning from the predictions of a nearby unit equipped with an accurate detector.

ssa
Looking Beyond Input Frames: Self-Supervised Adaptation for Video Super-Resolution
Jinsu Yoo, Jihoon Nam, Sungyong Baik, Tae Hyun Kim
Pattern Recognition 2024 / paper / code
tl;dr: Restored test video frames can be used as pseudo-labels to further improve VSR network performance.

savit
Semantic-Aware Dynamic Parameter for Video Inpainting Transformer
Eunhye Lee*, Jinsu Yoo*, Yunjeong Yang, Sungyong Baik, Tae Hyun Kim
ICCV 2023 / arXiv / open access
tl;dr: Combining semantic maps with video inpainting helps produce restored frames with clearer semantic structures and textures.

Enriched CNN-Transformer Feature Aggregation Networks for Super-Resolution
Jinsu Yoo, Taehoon Kim, Sihaeng Lee, Seung Hwan Kim, Honglak Lee, Tae Hyun Kim
WACV 2023 / arXiv / open access / code
tl;dr: Leveraging rich CNN and multi-scale ViT features together lets the SR model restore the given image better.

lga
Fully Convolutional Transformer with Local-Global Attention
Eojindl Lee*, Sihaeng Lee*, Janghyeon Lee, Jinsu Yoo, Honglak Lee, Seung Hwan Kim
IROS 2022 / paper
tl;dr: A flexible attention mechanism that can perform both upsampling and downsampling for dense prediction.

Fast Adaptation to Super-Resolution Networks via Meta-Learning
Seobin Park*, Jinsu Yoo*, Donghyeon Cho, Jiwon Kim, Tae Hyun Kim
ECCV 2020 / arXiv / conference version / code / video
tl;dr: Meta-learning can be used to train SR networks that efficiently adapt to each test image.



📝 Service
Conference Reviewer: CVPR, ICCV, ECCV, WACV, ACCV

🏆 Outstanding Reviewer in ECCV 2022, 2024



😎 Misc

🏃 I enjoy running. In my free time, I tend to run on a treadmill, and I have occasionally participated in local marathons. Someday I hope to earn all six stars of the World Marathon Majors (Tokyo, Boston, London, Berlin, Chicago, and NYC)! Here are my (selected) records so far:

Half 1:59:37 (Seoul, 2016), 10km 54:58 (Seoul, 2023), 10km 57:20 (Hot Chocolate Run - Columbus, 2023)


Template inspired by Jon Barron and Chris Agia. This page has been visited several times since March 10, 2023! 🥂