Kailin Li | ζŽζΊζž—

I'm a third-year Ph.D. student in the Department of Computer Science at Shanghai Jiao Tong University, where I am a member of the SJTU MVIG lab under the supervision of Prof. Cewu Lu. Before that, I received my Bachelor's degree in Computer Science from Huazhong University of Science and Technology in 2019. My research interests include Computer Vision, 3D Vision, and Robotics.

Email  /  Google Scholar  /  Github

profile photo
Research

DART: Articulated Hand Model with Diverse Accessories and Rich Textures
Daiheng Gao*, Yuliang Xiu*, Kailin Li*, Lixin Yang*,
Feng Wang, Peng Zhang, Bang Zhang, Cewu Lu, Ping Tan
(*=equal contribution)
NeurIPS, 2022 - Datasets and Benchmarks Track
project (dataset) / paper / arxiv / code / video

In this paper, we extend MANO with more Diverse Accessories and Rich Textures, namely DART. DART comprises 325 exquisite hand-crafted texture maps that vary in appearance and cover different kinds of blemishes, makeup, and accessories. We also generate a large-scale (800K), diverse, and high-fidelity set of hand images paired with perfectly aligned 3D labels, called DARTset.

OakInk: A Large-scale Knowledge Repository for Understanding Hand-Object Interaction
Lixin Yang*, Kailin Li*, Xinyu Zhan*, Fei Wu, Anran Xu, Liu Liu, Cewu Lu
(*=equal contribution)
CVPR, 2022
project / arxiv / code / dataset

Learning how humans manipulate objects requires machines to acquire knowledge from two perspectives: one for understanding object affordances and the other for learning human interactions based on those affordances. In this work, we propose a multi-modal and richly annotated knowledge repository, OakInk, for the visual and cognitive understanding of hand-object interactions. Check our website for more details!

ArtiBoost: Boosting Articulated 3D Hand-Object Pose Estimation via Online Exploration and Synthesis
Lixin Yang*, Kailin Li*, Xinyu Zhan, Jun Lv, Wenqiang Xu, Jiefeng Li, Cewu Lu
(*=equal contribution)
CVPR, 2022   (Oral Presentation)
project / arxiv / code

We propose ArtiBoost, a lightweight online data enrichment method that boosts articulated hand-object pose estimation from the data perspective. During training, ArtiBoost alternately performs data exploration and synthesis. Even with a simple baseline, our method boosts it to outperform the previous SOTA on several hand-object benchmarks.

CPF: Learning a Contact Potential Field to Model the Hand-Object Interaction
Lixin Yang, Xinyu Zhan, Kailin Li, Wenqiang Xu, Jiefeng Li, Cewu Lu
ICCV, 2021
project / paper / supp / arxiv / code / Zhihu

We highlight contact in the hand-object interaction modeling task by proposing an explicit representation named Contact Potential Field (CPF). In CPF, we treat each contacting hand-object vertex pair as a spring-mass system; the whole system thus forms a potential field whose elastic energy is minimal at the grasp position.
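
As a rough illustration of the spring-mass analogy (using illustrative symbols, not necessarily the paper's notation), the total elastic energy of a set of contacting vertex pairs can be sketched as:

```latex
E(\mathcal{C}) \;=\; \sum_{(i,j)\,\in\,\mathcal{C}} \tfrac{1}{2}\, k_{ij}\, \lVert \mathbf{v}_i - \mathbf{o}_j \rVert^2
```

where \(\mathcal{C}\) is the set of contacting hand-object vertex pairs, \(\mathbf{v}_i\) a hand vertex, \(\mathbf{o}_j\) its paired object vertex, and \(k_{ij}\) an assumed per-pair stiffness. Under this sketch, a physically plausible grasp corresponds to a configuration minimizing \(E\).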