Guan Ming Lim

I'm a researcher at the Rehabilitation Research Institute of Singapore (RRIS), Nanyang Technological University (NTU), where I work on markerless motion capture for hand and object tracking.

Google Scholar  /  GitHub  /  YouTube

Research

I'm interested in combining research on computer vision, computer graphics and machine learning to understand human motion. Much of my research is on hand tracking from images.

Real-time Tracking of Handheld Object from Color or Depth Images
Guan Ming Lim, Prayook Jatesiktat, Wei Tech Ang
EMBC, 2023
project page

Real-time, accurate, and robust tracking of a rigid object using image-based methods such as efficient projective point correspondence and precomputed sparse viewpoint information.
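To make the idea of projective point correspondence concrete, here is a minimal, generic sketch of projective data association for the depth-image case (my own illustration under assumed pinhole-camera conventions, not the implementation from the paper): each 3D model point is projected into the depth image with the camera intrinsics, and the depth value found at that pixel is back-projected to give the corresponding observed 3D point.

import numpy as np

def projective_correspondence(model_pts, depth, K):
    # model_pts: (N, 3) model points already expressed in the camera frame (z > 0)
    # depth:     (H, W) depth image in metres; K: 3x3 pinhole intrinsics
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u = np.round(model_pts[:, 0] * fx / model_pts[:, 2] + cx).astype(int)
    v = np.round(model_pts[:, 1] * fy / model_pts[:, 2] + cy).astype(int)
    h, w = depth.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    z = np.where(inside, depth[v.clip(0, h - 1), u.clip(0, w - 1)], 0.0)
    ok = inside & (z > 0)                          # keep valid correspondences only
    matches = np.full_like(model_pts, np.nan, dtype=np.float64)
    matches[ok, 0] = (u[ok] - cx) * z[ok] / fx     # back-project the depth pixel
    matches[ok, 1] = (v[ok] - cy) * z[ok] / fy
    matches[ok, 2] = z[ok]
    return matches, ok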

Wireless Pressure Sensor Array Module for Sensorized Object
Guan Ming Lim, Prayook Jatesiktat, Christopher Wee Keong Kuah, Wei Tech Ang
EMBC, 2023
project page

A low-cost, modular, and wireless pressure sensor array that generates a real-time pressure distribution map for object pose estimation and grasp classification.
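For illustration only, the pressure distribution map can be thought of as the raw taxel readings reshaped into a small 2D grid; everything below (grid size, value range, function name) is a hypothetical sketch rather than the module's actual firmware or API.

import numpy as np

def to_pressure_map(readings, rows=4, cols=4, max_raw=1023):
    # Hypothetical 4x4 taxel grid streamed as 16 raw ADC readings;
    # normalizing and reshaping gives the 2D pressure distribution map
    # consumed by downstream pose-estimation / grasp-classification models.
    raw = np.asarray(readings, dtype=np.float32)
    return (raw / max_raw).reshape(rows, cols)

pressure_map = to_pressure_map(np.random.randint(0, 1024, size=16))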

MobileHand: Real-Time 3D Hand Shape and Pose Estimation from Color Image
Guan Ming Lim, Prayook Jatesiktat, Wei Tech Ang
ICONIP, 2020
project page / code / video

Real-time estimation of 3D hand shape and pose from a single RGB image running at over 110 Hz on a GPU or 75 Hz on a CPU.
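As a rough illustration of the encoder-regressor pattern behind this kind of model, the sketch below uses a MobileNet-style backbone that regresses low-dimensional hand pose and shape parameters, which a parametric hand model such as MANO would turn into a mesh. Layer sizes and parameter counts are assumptions for illustration, not the published architecture.

import torch
import torch.nn as nn
import torchvision.models as models

class HandParamRegressor(nn.Module):
    # Illustrative sketch: image features -> hand pose/shape parameters.
    def __init__(self, n_pose=45, n_shape=10):
        super().__init__()
        self.n_pose = n_pose
        self.backbone = models.mobilenet_v3_small(weights=None)
        feat_dim = self.backbone.classifier[0].in_features    # 576 pooled features
        self.backbone.classifier = nn.Identity()               # keep features only
        self.regressor = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, n_pose + n_shape),
        )

    def forward(self, img):                     # img: (B, 3, 224, 224) RGB crop
        params = self.regressor(self.backbone(img))
        return params[:, :self.n_pose], params[:, self.n_pose:]   # pose, shape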

Camera-based Hand Tracking using a Mirror-based Multi-view Setup
Guan Ming Lim, Prayook Jatesiktat, Christopher Wee Keong Kuah, Wei Tech Ang
EMBC, 2020
project page / code / video

A camera-based system for markerless hand pose estimation using a mirror-based multi-view setup, which avoids the complexity of synchronizing multiple cameras and reduces occlusion.
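The geometric trick that makes a single camera plus mirrors work is that a planar mirror provides an extra virtual viewpoint: reflecting 3D geometry about the mirror plane maps the mirrored image to what a virtual camera on the other side of the mirror would see. A minimal sketch of that reflection (the plane parameters are placeholders, and this is not the paper's calibration procedure):

import numpy as np

def reflect_about_plane(points, n, d):
    # Reflect 3D points about the plane n·x + d = 0 (n is a unit normal).
    n = n / np.linalg.norm(n)
    signed_dist = points @ n + d
    return points - 2.0 * signed_dist[:, None] * n

# Placeholder mirror plane x = 0.5 m: a point at x = 0.2 reflects to x = 0.8.
pts = np.array([[0.2, 0.1, 1.0], [0.3, -0.1, 0.8]])
mirrored = reflect_about_plane(pts, n=np.array([1.0, 0.0, 0.0]), d=-0.5)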

Hand and Object Segmentation from Depth Image using Fully Convolutional Network
Guan Ming Lim, Prayook Jatesiktat, Christopher Wee Keong Kuah, Wei Tech Ang
EMBC, 2019
project page / code / video

Semantic segmentation of body parts (e.g., hand and arm) and objects from a depth image. A fully convolutional network is trained on synthetic data and shows some generalization to real data.
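A fully convolutional network keeps the spatial layout end to end: strided convolutions downsample the depth map and transposed convolutions upsample back to per-pixel class logits. The toy PyTorch sketch below uses made-up layer sizes and an assumed four-class labelling (background / hand / arm / object); it shows the pattern, not the trained network from the paper.

import torch
import torch.nn as nn

class TinyDepthFCN(nn.Module):
    # Toy fully convolutional network: 1-channel depth in, per-pixel logits out.
    def __init__(self, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, n_classes, 4, stride=2, padding=1),
        )

    def forward(self, depth):                          # depth: (B, 1, H, W)
        return self.decoder(self.encoder(depth))       # logits: (B, n_classes, H, W)

logits = TinyDepthFCN()(torch.randn(2, 1, 240, 320))   # -> (2, 4, 240, 320)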

SDF-Net: Real-Time Rigid Object Tracking Using a Deep Signed Distance Network
Prayook Jatesiktat, Ming Jeat Foo, Guan Ming Lim, Wei Tech Ang
ICCS, 2018
supplementary material

SDF-Net is a simple multilayer perceptron (with a memory footprint of < 10 kB) that models the signed distance function (SDF) of a rigid object, enabling real-time tracking (≈ 1.29 ms per frame on a single CPU core) with a single depth camera.
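Representing an object's SDF with a tiny MLP is easy to sketch: the network maps a 3D point (in the object frame) to its approximate signed distance from the surface, which is the quantity a depth-based tracker needs to evaluate many times per frame. The layer widths below are an assumption chosen only to stay under the ~10 kB budget mentioned above, not the paper's exact architecture.

import torch
import torch.nn as nn

class TinySDF(nn.Module):
    # Tiny MLP: 3D point -> approximate signed distance to the object surface.
    # With widths 3-32-32-1 it has ~1.2k parameters (~5 kB in float32).
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, xyz):                    # xyz: (N, 3) points in the object frame
        return self.net(xyz).squeeze(-1)

model = TinySDF()
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params} parameters ≈ {4 * n_params / 1024:.1f} kB in float32")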

Other Projects
Development of an Accessible Camera-based System for Measuring Hand Joint Range of Motion
Guan Ming Lim, Prayook Jatesiktat, Christopher Wee Keong Kuah, Wei Tech Ang
CAREhab, Singapore Rehabilitation Conference, 2020

Design and source code from Jon Barron's website