Host: The Japan Society of Mechanical Engineers
Name : [in Japanese]
Date : June 05, 2019 - June 08, 2019
We utilize depth visual information to perform surface detection and classification for stable and energy-efficient locomotion of a point-footed bipedal robot in unstructured environments. The latest depth-vision systems provide highly accurate 3D point clouds along with RGB images. The proposed method segments and classifies the 3D images using machine-learning tools such as random forests (RF), support vector machines (SVM), and relevance vector machines (RVM) for pixel-level classification, combined with object-based image analysis (OBIA) to achieve accurate object segmentation. In contrast to existing methods, surface-recognition-based locomotion control makes it possible to estimate optimal walking parameters, avoid unwanted areas, and plan collision-free, energy-efficient walking. The major contribution of this work is the integration of accurate surface modeling with locomotion rules for bipedal walking control. Finally, we evaluate the results in terms of average distance traveled and average energy consumption along the bipedal robot's walking trajectory under different settings of the unstructured environment.
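As a rough illustration of the pixel-level classification stage described above, the following Python sketch trains a random forest and an SVM on hypothetical per-pixel RGB-D features using scikit-learn. The feature set, class labels, and synthetic data are assumptions for illustration only and do not reproduce the paper's exact pipeline; an RVM would require a separate library and is omitted here.

```python
# Minimal sketch, assuming scikit-learn and a labeled set of per-pixel
# features derived from RGB-D data (RGB values, depth, surface normals).
# Feature layout and class labels are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical per-pixel feature matrix: [R, G, B, depth, nx, ny, nz]
rng = np.random.default_rng(0)
X = rng.random((5000, 7))
# Hypothetical surface classes: 0 = flat/safe, 1 = rough, 2 = avoid
y = rng.integers(0, 3, size=5000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Random forest for pixel-level surface classification
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)
print("RF accuracy:", accuracy_score(y_test, rf.predict(X_test)))

# SVM with an RBF kernel as an alternative classifier
svm = SVC(kernel="rbf", C=1.0)
svm.fit(X_train, y_train)
print("SVM accuracy:", accuracy_score(y_test, svm.predict(X_test)))
```

In a full pipeline, the per-pixel predictions would feed the OBIA segmentation step, which groups pixels into surface objects before the locomotion rules select footholds and walking parameters.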