-
[in Japanese]
2001 Volume 19 Issue 8 Pages 921
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
Hiroyuki Yoshikawa
2001 Volume 19 Issue 8 Pages 922-923
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
Hideo Tsukune
2001 Volume 19 Issue 8 Pages 924-927
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
Tetsuo Kotoku
2001 Volume 19 Issue 8 Pages 928-932
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
Tetsuo Kotoku
2001 Volume 19 Issue 8 Pages 933-936
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
Michio Hamano
2001 Volume 19 Issue 8 Pages 937-940
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
Mamoru Mitsuishi
2001 Volume 19 Issue 8 Pages 941-945
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
—Proposal-Based R & D Program of NEDO & JSPS—
Hideo Mori, Ryouhei Matsumoto, Yosiki Kobayasi, Atsushi Mototsune
2001 Volume 19 Issue 8 Pages 946-949
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
Yoji Umetani
2001 Volume 19 Issue 8 Pages 950-954
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
[in Japanese], [in Japanese], [in Japanese]
2001 Volume 19 Issue 8 Pages 955-956
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
-
Yoshihiro Nakabo, Masatoshi Ishikawa
2001 Volume 19 Issue 8 Pages 959-966
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
We introduce visual impedance, a new scheme for vision-based control that realizes task-level dynamical robot control. The method is simply described as applying image features to the impedance equation, so that the visual servo and the conventional servo system can be integrated naturally. With visual impedance, adaptive motion is obtained for real robot tasks in dynamically changing or unknown environments within the framework of impedance control.
In such cases, very high-rate visual feedback is necessary to control robot dynamics, but most conventional vision systems using CCD cameras cannot satisfy this condition because their sampling rate is limited by the video signal. To solve this problem, we developed a general-purpose vision chip, SPE, and a 1 [ms] visual feedback system that achieves a servo rate adequate for controlling robot dynamics.
In this paper, we first illustrate the concept of visual impedance. We then describe our 1 [ms] visual feedback system for robot control. Finally, we present experimental results on real robot tasks.
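The impedance relation described above can be sketched in image-feature space. The gains, the critically damped tuning, and the explicit-Euler integration below are illustrative assumptions, not the paper's identified parameters; the servo period of 1 [ms] matches the feedback rate the abstract mentions.

```python
def visual_impedance_step(e, v_prev, dt=0.001, M=1.0, D=20.0, K=100.0):
    """One servo step driving the image-feature error e toward zero
    according to the impedance equation M*a + D*v + K*e = 0,
    integrated explicitly at the servo period dt (hypothetical gains)."""
    a = -(D * v_prev + K * e) / M   # feature-space acceleration
    return v_prev + a * dt          # new feature-space velocity command
```

Iterating this step while moving the feature error by the commanded velocity makes the error decay like a damped second-order system, which is the sense in which image features are "applied to the impedance equation".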
-
Hideaki Takanobu, Takeyuki Yajima, Atsuo Takanishi, Kayoko Ohtsuki, Ma ...
2001 Volume 19 Issue 8 Pages 967-973
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
This paper describes the mechanism and real training with the three-degrees-of-freedom (3-DOF) mouth opening and closing training robot WY-2 (Waseda Yamanashi-2), developed to train patients who have problems with jaw movement. WY-2 consists of mechanical, actuation, sensor, and control systems. The seesaw mechanism inserted in the patient's mouth opens the mouth when the doctor squeezes the master manipulator. The force sensor measures the patient's biting force. A personal computer integrates and controls the other systems. As a result of therapy with this robot on a real patient, the mouth opening distance increased.
-
-Shaping and Control of Dynamic Compliance of Humanoid Shoulder Mechanisms-
Masafumi Okada, Yoshihiko Nakamura, Shin-ichiro Hoshino
2001 Volume 19 Issue 8 Pages 974-982
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
Design and control of mechanical compliance is one of the most important technical foci in making humanoid robots truly interactive with humans. For task execution and safety assurance, the issue must be discussed and given useful and realistic solutions. In this paper, we propose a theoretical design principle for mechanical compliance. Passive compliance is compliance mechanically embedded in the drive system; it is reliable but not tunable. Active compliance is controlled compliance and is therefore widely tunable, but less reliable, especially in the high-frequency domain. The basic idea of this paper is to use active compliance in the lower frequency domain and to rely on passive compliance in the higher frequency domain.
H∞ control theory, combined with system identification, provides a systematic method to design the hybrid compliance in the frequency domain. The proposed design is applied to the shoulder mechanism of a humanoid robot, and its implementation and experiments are shown with successful results.
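The frequency-domain split between active and passive compliance can be illustrated with complementary first-order weights. The compliance values and crossover frequency below are illustrative assumptions; the paper designs the actual blend with H∞ synthesis, which this sketch does not attempt.

```python
import math

def hybrid_compliance(omega, c_active=2e-3, c_passive=5e-4, omega_c=50.0):
    """Blend active (low-frequency) and passive (high-frequency)
    compliance with complementary first-order weights at frequency
    omega [rad/s]. All parameter values are hypothetical."""
    w_low = 1.0 / math.hypot(1.0, omega / omega_c)  # |1/(1 + j*omega/omega_c)|
    w_high = 1.0 - w_low                            # complementary weight
    return w_low * c_active + w_high * c_passive
```

At low frequencies the tunable active compliance dominates; well above the crossover, the blend approaches the reliable passive compliance, mirroring the design principle stated in the abstract.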
-
Tetsunari Inamura, Masayuki Inaba, Hirochika Inoue
2001 Volume 19 Issue 8 Pages 983-990
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
In this paper, we propose a novel method for personal robots to acquire autonomous behaviors through interaction between users and robots. The method has two advantages: first, it places less load on users than direct teaching; second, the robot can acquire behaviors as the user wishes, in contrast to fully autonomous learning. In this method, robots store sensory information and the results of interaction, and represent the relationship between sensors and behaviors as stochastic behavior decision models. The robot advances its learning by making suggestions and asking questions of the user based on the stochastic model. We investigate the feasibility of this method on obstacle avoidance tasks for mobile robots. Through experiments, we have confirmed that the mobile robot acquires avoidance behavior that adapts to changes of the environment through only a few teaching sessions. We have also confirmed that the acquired models reflect the experience of interaction and therefore the user's personal teaching preferences.
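A stochastic behavior decision model in the spirit described above can be sketched as success counts per (sensor state, behavior) pair: the robot executes the most probable behavior when evidence is sufficient and otherwise defers to the user with a suggestion or question. The class, state names, and evidence threshold are hypothetical illustrations, not the paper's model.

```python
from collections import defaultdict

class BehaviorModel:
    """Hypothetical sketch: success counts -> behavior probabilities."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def record(self, state, behavior, success):
        """Store one interaction result for a sensor state."""
        if success:
            self.counts[state][behavior] += 1

    def decide(self, state, min_evidence=3):
        """Return (behavior, probability), or None when evidence is
        too thin and the robot should ask the user instead."""
        options = self.counts[state]
        total = sum(options.values())
        if total < min_evidence:
            return None
        best = max(options, key=options.get)
        return best, options[best] / total
```

Returning `None` on thin evidence is where the suggestion/question step of the interaction would be triggered.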
-
Takashi Yoshioka, Hiroshi Noborio, Shoji Tominaga
2001 Volume 19 Issue 8 Pages 991-1002
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
Sensor-based navigation is the problem of how to select a sequence of sensor-based behaviors between the start and goal positions of a mobile robot. If the robot knows its 2-D environment only partially or not at all, it must rely on sensor information reflected from nearby obstacles in order to avoid them in an on-line manner. In this on-line framework, we should consider how a mobile robot reaches its goal position in an uncertain 2-D environment. In this paper, we survey almost all previous sensor-based navigation algorithms, classify their convergence to the goal position into four categories according to basic conditions, and then discuss accelerating convergence through additional conditions.
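As a concrete instance of one family of the surveyed algorithms, a Bug-2-style decision step can be sketched as a small state machine: head toward the goal along the start-goal line, follow an obstacle boundary when blocked, and leave the boundary upon rejoining the line. This is a simplified illustration of the class, not any specific surveyed algorithm, and the state and action names are hypothetical.

```python
def bug_step(state, at_goal, obstacle_ahead, on_mline):
    """One decision step of a Bug-2-style navigator (sketch).
    state: 'to_goal' (moving along the start-goal line) or
           'wall' (following an obstacle boundary).
    Returns (action, next_state)."""
    if at_goal:
        return "stop", state
    if state == "to_goal":
        if obstacle_ahead:
            return "follow_boundary", "wall"   # hit an obstacle
        return "move_to_goal", "to_goal"
    # 'wall': leave the boundary when the start-goal line is met again
    if on_mline and not obstacle_ahead:
        return "move_to_goal", "to_goal"
    return "follow_boundary", "wall"
```

Convergence guarantees for such algorithms hinge on the leave condition, which is exactly the kind of basic condition the survey uses to classify them.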
-
Kiyosumi Kidono, Jun Miura, Yoshiaki Shirai
2001 Volume 19 Issue 8 Pages 1003-1009
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
A robot needs environmental information in order to move autonomously. Although we can usually give a map to the robot, making such a map is quite tedious. We therefore propose a navigation strategy that requires minimum user assistance. In the method, we first guide a mobile robot to a destination. During this movement, the robot observes the surrounding environment to make a map. Once the map is generated, the robot computes and follows the shortest path to the destination. To realize this navigation strategy, we develop: (1) a method of map generation that integrates multiple observation results while considering the uncertainties in observation and motion, and (2) a fast robot localization method that does not use explicit feature correspondence. We also propose an observation planning method for efficient autonomous navigation that takes advantage of the human-guided experience. Experimental results using a real robot show the feasibility of the proposed strategy.
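Integrating multiple uncertain observations, as in step (1) above, is commonly realized by inverse-variance weighting: each observation contributes in proportion to its certainty, and the fused estimate is more certain than any single one. This is a standard one-dimensional sketch of the idea, not the paper's specific formulation.

```python
def fuse(observations):
    """Fuse (value, variance) pairs by inverse-variance weighting.
    Returns the fused (mean, variance); the fused variance is always
    smaller than each input variance."""
    inv_var_sum = sum(1.0 / var for _, var in observations)
    var = 1.0 / inv_var_sum
    mean = var * sum(v / s for v, s in observations)
    return mean, var
```

For example, two equally uncertain sightings of a landmark average out, and the fused variance halves, which is why repeated observation during the guided run sharpens the map.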
-
Yusuke Maeda, Hirokazu Kijimoto, Jun Ota, Yasumichi Aiyama, Tamio Arai
2001 Volume 19 Issue 8 Pages 1010-1017
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
Graspless manipulation is the manipulation of objects without grasping them. Graspless methods (pushing, tumbling, etc.) enable robots to achieve manipulation goals with a lighter workload and simpler mechanisms than conventional pick-and-place. Graspless manipulation, however, is difficult to plan. Planning a general graspless manipulation problem is very time-consuming because it requires not only geometrical analysis but also complicated mechanical analysis, including friction. To reduce the computational load, we adopt a two-step approach: 1) construction and simplification of a contact-state network at the geometry level, and 2) planning of manipulation at the mechanics level. In this paper, we focus on the latter and propose an algorithm for planning planar graspless manipulation. It generates digraphs that represent the C-Subspaces for all contact states and unites them into one large graph, which we call the “manipulation-feasibility graph.” A manipulation plan can be obtained by searching this graph. The algorithm is implemented for graspless manipulation by multiple robot fingers, and planned results are shown.
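The graph-union-and-search structure described above can be sketched as follows: per-contact-state digraphs are merged into one adjacency map and then searched for a feasible sequence. The node names are hypothetical, and a plain breadth-first search stands in for whatever search the paper actually uses over the manipulation-feasibility graph.

```python
from collections import deque

def unite(graphs):
    """Merge digraphs (dicts of node -> set of successors) into one."""
    merged = {}
    for g in graphs:
        for node, succs in g.items():
            merged.setdefault(node, set()).update(succs)
    return merged

def plan(graph, start, goal):
    """Breadth-first search for a feasible manipulation sequence;
    returns the node path from start to goal, or None."""
    frontier, came = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:       # walk predecessors back
                path.append(node)
                node = came[node]
            return path[::-1]
        for nxt in graph.get(node, ()):
            if nxt not in came:
                came[nxt] = node
                frontier.append(nxt)
    return None
```

Keeping the mechanics (which edges are feasible under friction) inside the per-state digraphs is what lets the search itself stay purely combinatorial.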
-
Ken Ito, Shigeyuki Sakane
2001 Volume 19 Issue 8 Pages 1018-1026
Published: November 15, 2001
Released on J-STAGE: August 25, 2010
JOURNAL
FREE ACCESS
Visual tracking plays an important role in various robotic tasks. We have developed a view-based visual tracking system that copes with changes in the appearance of the template in a 3D environment using affine-transformed templates. In view-based visual tracking, however, there remain critical situations in which tracking fails because of occlusion by objects and/or a human hand in the environment, particularly when humans and robots cooperatively handle objects. This paper presents two methods to cope with such occlusion problems. The first, which we call the tessellated template method, detects occlusion by evaluating correlation errors over a tessellated template. The second deals with occlusion caused by a human hand: we use infrared images to detect the occluded region of the target template. The system then creates a mask of the occluded region and excludes pixels inside the mask from the correlation with the template image. In the prototype system, mask generation and correlation can typically be performed in one or two TV frames. We have integrated the proposed methods into our previously developed view-based visual tracking system. Experimental results demonstrate the usefulness of the proposed methods.
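The tessellated-template idea can be sketched as per-tile matching errors: tiles whose error spikes are flagged as occluded and excluded from the overall template match. The tile size, threshold, sum-of-absolute-differences error, and plain-list images below are illustrative assumptions; the paper's system uses correlation on real image data.

```python
def tile_errors(template, image, tile=2):
    """Per-tile sum of absolute differences between two equally
    sized 2-D lists; keys are the (row, col) of each tile origin."""
    h, w = len(template), len(template[0])
    errs = {}
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            errs[(ty, tx)] = sum(
                abs(template[y][x] - image[y][x])
                for y in range(ty, min(ty + tile, h))
                for x in range(tx, min(tx + tile, w)))
    return errs

def masked_error(template, image, tile=2, thresh=10):
    """Total error over tiles judged unoccluded (error below thresh),
    i.e. occluded tiles are masked out of the match."""
    return sum(e for e in tile_errors(template, image, tile).values()
               if e < thresh)
```

Masking the high-error tiles is what keeps a partial occlusion, such as a hand crossing the template, from corrupting the match score of the still-visible region.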