IEEJ Transactions on Electronics, Information and Systems
Online ISSN : 1348-8155
Print ISSN : 0385-4221
ISSN-L : 0385-4221
<Softcomputing, Learning>
An Efficient Learning Method for the Layered Neural Networks Based on the Selection of Training Data and Input Characteristics of an Output Layer Unit
Isao Taguchi, Yasuo Sugai

2009 Volume 129 Issue 4 Pages 726-734

Abstract
This paper proposes an efficient learning method for layered neural networks based on the selection of training data and the input characteristics of an output-layer unit. Compared with more recent neural network models such as pulse neural networks and quantum neuro-computation, the multilayer neural network is widely used because of its simple structure. When the learning targets are complicated, however, problems such as unsuccessful learning or excessively long learning times remain unsolved. The aims of this paper are to suggest solutions to these problems and to reduce the total learning time. The total learning time means the total computational time required to learn a given target, including adjusting parameter values and restarting the learning from the beginning. Focusing on the input data during learning, we conducted an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, the input to an output-layer unit oscillates during the learning process for complicated problems. Based on these oscillatory characteristics, the method determines whether learning moves on to the next stage or restarts from the beginning. Computational experiments suggest that the proposed method achieves higher learning performance and requires less learning time than the conventional method.
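
The staged learning with oscillation monitoring described above can be illustrated with a minimal sketch. This is not the authors' implementation: the toy XOR task, the 2-2-1 network, the oscillation measure (sign changes in the trace of the output unit's net input), the subset used for the first stage, and the threshold OSC_LIMIT are all assumptions made purely for illustration.

```python
# Minimal sketch (not the authors' code): staged backpropagation on a small MLP,
# monitoring oscillation of the net input to the output-layer unit to decide
# whether to advance to the next stage or restart learning from the beginning.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a simple stand-in for a "complicated" target, with a 2-2-1 network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def init_weights():
    # [W_hidden, W_output, b_hidden, b_output]
    return [rng.uniform(-1, 1, (2, 2)), rng.uniform(-1, 1, (1, 2)),
            rng.uniform(-1, 1, 2), rng.uniform(-1, 1, 1)]

def oscillation(trace):
    """Count sign changes in successive differences of the output unit's net input."""
    d = np.diff(trace)
    return int(np.sum(d[:-1] * d[1:] < 0))

def train_stage(w, X_sub, T_sub, epochs=2000, eta=0.5):
    W1, W2, b1, b2 = w
    net_trace = []                        # net input to the output unit, one value per epoch
    for _ in range(epochs):
        for x, t in zip(X_sub, T_sub):
            h = sigmoid(W1 @ x + b1)      # hidden layer
            u = W2 @ h + b2               # net input to the output unit
            y = sigmoid(u)
            # Plain backpropagation for a single sigmoid output unit
            delta_o = (y - t) * y * (1 - y)
            delta_h = (W2.T @ delta_o) * h * (1 - h)
            W2 -= eta * np.outer(delta_o, h); b2 -= eta * delta_o
            W1 -= eta * np.outer(delta_h, x); b1 -= eta * delta_h
        net_trace.append(float(u[0]))
    return [W1, W2, b1, b2], oscillation(net_trace[-200:])

# Stage 1 trains on a selected subset first, stage 2 on all data (hypothetical selection).
# If the output unit's net input oscillates too much, restart the stage from scratch.
stages = [(X[:2], T[:2]), (X, T)]
OSC_LIMIT = 60                            # assumed threshold
weights = init_weights()
for i, (Xs, Ts) in enumerate(stages, 1):
    weights, osc = train_stage(weights, Xs, Ts)
    if osc > OSC_LIMIT:
        print(f"stage {i}: oscillation {osc} > {OSC_LIMIT}, restarting")
        weights = init_weights()
        weights, osc = train_stage(weights, Xs, Ts)
    print(f"stage {i}: oscillation count = {osc}")

y_final = sigmoid(weights[1] @ sigmoid(weights[0] @ X.T + weights[2][:, None]) + weights[3])
print("outputs:", np.round(y_final.ravel(), 2))
```

The sketch only shows the control flow implied by the abstract (stage-wise data selection plus an oscillation check that gates advancing or restarting); the paper's actual oscillation criterion and data-selection rule are not reproduced here.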
© 2009 by the Institute of Electrical Engineers of Japan