Abstract
In this paper, we propose a new type of membership function (MSF) and its efficient use to improve the optimization of fuzzy reasoning by the steepest descent method. For the self-tuning of fuzzy rules by the steepest descent method, an algorithm that avoids suboptimal solutions by modifying the learning coefficients has been proposed, in which piecewise linear MSFs were introduced. However, when the learning data has a radically changing distribution, this algorithm cannot avoid suboptimal solutions.
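As a rough illustration of this baseline setting, the sketch below shows steepest-descent self-tuning of a simplified fuzzy model with singleton consequents and symmetric triangular MSFs; the function names, the constant learning coefficient `eta`, and the rule form are illustrative assumptions rather than the formulation used in the paper.

```python
def triangular(x, center, width):
    """Symmetric triangular MSF centered at `center` with half-width `width` (assumed shape)."""
    return max(0.0, 1.0 - abs(x - center) / width)

def infer(x, centers, width, weights):
    """Weighted-average fuzzy reasoning with singleton consequents (simplified model)."""
    grades = [triangular(x, c, width) for c in centers]
    s = sum(grades)
    return 0.0 if s == 0.0 else sum(g * w for g, w in zip(grades, weights)) / s

def tune_consequents(data, centers, width, weights, eta=0.1, epochs=100):
    """Steepest descent on the squared output error with respect to the
    consequent weights; `eta` plays the role of a learning coefficient."""
    for _ in range(epochs):
        for x, target in data:
            grades = [triangular(x, c, width) for c in centers]
            s = sum(grades)
            if s == 0.0:
                continue
            y = sum(g * w for g, w in zip(grades, weights)) / s
            err = y - target
            for i, g in enumerate(grades):
                # gradient of 0.5 * err^2 with respect to weights[i]
                weights[i] -= eta * err * g / s
    return weights
```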
To overcome this difficulty, we propose applying MSFs composed of double right-angled triangles to the self-tuning of fuzzy reasoning. These MSFs can easily represent radically changing grades. In addition, we propose moving the peak positions of the MSFs by a simulated annealing (SA) technique as the learning progresses, so that the MSFs are placed at positions where the learning data changes radically. Compared with the algorithm using piecewise linear MSFs, the new algorithm avoids suboptimal solutions more effectively. The advantages of the proposed technique are demonstrated by numerical examples of function approximation.
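The following sketch shows one possible reading of these ingredients: `double_right_triangle_msf` assumes an MSF built from two right triangles whose vertical edges meet at the peak, so the grade just left and just right of the peak may differ, and `anneal_peaks` shows a generic Metropolis-style SA move of the peak positions. The actual MSF shape, error measure, and cooling schedule used in the paper may differ.

```python
import math
import random

def double_right_triangle_msf(x, peak, left_width, right_width,
                              left_peak_grade=1.0, right_peak_grade=1.0):
    """Assumed 'double right-angled triangle' MSF: two right triangles whose
    vertical edges meet at `peak`, allowing a sharp change of grade there."""
    if peak - left_width <= x <= peak:
        return left_peak_grade * (x - (peak - left_width)) / left_width
    if peak < x <= peak + right_width:
        return right_peak_grade * ((peak + right_width) - x) / right_width
    return 0.0

def anneal_peaks(peaks, error_fn, steps=1000, t0=1.0, cooling=0.99, move=0.05):
    """SA-style relocation of MSF peak positions: perturb one peak,
    re-evaluate the learning error, accept by the Metropolis criterion.
    The schedule and move size here are illustrative assumptions."""
    current = list(peaks)
    current_err = error_fn(current)
    t = t0
    for _ in range(steps):
        cand = list(current)
        i = random.randrange(len(cand))
        cand[i] += random.uniform(-move, move)          # perturb one peak position
        cand_err = error_fn(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if cand_err < current_err or random.random() < math.exp((current_err - cand_err) / t):
            current, current_err = cand, cand_err
        t *= cooling                                     # geometric cooling (assumed)
    return current
```

In such a sketch, `error_fn` would be expected to evaluate the fuzzy-reasoning error over the learning data for the candidate peak positions, e.g. after re-tuning the consequents by steepest descent.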