Ouyou toukeigaku
Online ISSN : 1883-8081
Print ISSN : 0285-0370
ISSN-L : 0285-0370
Volume 33, Issue 2
Displaying 1-5 of 5 articles from this issue
  • Yoichi Seki, Isamu Nojima
    2004 Volume 33, Issue 2, Pages 111-130
    Published: December 25, 2004
    Released on J-STAGE: June 12, 2009
    JOURNAL FREE ACCESS
    In this paper, we propose a method for constructing tree regression models that include linear regression terms. Ordinary tree regression models detect high-order interactions more reliably than multiple regression models, but they tend to grow large, deep trees because they explain all covariate effects solely through stratification of the samples. In particular, when many effects are common to all samples or to some group of samples, they estimate a tree with many nearly identical subtrees. To avoid this redundancy, we propose a tree regression model that explains main effects by linear regression terms in each node and heterogeneity of the effects by stratification. Under this strategy the number of candidate models can be enormous, so model selection is one of the main tasks. We therefore propose an algorithm that estimates the models using a criterion based on the MDL principle. This criterion can be interpreted as selecting the split variables with the largest interaction effects. Finally, we demonstrate the effectiveness of our method using numerical examples as well as real data on the amount of time caregivers spent individually with elderly persons. (An illustrative code sketch follows this entry.)
    Download PDF (965K)
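A minimal sketch of the idea described above, not the authors' algorithm: a regression tree whose nodes each carry a linear model, with a split accepted only when it lowers an MDL-style penalized cost. The quartile split candidates, the penalty constants, and the `min_leaf` stopping rule are illustrative assumptions.

```python
import numpy as np

def _fit_linear(X, y):
    """Least-squares fit with intercept; returns coefficients and RSS."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = float(np.sum((y - Z @ beta) ** 2))
    return beta, rss

def _cost(rss, n, k):
    """MDL-flavoured cost: data code length plus a parameter penalty."""
    return 0.5 * n * np.log(max(rss, 1e-12) / n) + 0.5 * k * np.log(n)

def grow(X, y, min_leaf=20):
    """Recursively grow a tree; every node keeps its own linear model."""
    beta, rss = _fit_linear(X, y)
    n, p = X.shape
    node = {"beta": beta, "split": None}
    best_cost = _cost(rss, n, p + 1)          # cost of not splitting
    for j in range(p):                        # candidate split variable
        for t in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            left, right = X[:, j] <= t, X[:, j] > t
            if left.sum() < min_leaf or right.sum() < min_leaf:
                continue
            _, rl = _fit_linear(X[left], y[left])
            _, rr = _fit_linear(X[right], y[right])
            # combined children cost plus a rough split-coding penalty
            c = _cost(rl + rr, n, 2 * (p + 1)) + np.log(n * p)
            if c < best_cost:
                best_cost, node["split"] = c, (j, t)
    if node["split"] is None:
        return node                           # leaf: a single linear model
    j, t = node["split"]
    node["left"] = grow(X[X[:, j] <= t], y[X[:, j] <= t], min_leaf)
    node["right"] = grow(X[X[:, j] > t], y[X[:, j] > t], min_leaf)
    return node
```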
  • Masahiro Yoshizaki, Kanta Naito
    2004 Volume 33, Issue 2, Pages 131-155
    Published: December 25, 2004
    Released on J-STAGE: June 12, 2009
    JOURNAL FREE ACCESS
    We revisit the kernel method, a representative approach to nonparametric scatter-plot smoothing. This paper proposes a new estimator obtained by adding an adjustment term to an initial estimator, where the initial estimator is the well-known local polynomial estimator. An appealing feature of the proposed estimator is that it reduces bias; the effect is especially noticeable when the true regression function has large curvature. We emphasize practical aspects of using our proposal: introducing a reliable bandwidth selection method and evaluating it, constructing a pointwise approximate confidence interval for the true regression function based on the asymptotic normality of the estimator, and comparing our proposal with existing estimators in a large-scale simulation study. (An illustrative code sketch follows this entry.)
    Download PDF (911K)
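A minimal sketch of the setting, not the authors' estimator: a local linear smoother plus a simple "twicing" adjustment (re-smoothing the residuals and adding the result back), one standard device for reducing smoothing bias. The Gaussian kernel, the fixed bandwidth h, and the simulated data are illustrative assumptions.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear fit at a single point x0 with Gaussian kernel weights."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]                           # intercept = fitted value at x0

def adjusted_fit(grid, x, y, h):
    """Initial local linear fit plus a residual re-smoothing adjustment."""
    initial = np.array([local_linear(g, x, y, h) for g in grid])
    fitted_at_data = np.array([local_linear(xi, x, y, h) for xi in x])
    resid = y - fitted_at_data
    adjust = np.array([local_linear(g, x, resid, h) for g in grid])
    return initial + adjust

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)   # large-curvature target
grid = np.linspace(0.05, 0.95, 50)
m_hat = adjusted_fit(grid, x, y, h=0.08)
```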
  • Yoko Konishi, Yoshihiko Nishiyama, Tomohiro Ando, Yoshinori Kawasaki
    2004 Volume 33, Issue 2, Pages 157-179
    Published: December 25, 2004
    Released on J-STAGE: June 12, 2009
    JOURNAL FREE ACCESS
    Production functions play many important roles in both economic theory and empirical econometrics. Usually, a production function is postulated as a mapping from several input variables to a scalar output. In its simplest form, the output level (y) is assumed to depend on capital (x1) and labor (x2). In empirical analysis, the most commonly used specification is the so-called Cobb-Douglas function, which was later extended to the translog production function. Although the great bulk of past empirical work on production-function estimation relies heavily on a parametric specification such as Cobb-Douglas or translog, a major flaw is that statistical analysis under model misspecification may lead to incorrect statistical inference and, therefore, to fallacious economic implications. In view of this danger, this article addresses two issues. First, we apply the nonparametric misspecification test proposed by Hong and White (1995) to investigate whether the parametric specifications (Cobb-Douglas and translog) are appropriate for the production functions of firms. The data are a cross section of y, x1 and x2 for companies listed on the First Section of the Tokyo Stock Exchange. Firms are classified into two groups (manufacturing and non-manufacturing), and the misspecification test is performed separately on each group, year by year from 1965 to 2001. In summary, the parametric specifications are reasonable and proper through the 1970s but do not fit well after 1980. Given these results, we proceed in a second step to nonparametric estimation of the production functions. We use a generalized additive model (GAM) with B-spline basis functions, estimated by a penalized likelihood approach that imposes smoothness constraints on the coefficients of the basis functions. What is essential in estimating smoothing-spline models such as the GAM is an objective choice of the smoothing parameter; in this article a version of the generalized information criterion (GIC) is derived to determine the smoothing parameter and the number of basis functions. As expected, the estimated production functions exhibit substantial nonlinearities after 1980. As an application of the nonparametric analysis, we investigate the inefficiency of companies that went bankrupt during the sample period. (An illustrative code sketch follows this entry.)
    Download PDF (1404K)
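A minimal sketch of the two modelling stages described above, not the paper's implementation: (1) the log-linear Cobb-Douglas fit log y = b0 + b1 log x1 + b2 log x2, and (2) an additive penalized-spline fit standing in for the B-spline GAM. The truncated-power basis, the fixed ridge penalty, and the simulated data are assumptions made purely for brevity; the paper chooses the smoothing parameter by a GIC rather than fixing it.

```python
import numpy as np

rng = np.random.default_rng(1)
x1, x2 = rng.lognormal(size=(2, 300))                  # "capital", "labour"
y = 2.0 * x1**0.3 * x2**0.6 * rng.lognormal(sigma=0.1, size=300)

# (1) Cobb-Douglas: ordinary least squares on the log scale
Z = np.column_stack([np.ones(300), np.log(x1), np.log(x2)])
beta = np.linalg.lstsq(Z, np.log(y), rcond=None)[0]    # [log b0, b1, b2]

# (2) additive penalized-spline fit of log y on f1(log x1) + f2(log x2)
def spline_basis(u, knots):
    """Linear truncated-power basis: [u, (u - k)_+ for each knot]."""
    return np.column_stack([u] + [np.maximum(u - k, 0.0) for k in knots])

def design(u):
    return spline_basis(u, np.quantile(u, np.linspace(0.1, 0.9, 8)))

B = np.column_stack([np.ones(300), design(np.log(x1)), design(np.log(x2))])
lam = 1.0                                              # smoothing parameter (fixed here)
coef = np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T @ np.log(y))
```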
  • Masatsugu Wakaura
    2004 Volume 33, Issue 2, Pages 181-200
    Published: December 25, 2004
    Released on J-STAGE: June 12, 2009
    JOURNAL FREE ACCESS
    In pricing weather derivatives, it is important to investigate the structure of the mean and variance of the temperature process. However, the complicated features of temperature variation cannot be captured by a parametric statistical model. This paper investigates the features of temperature processes in Japan using nonparametric regression. A seasonal-trend decomposition using generalized additive models reveals the structure of the mean and the variance clearly. In particular, it is shown that seasonal periodicities in the variance differ among areas because of their different climatic characteristics. (An illustrative code sketch follows this entry.)
    Download PDF (714K)
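A minimal illustrative sketch, not the paper's GAM decomposition: a harmonic (sinusoidal) fit for the seasonal mean of daily temperature, with the seasonal pattern of the variance estimated by a second harmonic fit to the squared residuals. The simulated series and the single-harmonic form are assumptions made for brevity.

```python
import numpy as np

days = np.arange(3650)                         # ten years of daily data
t = 2 * np.pi * days / 365.25
rng = np.random.default_rng(2)
# simulated temperature: seasonal mean and a seasonally varying noise level
temp = 15 + 10 * np.sin(t) + (1.5 + 0.8 * np.cos(t)) * rng.normal(size=days.size)

def harmonic_fit(z):
    """Least-squares fit of  a + b sin(t) + c cos(t)  to the series z."""
    X = np.column_stack([np.ones_like(t), np.sin(t), np.cos(t)])
    coef = np.linalg.lstsq(X, z, rcond=None)[0]
    return X @ coef

mean_hat = harmonic_fit(temp)                  # seasonal structure of the mean
var_hat = harmonic_fit((temp - mean_hat) ** 2) # seasonal structure of the variance
```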
  • Yuuki Matsushima, Shingo Shirahata, Wataru Sakamoto
    2004 Volume 33, Issue 2, Pages 201-219
    Published: December 25, 2004
    Released on J-STAGE: June 12, 2009
    JOURNAL FREE ACCESS
    The wavelet is an effective tool for representing an irregular or discontinuous function (or signal or image) as a linear combination of two types of basis functions: a scaling function and a wavelet function. The pair is also called a "wavelet", and several types of wavelets have been developed. Thresholding is the most common method of noise reduction with wavelets. In this method, the coefficients that represent the observed data are compared with a threshold value and replaced with zero if their absolute values are less than the threshold. This replacement removes noise because it disregards small changes in the data. However, estimates obtained by thresholding depend on the type of wavelet, so a specific wavelet must be chosen according to the observed data. We consider a numerical criterion, the AIC (Akaike Information Criterion), and adopt the wavelet that minimizes it. We illustrate our method with figures and simulations. (An illustrative code sketch follows this entry.)
    Download PDF (859K)
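A minimal sketch of the selection idea, not the paper's exact criterion: hard-threshold the wavelet coefficients for several candidate wavelets and keep the wavelet with the smallest AIC-style score, counting the retained coefficients as parameters. The universal threshold and this particular AIC form are illustrative assumptions; the example uses the PyWavelets package (pywt).

```python
import numpy as np
import pywt

def aic_for_wavelet(signal, wavelet, level=4):
    """Hard-threshold the decomposition and score the fit with an AIC-style criterion."""
    n = len(signal)
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise scale estimate
    thr = sigma * np.sqrt(2 * np.log(n))                     # universal threshold
    kept = [coeffs[0]] + [np.where(np.abs(c) > thr, c, 0.0) for c in coeffs[1:]]
    fit = pywt.waverec(kept, wavelet)[:n]
    k = sum(int(np.count_nonzero(c)) for c in kept)          # retained coefficients
    rss = float(np.sum((signal - fit) ** 2))
    return n * np.log(max(rss, 1e-12) / n) + 2 * k

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 512)
signal = np.sign(np.sin(6 * np.pi * x)) + rng.normal(0, 0.2, 512)  # blocky test signal
candidates = ["haar", "db4", "sym8", "coif3"]
best = min(candidates, key=lambda w: aic_for_wavelet(signal, w))
```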