Journal of Natural Language Processing
Online ISSN : 2185-8314
Print ISSN : 1340-7619
ISSN-L : 1340-7619
General Paper (Peer-Reviewed)
Analyzing Transferable Knowledge by Pretraining with Artificial Language
Ryokan Ri, Yoshimasa Tsuruoka

2023 Volume 30 Issue 2 Pages 664-688

Abstract

We conducted a study to determine what kind of structural knowledge learned by neural network encoders is transferable to natural language processing. We designed artificial languages with structural properties that mimic those of natural language, pretrained encoders on the generated data, and examined how the pretrained encoders performed on downstream tasks in natural language. Our experimental results demonstrate the importance of statistical dependency, as well as the effectiveness of nesting structure in implicit dependency relations. These results indicate that position-aware context dependence is a form of knowledge transferable across different languages.
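To give a concrete picture of the kind of data the abstract describes, the following is a minimal, illustrative sketch (not the authors' actual generation procedure) of an artificial language with nested token-pair dependencies, in the spirit of a Dyck-like language over a large vocabulary. The vocabulary size, pairing scheme, and nesting probability are all hypothetical choices for illustration only.

import random

VOCAB_SIZE = 1000   # assumed vocabulary size (hypothetical)
P_NEST = 0.4        # assumed probability of opening a nested pair
MAX_DEPTH = 8       # cap recursion so sentences stay short

def generate(depth=0):
    """Emit a token sequence in which each 'head' token i is eventually
    closed by its dependent token i + VOCAB_SIZE, and pairs nest like
    matched brackets (an implicit, position-aware dependency)."""
    tokens = []
    head = random.randrange(VOCAB_SIZE)
    tokens.append(head)                      # opening token of the pair
    if depth < MAX_DEPTH and random.random() < P_NEST:
        tokens.extend(generate(depth + 1))   # nested inner dependency
    tokens.append(head + VOCAB_SIZE)         # matching dependent token
    return tokens

if __name__ == "__main__":
    random.seed(0)
    sentence = []
    while len(sentence) < 20:                # build one pretraining "sentence"
        sentence.extend(generate())
    print(sentence)

In such a corpus, predicting a closing token requires tracking which heads are still open and in what order, so an encoder pretrained on it must learn position-aware context dependence of the sort the abstract identifies as transferable.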

© 2023 The Association for Natural Language Processing