Friday, April 2, 2021

COMPARISON OF TURKISH WORD REPRESENTATIONS TRAINED ON DIFFERENT MORPHOLOGICAL FORMS

Author :  Gökhan Güler

Affiliation :  Istanbul Technical University

Country :  Turkey

Category :  Computer Science & Information Technology

Volume, Issue, Month, Year :  10, 01, January, 2020

Abstract :

The increased popularity of different text representations has brought many improvements in Natural Language Processing (NLP) tasks. Without the need for supervised data, embeddings trained on large corpora provide meaningful relations that can be used in different NLP tasks. Even though training these vectors is relatively easy with recent methods, the information gained from the data heavily depends on the morphological structure of the corpus language. Since the most widely researched languages share a similar morphological structure, the problems that arise for morphologically rich languages are largely disregarded in studies. For morphologically rich languages, context-free word vectors ignore the morphological structure of the language. In this study, we prepared texts in morphologically different forms of a morphologically rich language, Turkish, and compared the results on different intrinsic and extrinsic tasks. To see the effect of morphological structure, we trained the word2vec model on texts in which lemmas and suffixes are treated differently. We also trained the subword model fastText and compared the embeddings on word analogy, text classification, sentiment analysis, and language modeling tasks.
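The sketch below illustrates the kind of comparison the abstract describes, assuming gensim's Word2Vec and FastText implementations and an illustrative lemma-plus-suffix tokenization; the token splitting scheme and corpus are hypothetical examples, not the paper's exact pipeline.

```python
from gensim.models import Word2Vec, FastText

# One Turkish sentence in two hypothetical preprocessing forms:
# surface form:     "evlerimizde oturuyorduk"
# lemma + suffixes: "ev +lerimizde otur +uyorduk"  (suffixes kept as separate tokens)
surface_corpus = [["evlerimizde", "oturuyorduk"]]
split_corpus = [["ev", "+lerimizde", "otur", "+uyorduk"]]

# word2vec treats each token as an atomic unit, so the two corpora
# produce different vocabularies and different context-free vectors.
w2v_surface = Word2Vec(sentences=surface_corpus, vector_size=300, window=5, min_count=1)
w2v_split = Word2Vec(sentences=split_corpus, vector_size=300, window=5, min_count=1)

# fastText additionally composes word vectors from character n-grams
# (min_n..max_n), so it can capture suffix information even from surface forms.
ft_surface = FastText(sentences=surface_corpus, vector_size=300, window=5,
                      min_count=1, min_n=3, max_n=6)

# The resulting embeddings can then be evaluated on intrinsic tasks (word analogy)
# and extrinsic tasks (text classification, sentiment analysis, language modeling).
print(w2v_surface.wv["evlerimizde"][:5])
print(ft_surface.wv["evlerimizde"][:5])
```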

Keywords :  embedding, vector, morphology, Turkish, word2vec, fastText

For More Details :  https://aircconline.com/csit/papers/vol10/csit100110.pdf
