Document Type: Research Paper
Kharazmi International Campus Shahrood University of Technology, Shahrood, Iran
Word embeddings (WE) have received much attention recently as a word-to-numeric-vector architecture for text processing and have been a great asset for a large variety of NLP tasks. Most text processing tasks try to convert text components, such as sentences, into numeric matrices in order to apply their processing algorithms. However, the most important problem in all word-vector-based text processing approaches is that sentences differ in length, and consequently their matrices differ in dimension. In this paper, we propose an efficient yet simple statistical method to convert text sentences into normalized matrices of equal dimension. The proposed method combines three of the most efficient approaches (averaging-based embeddings, most-likely n-grams, and word mover's distance) to exploit their advantages and reduce their constraints. The size of the resulting matrix does not depend on the language, the subject and scope of the text, or the semantic concepts of the words. Our results demonstrate that the normalized matrices capture complementary aspects of most text processing tasks, such as coherence evaluation, text summarization, text classification, automatic essay scoring, and question answering.
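To illustrate the core problem the abstract describes, the sketch below shows the simplest of the three combined components, averaging-based embedding: word vectors of a sentence are averaged and L2-normalized, so sentences of any length map to a representation of one fixed dimension. This is a minimal illustration only, not the paper's full method; the toy vocabulary and `sentence_matrix` helper are hypothetical stand-ins for pre-trained embeddings such as word2vec or GloVe.

```python
import numpy as np

# Hypothetical toy vocabulary of 300-d word vectors; a real system
# would load pre-trained embeddings (e.g. word2vec or GloVe).
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=300) for w in ["the", "cat", "sat", "on", "a", "mat"]}

def sentence_matrix(sentence, dim=300):
    """Average the word vectors of a sentence into one fixed-length,
    L2-normalized row, regardless of how many words the sentence has."""
    vecs = [vocab[w] for w in sentence.lower().split() if w in vocab]
    if not vecs:                      # no known words: return a zero row
        return np.zeros(dim)
    v = np.mean(vecs, axis=0)         # fixed dimension for any sentence length
    return v / np.linalg.norm(v)      # normalize to unit length

short = sentence_matrix("the cat")
long_ = sentence_matrix("the cat sat on a mat")
print(short.shape, long_.shape)  # both (300,) despite different sentence lengths
```

Because every sentence yields the same dimension, downstream algorithms (classification, coherence scoring, essay scoring) can operate on a uniform matrix without padding or truncation.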