Authors :
Shilpi Kulshretha; Lokesh Lodha
Volume/Issue :
Volume 8 - 2023, Issue 12 - December
Google Scholar :
http://tinyurl.com/2srx5nww
Scribd :
http://tinyurl.com/32zt34uu
DOI :
https://doi.org/10.5281/zenodo.10443962
Abstract :
This study intends to explore the field of word
embedding and thoroughly examine and contrast various
word embedding algorithms. Words retain their semantic
relationships and meaning when they are transformed
into vectors using word embedding models. Numerous
methods have been put forth, each with unique benefits
and drawbacks. Making wise choices when using word
embedding for NLP tasks requires an understanding of
these methods and their relative efficacy. The study
presents the methodology and potential uses of each
technique and discusses its advantages and
disadvantages. The fundamental ideas and workings of
well-known word embedding methods, such as
Word2Vec, GloVe, FastText, and the contextual
embeddings ELMo and BERT, are evaluated in this
paper. The performance of these algorithms is evaluated
on three datasets on the basis of word similarity and
word analogy, and finally the results are compared.
Keywords :
Embedding, Word2Vec, Global Vectors for Word Representation (GloVe), Embeddings from Language Models (ELMo), BERT.
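
As a minimal illustrative sketch (not the authors' code), the two evaluation tasks named in the abstract, word similarity and word analogy, can be run against pretrained vectors with the gensim library; the model name "glove-wiki-gigaword-100" and the example word pairs below are assumptions for demonstration only, not the paper's datasets.

import gensim.downloader as api

# Assumption: pretrained GloVe vectors fetched via gensim's downloader;
# the paper's actual models and three datasets are not specified here.
model = api.load("glove-wiki-gigaword-100")

# Word similarity: cosine similarity between the two words' vectors.
print(model.similarity("king", "queen"))

# Word analogy: king - man + woman should rank "queen" highest.
print(model.most_similar(positive=["king", "woman"], negative=["man"], topn=1))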