Comprehensive analysis of embeddings and pre-training in NLP
Computer Science Review (IF 7.872), Pub Date: 2021-09-20, DOI: 10.1016/j.cosrev.2021.100433
Jatin Karthik Tripathy, Sibi Chakkaravarthy Sethuraman, Meenalosini Vimal Cruz, Anupama Namburu, Mangalraj P., Nandha Kumar R., Sudhakar Ilango S., Vaidehi Vijayakumar

The amount of available data and computing power has increased drastically over the last decade, leading to the development of several new fronts in the field of Natural Language Processing (NLP). In addition, the interplay of embeddings and large pre-trained models has pushed the field forward, covering a wide variety of tasks ranging from machine translation to more complex ones such as contextual text classification. This paper covers the underlying ideas behind embeddings and pre-trained models and provides insight into the fundamental strategies and implementation details of innovative embeddings. Further, it discusses the pros and cons of each embedding design and their impact on results. It also compares the different strategies, datasets, and architectures discussed across papers using standard metrics from NLP. The content covered in this review aims to shed light on the milestones reached in NLP, allowing readers to deepen their understanding of the field and motivating them to explore it further.
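To make the notion of contextual embeddings from a pre-trained model concrete, the following is a minimal sketch, not taken from the paper itself, that extracts token-level and sentence-level embeddings using the Hugging Face transformers library; the choice of bert-base-uncased and mean pooling are illustrative assumptions, not the authors' specific setup.

import torch
from transformers import AutoModel, AutoTokenizer

# Load a pre-trained model and its tokenizer (model name is an
# illustrative assumption; any encoder-style checkpoint works).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentence = "Embeddings map words to dense vectors."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state holds one contextual vector per token; unlike
# static embeddings (e.g., word2vec), these vectors change with the
# surrounding context. Mean-pooling them is one simple way to obtain
# a single sentence-level embedding.
token_embeddings = outputs.last_hidden_state       # shape: (1, seq_len, 768)
sentence_embedding = token_embeddings.mean(dim=1)  # shape: (1, 768)
print(sentence_embedding.shape)

The same pattern, swapping the checkpoint name, applies to most of the pre-trained encoder families that reviews of this kind survey; the design choice of mean pooling versus using the [CLS] token is one of the trade-offs such comparisons typically examine.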