Enhancing Machine Learning Algorithms using GPT Embeddings for Binary Classification
Large Language Models (LLMs) have demonstrated the ability to process and understand natural language inputs with high accuracy. GPT embeddings are vector representations of text produced by the GPT models behind ChatGPT, developed by OpenAI. In this paper, we generate GPT embeddings and apply machine learning algorithms to them for fake news detection and sentiment analysis. The proposed approach yields remarkable results, improving accuracy by approximately 12.59% compared to traditional embeddings; for fake news detection, GPT embeddings with an SVM outperformed an LSTM with GloVe embeddings. We evaluated 10 machine learning models on 4 versions of GPT embeddings, namely Ada, Babbage, Curie, and Davinci, and the sentiment analysis experiments reached an accuracy of 98.6%. We have made our embeddings for both datasets publicly available. We believe this is a valuable contribution, since generating such embeddings requires access to GPT, which is not freely available to researchers.
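The pipeline the abstract describes (embed each text with a GPT embedding model, then train a conventional classifier such as an SVM on the resulting vectors) can be sketched roughly as below. This is a minimal illustration, not the paper's exact setup: the model name "text-similarity-ada-001", the legacy openai.Embedding.create call, and the toy texts/labels are assumptions standing in for the real datasets and preprocessing.

import openai
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key is required to call the API

def embed(texts, model="text-similarity-ada-001"):
    """Request one embedding vector per input text (assumed legacy OpenAI endpoint)."""
    response = openai.Embedding.create(input=texts, model=model)
    return np.array([item["embedding"] for item in response["data"]])

# Toy stand-ins for the fake news / sentiment datasets (hypothetical examples).
texts = ["claim that is verifiably true", "fabricated sensational claim",
         "another factual report", "another made-up story"]
labels = [1, 0, 1, 0]  # 1 = real / positive, 0 = fake / negative

X = embed(texts)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, stratify=labels, random_state=42)

clf = SVC(kernel="rbf")  # SVM classifier trained on the embedding vectors
clf.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))

On the real datasets, the same pattern would be repeated for each embedding version (Ada, Babbage, Curie, Davinci) and each of the evaluated classifiers, with a proper train/test split or cross-validation in place of the toy data above.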
Email Address of Submitting Author: mandavasathvik6@gmail.com
ORCID of Submitting Author: 0000-0003-4544-4011
Submitting Author's Institution: IIIT Dharwad
Submitting Author's Country: India