Large language models (LLMs) are a class of artificial intelligence (AI) systems that have emerged as powerful tools for a wide range of tasks, including natural language processing (NLP), machine translation, and question answering, owing to their capacity to capture intricate linguistic patterns and generate coherent, contextually appropriate responses. This survey paper provides a comprehensive overview of LLMs, covering their history, architecture, training methods, applications, and challenges. It begins by discussing the fundamental concepts of generative AI and the architecture of generative pre-trained transformers (GPT). It then reviews the history of LLMs, their evolution over time, and the different methods that have been used to train them. Next, it surveys the wide range of LLM applications, including medicine, education, finance, and engineering, and discusses how LLMs are shaping the future of AI and how they can be applied to real-world problems. The paper then examines the challenges of deploying LLMs in real-world scenarios, including ethical considerations, model biases, interpretability, and computational resource requirements, and highlights techniques for enhancing the robustness and controllability of LLMs and for addressing issues of bias, fairness, and generation quality. Finally, it concludes by outlining future directions for LLM research and the challenges that must be addressed to make LLMs more reliable and useful. This survey is intended to give researchers, practitioners, and enthusiasts a comprehensive understanding of LLMs, their evolution, applications, and challenges. By consolidating state-of-the-art knowledge in the field, it serves as a valuable resource for further advances in the development and use of LLMs across a wide range of real-world applications. The GitHub repo for this project is available at https://github.com/anas-zafar/LLM-Survey
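For readers unfamiliar with the GPT architecture mentioned in the abstract, the following is a minimal, illustrative sketch of a single GPT-style decoder block (pre-layer-norm, causal self-attention, residual connections). It is not taken from the survey; the class name GPTBlock and the sizes d_model=256 and n_heads=4 are assumptions chosen only for this example.

```python
# Minimal sketch of one GPT-style decoder block (illustrative only;
# sizes and names are assumptions, not taken from the survey).
import torch
import torch.nn as nn

class GPTBlock(nn.Module):
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.ln1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):
        # Causal mask: each position may attend only to itself and earlier positions.
        seq_len = x.size(1)
        mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=x.device),
            diagonal=1,
        )
        h = self.ln1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask, need_weights=False)
        x = x + attn_out               # residual connection around attention
        x = x + self.mlp(self.ln2(x))  # residual connection around feed-forward
        return x

# Usage: a batch of 2 sequences, 8 tokens each, already embedded to 256 dimensions.
x = torch.randn(2, 8, 256)
print(GPTBlock()(x).shape)  # torch.Size([2, 8, 256])
```

A full GPT model stacks many such blocks between a token/position embedding layer and an output projection over the vocabulary.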

Chnoor M. Rahman et al.

The dragonfly algorithm was developed in 2016. It is one of the algorithms researchers have used to optimize a wide variety of applications across different areas, and at times it outperforms the best-known optimization techniques. However, the algorithm faces several difficulties when applied to complex optimization problems. This work addresses the robustness of the method in solving real-world optimization problems as well as its shortcomings on complex optimization problems. This review paper presents a comprehensive investigation of the dragonfly algorithm in the engineering domain. First, an overview of the algorithm is given. The modifications made to improve its performance and its hybridizations with other techniques are then examined. In addition, a survey of engineering applications that have used the dragonfly algorithm is offered, covering mechanical engineering problems, electrical engineering problems, optimal parameter selection, economic load dispatch, and loss reduction. The algorithm is tested and evaluated against the particle swarm optimization algorithm and the firefly algorithm. To assess the ability of the dragonfly algorithm and the other participating algorithms, a set of classical benchmarks (TF1-TF23) was used; to examine its ability to optimize large-scale problems, the CEC-C2019 benchmarks were used. A comparison with other metaheuristic techniques is made to show its ability to handle various problems, as illustrated by the sketch below. The results reported in earlier works that used the dragonfly algorithm, together with the benchmark test results, show that, compared with the participating algorithms (GWO, PSO, and GA), the dragonfly algorithm performs excellently, especially on small to intermediate applications. Moreover, the shortcomings of the technique and some directions for future work are presented. The authors conducted this research to help other researchers who wish to study the algorithm and apply it to optimize engineering problems.
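To make the algorithm concrete, the following is a highly simplified sketch of the dragonfly algorithm's core position update (separation, alignment, cohesion, attraction to food, distraction from an enemy) applied to a sphere-style benchmark. It is not the implementation evaluated in the review: every individual is treated as a neighbour of every other, the behaviour weights are fixed illustrative values, and the adaptive weights and Lévy-flight step of the full method are omitted.

```python
# Simplified sketch of the dragonfly algorithm's core update (assumptions:
# fully connected neighbourhood, fixed illustrative weights, no Levy flight).
import numpy as np

def sphere(x):
    """Sphere benchmark (TF1-style): minimum value 0 at the origin."""
    return np.sum(x ** 2)

def dragonfly(obj, dim=10, n=30, iters=200, lb=-100.0, ub=100.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))   # dragonfly positions
    dX = np.zeros((n, dim))             # step vectors
    max_step = 0.1 * (ub - lb)          # cap on step size (illustrative)
    s, a, c, f, e, w = 0.1, 0.1, 0.7, 1.0, 0.5, 0.9  # illustrative weights
    for _ in range(iters):
        fit = np.array([obj(x) for x in X])
        food = X[fit.argmin()]          # best position attracts
        enemy = X[fit.argmax()]         # worst position repels
        for i in range(n):
            S = -np.sum(X - X[i], axis=0)   # separation
            A = dX.mean(axis=0)             # alignment
            C = X.mean(axis=0) - X[i]       # cohesion
            F = food - X[i]                 # attraction to food
            E = enemy + X[i]                # distraction from enemy
            dX[i] = np.clip(s * S + a * A + c * C + f * F + e * E + w * dX[i],
                            -max_step, max_step)
            X[i] = np.clip(X[i] + dX[i], lb, ub)
    fit = np.array([obj(x) for x in X])
    return X[fit.argmin()], fit.min()

best_x, best_f = dragonfly(sphere)
print(best_f)  # best sphere value found (lower is better)
```

In the full method, the behaviour weights and the neighbourhood radius are adapted over the iterations, and a Lévy flight is used when an individual has no neighbours; benchmark comparisons such as those on TF1-TF23 and CEC-C2019 would run such an implementation repeatedly and compare the best values found against those of PSO, GWO, GA, and the firefly algorithm.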
