In the rapidly evolving landscape of natural language processing (NLP), ChatGPT has emerged as a powerful tool for various industries and applications. To fully harness the potential of ChatGPT, it is crucial to understand and master the art of prompt engineering: the process of designing and refining input prompts to elicit desired responses from an AI NLP model. This article provides a comprehensive guide to prompt engineering techniques, tips, and best practices for achieving optimal outcomes with ChatGPT. The discussion begins with an introduction to ChatGPT and the fundamentals of prompt engineering, followed by an exploration of techniques for effective prompt crafting, such as clarity, explicit constraints, experimentation, and leveraging different types of questions. The article also covers best practices, including iterative refinement, balancing user intent, harnessing external resources, and ensuring ethical usage. Advanced strategies, such as temperature and token control, prompt chaining, domain-specific adaptations, and handling ambiguous inputs, are also addressed. Real-world case studies demonstrate the practical applications of prompt engineering in customer support, content generation, domain-specific knowledge retrieval, and interactive storytelling. The article concludes by highlighting the impact of effective prompt engineering on ChatGPT performance, future research directions, and the importance of fostering creativity and collaboration within the ChatGPT community.
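As a minimal illustration of the temperature control, token control, and prompt-chaining strategies mentioned above, the following Python sketch uses the OpenAI Python client; the model name, prompts, and parameter values are assumptions chosen for illustration rather than recommendations from the article.

```python
# Minimal sketch of temperature/token control and prompt chaining.
# Assumes the OpenAI Python client (`pip install openai`) and an API key in the
# OPENAI_API_KEY environment variable; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str, temperature: float = 0.2, max_tokens: int = 200) -> str:
    """Send one prompt with an explicit temperature and token limit."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",           # assumed model name; substitute as needed
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,        # lower = more deterministic output
        max_tokens=max_tokens,          # hard cap on response length
    )
    return response.choices[0].message.content

# Prompt chaining: the first response is fed into a second, more constrained prompt.
outline = ask("List three key steps for onboarding a new support agent.", temperature=0.7)
answer = ask(f"Expand the following outline into a concise checklist:\n{outline}", temperature=0.2)
print(answer)
```

Here a higher temperature is used for the open-ended first step of the chain and a lower one for the constrained second step; in practice both values would be tuned experimentally, as the article suggests.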
Artificial neural networks (ANNs) have won numerous contests in pattern recognition, machine learning, and artificial intelligence in recent years. The neuron used in ANNs was designed 70 years ago from the stereotypical knowledge of biological neurons available at the time. An artificial neuron is expressed as f(wx+b) or f(WX). This design does not consider the information processing capacity of dendrites. However, some recent studies show that biological dendrites participate in the pre-calculation of input data. Concretely, biological dendrites play a role in extracting the interaction information among inputs (features). Therefore, it may be time to improve the neuron of ANNs. In this study, dendritic modules with excellent properties are proposed and added to artificial neurons to form new neurons named Gang neurons. For example, the dendrite function can be expressed as W_{i,i-1} A_{i-1} ∘ A_{0|1|2|…|i-1}. The generalized new neuron can be expressed as f(W(W_{i,i-1} A_{i-1} ∘ A_{0|1|2|…|i-1})), and the simplified new neuron can be expressed as f(∑(WA ∘ X)). After improving the neurons, many networks can be tried; this paper shows some basic architectures for future reference. Up to now, others and the author have applied Gang neurons to various fields, and Gang neurons show excellent performance in the corresponding fields. Interesting points: (1) The computational complexity of dendrite modules (W_{i,i-1} A_{i-1} ∘ A_{i-1}) connected in series is far lower than that of Horner's method. Will this speed up the calculation of basic functions in computers? (2) The field of view of animals has a gradient, but the convolution layer does not have this characteristic; this paper proposes receptive fields with a gradient. (3) Networks using Gang neurons can omit the fully connected layer. In other words, the parameters in fully connected layers are assigned to a single neuron, which reduces the parameters of a network for the same mapping capacity. (4) ResDD (ResDD modules + one linear module) can replace the neurons of current ANNs. ResDD has controllable precision for better generalization capability. Gang neuron code is available at https://github.com/liugang1234567/Gang-neuron.
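One possible reading of the simplified Gang-neuron form f(∑(WA ∘ X)) is sketched below in NumPy; the dimensions, the use of the raw input as the interacting term, and the choice of ReLU as f are assumptions made only for illustration and are not taken from the original paper or repository.

```python
# Illustrative sketch of the simplified Gang-neuron form f(sum(WA ∘ X)),
# based on one possible reading of the abstract; dimensions and the choice
# of ReLU as f are assumptions, not taken from the original paper.
import numpy as np

def gang_neuron(x: np.ndarray, W: np.ndarray) -> float:
    """Single Gang neuron: a dendrite module (W @ x) is combined with the raw
    input x by a Hadamard (element-wise) product, then summed and passed
    through a nonlinearity f."""
    dendrite = W @ x            # linear pre-processing of the input (WA with A = x)
    interaction = dendrite * x  # Hadamard product extracts interactions among inputs
    return float(np.maximum(0.0, interaction.sum()))  # f = ReLU (assumed)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)          # input features
W = rng.standard_normal((8, 8))     # dendrite weight matrix
print(gang_neuron(x, W))
```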
The World Health Organization declared COVID-19 a pandemic after the outbreak in the city of Wuhan, China. The disease has negatively affected the global economy and daily life. Most countries around the world have imposed travel restrictions, lockdowns, and social distancing measures. In the current situation, Information and Communications Technology is playing a significant role in connecting people. The majority of educational organizations have adopted online platforms, and students and staff are working from home. Besides these, businesses such as e-healthcare systems, food delivery, and online grocery shopping have witnessed very high demand. Malicious attackers have treated COVID-19 as an opportunity to launch attacks for financial gain and to promote their evil intents. Healthcare systems are being attacked with ransomware, and the confidentiality and integrity of resources such as patient records are being compromised. People are falling prey to phishing attacks through COVID-19 related content. In this research, we identify the top ten cybersecurity threats that have taken place or could take place during the pandemic. We also discuss the privacy concerns raised amid COVID-19.
Within the vast expanse of computerized language processing, a revolutionary class of models known as Large Language Models (LLMs) has emerged, with an immense capacity to comprehend intricate linguistic patterns and produce coherent and contextually fitting responses. LLMs are a type of artificial intelligence (AI) that have emerged as powerful tools for a wide range of tasks, including natural language processing (NLP), machine translation, and question answering. This survey paper provides a comprehensive overview of LLMs, including their history, architecture, training methods, applications, and challenges. The paper begins by discussing the fundamental concepts of generative AI and the architecture of generative pre-trained transformers (GPT). It then provides an overview of the history of LLMs, their evolution over time, and the different methods that have been used to train them. The paper then discusses the wide range of applications of LLMs, including medicine, education, finance, and engineering. It also discusses how LLMs are shaping the future of AI and how they can be used to solve real-world problems. The paper then discusses the challenges associated with deploying LLMs in real-world scenarios, including ethical considerations, model biases, interpretability, and computational resource requirements. It also highlights techniques for enhancing the robustness and controllability of LLMs and for addressing issues of bias, fairness, and generation quality. Finally, the paper concludes by highlighting the future of LLM research and the challenges that need to be addressed to make LLMs more reliable and useful. This survey paper is intended to provide researchers, practitioners, and enthusiasts with a comprehensive understanding of LLMs, their evolution, applications, and challenges. By consolidating the state-of-the-art knowledge in the field, this survey serves as a valuable resource for further advancements in the development and utilization of LLMs for a wide range of real-world applications. The GitHub repo for this project is available at https://github.com/anas-zafar/LLM-Survey
Engineering education is constantly evolving to keep up with the latest technological developments and meet the changing needs of the engineering industry. One promising development in this field is the use of generative artificial intelligence technology, such as the ChatGPT conversational agent. ChatGPT has the potential to offer personalized and effective learning experiences by providing students with customized feedback and explanations, as well as creating realistic virtual simulations for hands-on learning. However, it is important to also consider the limitations of this technology. ChatGPT and other generative AI systems are only as good as their training data and may perpetuate biases or even generate and spread misinformation. Additionally, the use of generative AI in education raises ethical concerns, such as the potential for unethical or dishonest use by students and the potential unemployment of people made redundant by the technology. The current state of generative AI technology, represented by ChatGPT, is impressive but flawed, and it is only a preview of what is to come. It is important for engineering educators to understand the implications of this technology and to study how to adapt the engineering education ecosystem so that the next generation of engineers can take advantage of the benefits offered by generative AI while minimizing any negative consequences.
COVID-19, an infectious disease caused by the SARS-CoV-2 virus, was declared a pandemic by the World Health Organisation (WHO) in March 2020. At the time of writing, more than 2.8 million people have tested positive. Infections have been growing exponentially and tremendous efforts are being made to fight the disease. In this paper, we attempt to systematise ongoing data science activities in this area. As well as reviewing the rapidly growing body of recent research, we survey public datasets and repositories that can be used for further work to track COVID-19 spread and mitigation strategies. As part of this, we present a bibliometric analysis of the papers produced in this short span of time. Finally, building on these insights, we highlight common challenges and pitfalls observed across the surveyed works.
The COVID-19 pandemic has accelerated the development of methods for contactless evaluation of patients in hospital settings. By minimizing unnecessary in-person contact with individuals who may have COVID-19, healthcare workers (HCWs) can prevent disease transmission and conserve personal protective equipment. Obtaining vital signs is a ubiquitous task that is commonly done in person. To eliminate the need for in-person contact for vital sign measurement in the hospital setting, we developed Dr. Spot, an agile quadruped robotic system that comprises a set of contactless monitoring systems for measuring vital signs and a tablet computer that enables face-to-face medical interviewing. Dr. Spot is teleoperated by trained clinical staff to facilitate enhanced telemedicine. Specifically, it can measure skin temperature, respiratory rate, heart rate, and blood oxygen saturation simultaneously while maintaining social distance from patients. This is important because fluctuations in vital sign parameters are commonly used in algorithmic decisions to admit or discharge individuals with COVID-19. We deployed Dr. Spot in a hospital setting to measure the vital signs of healthy volunteers; the measurements for elevated skin temperature screening, respiratory rate, heart rate, and SpO2 were carefully verified against ground-truth sensors.
Machine Learning (ML) workloads have rapidly grown in importance, but have raised concerns about their carbon footprint. Four best practices can reduce ML training energy by up to 100x and CO2 emissions by up to 1000x. By following best practices, overall ML energy use (across research, development, and production) held steady at <15% of Google's total energy use for the past three years. If the whole ML field were to adopt best practices, total carbon emissions from training would fall. Hence, we recommend that ML papers report emissions explicitly to foster competition on more than just model quality. As estimates of emissions in papers that omitted them have been off by 100x–100,000x, publishing emissions has the added benefit of ensuring accurate accounting. Given the importance of climate change, we must get the numbers right to make certain that we work on its biggest challenges.
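As a back-of-the-envelope sketch of the explicit emissions reporting recommended above, the following Python snippet applies the commonly used accounting formula energy × PUE × grid carbon intensity; the function name and all numeric values are illustrative placeholders, not figures from the paper.

```python
# Hedged sketch of explicit training-emissions accounting:
# emissions (tCO2e) = accelerator energy (kWh) x datacenter PUE x grid carbon intensity.
# All numbers below are illustrative placeholders, not values from the paper.

def training_emissions_tco2e(accelerator_hours: float,
                             avg_power_kw: float,
                             pue: float,
                             grid_gco2e_per_kwh: float) -> float:
    energy_kwh = accelerator_hours * avg_power_kw      # energy drawn by the accelerators
    facility_kwh = energy_kwh * pue                     # scale up by datacenter overhead
    return facility_kwh * grid_gco2e_per_kwh / 1e6      # grams -> metric tons of CO2e

# Example: 10,000 accelerator-hours at 0.3 kW, PUE 1.1, on a 200 gCO2e/kWh grid.
print(round(training_emissions_tco2e(10_000, 0.3, 1.1, 200.0), 2), "tCO2e")
```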
Spreading fake news has become a serious issue in the current social media world. It is broadcast with dishonest intentions to mislead people and has caused many unfortunate incidents in different countries. The most recent was the latest presidential election, in which voters were misled into supporting a leader. Twitter is one of the most popular social media platforms where users look for real-time news. We extracted real-time data on multiple domains from Twitter and performed analysis. The dataset was preprocessed, and the user_verified column played a vital role. Multiple machine learning algorithms were then applied to the features extracted from the preprocessed dataset. Logistic Regression and Support Vector Machine produced promising results, both above 92% accuracy. Naive Bayes and Long Short-Term Memory did not achieve the desired accuracies. The model can also be applied to images and videos for better detection of fake news.
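A minimal sketch of the kind of text-classification pipeline described above is shown below using scikit-learn; the tiny inline dataset and the use of TF-IDF features (rather than the paper's exact feature set, e.g. the user_verified column) are assumptions for illustration only.

```python
# Minimal sketch of a tweet-classification pipeline with TF-IDF features,
# Logistic Regression, and a linear SVM. The inline dataset and labels are
# toy examples, not the data collected in the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = [
    "Breaking: official health agency confirms new safety guidelines",
    "Shocking! Celebrity reveals miracle cure doctors don't want you to know",
    "Election results certified by the national commission",
    "Secret memo proves votes were switched by foreign hackers, share now!",
]
labels = [0, 1, 0, 1]  # 0 = real, 1 = fake (toy labels)

for name, clf in [("LogisticRegression", LogisticRegression(max_iter=1000)),
                  ("LinearSVC", LinearSVC())]:
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(tweets, labels)
    pred = model.predict(["Miracle cure revealed, doctors hate this trick"])
    print(name, "->", "fake" if pred[0] == 1 else "real")
```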
We present a general theory of payment systems that is capable of describing both traditional and electronic forms of payment. Starting from the three basic functions of money and general non-functional requirements, we derive the necessary and sufficient properties of technical implementations of money and payments. We describe possible scalable implementations of e-money schemes based on a general description of their data structures (money distributions) and payments. We define the notion of a bill scheme, in which the value units are bills with invariant values, and show that only the bill scheme allows for scalable and practically efficient implementations through decomposition, where each component has to process a considerably smaller amount of data and a smaller number of payment requests compared to the whole system.
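A hedged sketch of how a bill scheme with invariant-value bills could be decomposed into components that each handle only a fraction of the data and payment requests is given below; the Bill and shard types, field names, and the modulo-based partitioning rule are illustrative assumptions, not the paper's formal construction.

```python
# Hedged sketch of a "bill scheme" ledger and its decomposition into shards.
# The types, field names, and the hash-free modulo sharding rule are
# illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Bill:
    bill_id: int
    value: int      # invariant: a bill's value never changes
    owner: str      # only ownership changes on payment

class BillShard:
    """One component of the decomposed system: it stores and processes only
    the bills assigned to it, so each shard sees a fraction of the data and
    of the payment requests."""
    def __init__(self) -> None:
        self.bills = {}

    def pay(self, bill_id: int, payer: str, payee: str) -> None:
        bill = self.bills[bill_id]
        assert bill.owner == payer, "payer does not own this bill"
        bill.owner = payee          # value is untouched; only the owner changes

class BillLedger:
    def __init__(self, n_shards: int = 4) -> None:
        self.shards = [BillShard() for _ in range(n_shards)]

    def shard_for(self, bill_id: int) -> BillShard:
        return self.shards[bill_id % len(self.shards)]   # assumed partitioning rule

    def issue(self, bill: Bill) -> None:
        self.shard_for(bill.bill_id).bills[bill.bill_id] = bill

    def pay(self, bill_id: int, payer: str, payee: str) -> None:
        self.shard_for(bill_id).pay(bill_id, payer, payee)

ledger = BillLedger()
ledger.issue(Bill(bill_id=7, value=50, owner="alice"))
ledger.pay(7, payer="alice", payee="bob")
```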
No one predicted the coronavirus outbreak, and naturally, nobody was prepared. The coronavirus, otherwise known as COVID-19, has put the entire world in a difficult position because of the life-threatening danger it poses and its rate of spread and infection worldwide. Besides taking away so many lives, it has caused a variety of problems such as unemployment, social distancing, and disruption of businesses and daily life. Circumstances are worse for those who live from hand to mouth. Within three months of its emergence, COVID-19 forced humanity to find and implement alternative ways to sustain businesses and life. In this research paper, we have unearthed seven lessons learned from the COVID-19 pandemic. These lessons cover aspects of business, education, online presence, network communication, cybersecurity, healthcare, and the purpose of life. This research delves deeper into the response to the unannounced pandemic at hand. It aims to provide direction for addressing a potential future pandemic, should one occur.
Accurate and high-resolution spatio-temporal information about crop phenology obtained from Synthetic Aperture Radar (SAR) data is an essential component of crop management and yield estimation at a local scale. Crop growth monitoring studies seldom exploit the complete polarimetric information contained in dual-pol GRD SAR data. In this study, we propose three polarimetric descriptors: the pseudo scattering-type parameter (θc), the pseudo scattering entropy parameter (Hc), and the co-pol purity parameter (mc), derived from dual-pol Sentinel-1 (S1) GRD SAR data. We also introduce a novel unsupervised clustering framework using Hc and θc with six clustering zones to represent various scattering mechanisms. We implemented the proposed algorithm on the cloud-based Google Earth Engine (GEE) platform for Sentinel-1 SAR data. We show the sensitivity of these descriptors over a time series of data for wheat and canola crops at a test site in Canada. From the leaf development stage to the flowering stage, the pseudo scattering-type parameter θc changes by approximately 17° for both crops. Moreover, within the entire phenology window, both mc and Hc vary by about 0.6. The effectiveness of θc and Hc in clustering the phenological stages of the two crops is also evident from the clustering plot. During the leaf development stage, about 90% of the sampling points were clustered into the low to medium entropy scattering zone for both crops. Throughout the flowering stage, the entire cluster shifted into the high entropy vegetation scattering zone. Finally, during the ripening stage, the clusters of sample points were split between the high entropy vegetation scattering zone and the high entropy distributed scattering zone, with >55% of the sampling points in the high entropy distributed scattering zone. This clustering framework will facilitate the operational use of S1 GRD SAR data for agricultural applications. This article is submitted to the ISPRS Journal of Photogrammetry and Remote Sensing.
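To illustrate how the dual-pol Sentinel-1 GRD time series underlying these descriptors can be assembled on GEE, the following Python Earth Engine sketch filters dual-pol IW scenes and computes the VH/VV ratio often used as an input to dual-pol descriptors; the area of interest, date range, and the actual θc, Hc, and mc computations are not reproduced here, and all specific values are assumptions.

```python
# Hedged sketch: assembling a dual-pol Sentinel-1 GRD time series on Google
# Earth Engine and computing the VH/VV ratio used as an input to dual-pol
# descriptors. The AOI, date range, and descriptor formulas (theta_c, H_c, m_c)
# are NOT reproduced here; this only illustrates the data-access step.
import ee

ee.Initialize()

aoi = ee.Geometry.Point([-97.6, 49.7])          # placeholder point, not the actual test site
s1 = (ee.ImageCollection("COPERNICUS/S1_GRD")
      .filterBounds(aoi)
      .filterDate("2019-05-01", "2019-09-30")   # illustrative crop-season window
      .filter(ee.Filter.eq("instrumentMode", "IW"))
      .filter(ee.Filter.listContains("transmitterReceiverPolarisation", "VV"))
      .filter(ee.Filter.listContains("transmitterReceiverPolarisation", "VH")))

def add_ratio(img):
    # GRD bands are in dB; convert to linear power before taking the ratio.
    vv = ee.Image(10.0).pow(img.select("VV").divide(10.0))
    vh = ee.Image(10.0).pow(img.select("VH").divide(10.0))
    return img.addBands(vh.divide(vv).rename("VH_VV_ratio"))

s1_ratio = s1.map(add_ratio)
print("Scenes in the stack:", s1_ratio.size().getInfo())
```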
If reality is being augmented as a simulation, then, within the modern framework of physics, it is possible that everything around us, including ourselves and what we perceive, is a simulation run by a super-powerful computer from a far-distant future, taking the Einsteinian notion that past, present, and future occur simultaneously. It is quite probable that, between two consecutive amplitudes, the simulation is run either by a super-advanced civilization at or above Kardashev scale 3.0 or by some higher-order entities existing in a dimensional domain beyond our perception and understanding. In support of the simulation hypothesis, there exists a mathematical foundation for the logic behind this simulation, which is investigated throughout this paper; another consequence of it might be déjà vu or the Mandela effect. The errors that arise in this simulation are a form of glitch in the matrix that should occur because of the computational lag of the super-intelligent computers, whether future or higher-order ones. Precise calculation of dimensions opens a way to t + s = 2 + 10, where the non-locality of time, perceived as a 2-dimensional entity, opens the door for further investigation. These points are discussed in detail in this paper.
The superposition theorem, a particular case of the superposition principle, states that in a linear circuit with several voltage and current sources, the current through and voltage across any element of the circuit is the algebraic sum of the currents and voltages produced by each source acting independently. The superposition theorem is not applicable to power, because power is a non-linear quantity. Therefore, the total power dissipated in a resistor must be calculated using the total current through (or the total voltage across) it. The theorem proposed and proved in this paper states that in a linear DC network consisting of resistors and independent voltage and current sources, the total power dissipated in the resistors of the network is the sum of the power supplied simultaneously by the voltage sources with the current sources replaced by open circuits, and the power supplied simultaneously by the current sources with the voltage sources replaced by short circuits. In this sense, the power is superimposed. The theorem can be used to simplify the power analysis of DC networks. The analysis results are validated via numerical examples.
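A small numerical check of the stated theorem on an assumed two-resistor circuit (one voltage source, one current source, with arbitrarily chosen component values) is sketched below; it is an illustration, not one of the paper's own examples.

```python
# Numerical check of the power-superposition theorem on an assumed circuit:
# Vs in series with R1 feeds node A, R2 goes from node A to ground, and a
# current source Is injects current into node A. Values are arbitrary examples.
Vs, Is = 10.0, 2.0          # volts, amperes
R1, R2 = 4.0, 6.0           # ohms

# Full circuit: node voltage at A by nodal analysis.
Va = (Vs / R1 + Is) / (1.0 / R1 + 1.0 / R2)
P_total = (Vs - Va) ** 2 / R1 + Va ** 2 / R2          # total dissipation in R1 and R2

# Voltage source acting alone, current source open-circuited (series R1-R2).
P_from_Vs = Vs ** 2 / (R1 + R2)

# Current source acting alone, voltage source short-circuited (R1 || R2).
P_from_Is = Is ** 2 * (R1 * R2 / (R1 + R2))

print(P_total, P_from_Vs + P_from_Is)   # both evaluate to 19.6 W
```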
Malware behavioral graphs provide a rich source of information that can be leveraged for detection and classification tasks. In this paper, we propose a novel behavioral malware detection method based on Deep Graph Convolutional Neural Networks (DGCNNs) that learns directly from API call sequences and their associated behavioral graphs. To train and evaluate the models, we created a new public domain dataset of more than 40,000 API call sequences resulting from the execution of malware and goodware instances in a sandboxed environment. Experimental results show that our models achieve Area Under the ROC Curve (AUC-ROC) and F1-score values similar to those of Long Short-Term Memory (LSTM) networks, widely used as the base architecture for behavioral malware detection methods, indicating that the models can effectively learn to distinguish between malicious and benign temporal patterns through convolution operations on graphs. To the best of our knowledge, this is the first paper to investigate the applicability of DGCNNs to behavioral malware detection using API call sequences.
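A stripped-down sketch of graph convolution over a behavioral graph is shown below in plain PyTorch; a normalized-adjacency propagation step with mean pooling stands in for the full DGCNN (which also uses SortPooling and 1-D convolutions), and the toy API-call graph and feature dimensions are invented for illustration.

```python
# Stripped-down sketch of graph convolution over an API-call behavioral graph.
# Plain PyTorch; normalized-adjacency propagation with mean pooling stands in
# for the full DGCNN (SortPooling and 1-D convolutions omitted). The toy graph
# and dimensions are invented for illustration.
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    def __init__(self, in_dim: int, hidden: int, n_classes: int = 2):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, hidden)
        self.readout = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2.
        a_hat = adj + torch.eye(adj.size(0))
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).rsqrt())
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
        h = torch.relu(self.lin1(a_norm @ x))     # first graph convolution
        h = torch.relu(self.lin2(a_norm @ h))     # second graph convolution
        graph_emb = h.mean(dim=0)                 # mean pooling over API-call nodes
        return self.readout(graph_emb)            # malware vs. goodware logits

# Toy behavioral graph: 4 API-call nodes with one-hot features; directed calls
# made symmetric for this sketch.
adj = torch.tensor([[0, 1, 0, 0],
                    [1, 0, 1, 1],
                    [0, 1, 0, 0],
                    [0, 1, 0, 0]], dtype=torch.float32)
x = torch.eye(4)
logits = SimpleGraphConv(in_dim=4, hidden=8)(x, adj)
print(logits)
```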
This is a preprint version of the manuscript submitted to IEEE on June 4, 2020. This paper gives an overview of Artificial Intelligence (AI) applications for power electronic systems. The three distinctive life-cycle phases, namely design, control, and maintenance, are correlated with one or more tasks to be addressed by AI, including optimization, classification, regression, and data structure exploration. The applications of four categories of AI are discussed: expert systems, fuzzy logic, metaheuristic methods, and machine learning. More than 500 publications have been reviewed to identify the common understandings, practical implementation challenges, and research opportunities in the application of AI for power electronics.
Social media has revolutionized the way we communicate and interact with each other. While it has brought many benefits, it has also presented many ethical challenges. Social media platforms have access to an enormous amount of personal data, and there are concerns about how this data is being collected, stored, and used. Users often fail to fully understand the risks of sharing sensitive information. Social media platforms have also made it easy for fake news to spread rapidly, which can be dangerous and have serious consequences, as misinformation and propaganda can influence people's decisions and beliefs. In this paper, we analyze the issues and challenges that may arise and argue that individuals and society must address these challenges ethically and responsibly.