Radio over Fiber (RoF) is pivotal for extending reliable 5G connectivity to enhanced remote area communications (ERAC) use cases: analog signals are transported from the central office to a simplified remote base station composed only of an optical detector and a radio-frequency front-end. However, the RoF link introduces undesired nonlinear effects that can severely degrade overall system performance and prohibitively increase out-of-band emissions. We propose and investigate a reinforcement-learning-based digital predistortion (DPD) method, termed RLDPD, for linearizing next-generation Analog Radio over Fiber (A-RoF) links within the 5G landscape. We experimentally compare the proposed RLDPD with conventional methods, including the generalized memory polynomial (GMP), canonical piecewise linearization (CPWL), and deep-learning-based convolutional neural networks (CNN). The experimental evaluation involves multiband 5G new radio (NR) flexible-waveform signals at 3 GHz and 10 GHz carrier frequencies transmitted over a 10 km single-mode fiber. Performance is compared in terms of error vector magnitude (EVM), adjacent channel leakage ratio (ACLR), and computational complexity. The RLDPD achieves an EVM of 2.85% for the 5G NR waveform, surpassing GMP's 4.8%, CPWL's 3.5%, CNN's 3.08%, and 11.25% without linearization, while also reducing ACLR by 19 dBc compared with the case without linearization.
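The GMP baseline mentioned above can be illustrated with a minimal sketch: a memory polynomial (a special case of the GMP) fitted by least squares under the indirect learning architecture. The toy cubic link nonlinearity, signal statistics, and model orders below are illustrative assumptions, not the paper's A-RoF link model.

```python
import numpy as np

def mp_basis(x, K=5, M=3):
    """Memory polynomial basis: x[n-m] * |x[n-m]|^(k-1) for odd orders k."""
    N = len(x)
    cols = []
    for m in range(M):
        xm = np.concatenate([np.zeros(m, dtype=complex), x[:N - m]])
        for k in range(1, K + 1, 2):        # odd-order terms only
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(cols)

def fit_postinverse(y, x, K=5, M=3):
    """Indirect learning: fit coefficients mapping the link output y back to x."""
    B = mp_basis(y, K, M)
    c, *_ = np.linalg.lstsq(B, x, rcond=None)
    return c

def predistort(x, c, K=5, M=3):
    return mp_basis(x, K, M) @ c

# Toy nonlinear link with mild third-order compression (an assumption)
rng = np.random.default_rng(0)
x = (rng.standard_normal(4000) + 1j * rng.standard_normal(4000)) * 0.2
link = lambda u: u - 0.15 * u * np.abs(u) ** 2

c = fit_postinverse(link(x), x)
y_lin = link(predistort(x, c))          # predistorted signal through the link
nmse = 10 * np.log10(np.mean(np.abs(y_lin - x) ** 2) / np.mean(np.abs(x) ** 2))
```

The post-inverse coefficients are reused as the predistorter, the standard indirect-learning shortcut; the residual NMSE drops well below the uncompensated distortion floor.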
Objective: Techniques based on artificial intelligence, specifically machine learning, have played a major role in the enhancement of pharmacological methodologies and the development of medical treatments, especially those that are individualized or fall within the province of precision medicine. In this article, we examine how graph neural networks have revolutionized certain important aspects of pharmacology. Background: Pharmacological data is replete with unidirectional as well as bidirectional associations with regard to, for example, drug interactions, patient-centered medicine, precision medicine, multi-omics data analysis, drug discovery, optimization of experimental processes, and other fields. These associations can be more readily modeled using advanced computational methods and machine learning techniques such as graph neural networks. The revolutionary advancements in the field of data mining have further fueled the need to create models that can resolve pharmacological correlations and dependencies into readily interpretable outcomes. Methods: We conducted a literature review to find documents that provide relevant information about our objectives. With a comprehensive search plan in place, we selected applicable articles and studied them to identify pertinent points that assisted our understanding of graph neural networks as a tool to improve, automate, and simplify practical applications in pharmacology and pharmacotherapeutics. Conclusion: The review of relevant research has confirmed our hypothesis that graph neural networks can be used to create an innovative, lasting, and radical departure in pharmaceutical therapeutics. Graph neural networks can automate and simplify many tasks based on the large and complex datasets inherent in pharmacological science.
Such techniques can help us achieve innovative methods in therapeutics using extant pharmaceuticals and in the development of new drugs, and therefore bode well for the future of healthcare.
This research paper presents a comprehensive investigation into the development of a novel custom neural network model for intrusion detection systems (IDS). In the current era of rapid data transfer facilitated by the internet and advancements in communication technologies, the security of sensitive information is of paramount concern. As attackers continuously devise new methodologies to steal or tamper with data, IDSs face significant challenges in effectively detecting and mitigating intrusions. While extensive research has been conducted to enhance IDS capabilities, the need for improved detection accuracy and reduced false alarm rates remains a pressing issue. Moreover, the identification of zero-day attacks continues to pose a formidable obstacle. In contrast to conventional IDS approaches that rely heavily on statistical methodologies and rule-based expert systems, this study embraces data mining techniques, specifically neural networks (NNs), to overcome the limitations associated with large datasets. This paper proposes a carefully designed custom neural network model that leverages machine learning (ML) algorithms to analyze contemporary host activity and cloud service data. The paper discusses the dataset used, evaluates the performance of various classifiers, and introduces our neural network model. Emphasizing the significance of the model in anomaly detection, the findings underscore the importance of robust ML models to ensure the efficacy and longevity of deployed defensive systems. By capitalizing on its design and leveraging the power of ML algorithms, our model not only addresses the limitations of traditional IDS approaches but also paves the way for enhanced accuracy, reduced false alarms, and improved resilience against zero-day attacks.
This research contributes to the advancement of the field, shedding light on the novel possibilities and remarkable innovation offered by our custom neural network model in safeguarding critical information in an increasingly hostile digital landscape.
In the contemporary era of rapid technological advancement, the Industrial Internet of Things (IIoT) has become a pivotal element in revolutionizing industrial operations. This paper delves into the escalating cybersecurity challenges posed by the sprawling networks of IIoT, accentuating the inadequacy of traditional cybersecurity methods in the face of sophisticated cyber threats. We introduce machine learning (ML) as a transformative approach to fortify the cybersecurity landscape of IIoT systems. Our research primarily focuses on the application of machine learning algorithms to detect, analyze, and counteract diverse cyber threats in IIoT environments. These algorithms are trained to recognize and respond to a spectrum of cyber threats, thereby enhancing the resilience of IIoT networks. We present a novel Convolutional-GRU autoencoder model, which demonstrates superior performance over traditional machine learning models in terms of accuracy, precision, recall, and F1-score. This model is adept at learning and adapting from complex data patterns, ensuring robust defense against cyber intrusions. We also address the challenges in applying ML to IIoT cybersecurity, considering the varied nature of IIoT devices and the dynamic landscape of cyber threats. This study is an important stride towards enhancing IIoT cybersecurity, highlighting the symbiotic relationship between ML and IIoT. It serves as a foundation for future research and a guide for current implementations, aiming to create more secure, reliable, and efficient IIoT environments. By exploring the potential of ML in cybersecurity, we pave the way for a new era in industrial digital protection, one that is adaptable, forward-thinking, and resilient against the ever-evolving digital threats.
Functional near-infrared spectroscopy (fNIRS) is a non-invasive technique for monitoring brain activity. To better understand the brain, researchers often use deep learning to address the classification challenges of fNIRS data. Our study shows that while current networks in fNIRS are highly accurate for predictions within their training distribution, they falter at identifying and excluding abnormal, out-of-distribution data, affecting their reliability. We propose integrating metric learning and supervised methods into fNIRS research to improve networks' capability to identify and exclude out-of-distribution outliers. This method is simple yet effective. In our experiments, it significantly enhances the performance of various networks in fNIRS, particularly the transformer-based one, which shows a marked improvement in reliability. We will make our experiment data available on GitHub.
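One common way to realize the kind of out-of-distribution rejection described above is to score a test embedding by its distance to the nearest class centroid learned from in-distribution data. This sketch uses synthetic embeddings and is an illustrative assumption, not the paper's exact metric-learning method.

```python
import numpy as np

def class_centroids(emb, labels):
    """Mean embedding per class, computed from in-distribution training data."""
    return {c: emb[labels == c].mean(axis=0) for c in np.unique(labels)}

def ood_score(z, centroids):
    """Distance to the nearest class centroid; large => likely out-of-distribution."""
    return min(np.linalg.norm(z - mu) for mu in centroids.values())

# Synthetic 8-dim embeddings for two well-separated classes (an assumption)
rng = np.random.default_rng(1)
emb = np.vstack([rng.normal(0, 0.3, (50, 8)), rng.normal(5, 0.3, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
cents = class_centroids(emb, labels)

in_dist = rng.normal(0, 0.3, 8)       # sample near class 0
outlier = rng.normal(20, 0.3, 8)      # sample far from both classes
s_in = ood_score(in_dist, cents)
s_out = ood_score(outlier, cents)
```

Thresholding the score then excludes outliers before classification; the threshold itself would be calibrated on held-out in-distribution data.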
Service meshes are emerging software frameworks to manage communications among microservices of distributed applications. With a service mesh, each microservice is flanked by an L7 sidecar proxy that intercepts any incoming and outgoing requests for better observability, traffic management, and security. The sidecar proxy uses an application-level load balancing policy to route outbound requests towards possible replicas of destination microservices. A widely used load balancing policy is the Least Outstanding Request (LOR), which routes requests to the microservice replica with the fewest outstanding requests. While the LOR policy significantly reduces request latency in scenarios with a single load balancer, our comprehensive investigation, spanning analytical, simulation, and experimental methodologies, reveals that its effectiveness decreases in environments with multiple load balancers, typical of service meshes serving applications with several microservice replicas. Specifically, the resulting request latency asymptotically tends to that provided by a random load balancing policy as the number of microservice replicas increases. To address this loss in efficacy, we propose a solution based on a new Kubernetes custom resource, named Proxy-Service, offering potential improvements in performance and scalability.
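The Least Outstanding Request policy at the heart of the analysis above can be sketched in a few lines, as a sidecar proxy might implement it; the class and replica names are hypothetical, not from Envoy or any specific mesh.

```python
import random

class LorBalancer:
    """Least Outstanding Request: route to the replica with fewest in-flight requests."""
    def __init__(self, replicas):
        self.outstanding = {r: 0 for r in replicas}

    def pick(self):
        fewest = min(self.outstanding.values())
        tied = [r for r, n in self.outstanding.items() if n == fewest]
        choice = random.choice(tied)       # break ties randomly
        self.outstanding[choice] += 1      # request is now in flight
        return choice

    def done(self, replica):
        self.outstanding[replica] -= 1     # response received

lb = LorBalancer(["replica-a", "replica-b", "replica-c"])
first, second = lb.pick(), lb.pick()       # spread across idle replicas
lb.done(first)                             # first replica finishes its request
third = lb.pick()                          # avoids the still-busy replica
```

Note that each balancer only sees its own outstanding counts, which is exactly why the policy degrades toward random routing when many independent sidecars share the same replica set.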
We address the problem of real-time remote tracking of a partially observable Markov source in an energy harvesting system with an unreliable communication channel. We consider both sampling and transmission costs. Unlike most prior studies, which assume the source is fully observable, here the sampling cost renders the source partially observable. The goal is to jointly optimize sampling and transmission policies for two semantic-aware metrics: i) a general distortion measure and ii) the age of incorrect information (AoII). We formulate a stochastic control problem. To solve the problem for each metric, we cast it as a partially observable Markov decision process (POMDP), which is transformed into a belief MDP. Then, for both AoII under the perfect channel setup and distortion, we express the belief as a function of the age of information (AoI). This expression enables us to effectively truncate the corresponding belief space and formulate a finite-state MDP problem, which is solved using the relative value iteration algorithm. For the AoII metric in the general setup, a deep reinforcement learning policy is proposed to solve the belief MDP problem. Simulation results show the effectiveness of the derived policies and, in particular, reveal a non-monotonic switching-type structure of the real-time optimal policy with respect to AoI.
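The key reduction — expressing the belief as a function of the AoI — can be illustrated for a simple symmetric two-state Markov source (an assumption of this sketch; the paper's setting is more general). Since no new observation arrives between samples, the belief after an AoI of Δ steps is simply the Δ-step transition distribution from the last sampled state.

```python
import numpy as np

# Symmetric two-state Markov source with self-transition probability p (assumed)
p = 0.9
P = np.array([[p, 1 - p],
              [1 - p, p]])

def belief(last_state, aoi):
    """Distribution over the source state, given the last sampled state and
    the age of information (steps elapsed since that sample)."""
    e = np.zeros(2)
    e[last_state] = 1.0
    return e @ np.linalg.matrix_power(P, aoi)

b0 = belief(0, 0)      # fresh sample: belief is a point mass
b5 = belief(0, 5)      # belief decays geometrically with AoI
b_inf = belief(0, 500) # stale sample: belief approaches the stationary distribution
```

Because the belief is fully determined by (last sampled state, AoI), the continuous belief space collapses onto a countable set indexed by AoI, which is what makes the truncation to a finite-state MDP possible.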
The rise in robotics technology has led to increased interest in three-wheeled mobile robots (TWMRs) due to their agility and adaptability across various applications. However, effectively controlling TWMRs presents a significant challenge owing to their inherent nonholonomic constraint, which restricts their independent movement in all directions. Additionally, factors such as sensor noise, nonlinear system dynamics, and uncertain system parameters add to the complexity of controlling TWMRs. This research endeavors to enhance the precision of trajectory tracking in TWMRs. Specifically, it employs Backstepping Fuzzy Sliding Mode Control (BFSMC) with parameters optimized through Particle Swarm Optimization (PSO), coupled with the Extended Kalman Filter (EKF) for state estimation. The study conducts a comprehensive performance comparison between BFSMC and Backstepping Sliding Mode Control (BSMC) across various trajectory patterns, revealing substantial improvements in trajectory tracking accuracy with BFSMC, quantified by the percentage improvement in the Integral Absolute Error (IAE). Specifically, BFSMC achieves a 51.97% improvement for circular trajectories, an 82.09% improvement for infinity trajectories, and an 84.073% improvement for spiral trajectories. Moreover, BFSMC demonstrates superior robustness in the presence of disturbances, noise, parameter variations, and unmodeled dynamics compared to BSMC. The integration of the Extended Kalman Filter further improves accuracy, particularly in noisy conditions. Simulation results obtained using MATLAB/Simulink validate the effectiveness of this approach in achieving superior trajectory tracking accuracy in TWMRs.
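The IAE comparison reported above can be sketched as follows; the two exponential error envelopes are purely synthetic stand-ins for the simulated tracking errors, used only to show how the metric and the percentage improvement are computed.

```python
import numpy as np

def iae(t, error):
    """Integral Absolute Error via trapezoidal integration over the error signal."""
    a = np.abs(error)
    return float(np.sum((a[1:] + a[:-1]) * np.diff(t)) / 2)

def improvement_pct(iae_baseline, iae_proposed):
    """Percentage reduction of IAE relative to the baseline controller."""
    return 100.0 * (iae_baseline - iae_proposed) / iae_baseline

t = np.linspace(0, 10, 1001)
e_bsmc = 0.5 * np.exp(-0.3 * t)    # illustrative BSMC tracking-error envelope
e_bfsmc = 0.1 * np.exp(-0.8 * t)   # illustrative BFSMC envelope (faster, smaller)

pct = improvement_pct(iae(t, e_bsmc), iae(t, e_bfsmc))
```

A controller with a smaller and faster-decaying error envelope accumulates far less absolute error, which the single IAE figure summarizes.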
Identification of the sources of seizures in the brain is of paramount importance, particularly for drug-resistant epilepsy patients who may require surgery. Interictal epileptiform discharges (IEDs), which may or may not be frequent, are known to originate from seizure networks. Delayed responses (DRs) to brain electrical stimulation have recently been discovered. If DRs and IEDs come from the same location and the DRs can be accurately localized, this will be a significant step in the identification of the source of seizures. This important question is investigated in this paper. To this end, we exploit the morphology of these spike-type events, as well as the variability in their temporal location, to develop new constraints for an adaptive Bayesian beamformer that outperforms the conventional and recently proposed beamformers. This beamformer is applied to an array (a.k.a. mat) of cortical EEG electrodes. As the significant outcome of applying this beamformer, it is very likely (if not certain) that the IEDs and DRs for an epileptic subject originate from the same location in the brain. This paves the way for quick identification of the source(s) of seizures in the brain.
Remaining useful life (RUL) is crucial to condition and health monitoring. This paper proposes an adaptive RUL prediction method for DC-link film capacitors in power electronic applications. Using the proportional hazards model framework, the method integrates degradation data and time-to-failure data to quantify component failure behavior probabilistically. It employs a mixed-effects model to characterize the degradation behavior of capacitors, and the hazard rate to characterize the likelihood of capacitor failure. Operational conditions are incorporated into the hazard model to capture their influence on the RUL. A Bayesian updating mechanism is developed to calibrate and tailor the offline model to the in-situ component, enabling adaptive RUL prediction from sequential monitoring data in real time. The method is experimentally verified with DC film capacitors subjected to accelerated humidity conditions. It is accompanied by an online tool at https://rul-capacitor.streamlit.app for interactively exploring the method's details.
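The proportional hazards idea can be sketched with a Weibull baseline hazard scaled by an operating-stress covariate; all parameter values and the stress covariate below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

# Weibull baseline hazard h0(t) = (beta/eta) * (t/eta)**(beta-1), scaled by
# exp(gamma * z) for a stress covariate z (proportional hazards assumption).
beta, eta, gamma = 2.0, 1000.0, 0.5   # illustrative values (time in hours)

def hazard(t, z):
    return (beta / eta) * (t / eta) ** (beta - 1) * np.exp(gamma * z)

def survival(t, z):
    # S(t) = exp(-H(t)), with cumulative hazard H(t) = (t/eta)**beta * exp(gamma*z)
    return np.exp(-((t / eta) ** beta) * np.exp(gamma * z))

def median_rul(t_now, z, horizon=5000.0, n=100000):
    """Remaining time until the conditional survival S(t)/S(t_now) drops below 0.5."""
    ts = np.linspace(t_now, t_now + horizon, n)
    cond = survival(ts, z) / survival(t_now, z)
    return ts[np.searchsorted(-cond, -0.5)] - t_now

rul_low = median_rul(200.0, z=0.0)    # benign operating condition
rul_high = median_rul(200.0, z=2.0)   # harsher condition (e.g., higher humidity)
```

A harsher condition inflates the hazard multiplicatively and shortens the predicted RUL; in the paper's framework the covariate coefficients would additionally be recalibrated online via the Bayesian update.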
This paper presents a metal-only reflectarray based on a 3D unit cell with dual-band capability. The 3D unit cell is a square waveguide whose vertical walls include resonator elements with independent frequency performance. Different resonator geometries are analyzed to obtain a reflected phase variation in the target frequency band and to be feasible for 3D manufacturing. C-shaped, triangle-shaped, and circle-shaped resonators are selected to obtain the required phase shift in reflection. Two reflectarray (RA) prototypes are designed, including pairs of these resonators where the C-shaped resonator controls the low-frequency band, and the circle and triangle resonators do so for the high-frequency band. The main reflected beam directions for each frequency band are different to show the independent phase tuning of the resonators. The prototypes are manufactured using stereolithography (SLA) with a subsequent silver coating. Measured results show realized gains of 21 dBi in the 18 GHz band and 24 dBi in the 26.5 GHz band, with a high radiation efficiency and good agreement with the simulated results.
With rapid urbanization, cities face immense pressures on infrastructure and resources. Uncoordinated management of transportation, energy, water and waste infrastructure leads to inefficiencies, delays and unsustainability. This paper proposes a novel IoT-enabled framework to address these challenges through holistic data-driven management of city infrastructure. While prior works have explored IoT point solutions for specific domains, our integrated framework delivers a comprehensive architecture for citywide infrastructure visibility. The edge computing-based distributed design enables scalable real-time analytics across thousands of heterogeneous assets spread city-wide. Through consolidated storage and analytics, interdependencies between various infrastructure systems can be uncovered to optimize overall city operations. The standards-based implementation fosters seamless integration of diverse infrastructure technologies. Our unified data management layer provides a single platform for visual intelligence on city-wide infrastructure health to support data-driven planning. We demonstrate the efficacy of the proposed framework through a case study focused on transportation infrastructure management. The results showcase significant enhancements in operational efficiency, sustainability and cost savings across transport assets when managed under the IoT-enabled framework versus traditional siloed approaches. This paper provides city leaders and technologists an implementable blueprint to harness the power of IoT and analytics for transitioning to smarter, sustainable and resident-friendly infrastructure.
Future 6G networks will be enabled by full softwarization of network functions and operations and by in-network intelligence for self-management and orchestration. However, the intelligent management of a softwarized network will require massive data mining, analytics, and processing. It is therefore essential to exploit additional resources, such as quantum technologies, to help achieve 6G key performance indicators. Quantum properties allow quantum computers to run certain algorithms with fewer queries. Quantum Machine Learning (QML) studies machine learning techniques on quantum computers. In this work, we use a QML algorithm to solve the controller placement problem for a multi-controller Software Defined Network (SDN). The network delay depends on where the controllers are located; it is therefore critical to place controllers at positions that minimize the latency between the controllers and their associated switches. We consider an SDN architecture in an early stage of installation, where the network nodes are deployed but connections are established only after the controller locations are obtained, resulting in a reduction of the overall controller-to-switch delay. Using different types of datasets, i.e., uniformly distributed and Gaussian distributed points, the experimental results show that the QML algorithm accelerates the SDN clustering methods (which are used to solve the controller placement problem) compared to classical machine learning algorithms (such as K-means) while achieving comparable latency.
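The classical K-means baseline against which the QML approach is compared can be sketched directly: cluster the switch coordinates, take the centroids as candidate controller positions, and use switch-to-controller distance as a latency proxy. The coordinates and blob layout below are hypothetical.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means; centroids serve as candidate controller locations."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        assign = d.argmin(axis=1)
        centroids = np.array([points[assign == j].mean(axis=0)
                              if np.any(assign == j) else centroids[j]
                              for j in range(k)])   # keep empty clusters in place
    return centroids, assign

# Two hypothetical geographic clusters of switches (Gaussian-distributed points)
rng = np.random.default_rng(42)
switches = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(10, 1, (40, 2))])
controllers, assign = kmeans(switches, k=2)

# Worst-case switch-to-controller distance, a proxy for propagation delay
worst = np.linalg.norm(switches - controllers[assign], axis=1).max()
```

The QML variant in the paper targets the same clustering objective but accelerates the assignment/update steps, which is why latency comparability with K-means is the relevant yardstick.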
Intelligent Transportation Systems (ITS) operate within a highly intricate and dynamic environment characterized by complex spatial and temporal dynamics at various scales, further compounded by fluctuating conditions influenced by external factors such as social events, holidays, and weather. Modeling the interaction among these elements, creating universal representations, and employing them to address transportation issues is a significant endeavor. Yet these intricacies comprise just one facet of the multifaceted challenges confronting contemporary ITS. This paper offers an all-encompassing survey exploring deep learning (DL) utilization in ITS, primarily focusing on the methodologies practitioners use to address these multifaceted challenges. The emphasis lies on the architectural and problem-specific factors that guide the formulation of innovative solutions. In addition to shedding light on state-of-the-art DL algorithms, we explore potential applications of DL and large language models (LLMs) in ITS, including traffic flow prediction, vehicle detection and classification, road condition monitoring, traffic sign recognition, and autonomous vehicles. We also identify several future challenges and research directions that can push the boundaries of ITS, including the critical aspects of explainability, transfer learning, hybrid models, privacy and security, and ultra-reliable low-latency communication. Our aim for this survey is to bridge the gap between the burgeoning DL and transportation communities and thereby facilitate a deeper comprehension of the challenges and possibilities within this field. We hope that this effort will inspire further exploration of fresh perspectives and issues, which, in turn, will play a pivotal role in shaping the future of transportation systems.
This research uses TOPSIS to evaluate the 14 Cricket World Cup 2023 teams. Data from the ESPN Cricinfo website was used in the analysis. A comprehensive set of criteria (P1 to P11) was used to evaluate each squad, encompassing various aspects of the game. A numerical labeling system (A1 to A14) for team names and a parameter system (P1 to P11) for team qualities were used for efficiency. The research calculates the normalized matrix and the weighted matrix, then finds the ideal best and worst values using TOPSIS. The normalized matrix creates a consistent and uniform framework for evaluating and comparing factors, ensuring impartiality and justification, while the weighted matrix integrates each criterion's proportional importance into the evaluation process. For each criterion, the ideal best and ideal worst values indicate the best and worst performance. The TOPSIS analysis placed Australia first, Bangladesh second, and New Zealand third. In fourth and fifth place were India and Sri Lanka. Afghanistan, West Indies, England, South Africa, and Pakistan were rated sixth to tenth. Nepal placed eleventh, followed by Ireland, the US, and Zimbabwe in twelfth through fourteenth. The TOPSIS technique offers a systematic means of understanding team performance, although it is important to acknowledge that the Cricket World Cup 2023 results may vary owing to many factors. This study provides a systematic and comprehensive approach to assessing team performance, making it a useful resource for cricket fans and experts interested in the event's competitive dynamics.
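The TOPSIS steps described above (vector normalization, weighting, ideal best/worst values, closeness coefficient) can be sketched on a small hypothetical example; the teams, criteria, and weights below are illustrative, not the study's P1-P11 data.

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows of X) against criteria (columns of X)."""
    R = X / np.linalg.norm(X, axis=0)          # vector-normalized matrix
    V = R * weights                            # weighted normalized matrix
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))    # ideal best
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))   # ideal worst
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)        # closeness: 1 = ideal, 0 = worst

# Three hypothetical teams; criteria: runs (benefit), wickets lost (cost),
# net run rate (benefit); weights sum to 1.
X = np.array([[320.0, 5.0, 1.2],
              [280.0, 8.0, 0.4],
              [300.0, 6.0, 0.9]])
w = np.array([0.5, 0.2, 0.3])
scores = topsis(X, w, benefit=np.array([True, False, True]))
ranking = scores.argsort()[::-1]   # team indices from best to worst
```

An alternative that matches every criterion's ideal best value gets a closeness of exactly 1, and one that matches every ideal worst value gets 0, which the first and second rows of this example demonstrate.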
Time measurements in electronics are challenging given their various applications. The main difficulty lies not in achieving greater precision, as conventional architectures have already reached picosecond levels, but in the use of low resources and the substantial expansion in the number of channels. This study presents a novel architecture for implementing time-to-digital converters (TDCs) in applications where resources are constrained. The introduced FPGA-based TDC offers a resolution of 415.84 ps and a single-shot precision of 0.45 LSB (186 ps r.m.s.) while maintaining minimal resource occupancy. Built upon a multi-shift phase counter, the TDC is extended with a tap delay using the input delay available in the FPGA hardware input, doubling the resolution of the TDC. Resource utilization is minimized compared to low-resource state-of-the-art TDCs: the number of LUTs has been reduced to 102 and the number of registers to 213. Furthermore, the presented TDC exhibits favorable DNL (0.2 LSB) and INL (0.15 LSB). The TDC has been successfully implemented on an Artix7-2 FPGA from Xilinx. This design provides a resource-effective solution for applications requiring high precision and low resource consumption.
This letter reports the electrical properties of an AlGaN/GaN high electron mobility transistor (HEMT) with epitaxial Nd2O3 as the gate insulator. The introduction of Nd2O3 between the metal and the semiconductor in the gate region results in a two-orders-of-magnitude reduction of the gate leakage current, which remains unchanged even at a higher temperature of 200°C. The Ion/Ioff ratio also remains constant at 200°C, and the transconductance stays at its peak over a significant range of gate bias (4.5 V). This linearity is attributed to the increased electron concentration in the channel due to the introduction of epitaxial Nd2O3 and the resulting strain on the AlGaN barrier layer. The increased 2DEG density also leads to an increase in the output drain current of the metal-oxide-semiconductor (MOS) HEMT.