The Industrial Internet of Things (IIoT) has become a critical technology for accelerating the digital and intelligent transformation of industries. As the cooperative relationships between smart devices in IIoT become more complex, obtaining deterministic responses for IIoT periodic time-critical computing tasks becomes a crucial and nontrivial problem. However, few current works in cloud/edge/fog computing focus on this problem. This paper pioneers the study of deterministic scheduling and network structural optimization for IIoT periodic time-critical computing tasks. We first formulate the two problems and derive theorems that help quickly identify computation and network resource sharing conflicts. Based on this, we propose a deterministic scheduling algorithm, IIoTBroker, which achieves a deterministic response for each IIoT task by optimizing fine-grained computation and network resource allocations, and a network optimization algorithm, IIoTDeployer, which provides a cost-effective structural upgrade solution for existing IIoT networks. Simulation results show that our methods are cost-effective, scalable, and guarantee deterministic responses at low computational cost.
This work emerges from the intensifying need to understand and address security issues in rapidly advancing technologies such as 5G and beyond, including the open radio access network (O-RAN). The current paper provides an in-depth examination of the security aspects of the E2 interface within the O-RAN context. The research underscores the diverse roles that the E2 interface assumes in enabling communication between the RAN Intelligent Controller (RIC) and the E2 node. It critically examines the various vulnerabilities and potential security threats of this interface. This work subsequently reviews the security mechanisms and methodologies proposed by the O-RAN Alliance to secure the E2 interface. This work aims to highlight the crucial role that the E2 interface plays in the network's overall communication and the security questions that must be addressed, given the stakes involved in these networks. The findings from this work could serve as a valuable addition to existing resources and provide insightful perspectives for future research in this field. The paper concludes with a discussion of potential directions for future work on the security of the E2 interface.
In recent years, reconfigurable intelligent surfaces (RIS) have made significant progress in engineering application research and industrialization, as well as in academic research. However, the engineering application research field of RIS still faces several challenges. This article analyzes and discusses the two deployment modes of RIS-assisted wireless networks, namely Network-Controlled Mode and Standalone Mode. It also presents three typical collaboration scenarios of RIS networks, including multi-RIS collaboration, multi-user access, and multi-cell coordination, which reflect the differences between the two deployment modes. The article proposes collaborative regulation mechanisms for RIS and analyzes their applications in the two network deployment modes in depth. Furthermore, the article establishes simulation models for the three scenarios and provides extensive numerical simulation results. An actual field test environment is also built, in which a specially designed and fabricated RIS prototype was used for preliminary field tests and verification. Finally, the article discusses future trends and challenges.
Today's websites are accelerated by scripts, but their foundation, the web page itself, is still a static structure. The Document Object Model (DOM) represents the structure of a web page. Here we show a new approach: it is possible to combine a time tree with the DOM to form a new structure named the Time Object Model (TOM). TOM represents not only a static page but also a dynamic stream. We believe the best way to use TOM is to embed it into an HTML page in real time without changing the existing content; at present, this is the only approach that works.
In this paper, comprehensive double-directional channel measurements at 300 GHz in various usage scenarios in corridor environments, namely Access, Device-to-Device (D2D), and Backhaul, over 40 different receiver (Rx) positions using an in-house-developed channel sounder are presented. The measurement results are analyzed and validated by ray tracing (RT) simulation. The quasi-optical propagation properties at 300 GHz make an accurate estimation of the relatively simple propagation in a corridor environment possible using ray optics theory. However, even though non-trivial quadruple-bounce specular reflection paths can be identified in both scenarios, propagation phenomena other than reflection exist irrespective of the Rx position. Thus, to model the propagation mechanisms appropriately, a quasi-deterministic (QD) channel model comprising deterministic and random components is also proposed. The results generated using the proposed model agree well with our prior observations and measurement results. Finally, the paper concludes by characterizing and comparing the channel for all the investigated scenarios in terms of path loss (PL) and large-scale parameters (LSPs). Analysis of the measurement results using the synthesized power spectra, the proposed QD model, and the evaluated PL and LSPs shows that the Access and D2D scenarios share almost similar propagation mechanisms. Furthermore, in the Access and Backhaul scenarios, the line-of-sight (LoS) path is observed to be affected by unresolvable ceiling-reflected components. This study, spanning three different scenarios, can aid the design of next-generation communication systems operating in the THz spectrum.
Enabling artificial-intelligence-native end-to-end systems in the ultra-wideband sub-terahertz spectrum faces several challenges. The particularly complex channel variations and the nonlinear behavior of the transceivers' analog components are major obstacles to the over-the-air adaptation of these systems. In this paper, we investigate an edge-based bidirectional long short-term memory (BiLSTM) neural network capable of predicting channel gain variations in Non-Line-of-Sight conditions. We aim to equip end-to-end autoencoders with a predictive model for scheduling the training phase when the power is above the receiver sensitivity and there are no large fading variations; otherwise, the training of the end-to-end system will likely fail. With only 16 BiLSTM cells, our model infers the channel gain variations with a worst-case root mean squared error lower than 0.0547 (i.e., 1.1% of the normalized channel gain range). Moreover, with lower computational complexity, our model reduces error propagation compared to traditional recurrent neural networks and deep-learning-based forecasting models.
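The forecasting setup behind such a predictor can be sketched with a sliding-window formulation and the normalized RMSE metric used to score it. The BiLSTM itself is omitted here; the naive persistence baseline, the synthetic fading trace, and every numeric value below are illustrative assumptions, not the paper's model or data.

```python
import numpy as np

def make_windows(series, window, horizon=1):
    """Slice a 1-D series into (input window, target) pairs for forecasting."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

def normalized_rmse(pred, target, gain_range):
    """Root mean squared error expressed as a fraction of the gain range."""
    return np.sqrt(np.mean((pred - target) ** 2)) / gain_range

# Synthetic NLoS channel-gain trace: slow fading plus measurement noise
# (illustrative values only, not the paper's dataset).
rng = np.random.default_rng(0)
t = np.arange(500)
gain = 0.5 + 0.3 * np.sin(2 * np.pi * t / 100) + 0.02 * rng.standard_normal(500)

# 16-sample input windows, mirroring the 16-cell BiLSTM of the paper.
X, y = make_windows(gain, window=16)
persistence = X[:, -1]            # naive last-value baseline predictor
nrmse = normalized_rmse(persistence, y, gain.max() - gain.min())
```

A trained predictor would replace the persistence baseline; the same windowing and normalized RMSE scoring apply unchanged.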
In the context of 6G architecture development, the concept of a softwarized (orchestration) continuum is a key pillar. Nevertheless, achieving complete softwarization of network functionalities, tasks, and operations presents inherent challenges, leading to critical trade-offs and limitations. This article explores a novel approach to address these issues by integrating quantum technologies and the Physical Layer Service Integration (PLSI) paradigm. Specifically, we propose the formulation and analysis of network synchronization as a quantum PLSI problem. Our study evaluates the synchronization time offset of both the conventional Precision Time Protocol (PTP) and quantum-based approaches within the network. We investigate the impact of various network conditions on the precision of PTP synchronization, ranging from nanoseconds under ideal circumstances to microseconds when utilizing virtual network devices. Further, we perform a simulation to generate frequency-entangled photon pairs to access nonlocal temporal correlations and calculate the time offsets. Our findings reveal that entanglement-based PLSI for network synchronization achieves precision at the picosecond level. These results emphasize the high precision achievable by interpreting the network synchronization problem from the perspective of PLSI rather than as a service of the softwarized continuum.
Ultra-reliable and low latency communications (URLLC) will be the backbone of the upcoming sixth-generation (6G) systems and will facilitate mission-critical scenarios. A design accounting for stringent reliability and latency requirements of URLLC systems poses a challenge for both industry and academia. Recently, unmanned aerial vehicles (UAVs) have emerged as potential candidates to support communications in futuristic wireless systems thanks to the favourable channel gains afforded by Line-of-Sight (LoS) communications. However, the use of UAVs in cellular infrastructure increases interference at aerial and terrestrial user equipment (UE), limiting the performance gain of UAV-assisted cellular systems. To resolve these issues, we propose low-complexity algorithms for intercell interference coordination (ICIC) using cognitive radio when single or multiple UAVs are deployed in a cellular environment to facilitate URLLC services. Moreover, we model BS-to-UAV (B2U) interference in downlink communication, whereas in the uplink we model UAV-to-BS (U2B), UAV-to-UAV (U2U), and UE-to-UAV (UE2U) interference under perfect/imperfect channel state information (CSI). Results demonstrate that the proposed perfect ICIC ensures fairness among UAVs, especially in downlink communications, compared to conventional ICIC algorithms. Furthermore, the proposed UAV-sensing-assisted ICIC and perfect ICIC algorithms generally outperform conventional ICIC in both uplink and downlink for the single- and multi-UAV frameworks. Index Terms: URLLC, multi-UAV, cognitive radio, intercell interference coordination (ICIC).
The value of colorless, directionless, and contentionless (CDC-)ROADM (reconfigurable optical add/drop multiplexer) nodes is strongly contested in the optical networking community. In this work, we compare known ROADM node designs incorporating different switching elements and account for their total nodal switching state support (in consideration of both channel routing and add/drop). This allows us to quantify the impact of directional/contentional accessibility constraints to add/drop transceivers. By considering the network node entity as a permutation network among its ingress/egress ports for all wavelength channels, which covers both through routing and add/drop assignments, we tabulate the node’s switching capacity, or total allowable connection states, per different ROADM architecture, hardware constraints, and finite number of add/drop transceivers. We further introduce the impact of idle wavelength channels on fiber links, as well as bidirectional routing assignments. Our switching capacity enumerations demonstrate that CDC-ROADM outperforms other designs, but parallel contentional aggregation hardware (partially contentional) and directional transceivers (permanently assigned to port directions) offer competitive performance under certain scenarios (at lower and higher number of deployed transceivers, or a combination of both). These findings suggest that design alternatives to the “difficult to implement” CDC-ROADM exist, with nearly equivalent switching capacity, and additional system considerations must be taken into account for ROADM design selection such as hardware availability, cost, impact of traffic churn, and disaster recovery with over-provisioned add/drop transceivers.
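The state-counting idea can be illustrated with a brute-force enumeration on a toy node. The port counts, the idle-channel encoding, and the "directional" constraint below are illustrative assumptions for a single wavelength channel, not the paper's actual ROADM node models.

```python
from itertools import product

def count_switching_states(n_in, n_out, allowed=lambda i, o: True):
    """Count connection states of a node: each ingress port is either idle
    or routed to a distinct egress port, subject to a per-pair constraint."""
    count = 0
    for assignment in product([None] + list(range(n_out)), repeat=n_in):
        used = [o for o in assignment if o is not None]
        if len(used) != len(set(used)):      # no two ingress share an egress
            continue
        if all(o is None or allowed(i, o) for i, o in enumerate(assignment)):
            count += 1
    return count

# Unconstrained (CDC-like) 2x2 node: partial injections of a 2-set into a
# 2-set give 1 (all idle) + 4 (one connection) + 2 (full permutation) = 7.
cdc = count_switching_states(2, 2)

# Toy directional restriction: ingress i may only reach egress i, mimicking
# transceivers permanently assigned to one port direction.
directional = count_switching_states(2, 2, allowed=lambda i, o: i == o)
```

Because channels route independently, per-channel counts multiply across wavelengths (e.g. `cdc ** 3` states for three channels), which is why even small per-channel constraints compound into large capacity gaps.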
Cell-free massive MIMO networks have recently emerged as an attractive solution to the performance degradation at the cell edge of cellular networks. For scalability reasons, user-centric clusters were recently proposed to serve users via a subset of access points (APs). In dynamic mobile scenarios, this form of network organization requires predictive algorithms that forecast propagation parameters to maintain performance by proactively allocating new APs to a user. In this paper, we present a BiLSTM-based multivariate path loss forecasting algorithm. Thanks to the combination of dual prediction by the BiLSTM and diversity from multiple antennas, our model mitigates the error propagation typically faced by sequential neural networks in time-series forecasting. In the evaluated scenario, from 2 to 10 steps ahead, we reduce error propagation by a factor of 18 compared to previous research on path loss forecasting with an LSTM time-series model. In contrast to parallel transformer solutions, the computational complexity of our algorithm is also significantly lower.
To design a reliable communication system utilizing millimeter-wave (mm-wave) technology, which is gaining popularity due to its ability to deliver multi-gigabit-per-second data rates, it is essential to consider the site-specific nature of mm-wave propagation. Conventional site-general stochastic channel models are often unsatisfactory for accurately reproducing channel responses in specific usage scenarios or environments. For high-precision channel simulation that reflects site-specific characteristics, this paper proposes a channel model framework leveraging the widely accepted 3GPP map-based hybrid channel modeling approach and provides a detailed recipe for applying it to an actual scenario through examples. First, an extensive measurement campaign was conducted in typical urban macro- and micro-cellular environments using an in-house dual-band (24/60 GHz) double-directional channel sounder. Subsequently, the mm-wave channel behavior was characterized, focusing on the differences between the two frequencies. Then, the site-specific large-scale and small-scale channel properties were parameterized. As an essential component for improving prediction accuracy, this paper proposes an exponential decay model for the power delay characteristics of non-line-of-sight clusters, whose powers are significantly overestimated by deterministic prediction tools. Finally, using the in-house channel model simulator (CPSQDSIM) developed for grid-wise channel data (PathGridData) generation, a significant improvement in prediction accuracy over the existing 3GPP map-based channel model was demonstrated.
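An exponential power decay P(τ) = P0·exp(−τ/γ) is linear on a dB scale, so the decay constant can be recovered by simple least squares. The sketch below uses synthetic values (P0 = −80 dB, γ = 25 ns) that are assumptions for illustration, not the paper's measured cluster parameters.

```python
import numpy as np

def fit_decay_constant(tau_ns, power_db):
    """Fit P(tau) = P0 * exp(-tau/gamma) on dB-scale cluster powers.
    In dB: P_dB(tau) = P0_dB - 10*log10(e) * tau / gamma, a straight line."""
    slope, intercept = np.polyfit(tau_ns, power_db, 1)
    gamma_ns = -10 * np.log10(np.e) / slope
    p0_db = intercept
    return p0_db, gamma_ns

# Synthetic NLoS cluster peak powers following an exact exponential decay
# (illustrative values: P0 = -80 dB, gamma = 25 ns).
tau = np.linspace(0, 200, 40)
power_db = -80 - 10 * np.log10(np.e) * tau / 25.0
p0_est, gamma_est = fit_decay_constant(tau, power_db)
```

With measured clusters, the residual scatter around the fitted line would quantify how much the deterministic tool overestimates individual cluster powers.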
Predictive monitoring on distributed critical infrastructures (DCIs) is the ability to anticipate events that are likely to occur in the DCI before they actually appear, improving response times and helping avoid critical incidents. Distributed across a region or country, DCIs such as smart grids or microgrids rely on IoT, edge-fog continuum computing, and the growing capabilities of distributed application architectures to collect, transport, and process the data generated by the infrastructure. We present a model-agnostic distributed architecture for executing the inference of machine-learning window-based prediction models in predictive monitoring applications in this context. The architecture transports the events generated by the DCI as event streams to be processed by a hierarchy of nodes holding predictive models. It also handles the offloading of inferences from resource-scarce devices at lower levels to resourceful upper nodes, so that the timing requirement of delivering predictions before the predicted events occur is met.
Sharing basic safety messages (BSMs) among connected vehicles (CVs) in a timely and reliable manner is of paramount importance in vehicular networks. When CVs are connected through ad hoc networks, the timely delivery of BSMs is very challenging due to the randomness in medium access control (MAC) and may lead to collisions, especially in crowded networks. Moreover, although channel acquisition in conventional methods via transmission and reception of control signals results in collision-free message delivery, it adds a high overhead cost. In this paper, we propose an efficient MAC scheme that carefully addresses these issues by improving communication efficiency and reducing the signaling overhead. The proposed scheme dedicates each time slot to only one CV and is consequently collision-free. Since successive BSMs from a CV contain similar information, we adopt the age of information (AoI) as the performance metric. We derive mathematical expressions for the MAC delay and AoI of the collision-free scheme by proposing a two-dimensional Markov model. We compare the performance of the proposed scheme with the IEEE 802.11p standard and another low-complexity random scheme. AoI, delay, and collision rate are evaluated with the OPNET network simulator, which provides a realistic implementation scenario. Simulation results show that the collision-free scheme performs significantly better than IEEE 802.11p in highly congested networks. As an example, for a dense scenario where BSMs are generated every 10 ms, the AoI of the collision-free scheme is about 50 ms, while those of IEEE 802.11p and the random scheme are about 140 and 150 ms, respectively, which is considered too high for safety applications. Moreover, the results show an almost perfect match between the mathematical derivations and the results obtained with OPNET.
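The dedicated-slot behavior can be caricatured with a slot-level AoI simulation. The round-robin slot assignment, generate-at-will sampling, and one-slot transmission delay below are simplifying assumptions for intuition, not the paper's two-dimensional Markov model or its OPNET setup.

```python
def average_aoi_round_robin(n_cvs, n_slots):
    """Time-average AoI under a dedicated-slot (collision-free) MAC.
    Each CV samples a fresh BSM right before its slot (generate-at-will),
    so its age at the receiver resets to 1 slot after every transmission
    and grows by 1 every other slot."""
    age = [float('inf')] * n_cvs          # no BSM received yet
    total, samples = 0.0, 0
    for slot in range(n_slots):
        tx = slot % n_cvs                 # round-robin dedicated slots
        for cv in range(n_cvs):
            age[cv] = 1 if cv == tx else age[cv] + 1
        if slot >= n_cvs:                 # skip warm-up, measure steady state
            total += sum(age) / n_cvs
            samples += 1
    return total / samples
```

In steady state the ages across N CVs are always a permutation of {1, …, N} slots, so the time-average AoI is (N + 1)/2 slots; larger networks trade collision-freedom for a linearly growing age, consistent with the congestion behavior discussed above.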
The Internet of Things (IoT) enables billions of smart devices to capture, process, and transform data to improve decision-making. IoT demands a dependable mobile edge computing (MEC) ecosystem, which requires an efficient, distributed architecture supporting multiple IoT communication protocols. Accordingly, intelligent middleware is needed to achieve efficiency, throughput, and reliability of data delivery across different protocols without interference from the local setup of the device. This paper proposes a modular and interoperable middleware, called MiddleFog, that dynamically selects the most appropriate communication protocol between MQTT and CoAP. The approach also minimizes communication limitations caused by latency, packet loss, and low network throughput between the MEC and the Cloud. Initial evaluations show a message loss rate lower than 25% for small messages and a performance improvement of around 48% for medium-sized message delivery.
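A minimal sketch of what dynamic protocol selection could look like, assuming a hypothetical threshold policy: the decision criteria and all threshold values below are invented for illustration and are not MiddleFog's actual selection logic.

```python
def choose_protocol(loss_rate, payload_bytes):
    """Hypothetical selection rule between the two protocols MiddleFog
    arbitrates. CoAP (UDP, request/response) keeps overhead low for small
    payloads on clean links; MQTT (TCP, broker QoS) copes better with
    loss and larger messages. Thresholds are illustrative assumptions."""
    if loss_rate >= 0.05 or payload_bytes > 512:
        return "MQTT"   # retransmission/QoS worth the TCP + broker overhead
    return "CoAP"       # lightweight exchange is sufficient

# Example decisions under the hypothetical policy:
small_clean = choose_protocol(loss_rate=0.01, payload_bytes=100)
lossy_link = choose_protocol(loss_rate=0.10, payload_bytes=100)
large_msg = choose_protocol(loss_rate=0.01, payload_bytes=2048)
```

A real middleware would feed this rule with continuously measured link metrics rather than static arguments, and would likely weight latency and throughput as well.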
This paper continues the author's theory for solving the pointwise fluid flow approximation model for time-varying queues. The long-standing simulative approach is replaced by an exact analytical solution based on a constant ratio β (Ismail's ratio). The stability dynamics of the time-varying M/E_k/1 queueing system are then examined numerically with respect to time, β, and the queueing parameters.
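For intuition about the model class being solved, the pointwise-stationary fluid flow approximation can be integrated numerically. The sketch below uses the simpler M/M/1 form dx/dt = λ(t) − μ·x/(1 + x) rather than the paper's M/E_k/1 system or Ismail's ratio, so it is only an illustrative analogue of the simulative approach the paper replaces.

```python
def psffa_mm1(lam, mu, t_end, dt=0.001, x0=0.0):
    """Euler integration of the pointwise-stationary fluid flow approximation
    for an M/M/1 queue with time-varying arrival rate lam(t):
        dx/dt = lam(t) - mu * x / (1 + x)
    where x(t) approximates the mean number in system."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * (lam(t) - mu * x / (1.0 + x))
        t += dt
    return x

# Sanity check: with constant lam = 0.5 and mu = 1, the fluid state should
# converge to the stationary M/M/1 mean, rho / (1 - rho) = 1.
x_ss = psffa_mm1(lambda t: 0.5, 1.0, t_end=60.0)
```

An exact analytical solution, as pursued in the paper, removes the step-size and horizon choices this numerical integration depends on.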
This paper details an experiment utilizing ESP8266 modules as servers to wirelessly control diverse electrical appliances for home automation. The experiment showcased the modules' ability to respond to commands via a web interface on mobile, desktop, and tablet platforms. While most of the experiment ran smoothly, occasional freezing and connectivity disruptions were observed. This abstract summarizes the experiment's successes, discusses the challenges encountered, and outlines a forward-looking perspective, including the integration of a custom PCB for enhanced system stability.
This research explores the efficacy of machine learning and deep learning models in predicting soil moisture, a critical factor in optimizing agricultural irrigation systems. Utilizing data from a Vantage Vue weather station and Watermark 200SS soil moisture sensors, we conducted a comparative analysis of traditional models such as Random Forest and MLP against advanced deep learning models, particularly LSTM and 1D convolutional neural networks enhanced with attention mechanisms. The study reveals that attention-augmented models, especially the CONV1D+Attention model, which achieved an R² value of 0.51, excel at capturing the complex dynamics of soil moisture. These results underscore the potential of such models for handling complex time-series data, such as soil moisture levels influenced by weather conditions, offering significant insights for improved water management and sustainable agricultural practices globally.
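The attention step that augments such models can be sketched in a few lines: score each time step of a sensor window against a query, then form a weighted summary. The dot-product scoring, window size, and random feature values below are illustrative assumptions, not the trained CONV1D+Attention model.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def temporal_attention(features, query):
    """Attention over a (T, d) window of time-step features: score each
    step against a query vector, then return the weighted summary that a
    downstream regressor would use for the soil-moisture prediction."""
    scores = features @ query        # (T,) alignment scores
    weights = softmax(scores)        # nonnegative, sum to 1 over time
    context = weights @ features     # (d,) attention-weighted summary
    return context, weights

# Toy window: 5 time steps of 3 weather/soil features (illustrative values).
rng = np.random.default_rng(1)
window = rng.standard_normal((5, 3))
query = rng.standard_normal(3)
context, weights = temporal_attention(window, query)
```

In a trained model the query and the per-step features come from learned layers (e.g. the Conv1D output), letting the network emphasize the weather conditions most predictive of the moisture target.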
In the pursuit of enhancing human life and safety through artificial intelligence (AI), transparency and predictability emerge as crucial elements. The potential for self-awareness in AI, we argue, hinges on its autonomy in decision-making. To address this, we propose three fundamental laws of AI, emphasizing their practical implementability. These laws aim to establish a framework that ensures transparency, predictability, and independence in decision-making, ultimately contributing to the responsible development and deployment of artificial intelligence for the benefit of humanity.