This paper introduces a novel nonparametric framework for data imputation, coined multilinear kernel regression and imputation via the manifold assumption (MultiL-KRIM). Motivated by manifold learning, MultiL-KRIM models data features as a point cloud located in or close to a user-unknown smooth manifold embedded in a reproducing kernel Hilbert space. Unlike typical manifold-learning routes, which seek low-dimensional patterns via regularizers based on graph-Laplacian matrices, MultiL-KRIM builds instead on the intuitive concept of tangent spaces to manifolds and incorporates collaboration among point-cloud neighbors (regressors) directly into the data-modeling term of the loss function. Multiple kernel functions are allowed, offering robustness and rich approximation properties, while multiple matrix factors offer low-rank modeling, integrate dimensionality reduction, and streamline computations with no need for training data. Two important application domains showcase the functionality of MultiL-KRIM: time-varying graph-signal (TVGS) recovery, and reconstruction of highly accelerated dynamic magnetic resonance imaging (dMRI) data. Extensive numerical tests on real and synthetic data demonstrate MultiL-KRIM's remarkable speedups over its predecessors and its outperformance of prevalent "shallow" data-imputation techniques, with a more intuitive and explainable pipeline than deep-image-prior methods.
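MultiL-KRIM's multilinear tangent-space machinery is not reproduced here, but the kernel-regression ingredient it generalizes can be illustrated with a minimal Nadaraya-Watson imputer (a standard textbook estimator, not the authors' algorithm); the Gaussian kernel, bandwidth, and function name below are illustrative assumptions.

```python
import math

def kernel_impute(x_obs, y_obs, x_miss, bandwidth=1.0):
    """Impute missing samples by Nadaraya-Watson kernel regression:
    each missing point is a kernel-weighted average of observed neighbors."""
    def k(u):  # Gaussian kernel
        return math.exp(-0.5 * (u / bandwidth) ** 2)
    imputed = []
    for x in x_miss:
        weights = [k(x - xo) for xo in x_obs]
        imputed.append(sum(w * y for w, y in zip(weights, y_obs)) / sum(weights))
    return imputed
```

On a linear signal the estimator recovers a centered missing value exactly, since the kernel weights are symmetric about the query point.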
Our current digital landscape relies heavily on data centers to store and process information for online applications accessed by millions of users. Any failure of these operations significantly affects an organization's productivity and critical operations. Additionally, data centers are high energy consumers. Climate change and the scarcity of energy resources due to political or local constraints emphasize the need to analyze the strategic placement of data centers carefully and to understand the factors that contribute to placement risks and opportunities.
In this paper we present the results of two DTN demonstration activities carried out in the ESA Ground Segment. The first demonstration was prepared with the OPS-SAT spacecraft to demonstrate a full DTN protocol stack with CFDP, Bundle Protocol, LTP, and the CCSDS Space Packet Protocol, and to show the ESA Ground Segment's Bundle Protocol implementation capabilities. The second demonstration was performed in collaboration with Morehead State University, NASA JPL, and D3TN, with the aim of showing the interoperability of DTN implementations across space agencies and external partners.
Mixed Reality (MR) and Artificial Intelligence (AI) are increasingly becoming integral parts of our daily lives, with applications ranging from healthcare to education to entertainment. MR has opened a new frontier for such fields as well as new methods of enhancing user engagement. In this paper, we propose a new system that combines the power of Large Language Models (LLMs) and MR to provide a personalized companion for educational purposes. We present an overview of its structure and components, as well as tests to measure its performance. We found that our system performs better at generating coherent information; however, it is limited by the documents provided to it. This interdisciplinary approach aims to provide a better user experience and enhance user engagement. The user can interact with the system through a custom-designed smartwatch, smart glasses, and a mobile app.
As naval power systems are forced to serve larger and more numerous pulsed loads, new techniques will be needed to maintain power quality. Extended Droop Control (EDC) provides robust, adaptable system-level control capable of serving pulsed loads while maintaining tight bus-voltage regulation. Pulsed loads are served from fast-responding energy storage resources, while slower generation assets are allowed to serve the baseload at their most efficient operating point. EDC achieves transient current sharing between sources through converter output-impedance shaping; the limitations of the shaping and the maximum droop-impedance frequency are developed. Inter-converter interactions under EDC cause output-current oscillations in multi-converter systems, which can be corrected with a virtual damping series resistance for energy storage converters. Recommendations for the minimum damping resistance for virtual capacitors are made based on the output-impedance shaping analysis. Hardware results validating nominal transient current sharing, the effect of damping resistance, and the ability to transition between EDC and Resistive Droop Control (RDC) are presented. EDC is found to be a powerful method of integrating diverse sources into a DC distribution system serving pulsed loads.
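The static sharing behavior that EDC extends can be seen already in the plain resistive-droop case: each converter imposes v_bus = v_nom - R_d * i_out, so parallel converters split the load current in inverse proportion to their droop resistances. A minimal steady-state sketch (the function name and numerical values are illustrative, not from the paper):

```python
def droop_dispatch(v_nom, droop_r, i_load):
    """Steady-state operating point of parallel droop-controlled converters.
    Each converter obeys v_bus = v_nom - r * i; output currents sum to i_load."""
    # Equivalent droop resistance of the parallel combination
    r_par = 1.0 / sum(1.0 / r for r in droop_r)
    v_bus = v_nom - i_load * r_par
    # Each converter's current follows from its own droop law
    currents = [(v_nom - v_bus) / r for r in droop_r]
    return v_bus, currents
```

With droop resistances of 0.5 Ω and 1.0 Ω on a 400 V bus, a 30 A load splits 20 A / 10 A and the bus sags to 390 V, i.e., sharing is proportional to 1/R_d.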
In the past decade, focal muscle vibration (FMV) has gained wide attention in neurological rehabilitation for its non-invasive nature, ease of use, and minimal side effects. Disorders like stroke, cerebral palsy, and multiple sclerosis have shown rehabilitative benefits from FMV. The effectiveness of FMV is closely tied to device parameters, particularly the frequency and location of vibration stimulation. Despite the variety of devices available on the consumer market and in the research community, there are often insufficient details for robust evaluation of a device and its purported effects, leading to performance variability among different devices under similar input conditions. This study aims to develop a well-characterized FMV device that is usable and comparable across various application domains. The research focuses on the development and validation of a custom wearable vibration device designed to deliver precisely controlled muscle stimulation. The device utilizes an eccentric-rotating-mass (ERM) motor design and features a three-dimensional computer-aided design (CAD) model, a 3D-printed casing, and a curved surface for enhanced comfort during muscle contact. Characterization of the device involved establishing the relationship between input (battery) voltages and output (vibration) frequencies. Accelerometers and a microcontroller were used for precise frequency determination. The subsequent design of an electronic circuit allowed for user-controlled frequency adjustments, complemented by a pressure sensor ensuring consistent pressure during device use. The study concludes with a well-characterized vibration device holding promise for applications in neuromuscular research and rehabilitation, owing to its precision, versatility, and user-friendly design.
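The voltage-to-frequency characterization described above amounts to fitting a calibration curve to accelerometer measurements. A minimal least-squares line fit is sketched below; the near-linear voltage-frequency relation of ERM motors over their usable range is a simplifying assumption, and all sample values are illustrative:

```python
def fit_voltage_to_frequency(volts, freqs):
    """Ordinary least-squares fit of f = a*v + b from calibration samples."""
    n = len(volts)
    mean_v = sum(volts) / n
    mean_f = sum(freqs) / n
    a = (sum((v - mean_v) * (f - mean_f) for v, f in zip(volts, freqs))
         / sum((v - mean_v) ** 2 for v in volts))
    b = mean_f - a * mean_v
    return a, b  # slope (Hz/V) and intercept (Hz)
```

The fitted line can then be inverted to select the drive voltage for a target stimulation frequency.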
Local Flexibility Markets (LFMs) are considered a promising framework for resolving voltage and congestion issues in power distribution systems in an economically efficient manner. However, the need for location-specific flexibility services renders LFMs naturally imperfectly competitive, and market efficiency is severely challenged by strategic participants that exploit their locally monopolistic power. Previous works have considered either non-strategic participants, or strategic participants with perfect information (e.g., about the network characteristics) that can readily compute their payoff-maximizing bidding strategy. In this paper, we take on the problem of designing an efficient LFM in the more realistic case where market participants do not possess this information and, instead, learn to improve their bidding policies through experience. To that end, we develop a multi-agent reinforcement learning algorithm to model the participants' learning-to-bid process. In this framework, we first present two popular LFM pricing schemes (pay-as-bid and distribution locational marginal pricing) and expose that learning agents can discover ways to exploit them, resulting in severe dispatch inefficiency. We then present a game-theoretic pricing scheme that theoretically incentivizes truthful bidding, and empirically demonstrate that this property improves the efficiency of the resulting dispatch also in the presence of learning agents. In particular, the proposed scheme outperforms the popular distribution locational marginal pricing (DLMP) scheme in terms of efficiency by 15-23%.
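The gap between pay-as-bid and uniform (marginal-price) settlement that strategic bidders can exploit shows up already in a single-node toy clearing; this sketch ignores network constraints and locational prices, which are central to real LFMs, and every number is illustrative:

```python
def clear_market(bids, demand):
    """Toy single-node clearing: accept the cheapest bids until demand is met.
    bids is a list of (price, quantity) offers."""
    accepted, remaining = [], demand
    for price, qty in sorted(bids):
        if remaining <= 0:
            break
        take = min(qty, remaining)
        accepted.append((price, take))
        remaining -= take
    pay_as_bid = sum(p * q for p, q in accepted)             # each bid paid its own price
    uniform = accepted[-1][0] * sum(q for _, q in accepted)  # all paid the marginal price
    return accepted, pay_as_bid, uniform
```

Under pay-as-bid, a cheap unit is paid only what it asked for, so it has an incentive to shade its bid toward the expected marginal price; this is one kind of exploitation a learning bidder can discover.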
In pathology, various tissue and cell components play diverse biological roles. The morphology of each component can vary markedly with differentiation status or pathological conditions, making it critical for understanding diseases. Traditional computational pathology methods typically employ patch-based feature extraction, which aggregates visual features across entire images. However, this approach does not differentiate between tissue types, limiting component analysis. To address this limitation, we introduce a novel concept in pathology image analysis, namely segment representation learning, and present an algorithm, SegRep, for this purpose. SegRep uses a unique dual-masking strategy that combines input masking and feature map masking. This approach effectively removes external influences for the targeted segment, identified via a segmentation model or manual annotation, allowing for the extraction of segment-specific feature representations. In addition, SegRep utilizes a self-supervised learning algorithm to achieve optimized segment representation. We evaluated SegRep's efficacy in clustering and classification tasks using a dataset of human gastric cancer samples. The results demonstrate SegRep's superior capability in extracting feature vectors that are highly specific to different pathology image segments. Compared with traditional methods, SegRep shows significant improvements in accuracy and specificity in both clustering and classification tasks. Segment representations obtained via SegRep can offer a more detailed and insightful perspective on computational pathology, paving the way for advanced applications in the field.
The popularity of WiFi devices and the development of WiFi sensing have alerted people to the threat of WiFi sensing-based privacy leakage, especially the privacy of human poses. Existing work on human pose estimation is deployed in indoor scenarios or simple-occlusion (e.g., a wooden screen) scenarios, which are less privacy-threatening as attack scenarios. To reveal the risk of leakage of pose privacy to users from commodity WiFi devices, we propose CSIPose, a privacy-acquisition attack that passively estimates dynamic and static human poses in through-the-wall scenarios. We design a three-branch network based on knowledge distillation, an autoencoder, and self-attention mechanisms to realize the supervision of video frames over CSI frames to generate human pose skeleton frames. Notably, we design AveCSI, a unified framework for preprocessing and feature extraction of CSI data corresponding to dynamic and static poses. This framework uses the average of CSI sequences to generate CSI frames to mitigate the instability of passively collected CSI data, and utilizes a self-attention mechanism to enhance key features. We evaluate the performance of CSIPose across different room layouts, subjects, devices, subject locations, and device locations, and the evaluation results emphasize the generalizability of the system. Finally, we discuss measures to mitigate this attack.
In recent years, the popularity of network intrusion detection systems (NIDS) has surged, driven by the widespread adoption of cloud technologies. Given the escalating network traffic and the continuous evolution of cyber threats, a highly efficient NIDS has become paramount for ensuring robust network security. Typically, intrusion detection systems either use pattern matching or leverage machine learning for anomaly detection. While pattern-matching approaches tend to suffer from a high false positive rate (FPR), machine learning-based systems, such as SVM and KNN, predict potential attacks by recognizing distinct features. However, these models often operate on a limited set of features, resulting in lower accuracy and a higher FPR. In our research, we introduce a deep learning model that harnesses the strengths of a Convolutional Neural Network (CNN) combined with a Bidirectional LSTM (Bi-LSTM) to learn spatial and temporal data features. The model, evaluated using the NSL-KDD dataset, exhibits a high detection rate with a minimal false positive rate. To enhance accuracy, K-fold cross-validation was employed in training the model. This paper showcases the effectiveness of the CNN with Bi-LSTM algorithm in achieving superior performance across metrics such as accuracy, F1-score, precision, and recall. The binary classification model trained on the NSL-KDD dataset demonstrates outstanding performance, achieving a high accuracy of 99.5% after 10-fold cross-validation, with an average accuracy of 99.3%. The model exhibits a remarkable detection rate (0.994) and a low false positive rate (0.13). In the multiclass setting, the model maintains exceptional precision (99.25%), reaching a peak accuracy of 99.59% for k=10. Notably, the detection rate for k=10 is 99.43%, and the mean false positive rate is 0.214925.
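The K-fold cross-validation used in training can be sketched independently of the CNN/Bi-LSTM itself: split the sample indices into k disjoint validation folds, train on the remainder each time, and average the scores. The index-splitting step (model training omitted, and not tied to the paper's implementation) might look like:

```python
def k_fold_indices(n, k):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation
    over n samples; folds are disjoint and jointly cover all samples."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    indices, start = list(range(n)), 0
    for size in fold_sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size
```

Each reported metric (accuracy, detection rate, FPR) is then the mean over the k validation folds.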
Electric grid operators have proven very adept at handling complexity and uncertainty. However, as uncertainty and variability continue to grow with the increasing introduction of renewable generation, distributed energy resources, the retirement of dispatchable generation, and the occurrence of more frequent and historic weather events, operators will experience new workloads and challenging decision scenarios. New types of resources and technology, like dynamic line ratings, are being introduced to the system to attempt to address transmission congestion, often without fully considering the operator. When integrating a novel technology, the concept of operations is commonly overlooked during the development phase, and is often only defined during, or even after, the actual implementation. This paper provides an example of defining a concept of operations for a case study of a dynamic line rating (DLR) implementation and its integration with offshore wind (OSW) generation, considering the risks and benefits of DLR due to weather-forecast uncertainty. The analysis results offer insights into DLR and OSW forecast uncertainties, establishing a baseline for conducting studies on the acceptable level of uncertainty for operators. For the implementation of future advanced technologies, researchers can utilize similar analyses to understand the effectiveness and potential impacts of these technologies on control room operations.
Residential space heating accounted for approximately 19% of Germany's overall energy consumption in 2021. Therefore, the efficient operation of electrified heating systems is of major importance for the energy transition. We apply a reinforcement learning (RL) approach to operating a district heat pump and compare the results with a classic rule-based approach. Our study requires no building model, only basic parameters of the hot water tank along with demand and ambient-temperature data, all of which are easily attainable. Additionally, the environment is designed so that the residents' living comfort is never compromised, which maximizes applicability in real-world buildings. The agent is able to exploit variable electricity prices and the flexibility of the hot water tank in such a way that up to 35% of energy costs can be saved. Additionally, depending on the agent's settings, only 23% to 41% of the heat pump's nominal power installed according to current standards was used. The robustness of the approach is shown by running ten independent training and testing cycles for all setups with reproducible results. The importance of demand forecasts is evaluated by testing different observation spaces of the RL agent. Even if the agent has no demand information at all, cost savings are still 25%.
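The cost-saving mechanism, shifting heat-pump operation into cheap-price hours by buffering heat in the tank, can be sketched with a toy hourly simulation. The tank model, COP, prices, and threshold policy below are all illustrative assumptions, not the paper's environment or RL agent:

```python
def simulate(prices, demand, cap, cop, policy):
    """Toy hourly dispatch of a heat pump charging a hot-water tank.
    policy(price, soc) returns the thermal output (kWh) for that hour;
    comfort is enforced by requiring the tank never to run empty."""
    soc, cost = cap / 2.0, 0.0                       # start half full
    for price, d in zip(prices, demand):
        q = min(policy(price, soc), cap - soc + d)   # do not overfill the tank
        soc = soc + q - d
        assert soc >= 0.0, "comfort violated: tank ran empty"
        cost += price * q / cop                      # electricity bought = heat / COP
    return cost
```

With alternating prices of 30 and 10 ct/kWh and a constant 1 kWh/h demand, a simple "charge only in cheap hours" threshold policy halves the cost of the naive "produce exactly the demand" policy in this toy setting.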
Thermal-to-visual cross-domain face recognition is the process of recognising faces captured in the thermal domain and matching them with visual-domain images. Its significance lies in enabling accurate face recognition across different domains, allowing for enhanced security and surveillance capabilities in various environments, particularly at night-time or in adverse weather conditions where thermal imaging excels. By bridging the gap between the thermal and visual domains, cross-domain face recognition improves the overall performance and applicability of face recognition systems in real-world scenarios. This survey explores the existing literature on thermal-to-visual cross-domain face recognition, categorising it into two distinct approaches: one based on the learning model, and the other on generalisation approaches. By systematically reviewing and categorising the extensive body of research in this area, the survey aims to provide a nuanced understanding of the current landscape, methodologies, and challenges in this evolving field.
As we transition from the 5G epoch, a new horizon beckons with the advent of 6G, seeking a profound fusion with novel communication paradigms and emerging technological trends, bringing once-futuristic visions to life along with added technical intricacies. Although analytical models lay the foundations and offer systematic insights, we have recently witnessed a noticeable surge in research suggesting that machine learning (ML) and artificial intelligence (AI) can efficiently deal with complex problems by complementing or replacing model-based approaches. The majority of data-driven wireless research leans heavily on discriminative AI (DAI), which requires vast real-world datasets. Unlike DAI, generative AI (GenAI) pertains to generative models (GMs) capable of discerning the underlying data distribution, patterns, and features of the input data. This makes GenAI a crucial asset in the wireless domain, wherein real-world data is often scarce, incomplete, costly to acquire, and hard to model or comprehend. With these appealing attributes, GenAI can replace or supplement DAI methods in various capacities. Accordingly, this combined tutorial-survey paper commences with preliminaries of 6G and wireless intelligence by outlining candidate 6G applications and services, presenting a taxonomy of state-of-the-art DAI models, exemplifying prominent DAI use cases, and elucidating the multifaceted ways through which GenAI enhances DAI. Subsequently, we present a tutorial on GMs by spotlighting seminal examples, including generative adversarial networks, variational autoencoders, flow-based GMs, diffusion-based GMs, generative transformers, large language models, and autoregressive GMs.
Contrary to the prevailing belief that GenAI is a nascent trend, our exhaustive review of approximately 120 technical papers demonstrates the scope of research across core wireless research areas, including 1) physical layer design; 2) network optimization, organization, and management; 3) network traffic analytics; 4) cross-layer network security; and 5) localization & positioning. Furthermore, we outline the central role of GMs in pioneering areas of 6G network research, including semantic communications, integrated sensing and communications, THz communications, extremely large antenna arrays, near-field communications, digital twins, AI-generated content services, mobile edge computing and edge AI, adversarial ML, and trustworthy AI. Lastly, we shed light on the multifarious challenges ahead, suggesting potential strategies and promising remedies. Given its depth and breadth, we are confident that this tutorial-cum-survey will serve as a pivotal reference for researchers and professionals delving into this dynamic and promising domain.
Explainable artificial intelligence (XAI) methodologies can demystify the behavior of machine learning (ML) "black-box" models based on the individual impact each feature has on the model's output. In the cybersecurity domain, explanations of this type have been studied from the perspective of a human-in-the-loop, where they serve an essential role in building trust from stakeholders, as well as aiding practitioners in tasks such as feature selection and model debugging. However, another important but largely overlooked use case of explanations emerges when they are passed as inputs into other ML models. In this sense, the rich information encompassed by the explanations can be harnessed at a fine-grained level and used to subsequently enhance the performance of the system. In this work, we outline a general methodology whereby explanations of a front-end network intrusion detection system (NIDS) are leveraged alongside additional ML models to automatically improve the system's overall performance. We demonstrate the robustness of our methodology by evaluating its performance across multiple intrusion datasets, and perform an in-depth analysis to assess its generalizability under various conditions, such as unseen environments and varying types of front-end NIDS and XAI methodologies. The overall results indicate the efficacy of our methodology in producing significant improvement gains (i.e., up to +36% gains achieved across the considered metrics and datasets) that exceed those achieved by other state-of-the-art intrusion models from the literature.
Objective: Digital subtraction angiography (DSA) is critically important for cerebrovascular disease diagnosis and treatment. However, artifacts and noise are inevitable and reduce image quality, which can make clinical diagnosis difficult. In this paper, we introduce a novel deep learning architecture that exploits an information-decoupling training strategy to generate high-quality DSA images. Methods: We propose the generative decoupling network, a feature-decoupling convolutional network that maximizes the difference between different structures through a decoupling training strategy. In this network, an axial residual block and a learnable sampling method are proposed to enhance the strength of feature extraction. Results: The results show that our proposed method significantly outperforms existing methods in the DSA generation task. Furthermore, we quantified the method using the metrics SSIM, PSNR, VSI, FID, and FSIM, obtaining 93.57%, 24.18 dB, 98.04%, 351.59, and 89.95%, respectively. Conclusion: Our method can produce high-quality DSA images with little or even no artifact and noise. Significance: The proposed method can effectively reduce artifacts and noise, and generate high-quality DSA images with complete and clear vascular structures.