Surface-electrode ion traps used in quantum computing, sensing, and timekeeping must meet stringent physical requirements: in particular, they must survive the high-temperature environment of vacuum chamber preparation and support high rf voltages on closely spaced electrodes. With gold wirebonds on aluminum pads, intermetallic growth can cause wirebond failure through breakage or high resistance, limiting the lifetime of a trap assembly to a single multi-day bake at 200 °C. Traditional thick metal stacks prevent intermetallic growth but can cause trap failure through rf breakdown events. From high-temperature experiments we conclude that an ideal metal stack for ion traps is Ti (20 nm)/Pt (100 nm)/Au (250 nm), which allows a bakeable lifetime of roughly 86 days without compromising the trap's voltage performance. This increase in the bakeable lifetime of ion traps removes the need to discard otherwise functional traps when vacuum hardware is upgraded, which will greatly benefit ion trap experiments.
This paper describes a method for automatically transforming the structure and characteristics of an image processing dataflow graph to improve performance and/or lower memory utilization compared to baseline tools. Embedded image processing applications are often executed on digital signal processors or their modern equivalents, vision processing units. The software usually performs a series of pixel-level operations for basic color conversion, channel extraction and combining, arithmetic, and filtering. These steps can often be efficiently described as a graph. For this reason, standard libraries such as OpenVX are used, which provide a graph-based programming model in which the nodes are chosen from a repertoire of common pixel-level operations and the edges represent the flow of images as they progress through the processing stages. Generally, each node is processed sequentially in the order implied by the data dependencies defined by the graph structure, with all intermediate values stored in external memory. In the proposed framework, we developed performance models for both the direct memory access subsystem and the L1 data cache, allowing selected intermediate values to be stored in on-chip scratchpad memory and the most appropriate tile size to be chosen. In this way, we effectively decompose the graph to fuse specific sets of nodes, associating their internal edges with on-chip buffers; the tile size is then optimized for each fused set of nodes. We describe our performance models and our approach to graph decomposition and tile size selection. The proposed performance models are accurate to within 2% on average, and the overall graph optimization achieves an average speedup of 1.3× while reducing average DRAM utilization from 100% to as low as 15%.
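The node-fusion idea can be illustrated with a toy pipeline (a sketch under assumptions, not the paper's actual framework): an unfused version materializes every full-size intermediate, standing in for external memory traffic, while the fused version keeps intermediates in a tile-sized buffer, standing in for on-chip scratchpad. The three "nodes" and the tile size are hypothetical.

```python
import random

def unfused(img):
    # Each node writes a full-size intermediate back to "external memory".
    a = [p * 0.5 for p in img]                   # node 1: scale (stand-in for color conversion)
    b = [p + 10.0 for p in a]                    # node 2: offset (arithmetic node)
    return [min(max(p, 0.0), 255.0) for p in b]  # node 3: saturate

def fused(img, tile=32):
    # Fused nodes: intermediates live only in a tile-sized "on-chip" buffer.
    out = []
    for start in range(0, len(img), tile):
        buf = img[start:start + tile]                     # DMA-in one tile
        buf = [p * 0.5 for p in buf]                      # node 1 (scratchpad)
        buf = [p + 10.0 for p in buf]                     # node 2 (scratchpad)
        out.extend(min(max(p, 0.0), 255.0) for p in buf)  # node 3, DMA-out
    return out

rng = random.Random(0)
img = [rng.uniform(0, 255) for _ in range(128)]
assert unfused(img) == fused(img)  # same result, smaller working set
```

The framework's contribution is choosing which edges to internalize and which tile size to use from the DMA and cache performance models; the sketch only shows the mechanical effect of fusion.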
This study advances indoor environment modeling by focusing on the optimal placement of sensors. Our approach involves creating a detailed environment model from a 3D point cloud by identifying spatial boundaries and furniture in indoor spaces, which are then represented as a series of polygons. To validate our method, we compare its performance against ground truth data, demonstrating high accuracy in both simple and complex environments. The core of our study is a comprehensive experiment that evaluates the effectiveness of three nature-inspired evolutionary genetic algorithms and three iterative metaheuristic optimization algorithms in solving the sensor placement problem in a complex environment scenario. We perform a statistical analysis to understand the impact of the choice of optimization algorithm and the number of sensors on the achieved spatial coverage. This analysis provides insights into the comparative effectiveness of various evolutionary algorithms in enhancing sensor network design within intricate indoor spaces. In particular, the Artificial Bee Colony algorithm consistently delivered superior results.
Cryptography has become an essential tool in information security, preserving data confidentiality, integrity, and availability. However, despite rigorous analysis, cryptographic algorithms may still be susceptible to attack when used on real-world devices. Side-channel attacks (SCAs) are physical attacks that target cryptographic equipment through quantifiable phenomena such as power consumption, operational times, and EM radiation. These attacks are considered a significant threat to cryptography since they compromise the integrity of the algorithm by obtaining the internal cryptographic key of a device through observation of its physical implementation. The literature on SCAs has focused on conventional devices, yet with the growing popularity of sophisticated devices like smartphones, fresh approaches to SCAs are necessary. One such approach is electromagnetic side-channel analysis (EM-SCA), which gathers information by listening to electromagnetic (EM) radiation. EM-SCA has been demonstrated to recover sensitive data like encryption keys and has the potential to identify malicious software, retrieve data, and identify program activity. This study aims to evaluate how well EM-SCA compromises encryption under various application scenarios, as well as to examine the role of EM-SCA in digital forensics and law enforcement. In this regard, addressing the susceptibility of encryption algorithms to EM-SCA approaches can provide digital forensic investigators with the tools they need to overcome the challenges posed by strong encryption, allowing them to continue playing a crucial role in law enforcement and the justice system. Furthermore, this paper seeks to define the current state of EM-SCA in terms of attacking encryption: the encryption algorithms and encrypted devices that are most vulnerable and most resistant to EM-SCA, and the most promising EM-SCA approaches against encryption.
This study provides a comprehensive analysis of EM-SCA in the context of law enforcement and digital forensics and points towards potential directions for further research.
We propose a quasi-Helmholtz preconditioner to stabilize the electric field integral equation (EFIE) at low frequencies when it is discretized with divergence-conforming B-spline-based basis and testing functions in an isogeometric approach. To this end, we derive (i) a loop-star decomposition for the B-spline basis in the form of sparse mapping matrices applicable to arbitrary polynomial orders of the basis as well as to open and closed geometries described by single- or multi-patch parametric surfaces (non-uniform rational B-spline (NURBS) surfaces are considered as an example). Based on the loop-star analysis, we show (ii) that quasi-Helmholtz projectors can be defined efficiently. This renders the proposed low-frequency stabilization directly applicable to multiply-connected geometries without the need to search for global loops, and it results in better-conditioned system matrices than directly using the loop-star basis. Numerical results demonstrate the effectiveness of the proposed approach.
In order to accurately compute scattered and radiated fields in the presence of arbitrary excitations, a low-frequency stable discretization of the right-hand side (RHS) of a quasi-Helmholtz preconditioned electric field integral equation (EFIE) on multiply-connected geometries is introduced, which avoids an ad-hoc extraction of the static contribution of the RHS when tested with solenoidal functions. To obtain an excitation-agnostic approach, we generalize to multiply-connected geometries a technique in which the testing of the RHS with loop functions is replaced by a testing of the normal component of the magnetic field with a scalar function. To this end, we leverage orientable global loop functions that are formed by a chain of Rao-Wilton-Glisson (RWG) functions around the holes and handles of the geometry, for which we introduce cap surfaces that allow a suitable scalar function to be uniquely defined. We show that this approach works with open and closed, orientable and non-orientable geometries. The numerical results demonstrate the effectiveness of this approach.
Manual object identification labelling is laborious, time-consuming, and prone to inconsistencies, hindering advancements in various computer vision tasks. These inconsistencies can lead to inaccurate models with poor performance, which highlights the importance of addressing labelling challenges for ethical and responsible AI development. To address this, our study evaluates several popular platforms for their suitability in tackling these challenges. Roboflow, Makesense.ai, SentiSight.ai, Labelbox, and SuperAnnotate are the five data labelling platforms assessed. The study identifies the strengths and weaknesses of each platform in the context of basketball detection using YOLO v8, a deep learning model for object detection, image classification, and image segmentation. Each platform is analysed based on features, ease of use, pricing, and support for image annotation, object detection, and YOLO v8 integration. After analysing these factors, a final recommendation is made, highlighting the platform that demonstrably offers the best balance of features, efficiency, and cost-effectiveness for this specific task. The study supports deeper exploration of the potential of YOLO v8. It is mainly aimed at assisting Video Assistant Referees (VARs) in accurate and unbiased decision-making, and it also empowers the development of AI technology across the domain of sports.
Cell-free massive MIMO networks have recently emerged as an attractive solution capable of solving the performance degradation at the cell edge of cellular networks. For scalability reasons, user-centric clusters were recently proposed to serve users via a subset of access points (APs). In dynamic mobile scenarios, this form of network organization requires predictive algorithms for forecasting propagation parameters so that performance can be maintained by proactively allocating new APs to a user. In this paper, we present a BiLSTM-based multivariate path loss forecasting algorithm. Thanks to the combination of dual prediction by the BiLSTM and diversity from multiple antennas, our model mitigates the error propagation typically faced by sequential neural networks in time-series forecasting. In the evaluated scenario, from 2 to 10 steps ahead, we reduce the propagation of the error by a factor of 18 compared to previous research on path loss forecasting with an LSTM time-series model. In contrast to parallel transformer solutions, the complexity cost of our algorithm is also significantly lower.
Flood monitoring with satellite images is an effective method of detecting and tracking floods. This approach uses satellite imagery to detect changes in water levels over time and identify flooded areas. Algorithms can be trained to detect these changes and flag flooded areas based on a set of predefined criteria. Amazon SageMaker geospatial capabilities make it easier for data scientists and machine learning (ML) engineers to build, train, and deploy ML models using geospatial data. These capabilities also provide pre-trained models, one of which is a land cover segmentation model. This land cover segmentation model can be run with a simple API call and can be leveraged to analyze changes in the water level.
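As a minimal illustration of the change-detection step (not the SageMaker API itself), the sketch below compares two hypothetical binary water masks, such as those produced by a land cover segmentation model at two acquisition times, and flags newly flooded pixels:

```python
def flood_change(mask_before, mask_after):
    """Compare two binary water masks (1 = water) from land cover
    segmentation of images taken at different times; return a mask of
    newly flooded pixels and the newly flooded fraction of the scene."""
    newly_flooded = [
        [int(after == 1 and before == 0) for before, after in zip(rb, ra)]
        for rb, ra in zip(mask_before, mask_after)
    ]
    total = sum(len(row) for row in newly_flooded)
    frac = sum(map(sum, newly_flooded)) / total
    return newly_flooded, frac

# Toy 3x3 scene: a river along the right edge that spreads leftwards
before = [[0, 0, 1],
          [0, 0, 1],
          [0, 0, 1]]
after  = [[0, 1, 1],
          [0, 1, 1],
          [1, 1, 1]]
change, frac = flood_change(before, after)
```

In practice the masks would come from segmenting co-registered images, and the "predefined criteria" mentioned above (e.g. a minimum flooded fraction) would be applied to `frac`.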
The goal of cancer treatment is to remove or kill malignant cells while preserving surrounding healthy tissue. Among treatment methods, needle-based ultrasound thermal ablation is an option that involves the insertion of an applicator into the patient's body and the use of an ultrasound transducer to vibrate tissue, producing heat. An ablation pattern for an arbitrarily shaped tumor can be approximated by moving the applicator to deposit heat in targeted locations. However, this conformal ablation process is challenging to control because of the complex interactions between tissue and ultrasound. To address this, we built an interactive planning toolkit that allows a physician to perform the procedure multiple times in simulation and record the ablation trajectory once a desired result is achieved. To validate this method, a previously developed MR-conditional robot was used to replicate the planned ablation in a phantom model. Live magnetic resonance thermal imaging was used to track temperature changes, allowing us to measure the thermal dose and identify the ablated region. In four ablation experiments, we achieved an average of 80.9% overlap between the targeted tumor area and the actual ablated area, with minimal damage to surrounding tissue (9.4% affected), demonstrating the effectiveness of our approach.
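Thermal dose from MR thermometry is commonly quantified as cumulative equivalent minutes at 43 °C (the Sapareto-Dean CEM43 model). The sketch below is a generic illustration of that computation, not the authors' exact pipeline; the sampling interval, temperature traces, and the 240 CEM43 lethal-dose threshold are illustrative assumptions.

```python
def cem43(temps_c, dt_min):
    """Cumulative equivalent minutes at 43 C for a per-voxel temperature
    time series (Sapareto-Dean model): each sample of duration dt_min
    contributes dt_min * R**(43 - T), with R = 0.5 above 43 C and
    R = 0.25 below."""
    dose = 0.0
    for t in temps_c:
        r = 0.5 if t >= 43.0 else 0.25
        dose += dt_min * r ** (43.0 - t)
    return dose

# 5-second sampling over 2 minutes: a voxel held at 50 C vs one at body temperature
hot  = cem43([50.0] * 24, dt_min=5 / 60)
cool = cem43([37.0] * 24, dt_min=5 / 60)
ablated = hot >= 240.0  # a commonly used lethal-dose threshold (assumed here)
```

Applying such a per-voxel dose threshold to the thermometry maps yields the ablated region that is then overlapped with the targeted tumor area.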
To design a reliable communication system utilizing millimeter-wave (mm-wave) technology, which is gaining popularity due to its ability to deliver multi-gigabit-per-second data rates, it is essential to consider the site-specific nature of mm-wave propagation. Conventional site-general stochastic channel models are often unsatisfactory for accurately reproducing the channel responses under specific usage scenarios or environments. For high-precision channel simulation that reflects site-specific characteristics, this paper proposes a channel model framework leveraging the widely accepted 3GPP map-based hybrid channel modeling approach, and it provides a detailed recipe for applying it to an actual scenario using examples. First, an extensive measurement campaign was conducted in typical urban macro- and micro-cellular environments using an in-house dual-band (24/60 GHz) double-directional channel sounder. Subsequently, the mm-wave channel behavior was characterized, focusing on the differences between the two frequencies. Then, the site-specific large-scale and small-scale channel properties were parameterized. As an essential component for improving prediction accuracy, this paper proposes an exponential decay model for the power delay characteristics of non-line-of-sight clusters, whose powers are significantly overestimated by deterministic prediction tools. Finally, using the in-house channel model simulator (CPSQDSIM) developed for grid-wise channel data (PathGridData) generation, a significant improvement in prediction accuracy compared with the existing 3GPP map-based channel model was demonstrated.
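In generic form, such an exponential decay assigns a cluster at excess delay τ the power P(τ) = P0·exp(−τ/γ), i.e. a linear decay in dB with delay. The sketch below uses illustrative values for P0 and the decay constant γ, not the parameters fitted in the paper:

```python
import math

def cluster_power_db(delay_ns, p0_db=-70.0, gamma_ns=40.0):
    """Generic exponential power-delay decay for NLoS clusters:
    P(tau) = P0 * exp(-tau / gamma), expressed in dB.
    p0_db and gamma_ns are illustrative, not fitted values."""
    return p0_db + 10.0 * math.log10(math.exp(-delay_ns / gamma_ns))

# Power falls off linearly in dB with excess delay
p_early = cluster_power_db(0.0)    # cluster at zero excess delay
p_late  = cluster_power_db(80.0)   # two decay constants later
```

Replacing the deterministically predicted NLoS cluster powers with such a decay curve is one way to correct their systematic overestimation.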
A stochastic compact model for resistive switching devices is presented. The motivation is twofold: first, introducing variability in a natural way, and second, accounting for the discrete jumps of conductance observed during set and reset transitions. The model is based on an event generation rate, and it is an "on-the-fly" procedure because events are randomly generated as the simulation proceeds in time. For the generation of events, we assume a mixed non-homogeneous Poisson process. Before considering resistive switching, we deal with the generation of successive breakdown events in metal-insulator-semiconductor structures. This confirms the validity of the approach through comparison with experimental data in which discrete events are evident. To deal with resistive switching, we transform a previous compact model into a stochastic model. Comparison with experiments on TiN/Ti/HfO2/W devices shows the validity of the approach. Current-voltage loops and potentiation-depression transients in pulsed experiments are captured with a single set of parameters. Moreover, the model is an adequate framework for dealing with both cycle-to-cycle and device-to-device variability.
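The "on-the-fly" generation of events from a non-homogeneous Poisson process can be sketched with Lewis-Shedler thinning: candidates are drawn from a homogeneous process at an upper-bound rate and accepted with probability proportional to the instantaneous rate. The ramp-shaped rate function and all parameter values below are hypothetical, not the model's fitted rates.

```python
import random

def simulate_nhpp(rate, t_end, rate_max, seed=0):
    """Generate event times of a non-homogeneous Poisson process on
    [0, t_end] by Lewis-Shedler thinning: draw candidate arrivals from
    a homogeneous process of rate rate_max, then accept each candidate
    at time t with probability rate(t) / rate_max."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)         # next candidate arrival
        if t > t_end:
            return events
        if rng.random() < rate(t) / rate_max:  # thinning (acceptance) step
            events.append(t)

# Illustrative ramp-up rate, e.g. an event rate growing as stress accumulates
events = simulate_nhpp(lambda t: 0.5 * t, t_end=10.0, rate_max=5.0)
```

In a compact-model simulation, each accepted event would trigger a discrete conductance jump at that time step, which is what makes the procedure "on-the-fly".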
Predictive monitoring on distributed critical infrastructures (DCIs) is the ability to anticipate events that will likely occur in the DCI before they actually appear, improving the response time needed to avoid critical incidents. Distributed across a region or country, DCIs such as smart grids or microgrids rely on IoT, edge-fog continuum computing, and the growing capabilities of distributed application architectures to collect, transport, and process data generated by the infrastructure. We present a model-agnostic distributed architecture for the inference execution of machine learning window-based prediction models in predictive monitoring applications used in this context. This architecture transports the events generated by the DCI using event streams to be processed by a hierarchy of nodes holding predictive models. It also handles the offloading of inferences from resource-scarce devices at lower levels to the resourceful upper nodes. In this way, the timing requirements for issuing predictions before the anticipated events occur are met.
Transitioning to renewable energy in the distribution grid (DG) is essential for combating climate change and ensuring energy security. However, this transition can introduce grid instability. Addressing this requires improved control capabilities for these energy resources, which in turn require accurate information on system state variables and distribution grid line parameters. This study presents a way to simultaneously estimate the system state variables, active and reactive power, and the line parameters of the distribution grid without any information on voltage angles. This is achieved by formulating a maximum likelihood problem that we solve using the expectation maximization (EM) algorithm, which we adapt to this problem; we also provide details of a numerically robust implementation. The study uses the modified Distflow model, which accounts for line losses in the system and improves accuracy. The proposed method is demonstrated on the IEEE 37-node test feeder and compared to the state of the art, achieving a 70% reduction in voltage error and a more than 10,000-fold lower error for the state variables.
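The EM iteration itself is generic: alternate an E-step that computes expected values of latent quantities under the current parameter estimate with an M-step that re-maximizes the likelihood. As a self-contained toy illustration of that structure (a two-component Gaussian mixture with known unit variances, unrelated to the paper's Distflow formulation), one could write:

```python
import math

def em_two_gaussians(xs, mu, iters=10):
    """EM for a two-component, equal-weight Gaussian mixture with known
    unit variances: the E-step computes responsibilities, the M-step
    re-estimates the means as responsibility-weighted averages."""
    mu0, mu1 = mu
    for _ in range(iters):
        # E-step: posterior probability each point came from component 0
        r0 = []
        for x in xs:
            p0 = math.exp(-0.5 * (x - mu0) ** 2)
            p1 = math.exp(-0.5 * (x - mu1) ** 2)
            r0.append(p0 / (p0 + p1))
        # M-step: maximize the expected log-likelihood -> weighted means
        mu0 = sum(r * x for r, x in zip(r0, xs)) / sum(r0)
        mu1 = sum((1 - r) * x for r, x in zip(r0, xs)) / sum(1 - r for r in r0)
    return mu0, mu1

# Two well-separated toy clusters; initial means are deliberately off
data = [0.1, -0.2, 0.0, 4.9, 5.2, 5.0]
mu0, mu1 = em_two_gaussians(data, mu=(1.0, 4.0))
```

The paper's contribution lies in adapting this alternation to the grid estimation problem and making it numerically robust; the toy example only shows the E/M structure.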
This paper presents an in-depth study and experimental development of a class of rotorcraft, named x-tilt, that features four tilting rotors. First, the equations of motion modeling the aerial robot are presented based on the Euler-Lagrange formulation. The model includes the aerodynamic effects induced by the rotorcraft's relative motion and propellers. For control purposes, this model is split into a nominal model and lumped disturbance terms, the latter encompassing endogenous and exogenous uncertainties. In this vein, this work proposes a robust navigation strategy targeting a specific performance profile, with the problem formulated through the model predictive control (MPC) framework. To this end, two schemes are proposed: (i) an integral MPC and (ii) a model predictive sliding-mode control (MPSMC). Both control schemes are linked to an extended-state linear Kalman filter (ES-LKF) that furnishes the state and lumped disturbance estimates. Moreover, a high-fidelity simulation is presented in detail to validate the effectiveness of the proposed controllers within a realistic scenario. We finally present the experimental stage to validate the tilting-rotor configuration as well as the integral MPC.
This paper reviews the field of data science and its importance in today's rapidly evolving technology landscape. We begin the survey by introducing the fundamental concepts of data science, including its history, how data are collected, why data science is needed, and how data are effectively processed, stored, and analyzed using techniques such as machine learning and data visualization. We then explore the different domains in which data science is used, along with the challenges, advantages, and disadvantages encountered in each. Finally, we discuss the prospects and implications of data science and how it can be used to overcome current and future challenges. The demand for data science careers and the available career paths are also discussed, as they are essential and growing rapidly.
Distinguished from most hyperspectral anomaly detection (AD) methods based on trainable parameter networks, the recently proposed AETNet eliminates the need for parameter adjustments or retraining on new test scenes by training an anomaly enhancement network on background data with false anomalies. In this letter, we achieve the same goal by proposing a novel training and inference framework that enhances the network's background spectral feature extraction capability without any data augmentation. During training on background data, the complete network is trained using the reverse distillation framework with a spectral feature alignment mechanism to improve the network's background feature expressiveness. For inference, a pruned network is applied, composed solely of the components most relevant to expressing features in the spectral dimension. This effectively reduces redundant information, enhancing both inference efficiency and anomaly detection accuracy. Experimental results demonstrate that our method outperforms state-of-the-art methods on the HAD100 dataset, striking an optimal balance between detection accuracy and inference speed. Our code is available at https://github.com/cristianoKaKa/FERD.
The application of millimeter-wave (mmWave) radar sensors for people monitoring has raised considerable interest in the context of Active Assisted Living (AAL), especially since the processing of radar signals can provide valuable information about the observed subjects. Correct recognition of ongoing behavior, however, depends on detecting where the subject is positioned. Detection approaches based on Constant False Alarm Rate (CFAR) algorithms sometimes fail to correctly identify the presence of targets within the observed scenario, especially in complex environments such as indoors. This paper proposes the use of a mmWave Multiple Input Multiple Output (MIMO) radar in combination with a You Only Look Once (YOLO) neural network-based algorithm for the detection of moving people in indoor environments by processing all the data cube information at the same time. Results are validated through experimental tests involving subjects walking along linear or random trajectories, different radar configurations, and different indoor environments. By jointly exploiting information such as the angle, Doppler, and range distance of the target, the proposed approach proves very effective in the examined scenarios. Experimental results are discussed in this work to demonstrate the effectiveness of the proposed method.
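For context, the CFAR baseline against which the YOLO-based approach is positioned can be sketched as a 1-D cell-averaging CFAR over a range power profile; the guard/training window sizes and the threshold factor below are illustrative choices, not those of any specific radar configuration.

```python
def ca_cfar(signal, guard=2, train=4, factor=3.0):
    """1-D cell-averaging CFAR: for each cell under test, estimate the
    noise level from training cells on both sides (skipping guard cells
    adjacent to the test cell) and declare a detection when the cell
    exceeds factor * estimated noise."""
    n = len(signal)
    detections = []
    for i in range(n):
        train_cells = []
        for j in range(i - guard - train, i - guard):          # left window
            if 0 <= j < n:
                train_cells.append(signal[j])
        for j in range(i + guard + 1, i + guard + train + 1):  # right window
            if 0 <= j < n:
                train_cells.append(signal[j])
        if not train_cells:
            continue
        noise = sum(train_cells) / len(train_cells)
        if signal[i] > factor * noise:
            detections.append(i)
    return detections

# Flat noise floor with a single strong target at range bin 10
power = [1.0] * 20
power[10] = 12.0
```

In cluttered indoor scenes, the locally estimated noise level can be inflated by multipath and static reflections, which is precisely where such threshold-based detection struggles and a learned detector operating on the full data cube can help.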