The quality assessment of biomaterials in pathological anatomy is crucial for the optimal diagnosis and treatment of conditions such as cancer, as exemplified by immunohistochemistry profiling of the human epidermal growth factor receptor 2 (HER2) in breast cancer. It is therefore important to understand how preanalytical processes, such as post-surgery handling and fixation quality, affect biomaterial quality and diagnostic accuracy. This study first investigates the influence of fixation steps on the performance of HER2 diagnosis, and then proposes a quantitative, automated approach to correct the resulting biases. The approach is derived from a previous supervised machine learning model. The method, which employs a high-performance logistic model, has been further enhanced with a compensation strategy based on tissue quality: a correction derived from a Tissue Quality Index (TQI) fine-tunes the input parameters of the classification model (referred to as the TQI-Enhancer). Results obtained from 60 quality control samples stained with Vimentin and 75 HER2 classification samples first demonstrate that cold ischemia and fixation times lead to significant changes in immunoreactivity within a short period. Second, adjusting specific parameters quantified in HER2 samples through automated image analysis based on the TQI-Enhancer equation yields an improved correlation with the reference diagnosis. This adjustment significantly enhances the classification performance of the logistic classifier in ML-based diagnosis compared to uncompensated data, improving the AUC from 0.84 to 0.93. We anticipate that implementing similar strategies will enhance the performance of digital pathology techniques, ultimately leading to robust diagnostic classifiers for cases of aggressive breast cancer.
By analyzing the association of biomarkers such as HER2 with patients' clinical outcomes, these classifiers are expected to provide invaluable insights.
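The abstract above does not give the TQI-Enhancer equation itself, so the following is only a minimal sketch of the general idea: rescale image-derived staining features by a correction factor computed from a Tissue Quality Index before feeding them to a logistic classifier. The correction form, the parameter names (`tqi_ref`, `alpha`), and all numeric values are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def tqi_correction(features, tqi, tqi_ref=1.0, alpha=0.5):
    """Hypothetical correction: scale features toward reference tissue quality.

    Lower TQI (poorer fixation) is assumed to depress immunoreactivity,
    so features are scaled up in proportion to the quality deficit.
    """
    return features * (1.0 + alpha * (tqi_ref - tqi))

def logistic_score(features, weights, bias):
    """Plain logistic model on the (corrected) feature vector."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

# Example: one sample with two membrane-staining features and TQI = 0.6.
x = np.array([0.8, 1.2])
x_corr = tqi_correction(x, tqi=0.6)
score = logistic_score(x_corr, weights=np.array([1.5, 0.7]), bias=-2.0)
```

The point of the sketch is only the pipeline order: the quality compensation is applied to the classifier's inputs, leaving the trained logistic model itself unchanged.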
This work explores the optimization of Multilayer Perceptrons (MLP), or the dense layers of other kinds of Deep Neural Networks, when they are aimed at edge computing applications such as Internet of Things (IoT) devices with very limited resources at the edge. The proposed optimization approach consists of generating a pruning mask for the hidden dense layers of the original neural network using auxiliary dense Morphological Neural Networks (MNN). These MNNs have shown notable efficiency in the pruning process, yielding a significant decrease in the overall number of connections at a low cost in terms of accuracy degradation. The effectiveness of this new pruning methodology is explained in detail and validated on two widely used datasets, MNIST and Fashion MNIST, and two well-known neural networks, LeNet-5 and LeNet-300-100. Subsequently, the performance of these pruned neural networks has been assessed on an IoT hardware platform. The experimental results outperform other contemporary state-of-the-art pruning techniques in terms of power efficiency and processing speed for a similar percentage of weight reduction, all while maintaining minimal impact on overall accuracy. In addition, a custom software tool has been developed to generate C code that optimizes the inference of these pruned networks on IoT edge devices. These findings hold important implications for the development of efficient and scalable deep learning models specifically tailored to the demands of edge computing applications.
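How the MNN-derived mask is obtained is the paper's contribution and is not reproduced here; the sketch below only illustrates the mechanical step that follows, namely applying a binary pruning mask to a dense layer's weight matrix. The mask here is random (keeping roughly 10% of connections) purely for illustration, and the layer shape mimics LeNet-300-100's first dense layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense layer weights, shaped like LeNet-300-100's 300-unit hidden layer
# fed from a flattened input would be in a toy setting (300 in, 100 out).
W = rng.standard_normal((300, 100))

# Stand-in for the mask the auxiliary morphological network would produce:
# a binary matrix keeping ~10% of the connections.
mask = (rng.random(W.shape) < 0.1).astype(W.dtype)

W_pruned = W * mask                     # zero out the pruned connections

x = rng.standard_normal(300)
y = np.maximum(W_pruned.T @ x, 0.0)     # forward pass through the pruned layer (ReLU)

sparsity = 1.0 - mask.mean()            # fraction of removed weights (~0.9)
```

In practice the zeroed entries would then be stored in a sparse format (or skipped by generated C code, as in the paper's tool) so that the ~90% weight reduction translates into actual memory and compute savings on the IoT device.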
We present SCMI30-IITRPR, a dataset for smartphone camera model identification (CMI) performance assessment comprising 9937 diverse scene images collected using 30 different camera models. Importantly, to allow assessment of CMI performance under different application settings, where either similar or random content images may be available across the camera models, SCMI30-IITRPR provides images grouped in two sets: one with similar image content and another with random image content. SCMI30-IITRPR therefore overcomes a key limitation of prior datasets, which provided images with either random or similar content but not both. Additionally, SCMI30-IITRPR allows researchers to test the robustness of CMI techniques under test conditions mismatched with the training conditions and to explore alternative data selection approaches for more robust training. We present benchmarks of five CMI methods on the SCMI30-IITRPR dataset, highlighting that significant performance variations can be encountered under a mismatch between training and testing scenarios and that training datasets merging images with similar and random content offer the most robustness.
The development of the Internet of Things has spawned the vehicular ad hoc network (VANET), which facilitates safe and comfortable driving. Communications in a VANET should be protected against message leakage and modification. To address these security issues, we present SC-AGKA, a dynamic and efficient authenticated group key agreement (AGKA) protocol with conditional privacy that employs a self-certified cryptosystem, and we prove its security based on the computational Diffie-Hellman problem. Our SC-AGKA protocol establishes a group key among multiple group users and achieves conditional privacy for them. Building on SC-AGKA, we propose an authentication and group key agreement protocol applying the design in VANETs. Performance comparisons show that our protocol offers higher security and greater computational and communication efficiency than other AGKA designs.
Today, modern web sites are accelerated by scripts, but their foundation, the web page itself, is still a static structure. The Document Object Model (DOM) represents the structure of a web page. Here we show a new approach: it is possible to combine a time tree with the DOM to form a new structure named the Time Object Model (TOM). TOM represents not only a static page but also a dynamic stream. We believe the best way to use TOM is to embed it into an HTML page in real time without changing the existing content; at present, this is the only approach that works.
This work proposes a novel combination of behavioural-tracking sensors and immersive virtual reality in a gamified proof-of-concept prototype, which demonstrates affective treatment concepts for hypervigilance symptoms. A number of limitations have been identified in current approaches, prompting more advanced techniques that efficiently target hypervigilance at an individual patient level. In response, we developed a virtual reality first-person shooter that responds to inertial user behaviour in a way that aims to combat detrimental symptoms, proposed as an exploratory investigation into innovative technology and its potential to maximise cognitive behavioural therapy outcomes for hypervigilance treatment. The prototype is evaluated through interactive user studies with 22 participants, gathering a large volume of qualitative data regarding participant experiences and opinions after use. Rigorous thematic analysis finds that participants can independently identify the cognitive behavioural therapy purpose of the intervention without prior knowledge of such intentions, and relate efficacious approaches from the literature to their own experiences. Despite prospective apprehension, themes also demonstrate widespread adherence and acceptance of such approaches to hypervigilance treatment, alongside perceived effectiveness both of experienced outcomes and future potential. These results support the validity of combining such technologies in the context of cognitive behavioural therapy interventions, such that the standard of future interventions may be improved.
Text generation is an important method for producing high-quality, usable product descriptions from product titles. For product description generation in online e-commerce applications, the main problem is how to improve the quality of the generated text, which raises the question of how text quality is judged. If all candidate texts are already positive and usable, it is impossible to manually judge which is the better text for a product; and if we cannot make that judgment, we cannot improve the quality of the generated text. In e-commerce, the purpose of a product description is to attract shoppers and improve sales, so we design a method that improves the quality of generated text based on user buying behaviour. Online results show that our approach improves product sales by improving text quality.
This paper describes a method for automatically transforming the structure and characteristics of an image processing dataflow graph for the purpose of improving performance and/or lowering memory utilization as compared to the baseline tools. Embedded image processing applications are often executed on Digital Signal Processors, or their modern equivalents, Visual Processor Units. The software usually performs a series of pixel-level operations for basic color conversion, channel extraction and combining, arithmetic, and filtering. These steps can often be efficiently described as a graph. For this reason, standard libraries such as OpenVX are used, which provide a graph-based programming model where the nodes are chosen from a repertoire of common pixel-level operations and the edges represent the flow of images as they progress through the processing stages. Generally speaking, each node is processed sequentially in the order implied by the data dependencies defined by the graph structure, with all intermediate values stored in external memory. In the proposed framework, we developed performance models for both the direct memory access subsystem and the L1 data cache to allow for selection of certain intermediate values to be stored in on-chip scratchpad memory as well as selection of the most appropriate tile size. In this way, we effectively decompose the graph, fusing specific sets of nodes so that their internal edges are associated with on-chip buffers. Additionally, the tile size is optimized for each fused set of nodes. In this paper, we describe our performance models and our approach for graph decomposition and tile size selection. The proposed performance models are accurate to within 2% on average, and the overall approach of graph optimization achieves an average speedup of 1.3 and allows for reduction of average DRAM utilization from 100% to as low as 15%.
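The node-fusion idea above can be illustrated independently of OpenVX or any DSP toolchain. The toy pipeline below (two hypothetical pixel-level nodes, a grayscale conversion followed by a threshold) is not the paper's framework; it only shows the structural transformation: when the two nodes are fused and executed tile by tile, the intermediate grayscale image exists only as a tile-sized buffer, never as a full-resolution array written back to external memory.

```python
import numpy as np

def to_gray(tile):
    """Node 1: channel-combine a (H, W, 3) tile into a (H, W) gray tile."""
    return tile.mean(axis=-1)

def threshold(tile, t=0.5):
    """Node 2: pixel-wise binarization of a gray tile."""
    return (tile > t).astype(np.float32)

def run_fused(img, tile_h=64):
    """Execute both nodes per tile, keeping the intermediate tile-sized."""
    out = np.empty(img.shape[:2], dtype=np.float32)
    for y in range(0, img.shape[0], tile_h):
        tile = img[y:y + tile_h]                       # DMA-in one tile
        out[y:y + tile_h] = threshold(to_gray(tile))   # intermediate stays on-chip-sized
    return out

img = np.random.default_rng(1).random((256, 256, 3)).astype(np.float32)
res = run_fused(img)
```

Because both operations are purely pixel-local, the fused, tiled execution produces exactly the same output as running each node over the whole image; for filtering nodes with spatial support, a real implementation would additionally need overlapping tile borders, which is part of what makes tile-size selection non-trivial.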
Cryptography has become an essential tool in information security, preserving data confidentiality, integrity, and availability. However, despite rigorous analysis, cryptographic algorithms may still be susceptible to attack when used on real-world devices. Side-channel attacks (SCAs) are physical attacks that target cryptographic equipment through quantifiable phenomena such as power consumption, operational times, and EM radiation. These attacks are considered a significant threat to cryptography since they compromise the integrity of the algorithm by obtaining the internal cryptographic key of a device through observation of its physical implementation. The literature on SCAs has focused on real-world devices, yet with the growing popularity of sophisticated devices like smartphones, fresh approaches to SCAs are necessary. One such approach is electromagnetic side-channel analysis (EM-SCA), which gathers information by listening to electromagnetic (EM) radiation. EM-SCA has been demonstrated to recover sensitive data like encryption keys and has the potential to identify malicious software, retrieve data, and identify program activity. This study aims to evaluate how well EM-SCA compromises encryption under various application scenarios, as well as to examine the role of EM-SCA in digital forensics and law enforcement. In this regard, addressing the susceptibility of encryption algorithms to EM-SCA approaches can provide digital forensic investigators with the tools they need to overcome the challenges posed by strong encryption, allowing them to continue playing a crucial role in law enforcement and the justice system. Furthermore, this paper seeks to define the current state of EM-SCA in terms of attacking encryption, the encryption algorithms and encrypted devices that are most vulnerable and most resistant to EM-SCA, and the most promising EM-SCA approaches against encryption.
This study will provide a comprehensive analysis of EM-SCA in the context of law enforcement and digital forensics and point towards potential directions for further research.
In order to stabilize the electric field integral equation (EFIE) at low frequencies when it is discretized with divergence-conforming B-spline based basis and testing functions in an isogeometric approach, we propose a corresponding quasi-Helmholtz preconditioner. To this end, we derive i) a loop-star decomposition for the B-spline basis in the form of sparse mapping matrices applicable to arbitrary polynomial orders of the basis as well as to open and closed geometries described by single- or multi-patch parametric surfaces (as an example, non-uniform rational B-spline (NURBS) surfaces are considered). Based on the loop-star analysis, we show ii) that quasi-Helmholtz projectors can be defined efficiently. This renders the proposed low-frequency stabilization directly applicable to multiply-connected geometries without the need to search for global loops, and results in better-conditioned system matrices compared to directly using the loop-star basis. Numerical results demonstrate the effectiveness of the proposed approach.
In order to accurately compute scattered and radiated fields in the presence of arbitrary excitations, a low-frequency stable discretization of the right-hand side (RHS) of a quasi-Helmholtz preconditioned electric field integral equation (EFIE) on multiply-connected geometries is introduced, which avoids an ad hoc extraction of the static contribution of the RHS when tested with solenoidal functions. To obtain an excitation-agnostic approach, our method generalizes to multiply-connected geometries a technique in which the testing of the RHS with loop functions is replaced by a testing of the normal component of the magnetic field with a scalar function. To this end, we leverage orientable global loop functions formed by a chain of Rao-Wilton-Glisson (RWG) functions around the holes and handles of the geometry, for which we introduce cap surfaces that allow us to uniquely define a suitable scalar function. We show that this approach works with open and closed, orientable and non-orientable geometries. The numerical results demonstrate the effectiveness of this approach.
Manual object identification labelling is laborious, time-consuming, and prone to inconsistencies, hindering advancements in various computer vision tasks. These inconsistencies can lead to inaccurate models with poor performance. These potential consequences highlight the importance of addressing labelling challenges for ethical and responsible AI development. To address this, our study evaluates several popular platforms for their suitability in tackling these challenges. Roboflow, Makesense.ai, SentiSight.ai, Labelbox, and SuperAnnotate are the five data labelling platforms taken for assessment. The study identifies the strengths and weaknesses of each platform in the context of basketball detection using YOLO v8, a deep learning model for object detection, image classification, and image segmentation. Each platform is analysed based on features, ease of use, pricing, and support for image annotation, object detection, and YOLO v8 integration. After analysing these factors, a final recommendation is made, highlighting the platform that demonstrably offers the best balance of features, efficiency, and cost-effectiveness for this specific task. The study supports deeper exploration of the potential of YOLO v8. It is mainly aimed at assisting Video Assistant Referees (VARs) in accurate and unbiased decision-making, and at empowering the development of AI technology across the domain of sports.
Flood monitoring with satellite images is an effective method of detecting and tracking floods. This approach uses satellite imagery to detect changes in water levels and identify flooded areas. To monitor floods with satellite images, the images are analyzed to detect changes in water levels over time, and algorithms can be trained to detect these changes and identify flooded areas based on a set of predefined criteria. Amazon SageMaker geospatial capabilities make it easier for data scientists and machine learning (ML) engineers to build, train, and deploy ML models using geospatial data. These capabilities also provide pre-trained models, one of which is a land cover segmentation model. This land cover segmentation model can be run with a simple API call and leveraged to analyze changes in the water level.
Predictive monitoring on distributed critical infrastructures (DCI) is the ability to anticipate events that are likely to occur in the DCI before they actually appear, improving response time to avoid the escalation of critical incidents. Distributed across a region or country, DCIs such as smart grids or microgrids rely on IoT, edge-fog continuum computing, and the growing capabilities of distributed application architectures to collect, transport, and process data generated by the infrastructure. We present a model-agnostic distributed architecture for the inference execution of machine learning window-based prediction models in predictive monitoring applications of this kind. The architecture transports the events generated by the DCI as event streams to be processed by a hierarchy of nodes holding predictive models. It also handles the offloading of inferences from resource-scarce devices at lower levels to resourceful upper nodes, so that the timing requirements for delivering predictions before the anticipated events occur are met.
The present paper presents an in-depth study and experimental development of a class of rotorcraft, named x-tilt, that features four tilting rotors. First, the equations of motion modeling the aerial robot are presented based on the Euler-Lagrange formulation. The model includes the aerodynamic effects induced by the rotorcraft's relative motion and propellers. For control purposes, this model is split into a nominal model and lumped disturbance terms, the latter encompassing endogenous and exogenous uncertainties. In this vein, the present work proposes a robust navigation strategy targeting a specific performance profile, with the problem formulated in the model predictive control (MPC) framework. To this end, two schemes are proposed: (i) an integral MPC and (ii) a model predictive sliding-mode control (MPSMC). Both control schemes are linked to an extended-state linear Kalman filter (ES-LKF) that furnishes the state and lumped-disturbance estimates. Moreover, a high-fidelity simulation is presented in detail to validate the effectiveness of the proposed controller in a realistic scenario. We finally present the experimental stage, validating the tilting-rotor configuration as well as the integral MPC.
This paper reviews the field of data science and its importance in today's rapidly evolving technological landscape. We begin the survey by introducing the fundamental concepts of data science, including its history, how data are collected, why data science is needed, and how data are effectively processed, stored, and analyzed using techniques such as machine learning and data visualization. We then explore the different domains in which data science is applied, along with the challenges, advantages, and disadvantages encountered in using it. Finally, we discuss the prospects and implications of data science and how it can be used to overcome current and future challenges. Career demand and career paths are also discussed, as they are essential and growing rapidly.
The Collatz conjecture, a longstanding mathematical puzzle, posits that, regardless of the starting integer, iteratively applying a specific formula will eventually lead to the value 1. This paper introduces a novel approach to validate the Collatz conjecture by leveraging the binary representation of generated numbers. Each transition in the sequence is predetermined using the Collatz conjecture formula, yet the path of transitions is revealed to be intricate, involving alternating increases and decreases for each initial value. The study delves into the global flow of the sequence, investigating the behavior of the generated numbers as they progress toward the termination value of 1. The analysis utilizes the concept of probability to shed light on the complex dynamics of the Collatz conjecture. By incorporating probabilistic methods, this research aims to unravel the underlying patterns and tendencies that govern the convergence of the sequence. The findings contribute to a deeper understanding of the Collatz conjecture, offering insights into the inherent complexities of its trajectories. This work not only validates the conjecture through binary representation but also provides a probabilistic framework to elucidate the global flow of the sequence, enriching our comprehension of this enduring mathematical mystery.
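The iteration described above is simple to state in code. The sketch below applies the standard Collatz rule (halve if even, 3n + 1 if odd) while recording the binary representation of each term, which is the view the abstract works with: a trailing 0 bit means the next step is a right shift, a trailing 1 bit means the next step is 3n + 1. The function name and the choice of starting values are illustrative.

```python
def collatz_path(n):
    """Return the binary representations of the Collatz sequence from n to 1."""
    path = []
    while n != 1:
        path.append(bin(n))
        n = n // 2 if n % 2 == 0 else 3 * n + 1   # the Collatz rule
    path.append(bin(1))
    return path

# The path from 27, a classic example of an intricate trajectory: it climbs
# as high as 9232 before finally descending to 1.
steps = len(collatz_path(27)) - 1   # number of transitions from 27 down to 1
```

Starting from 27, the sequence takes 111 transitions to reach 1, illustrating the alternating increases and decreases the abstract refers to, despite each individual step being fully determined by the last bit of the current value.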