Flood monitoring with satellite imagery is an effective method of detecting and tracking floods: images captured over time are analyzed to detect changes in water levels and to identify flooded areas. Algorithms can be trained to perform this detection automatically based on a set of predefined criteria. Amazon SageMaker geospatial capabilities make it easier for data scientists and machine learning (ML) engineers to build, train, and deploy ML models using geospatial data, and they also provide pre-trained models. One of these is the land cover segmentation model, which can be run with a simple API call and leveraged to analyze changes in the water level.
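The change-analysis step described above can be sketched as a comparison of water coverage between two land cover segmentation masks. This is a minimal illustration assuming a hypothetical `WATER_CLASS` label id; the actual class mapping of the SageMaker model may differ.

```python
# Hypothetical label id for water pixels in a land cover segmentation mask;
# the real model's class mapping may differ.
WATER_CLASS = 3

def water_fraction(mask):
    """Fraction of pixels labelled as water in a 2-D segmentation mask."""
    total = sum(len(row) for row in mask)
    water = sum(cell == WATER_CLASS for row in mask for cell in row)
    return water / total

def flood_delta(before, after):
    """Change in water coverage between two co-registered masks of one scene."""
    return water_fraction(after) - water_fraction(before)

# Toy 4x4 masks: water grows from 4 pixels to 8 pixels between acquisitions.
before = [[WATER_CLASS] * 2 + [0] * 2] * 2 + [[0] * 4] * 2
after = [[WATER_CLASS] * 4] * 2 + [[0] * 4] * 2
print(flood_delta(before, after))  # 0.25
```

A positive delta flags an expanding water surface, which is the basic signal a flood-monitoring pipeline would alert on.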
This paper presents an in-depth study and experimental development of a class of rotorcraft, named x-tilt, that features four tilting rotors. First, the equations of motion of the aerial robot are derived using the Euler-Lagrange formulation. The model includes the aerodynamic effects induced by the rotorcraft's relative motion and propellers. For control purposes, the model is split into a nominal model and lumped disturbance terms, the latter encompassing endogenous and exogenous uncertainties. In this vein, this work proposes a robust navigation strategy targeting a specific performance profile, with the problem formulated through the model predictive control (MPC) framework. To this end, two schemes are proposed: (i) an integral MPC and (ii) a model predictive sliding-mode control (MPSMC). Both control schemes are coupled to an extended-state linear Kalman filter (ES-LKF) that furnishes the state and lumped disturbance estimates. Moreover, a high-fidelity simulation is presented in detail to validate the effectiveness of the proposed controllers within a realistic scenario. We finally present the experimental stage validating the tilting-rotor configuration as well as the integral MPC.
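The ES-LKF idea, estimating a lumped disturbance by augmenting the filter state, can be sketched on a 1-D toy plant. All dynamics, noise settings, and parameters below are illustrative assumptions, not the paper's rotorcraft model.

```python
# Minimal sketch of an extended-state linear Kalman filter (ES-LKF) on a toy
# plant x' = u + d, where d is an unknown constant lumped disturbance.
# The state is augmented to z = [x, d] so the filter estimates d as well.
dt = 0.01

def es_lkf(us, ys, q=1e-4, r=1e-2):
    x, d = 0.0, 0.0                       # estimates of [state, disturbance]
    P = [[1.0, 0.0], [0.0, 1.0]]          # estimate covariance
    for u, y in zip(us, ys):
        # Predict: z = A z + B u with A = [[1, dt], [0, 1]], B = [dt, 0].
        x = x + dt * (u + d)
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # Update with measurement y of x (H = [1, 0]).
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        e = y - x
        x += k0 * e
        d += k1 * e
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x, d

# Simulate the true plant with constant disturbance d* = 0.5 and zero input.
true_x, d_true = 0.0, 0.5
us, ys = [], []
for _ in range(5000):
    true_x += dt * d_true
    us.append(0.0)
    ys.append(true_x)          # noise-free measurements for simplicity
xh, dh = es_lkf(us, ys)
```

The estimated disturbance `dh` converges toward the true value, which a controller such as the integral MPC could then compensate.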
Whereas robots are and will be increasingly present in all areas of society, robotics can be limited by its narrow perspective, often neglecting discussions about its own goals, epistemology, ethics, and socio-political impacts. To address this issue, this article proposes a holistic analysis of robots and robotics, taking these blind spots into account and raising new challenges. In response, the second part proposes a set of guidelines: a new epistemology, a metaethical framework, an organization method, and a design template to answer these challenges.
In an era where the drumbeats of technological advancement echo through the corridors of military strategy, my research takes a deep dive into the storied pasts of military legends (General John J. Pershing, General George S. Patton Jr., and General Norman Schwarzkopf) to juxtapose their timeless strategies with the burgeoning field of Artificial Intelligence (AI) in warfare. This comparative analysis, crafted with the meticulousness of a New York Times feature, seeks to unravel the complex tapestry of leadership, tactical innovation, and the human element that defined the battlegrounds of the 20th century, and to critically examine how AI might redefine the very fabric of military operations in the future.
This paper is a continuation of my revolutionary theory of solving the pointwise fluid flow approximation model for time-varying queues. The long-standing simulative approach is replaced by an exact analytical solution based on a constant ratio β (Ismail's ratio). The stability dynamics of the time-varying M/E_k/1 queueing system are then examined numerically in relation to time, β, and the queueing parameters.
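For context, the pointwise fluid-flow approximation that the paper builds on can be sketched as a simple ODE integration for a single-server queue with time-varying arrivals. This sketch does not model the β ratio or the Erlang-k service details; rates and horizon are illustrative.

```python
import math

# Pointwise fluid approximation of a time-varying single-server queue:
# dq/dt = lambda(t) - mu while q > 0, with q clamped at zero when empty.
def fluid_queue(lam, mu, t_end, dt=0.001):
    """Euler integration of the fluid queue length q(t); returns the path."""
    q, t, path = 0.0, 0.0, []
    while t < t_end:
        rate_in = lam(t)
        rate_out = mu if q > 0 else min(mu, rate_in)
        q = max(0.0, q + dt * (rate_in - rate_out))
        path.append((t, q))
        t += dt
    return path

# Sinusoidal arrivals around an average of 1.0, service rate 1.2:
# the fluid queue builds up whenever lambda(t) exceeds mu, then drains.
path = fluid_queue(lambda t: 1.0 + 0.5 * math.sin(t), mu=1.2, t_end=10.0)
peak = max(q for _, q in path)
```

The simulative approach the paper replaces amounts to running integrations like this; an exact analytical solution gives q(t) in closed form instead.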
This paper details an experiment that uses ESP8266 modules as servers to wirelessly control diverse electrical appliances in home automation. The experiment showcased the modules' ability to respond to commands issued via a web interface on mobile, desktop, and tablet platforms. While most of the experiment ran smoothly, occasional freezing and connectivity disruptions were observed. The abstract summarizes the experiment's successes, discusses the challenges encountered, and outlines a forward-looking perspective, including the integration of a custom PCB for enhanced system stability.
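The request-to-appliance routing such a web server performs can be sketched as a small path-parsing function. The device names and URL scheme below are hypothetical illustrations, not the paper's actual firmware.

```python
# Illustrative sketch of how an ESP8266 web server might map HTTP request
# paths to relay actions; device names and paths are hypothetical.
RELAY_STATE = {"lamp": False, "fan": False}

def handle_request(path):
    """Map a path such as '/lamp/on' to a relay action and reply string."""
    parts = [p for p in path.split("/") if p]
    if (len(parts) != 2 or parts[0] not in RELAY_STATE
            or parts[1] not in ("on", "off")):
        return "400 Bad Request"
    RELAY_STATE[parts[0]] = (parts[1] == "on")
    state = "on" if RELAY_STATE[parts[0]] else "off"
    return f"200 OK: {parts[0]} {state}"

print(handle_request("/lamp/on"))   # 200 OK: lamp on
print(handle_request("/tv/on"))     # 400 Bad Request
```

On the actual module the same routine would sit behind the HTTP listener and toggle GPIO pins instead of a dictionary.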
P versus NP is considered one of the most fundamental open problems in computer science. It asks the following question: Is P equal to NP? The problem was essentially raised in 1955, in a letter written by John Nash to the United States National Security Agency; a precise statement of the P versus NP problem was later introduced independently by Stephen Cook and Leonid Levin. Since then, all efforts to settle it have failed. Another major complexity class is NP-complete. It is well known that P equals NP if there exists a polynomial-time algorithm for some NP-complete problem. We show that the Monotone Weighted Xor 2-satisfiability problem (MWX2SAT) is both NP-complete and in P. Specifically, we construct a polynomial-time reduction from every instance (a directed graph and a positive integer k) of the K-CLOSURE problem to an instance of MWX2SAT, showing that MWX2SAT is NP-complete. Moreover, we create and implement a polynomial-time algorithm that decides the instances of MWX2SAT. Consequently, we prove that P = NP.
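NP membership of a weighted satisfiability problem is witnessed by a polynomial-time verifier. The sketch below assumes one plausible reading of MWX2SAT (monotone XOR clauses over pairs of variables, with the number of true variables bounded by k); the paper's exact definition may differ.

```python
# Polynomial-time verifier for an MWX2SAT-style instance, under the assumed
# reading: each clause (xi XOR xj) over positive literals is satisfied when
# the two variables take different values, and the assignment's weight
# (number of true variables) must be at most k.
def verify(clauses, assignment, k):
    """Check a candidate assignment against XOR clauses and the weight bound."""
    if sum(assignment) > k:
        return False
    return all(assignment[i] != assignment[j] for i, j in clauses)

# Variables x0, x1, x2 with clauses (x0 XOR x1) and (x1 XOR x2).
clauses = [(0, 1), (1, 2)]
print(verify(clauses, [True, False, True], k=2))   # True
print(verify(clauses, [True, True, False], k=2))   # False: clause (0,1) fails
```

The verifier runs in time linear in the instance size, which is what places the problem in NP; the paper's deciding algorithm is a separate construction.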
Detecting and segmenting cracks in infrastructure, such as roads and buildings, is crucial for safety and cost-effective maintenance. Despite the potential of deep learning, challenges remain in achieving precise results and handling diverse crack types. This study proposes a novel approach termed Hybrid-Segmentor, which combines a convolutional neural network path, well suited to extracting fine-grained local features, with a transformer path that extracts global features and benefits from understanding the overall structure. This hybrid design makes the model more generalizable to various shapes, surfaces, and sizes of cracks. To balance computational cost, the study incorporates efficient self-attention in the transformer path and introduces a decoder that is comparatively simple relative to the two encoder paths. This combination strategically optimizes the extraction of global and local features while maintaining computational efficiency. The model was trained with a combined binary cross-entropy and Dice loss function on a large refined dataset of 12,000 crack images drawn from 13 publicly available datasets. With the proposed dataset and model, we aim to enhance crack detection and infrastructure maintenance. Our studies demonstrate that the model efficiently utilizes convolutional layers and transformers to extract local and global features. Hybrid-Segmentor outperforms existing benchmark models across 5 quantitative metrics (accuracy 0.971, precision 0.804, recall 0.744, F1-score 0.770, and IoU score 0.630), achieving state-of-the-art results. Finally, through careful qualitative analysis, we show that the model addresses discontinuities, detects small non-crack regions, handles low-quality images, and detects crack contours more accurately than existing models.
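The combined binary cross-entropy and Dice loss mentioned above is a standard formulation and can be sketched directly; the equal weighting and smoothing term here are illustrative assumptions, not the paper's exact settings.

```python
import math

# Combined BCE + Dice loss on flattened predictions; bce_weight and smooth
# are illustrative assumptions, not the paper's tuned values.
def bce_dice_loss(pred, target, smooth=1.0, bce_weight=0.5):
    """pred: per-pixel probabilities in (0, 1); target: binary labels."""
    eps = 1e-7
    bce = -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
               for p, t in zip(pred, target)) / len(pred)
    inter = sum(p * t for p, t in zip(pred, target))
    dice = 1 - (2 * inter + smooth) / (sum(pred) + sum(target) + smooth)
    return bce_weight * bce + (1 - bce_weight) * dice

good = bce_dice_loss([0.999, 0.001], [1, 0])  # near-perfect prediction
bad = bce_dice_loss([0.001, 0.999], [1, 0])   # inverted prediction
```

The BCE term drives per-pixel correctness while the Dice term counteracts the heavy class imbalance between thin cracks and background.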
Sir J. C. Bose was the first to demonstrate wireless transmission, with his indigenous setup. His patent for the galena detector and his reports on a few microwave components are well recognized. In this paper, a few of his experiments, somewhat less discussed but recognized by experts as firsts, are listed and described. These include his detector as the first IR detector, the first experiment on light tunneling, the jute polarizer as the first chiral metamaterial, hysteresis in the I-V curve of the coherer as the first signature of memristor action, and a polarizer with alternating layers of paper and tin foil as the first structure for both the photonic band gap and the superlattice. The relevance of his work to devices in current electronics, photonics, and information technology is pointed out. Comments by experts in these areas are also included.
In developing digital twins for power electronics converters and other power system components, selecting an appropriate representation type and level of abstraction is fundamental. The choice of representation should balance fidelity, computational cost, and the objectives of the representation. Digital twins are generally given a single, specific representation task; however, a variety of functions can be delegated to the digital twin, leaving room for ambiguity in its design. Digital twins can instead be designed with multi-domain and multi-functional capabilities, allowing them to adapt to diverse system domains and perform a variety of representation tasks. This approach allows the digital twin to be as specialized as the physical asset it serves. This study introduces a framework enabling the development of multi-domain, multi-functional digital twins, adaptable for use in various representation tasks. The framework utilizes a collection of digital images for an accurate depiction of different asset elements, ensuring a detailed yet unified digital twin. It is designed to analyze the assigned representation task and select the most suitable digital image for execution. Details of the framework's development are provided, and experimental results validate its effectiveness.
Challenges exist in learning and understanding religions, such as the complexity and depth of religious doctrines and teachings. Chatbots acting as question-answering systems can help address these challenges. LLM-based chatbots use NLP techniques to establish connections between topics and accurately respond to complex questions, which makes them well suited to serving as question-answering chatbots on religion. However, LLMs also tend to generate false information, known as hallucination, and chatbot responses can include content that insults personal religious beliefs, stokes interfaith conflict, or touches controversial or sensitive topics. A chatbot must avoid such cases without promoting hate speech or offending certain groups of people or their beliefs. This study uses a vector database-based Retrieval Augmented Generation (RAG) approach to enhance the accuracy and transparency of LLMs. Our question-answering system is called “MufassirQAS”. We created a database consisting of several open-access books that include Turkish context, containing Turkish translations and interpretations of Islam. This database is utilized to answer religion-related questions and to ensure that our answers are trustworthy. The relevant part of the dataset, which the LLM also uses, is presented along with the answer. We have put careful effort into creating system prompts that give instructions to prevent harmful, offensive, or disrespectful responses, to respect people's values, and to provide reliable results. The system answers with additional information, such as the page number in the respective book and the articles referenced for the information. MufassirQAS and ChatGPT were also tested with sensitive questions, and our system performed better. Work on enhancements is still in progress; results and future directions are presented.
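The retrieval step of such a RAG pipeline can be sketched as a similarity search over embedded passages whose metadata (book, page) travels with the answer. The toy vectors and metadata fields below are illustrative assumptions; a real system would use an embedding model and a vector database.

```python
import math

# Minimal sketch of RAG retrieval: rank stored passage vectors by cosine
# similarity to the query vector and return the top passages with their
# source metadata, which the system can cite alongside the LLM's answer.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, store, top_k=2):
    """store: list of (passage_vector, metadata) pairs."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[0]),
                    reverse=True)
    return [meta for _, meta in ranked[:top_k]]

# Toy 2-D "embeddings" standing in for real passage vectors.
store = [
    ([1.0, 0.0], {"book": "A", "page": 1}),
    ([0.0, 1.0], {"book": "B", "page": 2}),
    ([0.9, 0.1], {"book": "A", "page": 3}),
]
hits = retrieve([1.0, 0.0], store)
```

The retrieved passages are then injected into the LLM prompt, grounding the answer and enabling the page-level citations the system returns.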
This work develops a methodology for studying the effect of an offload zone on the ambulance ramping problem using a multi-server, multi-class non-preemptive priority queueing model that can be treated analytically. A prototype model for the ambulance/emergency-department interface is constructed, which is then implemented as a formal discrete event simulation and run as a regenerative steady-state simulation for empirical estimation of the ambulance queue-length and waiting-time distributions. The model is also solved by analytical means for explicit and exact representations of these distributions, which are subsequently tested against simulation results. A number of measures of performance are extracted, including the mean and 90th percentiles of the ambulance queue length and waiting time, as well as the average number of ambulance days lost per month due to offload delay (the offload delay rate). Various easily computable approximations are proposed and tested. In particular, a closed-form, purely algebraic expression that approximates the dependence of the offload delay rate on the capacity of the offload zone is proposed. It can be evaluated directly from model input parameters and is found to be, for all practical purposes, indistinguishable from the exact result.
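A non-preemptive priority queue of the kind used above can be sketched as a small discrete event simulation. This toy version has a single server and two classes (ambulances have priority over walk-ins); all rates are illustrative, not the paper's parameters.

```python
import heapq
import random

# Toy DES of a single-server, two-class non-preemptive priority queue.
# Class 0 (ambulances) is served before class 1 (walk-ins); a service in
# progress is never interrupted. Rates are illustrative only.
def simulate(n_services=20000, lam=(0.3, 0.5), mu=1.0, seed=1):
    rng = random.Random(seed)
    busy_until = 0.0
    queue = []                       # (class, arrival_time); 0 = ambulance
    waits = {0: [], 1: []}
    events = []                      # next arrival per class
    for cls in (0, 1):
        heapq.heappush(events, (rng.expovariate(lam[cls]), cls))
    served = 0
    while served < n_services:
        t, cls = heapq.heappop(events)
        heapq.heappush(events, (t + rng.expovariate(lam[cls]), cls))
        heapq.heappush(queue, (cls, t))
        # Serve queued customers whose service would start before the next
        # arrival; at each start the highest-priority waiting customer wins.
        while queue and max(busy_until, queue[0][1]) <= events[0][0]:
            p, arr = heapq.heappop(queue)
            start = max(busy_until, arr)
            waits[p].append(start - arr)
            busy_until = start + rng.expovariate(mu)
            served += 1
    return {c: sum(w) / len(w) for c, w in waits.items()}

mean_waits = simulate()
```

The paper's model is multi-server and solved exactly; a simulation like this provides the empirical distributions the analytical results are tested against.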
The development of automated and connected driving functions is currently a central objective for vehicle manufacturers. Such functions are generally introduced at different levels of automation with limited operational design domains (ODDs), which are gradually extended. However, a concise and practical description of ODDs has not yet been established. This work aims to provide a suitable and mathematically concise description of the operational design domain and relates the new description to the definitions of related terms that are widely used in the research community. The work follows a top-down approach. Engineering applications for ODD descriptions are introduced that go beyond scenario-based test design, such as ADAS specification, function delimitation, and cooperative, connected mobility. Furthermore, the ODD can serve as an instrument and language for describing system capabilities, forming a fundamental tool for cooperative and collaborative development and operations. A set of requirements on the parameterization of operational design domains is derived, and methods for selecting suitable parameters are presented. The application of these methods is demonstrated with real-world examples. Finally, a discussion of open issues provides starting points for further research.
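A parameterized ODD description reduces, in its simplest form, to named parameter intervals plus a containment check on the current operating condition. The parameter names and ranges below are hypothetical illustrations, not the paper's parameterization.

```python
# Illustrative sketch of an ODD as a set of named parameter ranges, with a
# containment check that tells whether the current operating condition lies
# inside the ODD. Parameter names and bounds are hypothetical examples.
ODD = {
    "speed_kmh": (0.0, 60.0),
    "rain_mm_per_h": (0.0, 2.5),
    "illuminance_lux": (1000.0, 100000.0),  # daylight operation only
}

def inside_odd(condition, odd=ODD):
    """True when every monitored parameter is within its ODD interval."""
    return all(lo <= condition[name] <= hi for name, (lo, hi) in odd.items())

daylight_city = {"speed_kmh": 50.0, "rain_mm_per_h": 0.0,
                 "illuminance_lux": 20000.0}
night_highway = {"speed_kmh": 110.0, "rain_mm_per_h": 0.0,
                 "illuminance_lux": 5.0}
```

Extending the ODD then corresponds to widening intervals or adding parameters, which makes the gradual rollout described above explicit and machine-checkable.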
This study introduces a hierarchical key assignment scheme (HKAS) based on the closest vector problem in an inner product space (CVP-IPS). The proposed scheme offers a comprehensive solution with scalability, flexibility, cost-effectiveness, and high performance. Key features include a CVP-IPS-based construction, the use of only two public keys for the entire scheme, a distinct basis set for each class, a direct access scheme for user convenience, and a rigorous mathematical and algorithmic presentation of dynamic update operations. The scheme eliminates the need for top-down structures and offers a significant benefit: the basis sets defined for the classes all have the same length, and key-derivation costs are identical across classes, unlike top-down approaches, where classes higher in the hierarchy incur much higher costs. The scheme excels in both vertical and horizontal scalability due to its utilization of the access graph and is formally proven to achieve strong key indistinguishability security (S-KI-security). This research represents a significant advancement in HKAS systems, providing tangible benefits and improved security for a wide range of use cases.
Modern apps require high computing resources for real-time data processing, allowing app users (AUs) to access real-time information. Edge computing (EC) provides dynamic computing resources to AUs for real-time data processing. However, edge servers (ESs) in specific areas can only serve a limited number of AUs due to resource and coverage constraints. Hence, the app user allocation problem (AUAP) becomes challenging in the EC environment. In this paper, a quantum-inspired differential evolution algorithm (QDE-UA) is proposed for efficient user allocation in the EC environment. The quantum vector is designed to provide a complete solution to the AUAP. The fitness function considers factors such as the minimum number of ESs required, the user allocation rate (UAR), energy consumption, and load balance. Extensive simulations, together with hypothesis-based statistical analyses (ANOVA and the Friedman test), show the significance of the proposed QDE-UA. The results indicate that QDE-UA outperforms existing strategies, with an average UAR improvement of 116.63%, a 77.35% reduction in energy consumption, and a 46.22% enhancement in load balance while utilizing 13.98% fewer ESs.
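The "quantum vector" idea in quantum-inspired evolutionary algorithms can be sketched as follows: each gene holds a probability amplitude, and observing the vector collapses it to a concrete binary allocation. The angle encoding below is a common textbook variant and is illustrative, not the paper's exact operator.

```python
import math
import random

# Sketch of observing a quantum vector in quantum-inspired differential
# evolution: gene i carries an angle theta_i, and the probability of bit i
# collapsing to 1 is sin(theta_i)^2. Encoding details are illustrative.
def observe(qvector, rng):
    """Collapse a vector of angles into a binary candidate solution."""
    return [1 if rng.random() < math.sin(theta) ** 2 else 0
            for theta in qvector]

rng = random.Random(0)
# First four genes are certainly 1 (theta = pi/2); last four certainly 0.
qv = [math.pi / 2] * 4 + [0.0] * 4
solution = observe(qv, rng)
```

In a user-allocation setting each bit could indicate, say, whether a given ES is activated; differential-evolution-style updates then rotate the angles toward better-scoring observations.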
Pyramid Temporal Hierarchy Network (PTH-Net) is a new paradigm for dynamic facial expression recognition that is applied directly to raw videos, without face detection and alignment (FDA). The traditional paradigm first employs FDA to extract facial regions from raw videos before recognition. Its advantage lies in minimizing the impact of complex backgrounds, but it inadvertently discards valuable information, such as body movements, and being bound to FDA sacrifices flexibility. In contrast, PTH-Net distinguishes background and target at the feature level, preserves more critical information, and is a more flexible end-to-end network. Specifically, PTH-Net utilizes a pre-trained backbone to extract multiple generic video-understanding features at various temporal frequencies, forming pyramid features. Subsequently, through temporal hierarchy refinement, achieved via differential sharing and downsampling, PTH-Net refines key information under the supervision of multiple receptive fields with the temporal-frequency invariance of expressions. In addition, to address the fact that videos contain numerous irrelevant frames, PTH-Net incorporates a Temporal Hierarchy Refinement layer that aggregates information at different temporal granularities, enhancing its ability to distinguish target from non-target expressions. Notably, PTH-Net achieves a more comprehensive and in-depth understanding by merging knowledge from both forward and reverse video sequences. PTH-Net excels across six challenging benchmarks at lower computational cost than preceding methods.
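The pyramid-of-temporal-frequencies idea can be sketched as repeated temporal downsampling of a feature sequence. The average pooling below is an illustrative stand-in for the backbone features and refinement operators described in the abstract.

```python
# Sketch of building pyramid features by temporal downsampling: a 1-D
# per-frame feature sequence is average-pooled in pairs at each level,
# yielding successively coarser temporal frequencies.
def temporal_pyramid(seq, levels=3):
    """Return [level0, level1, ...] where each level halves the length."""
    pyramid = [seq]
    for _ in range(levels - 1):
        cur = pyramid[-1]
        pooled = [(cur[i] + cur[i + 1]) / 2
                  for i in range(0, len(cur) - 1, 2)]
        pyramid.append(pooled)
    return pyramid

pyr = temporal_pyramid([1, 2, 3, 4, 5, 6, 7, 8])
```

Coarser levels summarize slower dynamics while fine levels keep frame-rate detail, which is what lets a hierarchy supervise multiple temporal receptive fields at once.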