Monitoring the status of the driver is a crucial aspect of in-vehicle health monitoring, as it helps to identify potential health or safety risks that could affect a driver's ability to operate a vehicle safely. This includes monitoring for fatigue, distraction, and impairment, among other conditions that can lead to car crashes. Although many solutions for health monitoring in private vehicles have been proposed, most are inconvenient to use or risk leaking private information. Radars can address these drawbacks through their inherent privacy protection and contactless operation, in addition to their high accuracy, convenience, affordable price, and resilience to environmental factors. Among the many possible radar configurations, millimeter-wave FMCW radars can accurately detect range and monitor the small displacements that are essential for breathing pattern monitoring. The breathing pattern is one of the key signatures of the driver's health, and an accurate estimate of it enables the detection of breathing abnormalities, including tachypnea, bradypnea, Biot's respiration, Cheyne–Stokes respiration, and apnea. The breathing pattern can be estimated from both the chest and the abdomen; for this purpose, we employed two 60 GHz FMCW radars. The proposed algorithm detects the mentioned breathing abnormalities through breathing rate (BR) estimation and breath-hold period detection. In addition, the proposed method estimates BR from multiple range bins; we studied the human–radar geometry inside a vehicle to determine the appropriate number of range bins for BR estimation. The experimental results demonstrate a maximum BR error of 1.9 breaths per minute using the proposed multi-bin technique. In addition, the dual-radar fusion system detects breath-hold periods with minimal false detections.
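The core idea of multi-bin BR estimation can be illustrated with a minimal sketch: the slow-time phase signal from several candidate range bins around the target is transformed to the frequency domain, the spectra are combined across bins, and the dominant peak inside a plausible breathing band gives the BR. This is a generic simplification, not the authors' exact algorithm; the sampling rate `fs`, the breathing band, and the magnitude-averaging combination rule are all assumptions made for illustration.

```python
import numpy as np

def estimate_br(phase_signals, fs=20.0, band=(0.1, 0.7)):
    """Estimate breathing rate (breaths/min) from radar phase signals.

    phase_signals : array, shape (n_bins, n_samples)
        Unwrapped slow-time phase from several candidate range bins.
    fs : float
        Slow-time sampling rate in Hz (assumed value).
    band : tuple
        Plausible breathing band in Hz (0.1-0.7 Hz = 6-42 breaths/min).
    """
    n = phase_signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Remove the DC offset of each bin, then average the magnitude
    # spectra across bins (a simple multi-bin combining rule).
    centered = phase_signals - phase_signals.mean(axis=1, keepdims=True)
    spec = np.abs(np.fft.rfft(centered, axis=1)).mean(axis=0)
    # Pick the strongest peak inside the breathing band.
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak_freq = freqs[mask][np.argmax(spec[mask])]
    return 60.0 * peak_freq
```

With a 60 s window at 20 Hz, the spectral resolution is 1 breath per minute, which is consistent with the error level reported above.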
The detection of targets under the ground is an important procedure that is typically performed manually by a human operator. Recent studies have automated this process using artificial intelligence (AI) applied to radar images. Three main steps precede feeding reconstructed radar images to a neural network. The first step is segmentation, which makes the detection task more straightforward; we propose an Otsu-based segmentation algorithm that effectively distinguishes all the targets. In the second step, before employing AI to detect targets, a local sliding window is applied to improve the results: after the image has been reconstructed, the sliding window divides it into smaller parts. In the third step, two different methods are considered for data augmentation. The first is a novel approach for generating synthetic radar data, applied before radar image reconstruction and based on the summation of two receivers' signals with different coefficients. The second applies conventional data augmentation methods such as flipping and rotation. To discriminate targets from the background, the input images must be classified. This task can be accomplished by classical machine learning approaches such as the support vector machine (SVM); Gabor filters are used in this paper to extract the features. We also implement two classification approaches using convolutional neural networks (CNNs) to automatically detect targets after image reconstruction. Without data augmentation, the SVM-based approach outperforms the CNNs, with an accuracy of 86.9%. With data augmentation, the second CNN outperforms the SVM, reaching 96% accuracy.
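The segmentation step builds on Otsu's criterion, which chooses the gray level maximizing the between-class variance of the image histogram. The sketch below is the classic Otsu method on an 8-bit image, not the authors' exact variant:

```python
import numpy as np

def otsu_threshold(img):
    """Return the gray level (0-255) that maximizes the
    between-class variance of an 8-bit grayscale image."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability up to level k
    mu = np.cumsum(p * np.arange(256))   # class-0 cumulative mean
    mu_t = mu[-1]                        # global mean
    # Between-class variance for every candidate threshold k.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)     # ignore empty-class thresholds
    return int(np.argmax(sigma_b))
```

Pixels above the returned threshold would then be kept as candidate target regions before the sliding-window and classification stages.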
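The first augmentation idea, forming a synthetic raw channel from a weighted sum of two receivers' signals before image reconstruction, can be sketched as follows; the mixing coefficient `alpha` and the function name are hypothetical, introduced only for illustration:

```python
import numpy as np

def mix_receivers(rx1, rx2, alpha):
    """Form a synthetic radar channel as a convex combination of two
    receivers' raw signals (applied before image reconstruction).

    alpha : float in (0, 1), hypothetical mixing coefficient; sweeping
    it over several values yields multiple synthetic samples per pair.
    """
    return alpha * np.asarray(rx1) + (1.0 - alpha) * np.asarray(rx2)
```

Each choice of `alpha` produces a distinct plausible channel, so one measured receiver pair can be expanded into many training samples for the classifiers.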