5/16/2023

Chicago doppler radar

This paper uses millimeter-wave radar to recognize gestures in four different scene domains: the experimental environment, the experimental location, the experimental direction, and the experimental personnel. The experiments use part of the data from a scene domain as the training set, with the remaining data held out as a validation set to validate the training results. In this way, after obtaining the original gesture data in different scene domains, gesture recognition results from known scenes can be extended to unknown scenes. Three kinds of hand gesture features independent of the scene domain are extracted: the range-time spectrum, the range-Doppler spectrum, and the range-angle spectrum. These are then fused to represent a complete and comprehensive gesture action, and the gesture is trained and recognized using a three-dimensional convolutional neural network (3D CNN) model. Experimental results show that the 3D CNN can fuse the different gesture feature sets: the average recognition rate of the fused gesture features is 87% in the same scene domain and 83.1% in an unknown scene domain, which verifies the feasibility of gesture recognition across scene domains.

With the development of millimeter-wave radar technology, millimeter-wave sensing has been applied in an increasing number of production environments. In autonomous driving, for example, millimeter-wave sensing is used for vehicle trajectory positioning and tracking and for vehicle situational awareness, where its detection capability is significantly more robust than that of other sensing technologies. Millimeter-wave radar also plays a role in human gait recognition and in the detection of vital signs (respiration and heartbeat).

Gestures are an essential tool for human-computer interaction and a significant field in wireless signal perception; they are used, for example, in applications and video games. Existing gesture recognition research is based on wearable sensor devices, on wireless communication signals (Wi-Fi, RFID), or on computer vision methods that collect data with optical and depth cameras. Wearable sensors mainly capture position and spatial state information during finger movement and analyze this information to recognize gestures; wearable devices can also study whole-body behavior and posture. Wi-Fi signals are typically characterized by Received Signal Strength Indication (RSSI) or Channel State Information (CSI), which can be combined with machine learning algorithms to recognize gestures. Computer vision methods collect skeleton data during gesture movement and use the skin color, contour, texture, and other information of the hand to represent the specific movement process before recognizing the gesture.

These three methods are the mainstream approaches to gesture research, but each has apparent deficiencies. The data collected by computer vision methods depend heavily on light: if the light is dim, the collected data are incomplete and the gesture cannot be accurately recognized. With Wi-Fi signals, the multipath effect seriously affects the independence of gestures, and extracting gestures from a large number of reflected signals is a huge challenge. The most significant disadvantage of using wearables is the very high cost of deployment.
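The range-Doppler spectrum mentioned above is conventionally obtained from FMCW radar echoes by two Fourier transforms: one along fast time (within a chirp) for range, one along slow time (across chirps) for Doppler. The paper does not give its processing parameters, so the frame shape and values below are hypothetical; this is only a minimal sketch of the standard two-FFT pipeline:

```python
import numpy as np

# Hypothetical radar frame: 64 chirps x 128 fast-time samples per chirp
# (random data standing in for real IF samples).
rng = np.random.default_rng(0)
frame = rng.standard_normal((64, 128))

# Range FFT along fast time: each chirp becomes a range profile.
range_profile = np.fft.fft(frame, axis=1)

# Doppler FFT along slow time (chirp index), shifted so zero Doppler
# sits in the middle of the axis.
range_doppler = np.fft.fftshift(np.fft.fft(range_profile, axis=0), axes=0)

# Log-magnitude map: Doppler bins x range bins.
rd_map = 20 * np.log10(np.abs(range_doppler) + 1e-12)
print(rd_map.shape)  # (64, 128)
```

The range-time spectrum comes from stacking the per-chirp range profiles over time, and the range-angle spectrum from an additional FFT across receive antennas; both reuse the same pattern.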
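The fusion and evaluation setup described above can be sketched as follows: stack the three spectra as channels of one tensor in the usual 3D CNN input layout (samples, channels, frames, height, width), then hold out one scene domain for validation. All shapes and domain labels here are hypothetical placeholders, not the paper's actual data:

```python
import numpy as np

# Hypothetical dataset: 8 gesture samples, each with three 32x32 feature
# maps (range-time, range-Doppler, range-angle) over 16 time frames.
rng = np.random.default_rng(1)
names = ("range_time", "range_doppler", "range_angle")
spectra = {n: rng.standard_normal((8, 16, 32, 32)) for n in names}

# Fuse: stack the three spectra as channels -> (N, C, D, H, W),
# the input layout a 3D CNN expects.
fused = np.stack([spectra[n] for n in names], axis=1)
print(fused.shape)  # (8, 3, 16, 32, 32)

# Cross-scene-domain split: train on three domains, validate on the
# held-out fourth (domain labels per sample are made up here).
domains = np.array([0, 0, 1, 1, 2, 2, 3, 3])
held_out = 3
train = fused[domains != held_out]
val = fused[domains == held_out]
print(train.shape[0], val.shape[0])  # 6 2
```

Validating on a domain never seen in training is what distinguishes the 83.1% unknown-domain figure from the 87% same-domain figure.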