Abstract—Visualization of three-dimensional (3D) medical images is an important tool in surgery, particularly during the operation. However, it is often challenging to review a 3D anatomical model while maintaining a sterile field in the operating room. There is therefore great interest in touchless interaction using hand gestures to reduce the risk of infection during surgery. In this paper, we propose an improved real-time gesture-recognition method based on deep convolutional neural networks that works with a Microsoft Kinect device. A new multi-view RGB-D dataset consisting of 25 hand gestures was constructed for deep learning. The nine gestures with the highest recognition accuracies were selected for the touchless visualization system. A deep network architecture, AlexNet, was used for hand gesture recognition. The recognition accuracy was about 96.5%, substantially higher than that of our previous systems. We further demonstrated that this technique facilitates touchless real-time visualization of hepatic anatomical models during surgery. This system is expected to ultimately lead to better patient outcomes by enhancing the ability to visualize medical images in 3D during surgery.
Index Terms—hand gesture recognition, deep learning technique, surgery aid system
Cite: Jiaqing Liu, Kotaro Furusawa, Tomoko Tateyama, Yutaro Iwamoto, and Yen-wei Chen, "An Improved Kinect-Based Real-Time Gesture Recognition Using Deep Convolutional Neural Networks for Touchless Visualization of Hepatic Anatomical Model," Journal of Image and Graphics, Vol. 7, No. 2, pp. 45-49, June 2019. doi: 10.18178/joig.7.2.45-49