In 2017, Japan’s SoftBank Corporation and Nihon Denki University jointly launched this research, in cooperation with ABEJA, a Japanese AI start-up funded by Google. A deaf person signs in front of a camera; the system then uses image recognition to analyze the skeletal movements of the fingers, arms, and other parts of the body, translates the gestures into Japanese, and displays the text on a staff member’s computer screen. The staff member’s spoken reply in turn appears as text on a screen in front of the hearing-impaired person, making communication smoother.
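The article gives no implementation details beyond image recognition of skeletal movement, so the following Python sketch is only a rough illustration of the pipeline it describes (camera frame → skeletal keypoints → gesture-to-word lookup). It uses the open-source MediaPipe hand tracker for keypoint extraction; the gesture dictionary, the nearest-neighbor lookup, and the distance threshold are hypothetical stand-ins, not SoftBank or ABEJA’s actual method.

```python
# Illustrative sketch only: the article does not disclose the real system's
# model. Keypoints come from the open-source MediaPipe hand tracker; the
# gesture dictionary and nearest-neighbor lookup below are hypothetical.
import cv2                    # pip install opencv-python
import mediapipe as mp        # pip install mediapipe
import numpy as np

mp_hands = mp.solutions.hands

def hand_keypoints(frame_bgr, hands):
    """Return a flat (21*3,) array of hand-landmark coordinates, or None."""
    results = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    lm = results.multi_hand_landmarks[0].landmark  # first detected hand
    return np.array([[p.x, p.y, p.z] for p in lm]).ravel()

# Hypothetical gesture dictionary mapping reference keypoint vectors to
# Japanese words. A real system would instead train a classifier on a
# large sign-language corpus.
GESTURE_DICT = {
    "ありがとう": np.zeros(63),  # placeholder reference vectors
    "こんにちは": np.ones(63),
}

def translate(keypoints, threshold=2.0):
    """Nearest-neighbor lookup: map a keypoint vector to the closest word."""
    word, dist = min(
        ((w, np.linalg.norm(keypoints - ref)) for w, ref in GESTURE_DICT.items()),
        key=lambda pair: pair[1],
    )
    return word if dist < threshold else None

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # camera in front of the signer
    with mp_hands.Hands(static_image_mode=False, max_num_hands=2) as hands:
        ok, frame = cap.read()
        if ok:
            kp = hand_keypoints(frame, hands)
            if kp is not None:
                print(translate(kp))  # would appear on the staff's screen
    cap.release()
```

A production system would replace the dictionary lookup with a model trained on a large sign-language corpus, which is exactly the data the crowdsourcing effort described below is meant to collect.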
Officials said the AI system is “much smoother than written dialogue, because the translation is done instantly.” Still, the system has considerable room for improvement: at present it can accurately translate gestures into only about 1,500 Japanese words. A SoftBank engineer who took part in the development said: “Building a model that accurately translates signs into Japanese requires a large amount of sign-language data.” To improve the system’s accuracy, the mobile communications and Internet service company therefore believes it needs to launch a website and a mobile application asking the public to cooperate, calling on more people to submit sign-language footage to enrich the corpus.
In addition, Hokkaido University and Nippon Telegraph and Telephone Corporation are jointly conducting similar research to develop an AI system for automatic sign-language translation, which aims to improve the communication environment for deaf people in public places such as hospitals, pharmacies, and tourist attractions.
However, the Japan Federation of the Deaf believes it will still be a long time before artificial intelligence becomes an effective tool for deaf people.