At weekends we post our most popular previous posts – in case you missed them first time round!
It’s that time again! A bunch of really eager computer scientists have a prototype that will translate sign language to speech! They’ve got a really cool video that you just gotta see!
They win an award! (from a panel that includes no signers or linguists). Technology news sites go wild! (without interviewing any linguists, and sometimes without even interviewing any deaf people).
…and we computational sign linguists, who have been through this over and over, every year or two, just *facepalm*.
The latest strain of viral computational sign linguistics hype comes from the University of Washington, where two hearing undergrads have put together a system that … supposedly recognizes isolated hand gestures in citation form. But you can see the potential! *facepalm*.
Twelve years ago, after already having a few of these *facepalm* moments, I wrote up a summary of the challenges facing any computational sign linguistics project and published it as part of a paper on my sign language synthesis prototype.
But since most people don’t have a subscription to the journal it appeared in, I’ve put together a quick summary of Ten Reasons why sign-to-speech is not going to be practical any time soon.
1. Sign languages are languages. They’re different from spoken languages. Yes, that means that if you think of a place where there’s a sign language and a spoken language, they’re going to be different. More different than English and Chinese.
2. We can’t do this for spoken languages. You know that app where you can speak English into it and out comes fluent Pashto? No? That’s because it doesn’t exist. The Army has wanted an app like that for decades, and they’ve been funding it up the wazoo, and it’s still not here. Sign languages are at least ten times harder.
3. It’s complicated. Computers aren’t great with natural language at all, but they’re better with written language than spoken language. For that reason, people have broken the speech-to-speech translation task down into three steps: speech-to-text, machine translation, and text-to-speech. (There’s a rough sketch of this pipeline after the list.)
4. Speech to text is hard. When you call a company and get a message saying “press or say the number after the tone,” do you press or say? I bet you don’t even call if you can get to their website, because speech to text suuucks:
    - Say “yes” or “no” after the tone.
    - No.
    - I think you said, “Go!” Is that correct?
    - No.
    - My mistake. Please try again.
    - No.
    - I think you said, “I love cheese.” Is that correct?
    - Operator!
5. There is no text. A lot of people think that text for a sign language is the same as the spoken language, but if you think about point 1 you’ll realize that that can’t possibly be true. Well, why don’t people write sign languages? I believe it can be done, and lots of people have tried, but for some reason it never seems to catch on. It might just be the classifier predicates.
6. Sign recognition is hard. There’s a lot that linguists don’t know about sign languages as it is. Computers can’t even get reliable signs from people wearing gloves, never mind video feeds. This may be better than gloves, but it doesn’t do anything with facial or body gestures.
7. Machine translation is hard, even going from one written language (i.e. the written version of a spoken language) to another. Different words, different meanings, different word order. You can’t just look up words in a dictionary and string them together. Google Translate is only moderately decent because it’s throwing massive statistical computing power at the input – and that only works for languages with a huge corpus of text available.
8. Sign to spoken translation is really hard. Remember how in #5 I mentioned that there is no text for sign languages? No text, no huge corpus, no machine translation. I tried making a rule-based translation system, and as soon as I realized how humongous the task of translating classifier predicates was, I backed off. Matt Huenerfauth has been trying (PDF), but he knows how big a job it is.
9. Sign synthesis is hard. Okay, that’s probably the easiest problem of them all. I built a prototype sign synthesis system in 1997, I’ve improved it, and other people have built even better ones since.
10. What is this for, anyway? Oh yeah, why are we doing this? So that Deaf people can carry a device with a camera around, and every time they want to talk to a hearing person they have to mount it on something, stand in a well-lighted area and sign into it? Or maybe someday have special clothing that can recognize their hand gestures, but nothing for their facial gestures? I’m sure that’s so much better than decent funding for interpreters, or teaching more people to sign, or hiring more fluent signers in key positions where Deaf people need the best customer service.
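To make reason 3 a little more concrete, here is a minimal Python sketch of the conventional three-step pipeline and of what a sign-to-speech version would be missing. Every function here is a hypothetical placeholder rather than a real library; the stubs just mark where each of the hard problems above lives.

```python
# Minimal sketch of the three-step speech-to-speech pipeline from reason 3.
# All function names are hypothetical placeholders, not real libraries.

def speech_to_text(audio: bytes, lang: str) -> str:
    """Step 1: automatic speech recognition (reason 4: hard, often wrong)."""
    raise NotImplementedError("ASR is unreliable even for major spoken languages")

def translate_text(text: str, source_lang: str, target_lang: str) -> str:
    """Step 2: machine translation (reason 7: needs a huge parallel corpus)."""
    raise NotImplementedError("statistical MT needs text corpora that sign languages lack")

def text_to_speech(text: str, lang: str) -> bytes:
    """Step 3: speech synthesis (the comparatively easy part)."""
    raise NotImplementedError("the easiest step, and still not trivial")

def speech_to_speech(audio: bytes, source_lang: str, target_lang: str) -> bytes:
    """Chain the three steps; each stage compounds the errors of the one before it."""
    text = speech_to_text(audio, source_lang)
    translated = translate_text(text, source_lang, target_lang)
    return text_to_speech(translated, target_lang)

# A sign-to-speech analogue would need sign recognition in place of step 1
# (reason 6) and sign-to-spoken translation in place of step 2 (reason 8),
# plus something that does not exist at all: a written form of the sign
# language to pass between the stages (reason 5).
```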
So I’m asking all you computer scientists out there who don’t know anything about sign languages, especially anyone who might be in a position to fund something like this or give out one of these gee-whiz awards: Just stop. Take a minute. Step back from the tech-bling. Unplug your messiah complex. Realize that you might not be the best person to decide whether or not this is a good idea. Ask a linguist. And please, ask a Deaf person!
Note: I originally wrote this post in November 2013, in response to an article about a prototype using Microsoft Kinect. I never posted it. Now I’ve seen at least three more, and I feel like I have to post this. I didn’t have to change much.
This has been shared by kind permission from Angus’s blog.
Angus Grieve-Smith is a hearing linguist and programmer. He created one of the early sign language synthesis prototypes in 1997, and has recently been working on information extraction projects in New York University’s Computer Science Department. He teaches Linguistics at Saint John’s University.
Hartmut Teuber
September 16, 2017
One has to admire the capacity of the human brain! To date, even the most powerful computer cannot simulate the full extent of the human brain, not even IBM’s Deep Blue, the chess computer that defeated grandmaster Garry Kasparov. A computer would need to do the same as what happens inside the small space of the human skull, which is filled with a finite, though huge, number of nerve cells.
Just fathom how it could store such a huge repository of visual x-y-z bit arrays that change within milliseconds (or bytes, if in color) and of auditory frequency-intensity data at the lowest, phonetic level of language, extract from it a set of elementary building blocks by some categorization principle, and then build a lexicon of the language. With these words or signs you store their usage semantics. Then you incorporate the grammar or rule system of the languages you know, which often appears vague and contradictory to expert observers and sometimes even to native speakers.
Language as it is written is much easier for a computer to handle. It is not a recognition task: the computer can skip processing a huge database of visual and auditory data to recognize the elementary units, because the matching of sounds with written symbols has already been determined and passed on through literacy instruction. Yet no machine translation system has ever produced a perfect translation between languages.
Despite all attempts at overcoming the obstacles to simulating natural language in computers, a human brain can still make sense of nonsensical or contradictory input. It would take yet another huge computer, more databases, and powerful artificial intelligence algorithms to figure out fuzzy utterances.