DCAL: New progress in sign-to-text technology (BSL)

Posted on September 13, 2018



How many times have we read about amazing technology that provides automatic translation from English into sign language, or from sign language into English?

Watch this article in BSL below (or scroll down to continue in English!).


“Sign-language gloves” are described as translating sign language into English by sensing the user’s movements. The prize-winning car system Sym:Pony “can even detect sign language and read it out so the driver can stop and ask for directions. The car will translate the answer back into onscreen sign language.”

Angus Grieve-Smith, a linguist and computer programmer, has provided 10 reasons why automated translation between signed and spoken/written language is so difficult.

Although there have been some recent advances in sign language recognition, part of the problem is that most computer scientists in this research area do not have the required in-depth knowledge of sign language, and often have no connection with the Deaf community or sign linguists.

For example, one project billed as translation into sign language aimed to do no more than take subtitles and turn them into fingerspelling. This is one of many reasons why much of this technology, including sign-language gloves, simply doesn’t help deaf people.

Given these difficulties, we might decide to abandon any attempt at automated translation between sign languages like BSL and spoken/written languages like English.

But automated translation would bring many benefits. For example, there are enormous amounts of video data online in BSL and other sign languages that are not searchable.

So for Deaf and hearing signers alike, finding specific content means watching the whole video. Studying sign languages involves analysing video footage, and this is extremely labour-intensive. Wouldn’t it be nice to be able to quickly search signed videos and find the content you’re looking for, the same way we do with English texts?
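As a rough illustration of what “searchable” could mean in practice, here is a minimal Python sketch, assuming a hypothetical tab-separated annotation file that lists the start time, end time and gloss of each sign in a video. Real corpora use richer formats (the BSL Corpus, for instance, is annotated in ELAN), but the idea is the same: once signs are transcribed with timestamps, finding them becomes a simple lookup.

```python
# A minimal sketch of searching time-aligned sign annotations, assuming a
# hypothetical tab-separated file with one row per sign token:
#   start_seconds<TAB>end_seconds<TAB>gloss
import csv

def find_sign(annotation_path: str, gloss: str) -> list[tuple[float, float]]:
    """Return the (start, end) times of every occurrence of `gloss` in the file."""
    hits = []
    with open(annotation_path, newline="", encoding="utf-8") as f:
        for start, end, token in csv.reader(f, delimiter="\t"):
            if token.upper() == gloss.upper():
                hits.append((float(start), float(end)))
    return hits

# Example: list the moments where the (hypothetical) gloss "UNIVERSITY" is signed,
# so a video player could jump straight to them.
for start, end in find_sign("bsl_clip_annotations.tsv", "UNIVERSITY"):
    print(f"'UNIVERSITY' signed from {start:.1f}s to {end:.1f}s")
```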

Until recently we have lacked large signed video datasets that have been precisely and consistently transcribed and translated – these are needed to train computers for automation. But sign language corpora – large datasets like the British Sign Language Corpus – bring new possibilities for this technology.
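To make “training data” concrete, the sketch below shows the kind of record such a corpus can provide: a video segment paired with its sign-by-sign transcription and a free English translation. The field names and example values are purely illustrative, not the actual BSL Corpus schema.

```python
# An illustrative sketch of a single training example for a sign-to-text system:
# a clip of continuous signing, its gloss transcription, and an English translation.
from dataclasses import dataclass

@dataclass
class SignedUtterance:
    video_file: str        # path to the clip of continuous signing
    start_seconds: float   # where the utterance begins in the clip
    end_seconds: float     # where it ends
    glosses: list[str]     # sign-by-sign transcription
    translation: str       # free English translation of the utterance

# Thousands of consistently transcribed examples like this (values here are made up)
# are what allow a machine-learning model to learn the mapping from signing to English.
example = SignedUtterance(
    video_file="corpus/clip_0042.mp4",
    start_seconds=12.3,
    end_seconds=15.1,
    glosses=["YESTERDAY", "ME", "GO", "LONDON"],
    translation="I went to London yesterday.",
)
print(example.translation)
```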

What is needed is a strategic collaboration between BSL linguists, the Deaf community, and software engineers who specialise in computer vision and machine learning.

The Engineering and Physical Sciences Research Council has recently funded just such a project – ExTOL (End to End Translation of British Sign Language) – a collaboration between the University of Surrey, the University of Oxford, and DCAL (the Deafness Cognition and Language Research Centre at University College London), with the aim of building the world’s first British Sign Language to English translation system and the first practically functional machine translation system for any sign language.

The work already done by the researchers at DCAL on transcription and analysis of the BSL Corpus will provide the essential data to be used by computer vision tools to assist with video analysis.

This will in turn help linguists increase their knowledge of the language, with the long-term ambition of creating the world’s first machine-readable dataset of a sign language.

To achieve this, the computer must be able to recognise not only the shape, motion and location of the hands, but also the facial expressions, mouth movements, and body posture of the signer.

It must also understand how all of this activity in connected signing can be translated into written/spoken language.

The technology for recognising hand, face and body positions and movements is improving all the time, and we know we can make significant progress in speeding up automatic recognition and identification of these elements (e.g. recognising when someone is using their right or left hand, or recognising specific facial expressions or mouth movements).
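As an illustration of how accessible some of these building blocks have become, the short sketch below uses the off-the-shelf MediaPipe Hands library to detect hand landmarks in a single video frame and report whether each detected hand is the left or the right one. This is not the ExTOL pipeline, just an example of the kind of component such a system might build on; the image path is hypothetical.

```python
# A minimal sketch of hand detection and left/right classification on one frame,
# using OpenCV and MediaPipe Hands (pip install opencv-python mediapipe).
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

frame = cv2.imread("signer_frame.jpg")  # illustrative path to a frame of signing
if frame is None:
    raise SystemExit("Could not read the example frame.")
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # MediaPipe expects RGB input

with mp_hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
    results = hands.process(rgb)

if results.multi_hand_landmarks:
    for landmarks, handedness in zip(results.multi_hand_landmarks,
                                     results.multi_handedness):
        # Note: MediaPipe assigns handedness assuming a mirrored (selfie-view)
        # image, so labels may need swapping for a standard camera view.
        label = handedness.classification[0].label   # "Left" or "Right"
        score = handedness.classification[0].score
        wrist = landmarks.landmark[mp_hands.HandLandmark.WRIST]
        print(f"{label} hand (confidence {score:.2f}), "
              f"wrist at x={wrist.x:.2f}, y={wrist.y:.2f}")
else:
    print("No hands detected in this frame.")
```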

Full translation from BSL to English is, of course, more complex, but the automatic recognition of basic positions and movements will help a lot.

The ultimate goal of the project is to take the annotated data and insights from linguistic study and use them to build a system capable of watching a human signing and turning this into written English.

This will be a world first and will be one of the first practically useful applications of this technology, based on the principle that sign language technology will only ever progress into something useful when sign linguists and Deaf signers are actively involved.

