

AIIC UK & Ireland News

08/01/2021
The fourth webinar in the series "Artificial Intelligence and the Interpreter"
AIIC UK&I Bureau

The fourth instalment in AIIC UK & Ireland’s popular “Artificial Intelligence and the Interpreter” webinar series took place on Friday 8 January. Its theme, “How Soon Is Now?”, attracted a large, global audience, all keen to discover the capabilities of two ambitious automated speech translation devices on the market today.
 
In her introduction, Monika Kokoszycka, the chair of AIIC UK & Ireland, referred to the English author and screenwriter Douglas Adams’s popular series The Hitchhiker’s Guide to the Galaxy, which featured the Babel fish, a small, bright yellow fish that could be placed in someone’s ear to let them hear any language translated into their first language. As we were to learn, this science fiction concept has now edged a little closer to reality thanks to the efforts of companies like Waverly Labs and TranslateLive, our guests at the webinar. Monika also emphasised that the inclusion of third-party providers in AIIC events is for the information of participants only. Such providers are not related in any way to AIIC, its Regions, Groups or Committees, and AIIC neither endorses nor approves any product or service they provide.
 
The first speaker was Peter Hayes, the CEO of TranslateLive, who explained the origins and applications of his company’s “Instant Language Assistant” (ILA) and gave a live demonstration of how it works. ILA devices were originally built to improve accessibility, allowing deaf, hard-of-hearing, blind and deafblind users to communicate by transcribing their conversation. This transcription capability has since been enhanced with machine translation and text-to-speech, so the device now also enables speakers to communicate in more than 120 languages and accents. The “interpretation” is consecutive, and users must configure the languages before they start, so that the device knows what to “listen for”. Attendees were given a code to access a website and try out the system for themselves.
 
A second presentation was given by Sergio Del Río, Co-Founder and VP of Product at Waverly Labs, who demonstrated the capabilities of the “Ambassador”, an over-the-ear earpiece worn by the user, which captures speech and provides a translated version, enabling users to have a conversation in two different languages or follow a lecture in a foreign language. Once again, the “interpretation” is consecutive, and the languages used must be configured beforehand. Sergio demonstrated the product by speaking his native Mexican Spanish and letting us listen to the English translation.
 
The two demonstrations prompted a multitude of questions from attendees, mainly focusing on the specific capabilities of the devices and their potential to improve. Both Peter and Sergio revealed that they purchase data to feed into their devices’ memories, increasing the number of topics they can cover. Peter also uses professional translators to translate common phrases from specific fields, such as those used by coastguards, to ensure that the translations in those language pairs are absolutely accurate. Peter’s reference to having “live interpreters” on standby to take over from the ILA if users wished prompted a number of questions about the interpreters’ working conditions and how exactly that system worked, although it was clear that the interpreters concerned were public service interpreters rather than conference interpreters. Both Peter and Sergio said that it was important for the device to know which language and dialect it was listening for to avoid confusion, particularly when languages were very close to each other, e.g. Italian and Spanish. An attendee from Montreal pointed out that English conversation in French-speaking Canada was littered with French place names and words, which both Peter and Sergio admitted their devices would find more difficult to cope with. Finally, Sergio’s reference to research into animal language sparked a great deal of interest, particularly from pet owners keen to question their cats about their exploits.
 
The discussion revealed some interesting parallels between the obstacles faced by automated speech translation devices and those faced by human interpreters. For the developers, the biggest challenge is speech recognition, largely because of the myriad accents speakers use, coupled with pronunciation and grammatical errors and incomplete sentences. This struck a definite chord with the many conference interpreters in the audience! Intriguingly, the developers suggested that speakers would need to modify the way they speak to get the most out of the translation devices, speaking clearly and in full sentences.
 
The session closed with the results of a poll asking participants in which circumstances they would consider using a portable machine speech translator (as opposed to a human interpreter). While a majority of participants would consider using one in situations in which human interpreters would not normally be used (while travelling, 91%; in a hotel or restaurant, 79%; while sightseeing, 63%), the numbers dropped dramatically for situations where human interpreters would be hired (medical appointments, 30%; business meetings, 9%; political meetings, 4%; political negotiations, 2%).
 
As they stand, these devices would appear to have limited application in the conference interpreting setting for the time being, but that is one of the vital questions to be addressed at next week’s webinar.
The AIIC UK & Ireland organising team - Françoise Comte, Louise Jarvis, Monika Kokoszycka, Stefanie MacDonald, Deborah Muylle and Monica Robiglio - would like to thank the two speakers for speaking so openly about their products and sharing their thoughts with us.

We hope you will join us for the final instalment of the webinar series on 15 January 2021: "Under Pressure. AI and Interpreting in the Future: Disruption, Perception and Reality". Registration is open. 

Some footage from previous webinars in the series can already be found on AIIC's YouTube channel (more to be added soon).