Mental Workout

Take a load off: How do interpreters cope with cognitive load?


07 July 2021   
Part 4 of A Journey into the Interpreter's Mind

Speaking and listening at the same time comes at a cost, meaning that simultaneous interpreters have to allocate their limited cognitive resources efficiently. Even then, they have to rely on processing shortcuts to cope with the complexity of the task. But how do these shortcuts work? And what pushes interpreters to, and beyond, their limits?


In our quest to better understand what goes on in the interpreter's mind, we last saw how particular features of a language can gobble up many of the limited cognitive resources we have at our disposal, making that language more challenging to process. We also saw how differences - and sometimes even similarities - between languages can contribute to making their real-time translation a demanding endeavor. Professionals might argue, however, that these challenges are par for the course for any conference interpreter. The real challenge inherent to simultaneous interpreting might often have less to do with the language used than with the way in which it is presented. Factors such as the speed at which speakers talk and the proficiency with which they speak will shape the load placed on the system.


As we have seen, simultaneous interpreters engage in the relatively unnatural task of listening while speaking. In other words, they constantly speak and listen in adverse conditions: they have to talk over someone else, and they have to listen to someone whilst not getting distracted by their own output. The resources required to understand incoming speech might therefore compete with those required to produce outgoing speech, causing the two tasks to interfere with each other. And yet, there is ample evidence, from professional musicians to circus clowns, that practicing a combination of different tasks, also known as dual-tasking or multitasking, enhances performance.




From a cognitive perspective, tasks can be combined thanks to different mechanisms, including the rapid shifting of attention between them and the automation of certain processes. Shifting attention means that interpreters will regularly reroute resources away from listening, no longer attentively processing each word uttered by a speaker. They are then tasked with filling in the missing parts - something our brain does rather well. Similarly, at times, interpreters will divert resources away from speaking, no longer closely monitoring their own output. In principle, then, when speaking and listening are combined, something has got to give.


This is where automation comes into play. While deliberate tasks very quickly deplete our mental resources, automated tasks have been shown to generate much less load. This is how set phrases, for example, can be understood or produced with fewer resources than phrases that have to be analyzed in full - so long as they have been memorized and are recognized.


The same principle probably applies to recurring syntactic structures. But does that mean that simultaneous interpreting is, in actual fact, but an exercise in substituting previously memorized phrases, and that the interpreter, much like Charlie Chaplin in Modern Times, repeats the same handful of tasks almost mechanically? Not quite - after all, spoken discourse is much too complex for that. What it does mean, however, is that the more formulaic the input, the more likely it is that the comprehension (and corresponding production) of these stretches of speech will have been automated, allowing the interpreter to - at least temporarily - free up resources.


Charlie Chaplin in Modern Times. Credit: United Artists, Public domain, via Wikimedia Commons

In today’s world of multilateral diplomacy, discourse is indeed rather formulaic, potentially allowing simultaneous interpreters to take such shortcuts. Various social and political phenomena, however, have led to a rise in multilingualism. English, for example, is spoken as a first language by some 350 million people worldwide, but another billion people speak it as a second language. As a consequence, multilingual multilateral diplomacy is often conducted in a second (or third, or fourth) language, relying on English as a vehicular language, or lingua franca, spoken more or less fluently.


As a result, what is sometimes downplayed as 'an accent' (suggesting that some words might be pronounced differently) often represents a much more extensive departure from what might be considered conventional grammar and lexicon. This is an additional challenge for simultaneous interpreters, who, all of a sudden, can no longer rely on a relatively predictable set of norms or rules to facilitate the comprehension process by means of prediction and automation. On the contrary, evidence shows that the brain reacts very quickly and systematically to unexpected stimuli, including the use of non-conventional grammar and words. Constantly having to devote resources to the comprehension of the input, however, reduces interpreters' ability to reallocate them to other tasks.




Another hallmark of modern-day multilateral diplomacy is the speed at which speakers read their statements. The challenge here is not necessarily the speed itself, which in improvised oral discourse tends to be attenuated by natural pauses as well as by the redundant nature of speech. Simultaneously interpreting prepared statements changes the interpreter’s task: rather than creating oral discourse from oral discourse, it becomes one of transforming written text into oral discourse. The two, however, differ considerably in a number of respects, from average density (in other words, the relative number of content words used), to word frequency (the relative occurrence of the words used), to syntax (the complexity of the sentence structures used) and prosody (pauses, intonation and rhythm). Add to that the increase in speed, from about 120 to 150 words per minute for improvised conversation to between 160 and 190 words per minute for statements read in international organizations, and it becomes evident why interpreters are forced to resort to more radical shortcuts when the manuscript is not made available to them beforehand so they can prepare and (at least partially) even the odds. Only by condensing, approximating and at times truncating the input can they keep the process from grinding to a halt.




The constant evolution of technology, however, has not stopped at the booth, and today’s conference interpreters can - and sometimes have to - rely on technology to compensate for their own cognitive limitations. What these technologies look like and how they are changing the interpreter’s workplace, as well as the cognitive demands associated with the job, will be the focus of the next stop on our journey through the interpreter’s mind.

Part 5: Getting ready for the tech revolution in interpreting: lock and load... or just load?


About the Author

Kilian Seeber

Kilian G. Seeber is associate professor and Vice Dean of the University of Geneva’s Faculty of Translation and Interpreting (FTI). He is the program director of the MA in Conference Interpreting (MACI) and the MAS in Interpreter Training (MASIT) as well as PI in the Laboratory for Cognitive Research in Interpreting (LaborInt) and in the Laboratory on Interpreting and Technology (InTTech). Kilian earned a graduate degree in Translation and Interpreting from the University of Vienna (Austria), as well as a postgraduate degree and a PhD in Interpreting from the University of Geneva (Switzerland) before completing his postdoctoral work in psycholinguistics at the University of York (United Kingdom). Kilian’s research interests include cognitive load and multimodal processing during complex language processing tasks, topics on which he has published widely. (kilian.seeber@unige.ch)



