Screen translation – dubbing, voiceover and lip-sync
In 1927 a new era in film history began; although silent movies continued to be made until the early 1930s, the talkies quickly became the norm. Dubbing and subsequently subtitling rapidly grew in importance in Europe, although Hollywood initially responded to the language problem by reshooting its films in several languages using foreign actors.
Screen translation is currently the preferred term for the translation of a wide variety of audio-visual texts displayed on one kind of screen or another. While it is normally associated with subtitling and lip-sync for TV or cinema material, its range is much greater, covering TV programmes, videos, films, DVDs, operas and plays. Other terms sometimes used include media translation, language versioning and audio-visual translation, although the first of these can also cover print media or radio, while the last also covers the simultaneous interpreting of films at film festivals.
Revoicing is the term used to describe the various means of rendering a translated voice track, namely lip-sync dubbing, voiceover, narration and free commentary, while subtitling and surtitling describe the main means of rendering the voice track in written form.
Dubbing is generally taken to refer to the preparation and recording of the target-language voice track. The strict meaning of the term is simply the laying down of a voice track, which is not necessarily a translated version.
Whitman-Linsen distinguishes between:
1) Pre-synchronization (e.g. using pre-recorded music or lyrics from Broadway musicals on the soundtrack of filmed versions)
2) Direct sync (e.g. when voice and picture are recorded simultaneously)
3) Post sync, which is the most common dubbing procedure and involves the addition of sounds after the visual images have been shot.
Voiceover is often used to translate monologues or interviews.
Narration is basically an extended voiceover. The term screen translation may seem to suggest that the process involves translation between two languages, but this is not always the case where subtitles are concerned.
Subtitles may be either inter- or intra-lingual. Intra-lingual subtitling is associated with subtitles for the deaf or hard of hearing, including real-time subtitles created and broadcast just seconds after the words on which they are based have been spoken live on screen.
Subtitles of this kind can also be used to carry inter-lingual translations when foreign-language films are broadcast.
Intra-lingual subtitles may be accessed on an optional basis; as well as assisting the deaf, they can also benefit other minorities, such as immigrants, refugees, foreign students and others with literacy problems, who may use them to improve their language skills.
The provision of closed, optional subtitles on TV became possible in the 1970s thanks to the advent of teletext technology, whereby subtitles could be broadcast encoded in the transmission signal and then selected by viewers with a teletext-equipped TV set and decoder.
Subtitles are open if the viewer can’t remove them from the screen.
Open inter-lingual subtitles are used on many foreign-language videos, as subtitling usually proves a much cheaper option than dubbing.
Fodor extended the concept of lip-sync from its conventional meaning to a triad of synchronies:
1) Phonetic synchrony (matching sounds and lip movements)
2) Character synchrony (matching the dubbing voice, in timbre and tempo, to the on-screen character)
3) Content synchrony (matching the semantic content of the original and dubbed script versions closely)
Whitman-Linsen developed a more elaborate alternative model of dubbing synchrony. She suggests that the general concept of dubbing synchrony be subdivided into visual/optical synchrony, audio/acoustic synchrony and content synchrony.
Visual/optical synchrony is then broken down into lip synchrony proper, syllable synchrony and kinetic synchrony.
Audio/acoustic synchrony covers tone, timbre, intonation and tempo (prosodic elements) as well as cultural specifics (regional accents and dialects).
Content synchrony is understood to cover all the translational challenges involved in the dubbing process.
Whenever politics is viewed as a struggle for power or as the political institutions and practices of a state, the associated social interactions are kinds of linguistic action and types of discourse, e.g. parliamentary debates, written constitutions, etc.
All these types of discourse have specific characteristic features and fulfill specific communicative functions, such as persuasion, rational argument, threats and promises. Politics and language are thus closely related, and the focus is on social, cultural and communicative practices.
This also means that the following questions are being asked:
* Who decides which texts get translated, and from and into which languages?
* Where are the translations produced?
* Which factors determine the translator’s behavior?
* What is the status of translations, translating and translators in the respective cultures and systems?
* Who chooses and trains translators, how many, and for which language combinations?
All these questions are related to politics: any decision to encourage, allow or prevent translation is a political decision.
Translators perform their work in socio-political contexts and environments. In this respect, Lefevere's concept of patronage, which he developed in his investigation of the role of power and ideology behind the production of translations, is of relevance.
Patronage has three components:
1) An ideological component, which refers to the fact that literature should not be allowed to get too far out of step with the other systems of a given society.
2) An economic component, which refers to the fact that a patron assures the writer's livelihood by providing payment and similar support.
3) A status component, related to the writer's position in society.

Translation as both a product and a process can highlight socio-cultural and political practices, norms and constraints that are relevant in political discourse.