Lars Petersen

New Office, New Perspectives

Sometimes, moving is all it takes. A change of scenery has given me the courage to think bigger!

A year and four months have passed since my arrival in Berlin. Much has happened – professionally as well as personally.

Yet so much remains the same. 

I’m a translator at a large firm, and the work generously supplies me with such an enormous amount of bureaucratic hassle and red tape that I am more in charge of overseeing processes than I am expected to actually translate and write.

My writing is now but a vestige of former glory.

But, aha! This now leaves room for new thinking, and new ideas!

Because now, I am in charge of copywriting on big projects – reaching tens of thousands of people every day.

That I find myself in a new, unexpected place is not necessarily detrimental to my life – no, it is just so.

Because on the horizon, I am starting to see certain elements assembling, suggesting greater things are in store.

On the famed Berlin boulevard Unter den Linden, you will find prestigious institutions such as the French Embassy, the Humboldt University, the German Presidency Office... and me!

Lars Petersen

Saint Jerome Translating the Bible from Hebrew into Latin: Three Major Challenges

Saint Jerome in His Study, by Ghirlandaio, 1480 (Wikimedia Commons, Public Domain)


In the year 382, theologian and translator Saint Jerome (Jerome of Stridon) was commissioned by Pope Damasus I to translate the Bible into Latin – a version of the Bible known as the Vulgate.

Back at the dawn of the world’s literary awakening, long before Gutenberg’s printing press, and before most people could even read and write, the approach to interlingual translation among scholars was a very conservative one, bordering on the rigid and inflexible.

Especially in theological circles, written work was seen as holy, and any translation should maintain the sanctity of the original by employing a very literal, word-for-word approach. At least, this would be the theological explanation for the literal approach, which often leaves the reader struggling to understand and gives the text a clumsy overall appearance – aesthetically as well as content-wise.

Notwithstanding the deeper implications of the scholarly translation trends of his time, Jerome’s approach does mark a new era within translation, pivoting to a sense-for-sense approach. This was frowned upon in the theological world, and it was a groundbreaking shift away from the religiously conservative ways of the learned.

To this day, Saint Jerome’s mode of translation is considered the most viable one – across media formats, and even in today’s theological arena.


Challenge #1

Word-for-word or sense-for-sense?

Marcus Tullius Cicero

"If I render word-for-word, the result will sound uncouth, and if compelled by necessity, I alter anything in the order or wording; I shall seem to have departed from the function of a translator.”

(Khalaf et al., following Bassnett 2002: 51)



A deliberate schism between the two schools of translation (word-for-word and sense-for-sense) was first publicly discussed by Roman statesman and philosopher Marcus Tullius Cicero (106-43 BC).

In Cicero’s work De optimo genere oratorum (The Best Kind of Orator, 46 BC), he put forth the notion that a translator should avoid the word-for-word approach and instead reproduce the “sense” of the original.

"from Cicero's position, to translate like an ‘interpreter’ is to practice within the restricted competence of the textual critic whose duty is to gloss word-for-word; and this is a restriction that the profession of rhetoric (Cicero's profession) historically imposed on the profession of grammar. To translate as an ‘orator’ is to exercise the productive power of rhetoric, a power which rhetoric asserted and maintained by purposefully distinguishing itself from grammar.”

Khalaf et al.

Cicero’s thoughts had a big impact on the scholarly world, and according to Khalaf et al., Jerome was deeply influenced by them in his own work.



Challenge #2

If Jerome did not translate word-for-word, what, then, was his guiding principle?


When you translate word-for-word (or if not entirely word-for-word, then at least very literally), you are already being heavily influenced and guided by the original author in terms of style, syntax, and terminology.

Jerome’s approach did not afford him this luxury, so he had to create his own framework for producing the final copy.

Jerome had multiple sources at hand at the time of writing, and he let himself be inspired by a range of linguistic features:

  • Jerome used Hebrew syntax

  • Jerome kept Hebrew word order

  • Jerome introduced new Hebrew words

  • Jerome copied Greek syntax (where the original was Greek)

(Khalaf et al., following Ackerley and Hale 2007: 265)



Challenge #3

Obscuring the sanctity of the original

Jerome knew one crucial fact: translation is an act of violence against the original.

Because you cannot reproduce literary works one-to-one in any meaningful way, you’re forced to create something new, which is likely to obscure the picture originally intended. Almost no matter what, the purity of the original is lost, and you’re creating a whole new work which the original author might not recognise or like.

Below, Jerome comments in a letter on his translation and on his seeking to satisfy his target audience.

“A word with a significant meaning in the original possibly has no equivalent in the target, making the translator waste much time in seeking to satisfy the meaning to reach his goal.”

Saint Jerome on his Bible translation – the Vulgate – in a letter to Pammachius (Khalaf et al.)


Sources



  • A. Kadhim Khalaf and L. Majid: St. Jerome’s Approach to Word-for-Word and Sense-for-Sense Translation. Journal of the College of Arts. University of Basra. 2015 (available at: https://www.researchgate.net/publication/335638360_St_Jerome's_Approach_to_Word-for-Word_and_Sense-for-Sense_Translation)





Lars Petersen

“What Do You Mean?” John R. Searle’s Chinese Room Argument and Problems Defining Consciousness and Understanding Within Early AI


A girl and a robot interacting. Is what we see in the picture real communication and understanding, or is it a shadow play that merely imitates? Let’s look at early definitions of computational AI (circa the 1980s), their reception by members of the academic community – and let’s find out if AI has the potential for true intelligence.

Roger Carl Schank’s 1977 computer-based AI system

At Yale University in 1977, professor of computer science and psychology Roger Carl Schank led a research program developing a computer-based system aimed at simulating intelligent communication – well, at least to a degree, by modern standards. (I’m here following Schank and Abelson 1977.) So – this computer program was an early version of what we today would call ‘AI’. American philosopher John R. Searle described the function of the computer program created by Schank et al. as follows:

“Very briefly, and leaving out the various details, one can describe Schank's program as follows: the aim of the program is to simulate the human ability to understand stories.” Searle, John R. 1980. Minds, brains and programs (417)

The work of Schank et al. was groundbreaking technology, which paved the way for AI research in the following years. With its more than 11,340 academic citations*, John R. Searle’s 1980 paper “Minds, brains and programs” takes on the task of analysing the potential consequences of claiming that a computer-based system can actually understand human ‘stories’, or human speech or text. Let’s try to understand Searle’s chain of argumentation, look at his now-famed “Chinese Room” analogy, and see if his criticism of the intelligent computer has its merits.

John R. Searle’s 1980 AI-critical paper ‘Minds, brains and programs’


In the paper ‘Minds, brains and programs’, Searle rejects the idea that Schank et al.’s computer program may ‘understand’ the world or – as he says – understand ‘stories’, meaning human stories. (Searle later goes into detail about possible definitions of the word ‘understanding’; see also below.)

Bearing in mind that his paper is set in a 1980s context, Searle makes this rather blunt statement:

”[…] the computer ‘understanding’ is not just (like my understanding of German) partial or incomplete; it is zero.”

Searle compares Schank’s programmed computer with an automatic sliding door, which reacts to input from a photoelectric sensor, which in turn is triggered when a physical object is placed in front of it.

The automatic door analogy is of course an oversimplification – and a cheeky one at that – but Searle is pointing out that Schank’s computer is missing certain essential components, meaning it is “not understanding” human activity – or ‘stories’, as Searle calls them. (Searle, John R. 1980. Minds, brains and programs, 419)

John Searle photographed in 2005

The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have intentionality (a man), and we program him with the formal program, you can see that the formal program carries no additional intentionality. It adds nothing, for example, to a man’s ability to understand Chinese.
— Searle, John R. (1980)


What is Searle’s Chinese Room Argument?

In short, Searle imagines an English-speaking person sitting in a room, tasked with producing responses (or questions) in Chinese to stories written in Chinese. This English-speaking person does not understand Chinese, but he is provided with a rule book (a grammar book of sorts) which serves as his main tool for producing correct Chinese responses to the Chinese stories. (Searle calls this rule book a “ledger”; Stanford calls such things symbol-processing programs – we are to understand it as a metaphor for a computer program.)

The rule book, ledger, symbol-processing program (or whatever we may call it) contains all the information needed to produce a syntactically correct response in Chinese, with syntactic relevance to the Chinese stories. In other words, a native Chinese speaker could read the stories first, then read the responses or questions written by the person in the Chinese Room, and regard the responses as intellectually coherent, with no sign that the person in the room was not a native Chinese speaker – the responses would be “absolutely indistinguishable” from those of any native Chinese speaker, despite the person in the room not speaking a word of Chinese. (https://plato.stanford.edu/entries/chinese-room/)
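To make the metaphor a little more concrete, here is a minimal, purely illustrative sketch of ‘symbol manipulation without understanding’: a lookup table pairs input symbol strings with output symbol strings by shape alone. The symbols and rules below are invented for the example and are not taken from Searle’s paper.

```python
# A toy "Chinese Room": replies are produced by matching the shapes of the
# incoming symbols against a rule book. Nothing in the program represents
# what any symbol means; it only pairs input patterns with canned outputs.

RULE_BOOK = {
    "你好吗": "我很好",          # invented rule: this symbol string -> that one
    "你叫什么名字": "我叫小明",   # another invented pairing
}

def chinese_room(input_symbols: str) -> str:
    """Return whatever output the rule book pairs with the input shapes."""
    # The 'operator' compares shapes only; there is no dictionary of meanings.
    return RULE_BOOK.get(input_symbols, "对不起")  # fallback symbol string

print(chinese_room("你好吗"))  # prints a plausible reply without any 'understanding'
```

The point of the sketch is only that a coherent-looking reply can be produced while no semantics ever enters the process.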

How is this scenario possible in real life? With the advent of artificial intelligence, we have seen the development of systems using this mode of operation (or something close to it). We really do have systems that are able to produce responses in the manner Searle illustrates with the Chinese Room analogy – but of course with an actual computer doing the work in lieu of a person in a room.

Now, the question still remains – are these systems sentient? Do they understand, or are they somehow pretending, and doing a really good job at it? Is the reason they come up with such precise responses to just about anything and everything that they understand the stories we tell them, or is there something else going on? Let’s continue to understand Searle’s argument by looking at how he describes it in his own words.

Here’s how Searle defined his Chinese Room in 1980:

“Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I’m not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that "formal" means here is that I can identify the symbols entirely by their shapes.

Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch "a script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me, they call "the program."

Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view - that is, from the point of view of somebody outside the room in which I am locked - my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese. Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker.

From the external point of view - from the point of view of someone reading my "answers" - the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols.”

Searle, John R. 1980. Minds, brains and programs (417-418)


The idea of being locked in a room and tasked with translating Chinese – while not being able to even speak or understand Chinese – seems ridiculous and cruel to the poor translator, who’s just sitting there like that.

Thinking about it, this is actually not that far from what I do every day as a professional translator, as some of the things I’m tasked with translating are of such low original quality that they might as well be written in Chinese, Japanese or Old Church Slavonic for that matter (none of which I understand). My work in these cases is regarded as passable only due to my experience as a translator, and sometimes simply to my being a good guesser of what the source text intends.

But the ridiculous picture of a Chinese Room should of course not be understood literally, for it is a metaphor for – insofar as I can discern – a computer. Our poor translator locked in the room should be understood as a computer operation, an algorithm of sorts, programmed to perform a specific task following a set of predetermined rules (again, Searle’s ‘ledger’ and Stanford’s ‘symbol-processing program’). In today’s language, and in relation to language translation, we’d probably just call this entire setup ‘machine translation’ or something along those lines. Searle is merely trying to paint a picture of what’s going on in Schank’s computer, so that we can talk about it more easily.

Is it intelligence?

Some early researchers claimed that this very mode of operation can in fact rightfully be labeled ‘intelligence’. Early AI researchers Newell and Simon wrote that “the kind of cognition for computers is exactly the same as for human beings” (Newell and Simon 1963). In response to this very fundamental and important claim (whose validity we to this day still do not know for certain), Searle responds with clarity: “It is not.”

If you translate a single word from, say, English to Chinese in its most literal sense, simply by using a dictionary, then you may produce a ‘correct’ answer (in a purely formalised, syntactical understanding of language and grammar).
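As a minimal illustration of this purely formal approach, here is a sketch of word-by-word dictionary lookup; the tiny glossary below is invented for the example and not taken from any real dictionary.

```python
# Word-for-word 'translation' by dictionary lookup: a result is produced for
# every word, but nothing accounts for context, idiom, or authorial intent.

GLOSSARY = {"bank": "银行"}  # hypothetical single-sense entry: financial institution

def literal_translate(sentence: str) -> str:
    # Each word is replaced in isolation; unknown words pass through unchanged.
    return " ".join(GLOSSARY.get(word.lower(), word) for word in sentence.split())

print(literal_translate("We sat on the river bank"))
# The 'correct' dictionary entry is applied even where that sense is wrong.
```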

But language is deceptive and elusive. It will evade you at a moment’s notice – like a leaf caught by the wind! If you follow the above strategy, you will quickly lose the thread of meaning.

Words, word structures, sayings, expressions (and especially full sentences) almost always have many, many different semantic connotations in translation, depending on the situational context, the historical context, the disposition of the author, and many other factors. The translator must therefore ask himself if he is actually carrying over the intended meaning – in the truest sense in which it was originally written.

Searle says “[…] the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics.” (Searle, John R. 1980. Minds, brains and programs, p. 422)

"How do you know that other people understand Chinese, or anything else?”


Searle’s paper received counterarguments from several major U.S. universities, among them places where he had lectured (for example Berkeley, Stanford and others). He calls these counterarguments ‘replies’ – in the sense that they are replies to his Chinese Room argument (see Searle’s 1980 paper for the full set). I will go over the replies I found most interesting and relevant to this discussion.

First and foremost, there is the Systems Reply. It does not refute Searle’s theorisation as a whole, but rather elaborates and qualifies it. The Systems Reply agrees with Searle that the man sitting in the Chinese Room indeed does not ‘understand’ Chinese. But it claims that the system of which he is a part may understand Chinese – as a whole. The ledger in front of the man in the Chinese Room is a product of multiple other ‘understanding entities’. His pencil and paper, for example, are part of an entire semiotic system.

“Understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part.”

(From the Systems Reply, Searle, John R. 1980. Minds, brains and programs, p. 419.)



The Robot Reply (Yale)

Next up, there is the Robot Reply (Yale) to Searle’s problem, which I also thought was relevant. The Robot Reply says that in order to obtain a truer and more recognisable understanding within the computer, we had better scale up the system and expand the program so that it can control an entire humanoid robot, with digital eyes, and arms and legs sensitive to touch, etc. Basically: let’s replicate a human in a more physical way. This robot can move about in the world, and acts and looks like a human. The Robot Reply claims that by creating this ‘extended’ computer, we are one step closer to a truly understanding entity, because the device may now interact with the world, in contrast to the passive computer sitting in a server room.

This reply serves well as a transition to the next reply, which is actually much more interesting – and also more entertaining. Coming up, my personal favorite part of Searle’s paper, the entire reason I’m even writing all this. Behold, the fascinating, eerily spooky ‘brain simulator’!

If/then/else-statements

Yale’s Robot Reply extends Schank’s computer intelligence to a more all-encompassing, more humanoid computer system. But as far as I can deduce, such a robot would still be built on programming conditionals.

While such a system may be able to produce somewhat reliable results in a domain of simple logic, can if/then/else conditionals also produce true, intelligent answers to some of the world’s hardest problems?

(Shown left: an elementary if/then/else-statement).
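For readers who prefer the picture in code rather than a diagram, here is an elementary conditional of the kind referred to above – a deliberately trivial sketch with made-up threshold values, just to show how rigid rule-based branching is.

```python
def classify_temperature(celsius: float) -> str:
    # Every possible outcome has to be anticipated in advance by the programmer;
    # the rules never adapt to anything that falls outside these branches.
    if celsius < 0:
        return "freezing"
    elif celsius < 25:
        return "mild"
    else:
        return "hot"

print(classify_temperature(-5))   # "freezing"
print(classify_temperature(30))   # "hot"
```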


The Brain Simulator (Berkeley and MIT)

The Brain Simulator Reply contains a rather striking idea, namely what might happen if we radically change the way the computers are built and set up. Now, let’s remember that the computer by Schank et al. is pre-programmed with a set of rules providing the framework for ‘understanding’. This set of rules is written by actual humans. The Brain Simulator Reply suggests that we abolish the pre-programmed rules entirely, and instead try to create a framework which simulates an actual human brain – namely the brain of a Chinese speaker (so that we may actually get the translation done properly this time).

Here’s what they suggest in their reply: 

 "Suppose we design a program that doesn't represent information that we have about the world, such as the information in Schank's scripts, but simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them.”

What they are really saying is that we build an exact copy of the infrastructure of a human brain and incorporate its signal pathways into a computer.

Et voilà! This will do the job, right? Or…?

This concept is not new (Stanford calls this idea the “Chinese Nation”, pointing toward several earlier philosophers – see for example Anatoly Dneprov and Ned Block).

Okay, hang on for a bit… Why this proclivity for China? I mean, where does ‘the People’s Republic of China’ come from in the context of AI research? I do not know the answer exactly, but I’m guessing the great country of China and her language serve well as an example in the very Anglo-Saxon, WASPy academic context Searle and others live within, because the Chinese language is sufficiently distinct from English and the Romance languages, and therefore serves well as a test site for linguistic analysis. It is the perfect ‘other’, ready for a starting-from-scratch interpretation for LLMs trying to simulate human understanding.

Searle’s response to the idea of replicating a literal human brain – down to its every detail – is clear: even this would not produce understanding. Imagine building a computer brain by replicating the neural pathways of a brain (Searle models them as ‘water pipes’ in the 1980 paper). Searle compares this entire water-pipe, nuts-and-bolts approach to his own Chinese Room, but instead of a rule book, the guiding principle is now water pipes, valves and faucets – i.e. physical things:

“However, even getting this close to the operation of the brain is still not sufficient to produce understanding. To see this, imagine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them.” (Searle 1980, p. 421)

Ned Block’s ‘China Brain’

Another variation of this concept is a thought experiment by American philosopher Ned Block from 1978. In Block’s version of the scenario, the “China brain”, we simulate a human brain by giving each Chinese citizen a small handheld radio. The signals from each radio are routed via those of the other citizens (along very specific routes) and then to satellites orbiting above the Chinese landmass.

With their humanly perceived stimuli in their different environments, the people on the ground signal various mental states to each other. This results in a single, actionable output, finally collected by the satellites.

This thought experiment mirrors a functionalist understanding of the world, i.e. the view that human ‘understanding’ is simply nuts and bolts (or water pipes, valves and faucets), and that there is no more to the notion of understanding than pure mechanics.

This is not entirely Block’s point, however, for he is actually asking the question: Is there more to human consciousness than flesh, blood and electric signals? And, following from this, the big questions – if yes, then what is the unknown factor? What makes a human, human?

This concept draws upon the notion of functionalism within the academic field called ‘Philosophy of Mind’. Stanford defines functionalism as follows:

“Functionalism is the doctrine that what makes something a thought, desire, pain (or any other type of mental state) depends not on its internal constitution, but solely on its function, or the role it plays, in the cognitive system of which it is a part. More precisely, functionalist theories take the identity of a mental state to be determined by its causal relations to sensory stimulations, other mental states, and behavior.” (https://plato.stanford.edu/entries/functionalism/, April 4, 2023)

Ned Block’s ‘China Brain’

A billion people, each with their own handheld radio transmitter, collectively simulating the human brain.

A ridiculous idea? Yes!

Does it make for a fun thought experiment, forcing us to reflect on what constitutes the notion of consciousness? Also yes!


The Other Minds Reply (Yale)

Searle’s paper contains a few more replies, but I will skip some of them to reach this last one faster – the Other Minds Reply – because I think a rather big and important question is raised here, and that is: “How do you know that other people understand Chinese or anything else?”

I love this question. I don’t think we know the answer, however (yet, ha!). And something along those lines is also Searle’s response to this reply. He remarks that:

“One presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects.”

So there are presuppositions in place – knowledge we simply take for granted. Basically, he is saying that we had best just believe that Chinese citizens normally understand the world around them, just as, for example, a group of Hungarians would in their own (albeit different) context.

But I still think the point is relevant, because it forces us to admit that we cannot know to what degree others understand. We of course have to assume that other people mostly do understand and are conscious, like ourselves, but… What about a young child or an infant? What about other animals – do they understand? An LLM?

Early attempts at defining consciousness and understanding date back to Aristotle’s theory of the soul (350 BCE) and, later, notably Gottfried Leibniz (1646–1716), but with the advent of computers, AI, neural networks and LLMs, we are now forced to decide and define what the integral components of consciousness are.

Is it all purely a matter of physical properties, or is there an unknown force hidden in the human consciousness? In other words, what makes us, us?


Concluding remarks

Even calling artificial intelligence ‘intelligence’ implies that an artificial intelligence can perform the same cognitive tasks as the human brain, such as reasoning, deducing, demonstrating an understanding of the world, and asking relevant, critical questions in response to input from its surroundings.

Today’s Large Language Models (LLMs) imitate the products of human cognitive tasks by drawing upon large language data sets, which are in turn created by actual humans. An LLM produces what it considers the most likely, most common or most relevant outcome, in the form of a response to a given command in a given context. The keyword here, however, is “imitation”. Human cognition goes beyond just that, and is therefore able to produce real deduction and to form much more relevant, critical and novel questions, drawing upon lived experience.
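As a rough, toy-level illustration of the ‘most likely outcome’ idea (real LLMs use learned neural networks over enormous vocabularies; the probability table below is invented purely for the example):

```python
# Toy sketch of 'pick the statistically most likely continuation'.
# The probabilities are made up; a real model estimates such distributions
# from vast amounts of human-written text.

NEXT_WORD_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "sang": 0.1},
}

def most_likely_next(context):
    probs = NEXT_WORD_PROBS.get(tuple(context), {})
    # Greedy choice: the most common continuation, not a 'meant' or 'understood' one.
    return max(probs, key=probs.get) if probs else "<unknown>"

print(most_likely_next(["the", "cat"]))  # "sat"
```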

With the near omnipresence of various AI systems today, on the internet and elsewhere, and the ensuing overabundance of vague, semi-interpreted and auto-generated data, the strenuous road to obtaining precise, usable, humanly relevant data becomes more and more obscure and difficult to see. And the more data-reliant and information-based our society becomes, the bigger and more valuable the market for fast, half-baked analysis becomes.

Such analysis is not perfect. Machine translation, for example, has come a long way (as a translator, I am thinking of Google Translate, DeepL, Microsoft Translator and others) and produces correct output in many, many normative cases. But in edge cases it almost always fails compared to qualified human analysis.

Just think of those AI chatbots we all have to deal with! Bad computational analysis results in misinformation all around us. Feelings of ambiguity and fuzziness sneak into our professional and private lives. We start to question whether we are in fact falling short in our own understanding, or whether it is the world around us that is the problem. Words start to lose their sense. What is true, and what is misinformation? Correct, reliable data becomes a rare commodity.

While AI and computer-based systems may so far be able to help with basic, normative tasks and produce data that help us do our jobs faster and more accurately, they have yet to show any signs of actual intelligence on a deeper level. John Searle foresaw this issue as early as 1980, raising concerns very early on as to whether AI and computer-based systems would ever be able to understand the world and produce true information reliably.

A lot has happened with computational programming, ‘AI’ (this time deliberately keeping my critical distance with the quotation marks around the claim that AI really is ‘intelligent’), LLMs and neural networks since Searle wrote his paper in 1980, but his fundamental position on what he calls ‘understanding’ (which I also relate to the notion of ‘consciousness’) still remains relevant to the discussion.

Reflecting on whether computer programs are able to obtain the ‘sufficient condition of understanding’ – i.e. ‘understand like a human’ – Searle says that:

“[…] formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics.” (Searle, John R. 1980. Minds, brains and programs, p. 422)

Searle implies a distinction between formal syntactic correctness and semantics.

And I understand the ‘semantics’ here as meaning ‘true understanding and a carrying over of critical opinion, meaning and commentary’. (Please correct me if you think I’m wrong in this assumption).

He is essentially categorising information into two groups: (1) purely formal information, which may be correct from a syntactical standpoint but ‘meaningless’ from a consciously sentient (semantic) standpoint; and (2) semantic information, which is – on the other hand – actually ‘meaningful’, in the truest sense of the word.




Sources


Searle, John R. 1980. Minds, brains and programs. Behavioral and Brain Sciences 3 (3): 417-457. (https://home.csulb.edu/~cwallis/382/readings/482/searle.minds.brains.programs.bbs.1980.pdf)

Stanford Brief on John R. Searle’s Chinese Room: https://plato.stanford.edu/entries/chinese-room/

Stanford Brief on Functionalism: https://plato.stanford.edu/entries/functionalism/

* Google Scholar, via https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=%22Minds%2C+brains+and+programs%22&btnG=


Images


’Girl and Robot’, licensed via Unsplash

‘China Brain’ via https://commons.wikimedia.org/

‘Dictionary’, licensed via Unsplash

John Searle in 2005, via Wikimedia Commons, https://upload.wikimedia.org/wikipedia/commons/6/69/John_searle2.jpg

If/then statement, P. Kemp, CC0, via Wikimedia Commons

Lars Petersen

Miscommunication – an Example

"Misunderstandings and inertia perhaps cause more errors in the world than deliberate slyness and mischief".

_

Such were the words of Johann Wolfgang von Goethe in his 1774 Briefroman “The Sorrows of Young Werther", and while the context was then a romantic drama, the point also applies to other arenas of life. Misunderstandings are everyday occurrences, and I think they are overlooked and ignored as an object within all fields of activity – academia, politics, business, and military. In military applications, battles are lost when communication goes awry. Let's find out how misunderstandings occur.

"Misunderstandings and inertia perhaps cause more errors in the world than deliberate slyness and mischief".

Such were the words of Johann Wolfgang von Goethe in his 1774 Briefroman “The Sorrows of Young Werther”, and while the context was then a romantic drama, the point also applies to other arenas of life. Misunderstandings are everyday occurrences, and I think they are overlooked and ignored as an object within all fields of activity – academia, politics, business, and military. In military applications, battles are lost when communication goes awry. Let's find out how misunderstandings occur.

The 1988 USS Vincennes tragedy

Correct intelligence is crucial to a positive mission outcome, and misinterpretation can have fatal consequences. As an example, I will look at the disastrous events in the Strait of Hormuz in July 1988, when a civilian airliner was shot down by surface-to-air missiles fired from the American missile cruiser USS Vincennes, presumably by error. I shall also suggest how we might avoid such instances – not only in military applications, but also in business, academia, journalism and other arenas of activity.

Two SM-2MR missiles struck Iran Air Flight 655 (an Airbus A300), causing a fatal crash and killing everyone aboard. The airplane was no threat, operating nominally and in line with aviation regulations, so why was the fire order given aboard USS Vincennes? Let’s look at what happened in the Strait of Hormuz – a series of events that lasted mere minutes before ending in disaster.

USS Vincennes in June, 1986.

On the morning of July 3rd, 1988

In 1988, U.S. Navy warships were on a security mission in the Arabian Gulf to protect civilian merchant ships from anticipated attacks by Iranian warships. A misunderstanding led to a missile strike on a civilian airliner overhead.

Left, an Iranian postage stamp depicting the tragic incident.

Events leading to the incident


On the morning of July 3rd, 1988, two U.S. warships in the Arabian Gulf – USS Vincennes and USS Elmer Montgomery – had detected three Iranian gunboats advancing first toward a Danish civilian merchant vessel (Karama Maersk) and, shortly after, toward a Pakistani civilian merchant vessel.

After receiving intelligence about the threat to the civilian merchant ships, the two American warships prioritised the situation at once and advanced toward the three gunboats in the waters off the coast of Iran. A helicopter from USS Vincennes had been sent ahead toward the gunboats. Upon arrival, it was targeted by small-arms fire from the gunboats, but was not hit critically. The two warships, on hearing that the helicopter had been shot at, were now advancing at full speed toward the scene – and in doing so crossed into Iranian territorial waters, despite standing orders against this from U.S. high command.

When USS Vincennes arrived at the scene, it registered the gunboats “commencing an attack” (Naval History and Heritage Command), and USS Vincennes fired almost 100 cannon rounds toward the gunboats, with some small-arms fire returned. Two gunboats were promptly sunk, and another damaged. This incident, while tragic and important in itself, is not the centre of this story. It is, however, what led to the following events.


Iran Air Flight 655 taking off from Bandar Abbas

Around the time of the engagement with the gunboats, the civilian aircraft operating Iran Air Flight 655 took off from Bandar Abbas, heading for Dubai. Aboard the aircraft were mostly Iranian civilians (along with various other nationalities) on their way to Dubai. The published flight route of this aircraft passed directly above the scene of the sudden naval battle, unbeknownst to the pilots of the civilian aircraft, and initially also unbeknownst to the commanding officers of the two U.S. warships on the scene.

USS Vincennes was equipped with what was at the time a very advanced combat system, the Aegis Combat System. The system promptly detected an aircraft taking off from Bandar Abbas, Iran – an airport perceived to serve both civilian and military aircraft. Detecting aircraft in the surrounding airspace was normal and expected. Upon detection, the aircraft was mere minutes of flight time from the battle scene in the Strait of Hormuz.

The crew aboard USS Vincennes concluded the aircraft was a military F-14, and therefore an immediate threat. This would later prove to be a false designation.

Interpreting data from the Aegis Combat System

Meanwhile, the Aegis Combat System aboard USS Vincennes classified the approaching aircraft as a so-called Mode III (civilian) airliner. This designation was given on the basis of the airliner’s IFF transponder, which was set to that mode. Despite this information, the anti-air warfare coordinator aboard USS Vincennes gave the aircraft a Mode II (military) classification. According to the Naval History and Heritage Command, the anti-air warfare coordinator gave the aircraft the military classification and not the civilian one because “Iranian military aircraft were known to transmit both Mode II and Mode III IFF.” (Naval History and Heritage Command)
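Purely as a schematic illustration of the decision described above – this is not the actual Aegis logic or Navy doctrine, just a toy model of a sensor reading being overruled by prior battlefield assumptions – the mismatch can be sketched like this:

```python
# Toy model: the IFF sensor reports one assessment, but an operator override,
# based on informal prior knowledge, takes precedence over the sensor data.

def classify_contact(iff_mode: int, operator_override: str = "") -> str:
    sensor_assessment = "civilian" if iff_mode == 3 else "military"
    # The override stands in for "Iranian military aircraft were known to
    # transmit both Mode II and Mode III IFF".
    return operator_override or sensor_assessment

print(classify_contact(iff_mode=3))                                # "civilian"
print(classify_contact(iff_mode=3, operator_override="military"))  # "military"
```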

 

Above, Strait of Hormuz, where the incident took place.

The anti-air warfare coordinator did, however, cross-reference the detected aircraft with published flight schedules in the area, and found no civilian flight scheduled to be at the observed coordinates at that specific time. The cross-reference came up empty because Iran Air Flight 655 was 27 minutes late relative to its published flight plan.

The military designation given by the anti-air warfare coordinator would later prove to be false, but was given partly due to “darkness” in the CIC (the Combat Information Center), a time-zone change between Bandar Abbas and Dubai, and the fact that Flight 655 was late. (Naval History and Heritage Command)

Therefore, the captain of USS Vincennes was briefed that an Iranian F-14 was heading toward USS Vincennes. The direct quote from Naval History and Heritage Command reads:

“Rogers (the captain) received a report that an Iranian F-14 had taken off from Bandar Abbas and was on a course toward Vincennes”.


USS Sides

Because of this highly important news, another nearby U.S. warship, USS Sides, was also briefed that an Iranian F-14 was incoming. However, the commander of USS Sides followed procedure via his own CIC (Combat Information Center) to confirm the reports of an alleged incoming military F-14, found no indication of this, and instead deemed the aircraft civilian.

Here’s an interesting fact: the CIC aboard USS Sides painted the unknown aircraft with its onboard missile radar – an action that would immediately trigger a warning on the radar warning receiver of the aircraft itself, had it been a military aircraft! USS Sides saw no response or reaction to this radar painting, and instead noted that the aircraft continued its steady climb.

However, the commander of USS Sides did not communicate his contradictory assessment to USS Vincennes. He knew USS Vincennes had the more advanced Aegis Combat System, and therefore deemed the data output from that system more accurate than his own. This non-communication is noteworthy, and would later prove to be a key component in the conclusion of this story.

A military threat or a harmless, civilian airliner? A question of massive importance!


USS Vincennes gave the approaching aircraft verbal warnings via a designated radio distress channel, which should have been picked up by the airliner, but these calls were never answered. We don’t know why the warning messages went unanswered, nor whether they were received at all. The aircraft had only been airborne for a few minutes at the time, so the pilots would, on that leg of the flight, be expected to be busy managing the fast ascent. Nonetheless, radio warnings should be able to come through at all times during a flight.

With an assumed hostile aircraft rapidly approaching, the commander of USS Vincennes gave orders to fire two missiles toward the aircraft. The aircraft was hit by both missiles, and crashed into the ocean, killing all civilians aboard.


Key takeaways


There are more details about this incident, from various sources, which could provide different nuances and angles to the story. I have deliberately only followed an account written by the U.S. Naval History and Heritage Command, and the full report can be read here. I cannot verify the validity of claims made by the U.S. Naval History and Heritage Command, but their story may serve as one data point in your own continued research on the topic should you be interested. I recommend reading another account of the incident written by U.S. Marine Corps Lieutenant Colonel David Evans, which can be found here.

My point is not necessarily to uncover any definitive truths, but merely to highlight a few key components of the incident that have to do with the communications aspect of the tragedy.


We learned that:

  • officers in the CIC (Combat Information Center) aboard USS Vincennes ignored data output from the Aegis Combat System indicating that the aircraft was civilian and not military. This was allegedly because Iranian military aircraft “were known” to use the civilian designation in their transponders

  • the captain aboard the accompanying warship, USS Sides, concluded the aircraft was civilian, but did not communicate this finding to USS Vincennes because he understood the Aegis radar system aboard USS Vincennes to be more accurate, and therefore better able to produce a valid assessment than he was

  • had the captain of USS Vincennes trusted his own systems 100 % and accepted the civilian designation of the aircraft, the missiles would not have left the ship. Instead, he drew upon his own and his crew’s knowledge of the battlefield that Iranian military aircraft would sometimes use a civilian designation as a way of camouflaging intent. This informal knowledge may have saved him and his men in other situations, but it was not relevant in this particular instance

  • you cannot trust computers alone, because adversaries will find ways to trick and alter the output of systems which have a predictable outcome

  • human intuition and combat knowledge are vital on the battlefield, but such knowledge should be paired with all other relevant data, even if repetitive, to maximise the chance of correctly determining the level of threat.


As a final concluding remark, I do not necessarily condone, nor do I blame, the actions performed by the crew aboard USS Vincennes. It is blatantly clear that the fire order was an error in itself and should not have been given. However, once you understand what information the crew and the captain were operating with, the time pressure they were under (the incident played out in mere minutes), and the overall circumstances, the picture becomes anything but simple.

Therefore, as a purely hermeneutical exercise, I do not believe blame should necessarily be assigned to a single individual, not even the captain, since the incident involved several actors, reliance on electronic combat systems, reliance on formalised procedures, and perceived deviations from the standard picture.




Sources


Account of the incident by the U.S. Naval History and Heritage Command: https://www.history.navy.mil/about-us/leadership/director/directors-corner/h-grams/h-gram-020/h-020-1-uss-vincennes-tragedy--.html

Goethe quote: From Johann Wolfgang von Goethe’s 1774 Briefroman “Die Leiden des jungen Werthers”. Original quote: “Mißverständnisse und Trägheit vielleicht mehr Irrungen in der Welt machen als List und Bosheit“

Images

Combat Information Center aboard USS Vincennes: Credit to the U.S. Navy under the U.S. Federal Government. Public domain in the United States. Via wikimedia.org


USS Vincennes (CG-49), June 1986: Naval History & Heritage Command Photo Section, Photo #NH 106519-KN. Use of released U.S. Navy imagery does not constitute product or organizational endorsement of any kind by the U.S. Navy

Lars Petersen

Russian Formalism and Defamiliarisation


That judgmental glance... What does human activity look like from the perspective of a horse? Pretty weird, probably! A contemplative horse is the narrating observer of the human world in Leo Tolstoy’s Kholstomer (1886). A key figure in the Russian Formalist movement, Viktor Shklovsky, uses Tolstoy’s commenting horse as an example of ostranenie (the Russian term for defamiliarisation, or ‘making strange’). Defamiliarisation as a literary device was discussed in post-revolutionary Russian literary circles branching out from the literary movement Russian Formalism. So what does defamiliarisation do? It tries to make the world harder to understand – naturally!

Analysing the world

The hermeneutic principle is concerned with discovering meaning through interpretation. You observe something – it could be a literary work, or something else – and then you try to produce meaning through thorough analysis. The hermeneutical analysis can be based on various instruments, such as logical deduction, close reading, your own five senses and your own knowledge of the world.

As a literary framework and as a way of deriving meaning from a topic, defamiliarisation is very different from hermeneutics. Defamiliarisation in fact seeks to hinder and halt our arrival at meaning, because it is more interested in what happens before that arrival.

Professor of English at Yale Paul Fry describes this hindering as ‘a roughening of surface’ on the road toward a final conclusion. Defamiliarisation presents the reader with unfamiliar (defamiliar, if you will) data points, which hinder the arrival at meaning because of a certain estranged essence in the work at hand. The roughening of the surface makes it impossible for the reader to arrive at meaning smoothly, and it makes it hard to make sense of the subject matter. And that’s the point! Defamiliarisation encourages, even forces, the reader to think anew. Paul Fry also describes this as “defamiliarisation creating an arabesque instead of a straight line between us and the arrival at meaning”. (After Paul Fry, 2009)

Ready-to-digest, centrally produced, pre-fabricated meanings were not interesting to Russian Formalists such as Roman Jakobson and Viktor Shklovsky. They were instead interested in science, structure, the way the sentence is put together, and what might come out of that from the standpoint of the reader. This way of seeing art carries with it a shift in focus from the search for meaning to pure structure.

Bear in mind, the ideas of Russian Formalism were formulated in post-revolutionary Russia, with Leninism and socialist realism dominating the arts. Some would see Russian Formalism as an attack on the ideals of socialist realism, which had been established as the guiding principle in the communist state.

Defamiliarisation as a literary device attempts to make you feel the roughened structure (feel the world, if you will), touch it, and understand anew what the idea at hand is really about. (After Paul Fry, 2009)

With Russian Formalism comes a shift in focus from the search for meaning to structure.

(Paul Fry, 2009)

“Bilingualism is for me the fundamental problem of linguistics.”

Roman Jakobson

“A word is a thing. And a word will change, according to verbal practices, which are again related to for example the physiology of speech.”

Viktor Shklovsky

Above, two central figures in the Russian Formalist movement: Roman Jakobson and Viktor Shklovsky.


Why must the surface be roughened?

Viktor Shklovsky argued that we no longer see what’s around us. (And by ‘we’, think citizens of the USSR in the 1920s.) He called this phenomenon ‘automatisation’, or ‘automatism’, implying that our response to art, literature, and everyday life around us had become automated, as if on autopilot. (O Teorii Prozy, 1929)

Now, who is to decide what society means for the average citizen? Is that the task of the state? Is it the artist’s job? Or is it up to every single individual to figure out? In the USSR of the 1920s, the state played a major role in defining the arts and what a purposeful life was supposed to look like. But this is where Viktor Shklovsky comes in from the sidelines to roughen the surface of various modes of literariness – on a mission to defamiliarise Soviet citizens (the Homo Sovieticus, the socialist archetype) from the grey, dulled, automated world surrounding them, and make them see things from a perspective other than that of state-sanctioned artists and writers.

Shklovsky and other writers in the Russian Formalist movement opposed, for example, the notion that physical ornamentation and sound are subservient to meaning, and the claim that ornamentation and sound do not elucidate anything in themselves. To them, ‘artistic’ artefacts can be ends in themselves. (Paul Fry, 2009)

And Shklovsky warned sternly about the dangers of automatisation, and how people might lose the ability to properly see things for what they are. He warned against taking the established reality for granted, accepting it too readily, and not questioning how reality is presented to us. (O Teorii Prozy, 1929)

Touching upon this, here is an excerpt from O Teorii Prozy:

This property of thinking not only suggested the path of algebra, but even suggested the choice of symbols (letters and specifically initial ones). With this algebraic method of thinking, things are taken into account by counting and space; they are not seen by us, but are recognized by their first features. A thing passes by us as if packaged; we know that it is there by the place it occupies, but we see only its surface. During the process of algebraization, the automation of a thing, the greatest economy of perceptive forces is obtained: things are either given only by one feature, for example, a number, or are carried out as if according to a formula, without even appearing in consciousness.

(O Teorii Prozy, p. 12)

Shklovsky uses math and algebra as a metaphor for the automatised life, which has been reduced to numbers and formulae in a (perhaps) over-bureaucratised system.

And at the end, a quite interesting remark, I think, about losing the fear of war:

This is how life disappears into nothingness. Automation eats everything, clothes, furniture, your relationship with your wife and your fear of war.

(O Teorii Prozy, p. 13)

A grey, dull, automated, pre-understood life which has lost – among other things – its fear of war.

Viktor Shklovsky on Tolstoy

Shklovsky does not take credit for inventing defamiliarisation. In O Teorii Prozy, Shklovsky uses Leo Tolstoy’s way of defamiliarising as an example for understanding it in a (for his time) current context. Tolstoy, Shklovsky claims, has mastered the art of not calling a thing by its usual name, but of describing it as if he were seeing it for the first time – for what it really is. Tolstoy, he continues, writes as if an incident were happening for the first time in the history of man. Thereby Tolstoy describes a thing not by using our current, commonly agreed-upon terminology, but names each phenomenon or object as if it had no particular reference to the situation at hand – only reference to an outer world which is unfamiliar to the given situation and which simply gives names to said phenomena or objects, completely untainted by the situation being described. (O Teorii Prozy)

Leo Tolstoy on a horse

In “Kholstomer”, we meet a conscious and very well-spoken horse, alone pondering and commenting on the peculiarities of human life and human speech. Here is part of the horse’s monologue, questioning his own status as subservient to humans. The horse also raises the more general question of why humans use words which carry implications far beyond the scope of their true, original meaning.

Below, an excerpt from “Kholstomer”, a monologue by the horse.

I did understand what they were saying about Christianity, but I was completely lost when they started uttering the words ‘his’ this and ‘his’ that. For example, I did not understand the meaning of the word combination ‘his foal’, from which I saw that people assumed some kind of connection between me and the stablemaster. What this connection was, I didn’t understand at the time. Only much later, when I was separated from the other horses, did I finally understand what it meant. For a long time, I didn’t grasp what it meant to be called a person’s property. The words ‘my horse’, referring to me, a living horse, seemed as strange to me as concepts such as ‘my earth’, ‘my air’, and ‘my water’.

Those words had a huge impact on me. I kept thinking about this for a long time, and only after getting to know many different humans did I finally understand the meaning that they ascribe to these strange words. Here’s what I found out about the notion of meaning in the human world: People are guided in life not by deeds, but by words. They love not so much the opportunity to do or not do something, as the opportunity to speak about different subjects in the words mutually agreed upon amongst them. One of the most important words in the human vocabulary is the little word ‘my’, which they apply to various things, creatures and objects – even to the earth, to people, and to horses. They have agreed among themselves that only one of them may say ‘mine’ about any one thing. And the person who is able to apply the word ‘my’ to the greatest number of objects is considered to be the happiest person. Why this is so, I don’t know. But it is so.

[…]

Many of those humans who called me ‘their’ horse did not even ride me! It was completely different humans who rode me. Nor was it the same ones who fed me, but different ones again, in shifts.

[…]

A human will say ‘my house’, and then not even live in said house, but merely tend to and maintain it. A merchant will say ‘my store’, or ‘my clothing store’, for example, and yet the merchant in question will not even wear clothes made from the best cloth that his own store has to offer. There are also humans who claim certain lands to be theirs, yet they have never even set foot on the lands in question, or so much as laid their eyes on them.

(Kholstomer, chapter 6)





Sources

Paul Fry. Introduction to Theory of Literature. Yale University, 2009. Lecture available here: https://www.youtube.com/watch?v=11_oVlwfv2M (Yale, 2009)

Viktor Shklovsky. O Teorii Prozy. 1929. https://monoskop.org/images/7/75/Shklovsky_Viktor_O_teorii_prozy_1929.pdf

Leo Tolstoy. Kholstomer. 1886. https://ru.wikisource.org/wiki/Холстомер_(Толстой)

Images

Main image, horse: Licensed via squarespace.com

Apartment blocks: Licensed via squarespace.com

Roman Jakobson: Philweb Bibliographical Archive via wikimedia.org

Viktor Shklovsky: Public domain in Russia (via wikimedia.org: https://commons.wikimedia.org/wiki/File:Viktor_Shklovsky.jpg)

Leo Tolstoy on a horse: Underwood & Underwood. The World's Work, November 1908: https://archive.org/stream/worldswork17gard#page/n19/mode/2up



Lars Petersen

New Job

Office, Zimmerstraße, Berlin

I got a full-time job! In my freelance existence as a film subtitle translator, work volumes through 2023 were up and down. And the general trend was… well, down. It was becoming increasingly difficult to budget in even a small net salary for myself.

In November 2023, I was offered full-time employment as an in-house translator for a Berlin-based fintech app. And now, mere months later, I’m sitting in Berlin, building and translating an entire fintech app to be made ready for the Danish market very soon.

Subtitling never took off the way I had envisioned it. So this was a needed change. Fintech is a very different beast from subtitling. Now, I’ll see translation from a different angle. Now, I’m applying everything I know about legal translation, copywriting and financial terminology. Ich bin ein Berliner!

Fun fact: The cobbled stripe embedded in the road marks where the Berlin Wall once stood. (The photograph was taken from the office building where I work, which stands on the eastern side of the former wall.)

Rest assured, I still have my beloved satellite office in the hills of Eastern Jutland, on the old family farmstead right by the coastline of the Kattegat sea in Denmark. For tax reasons, I may use this office only on weekends and holidays away from my full-time job. But I’ll take that deal! Because it feels good to know there’s always something called ‘home’.

Photographs my own

Lars Petersen

Media Content Localisation – CEO David Lee: What Exactly Is “Localisation”?

Founder and CEO of media giant Iyuno, David Lee, talks about localisation. (Still image from youtube.com)

Netflix, Disney+, AppleTV+, HBO … Streaming services often have an excellent selection of subtitles and dubbing options for each film. Films are being subtitled and dubbed into other languages like never before. Globalisation brought with it the golden age of hyper-localised content.

Now, I can translate movie dialogue from one language to another. But what if I were to localise it instead? And what’s the difference?

Iyuno is one of the largest media localisation companies around, and the world’s largest streaming platforms are among Iyuno’s clients. So why this emphasis on localisation, and not just translation?

Iyuno’s founder and CEO, David Lee, explains:

“Our purpose is to regenerate the original content creator’s intent. […] Local studios want to interpret the content in their own way. For example, if it’s Disney’s ‘Frozen’, Elsa needs to sing like this. But then this voice actor in this country thought that Elsa was a really funny character, so he might want to record in a very funny way.”

This is a very free way of translating: you’re not strictly bound by form or style here, but rather by the original creator’s creative intent with the film. Did the original content creator want to produce laughter and smiles in the audience? Then it’s up to the localisation professional to make sure that same effect is created in the localised version of the film – with whatever means they have at their disposal.

This approach to translation taps into Vinay and Darbelnet’s description of translational adaptation. (See my article on Translation Strategy here).

*

Youtube Channel “ArirangTV”. Video: [The Globalists] David Lee’s Iyuno, the World’s Leading Content Localization Firm.

Time code: 4:48–5:15

Available on youtube.com. (https://www.youtube.com/watch?v=Qhf9uy_nA4I&t=51s)

Still image from YouTube, at the YouTube channel “ArirangTV”: https://www.youtube.com/watch?v=Qhf9uy_nA4I&t=51s. Used under Fair Use for critical commentary and analysis on translatorslife.eu.

Lars Petersen

Deciphering Signs

Jeon Jong-seo (playing Haemi) pretends to peel an orange with her hands in the 2018 Korean psychological thriller “Burning”. She’s using a Korean word for orange, and she is moving her hands as if there were an orange present. Her words and her actions are signifiers, pointing toward a certain concept – an orange. (Still image from Burning © 2018 owned by Pine House Film.)

What do people really mean? What goes on in their minds, psychologically, when they say things?

There are many reasons why translation is difficult, but one very specific reason stands out: people assign different meanings to the same cultural objects.

This is a problem! Because sometimes it can be hard to know what people mean. Well, there is a solution. And it has one key component: Cultural decipherment.

The still image above, from the film Burning, shows a woman pretending to peel an orange. An orange is a tangible object, and the world’s dictionaries generally define the word “orange” the same way.

But what happens when we talk about more abstract concepts? This is relevant both in strictly interlingual translation and in the broader sense of intercultural interpretation. How do we translate abstract concepts interculturally?

Friend? Time? Honor? Faith? Work? Altruism? What connotations do various people assign to these words? Think about how differently an Englishman and a Chinese person would relate to these words. Or a wealthy CEO and a member of the labouring class? A soldier and a nun? These people would all assign different connotations to the same words.

In Cours de linguistique générale (1916), linguist and semiotician Ferdinand de Saussure described the relationship between concepts and speech. He called this pairing “signifié et signifiant” – terms most often translated as signified and signifier.

The signifié refers to a concept (Saussure uses the French term concept): the abstract, mental image a person might have of an object or an idea. An example would be the idea of a tree.

The signifiant (the signifier) refers to the concrete manifestation that represents the signifié. This is known as the sound image (Saussure uses the term image acoustique). An example would be the English word “tree”.

Importantly, the linguistic sign does not link a thing and a name, but a concept and a sound image.

If every thing in the world had one single, universally agreed-upon name attached to it, there would be no need for translators. In such a world, we could just look up everything in a dictionary. But the world is more nuanced than that, and it is up to translators to decipher meaning from the source, and then produce signs which can in turn be deciphered by the target audience.

The concept of a “tree” (signifié) and the sound image “arbor” (“tree”) (signifiant). Reproduced from Cours de linguistique générale, 1916.

*
Still image from Burning © 2018 owned by Pine House Film. Used under Fair Use for critical commentary and analysis on translatorslife.eu.

Saussure, Ferdinand de. Cours de linguistique générale. 1916.

Lars Petersen

's-Hertogenbosch

I give you … 's-Hertogenbosch!

I have no particular reason to tell you about this Dutch town, other than a certain curiosity and sense of mystery brought forth whenever I read its name. There is a strange artefact hidden somewhere in the stylistic expression of this word, but I can’t quite put my finger on what it is.

The ’s is a contraction of des in the full name des Hertogen Bosch (the duke’s forest).

 
Lars Petersen

Euphemisms

George Orwell in 1946 on politics and language: “All issues are political issues, and politics itself is a mass of lies, evasions, folly, hatred and schizophrenia. When the general atmosphere is bad, language must suffer”.

A euphemism replaces a direct and straightforward statement – one which may be unpleasant, inconsiderate or hurtful in nature – with more fair-sounding words. Take, for example:

“I was fired”

vs.

“I was relieved of duty.”

“John is dead”

vs.

“John is no longer with us.”

It’s polite and in good taste to use euphemisms with your family and friends.

In science, academia and politics, however, there’s often too much at stake to be so polite. Here, euphemisms cause more harm than good, because they hide the actual meaning of a statement. They camouflage or mask what really happened in the circumstances at hand.

“CIA used enhanced interrogation techniques on captives”

vs.

“CIA tortured captives.”

Big difference, right?

Here’s George Orwell on euphemisms in his 1946 essay Politics and the English Language:

The inflated style is itself a kind of euphemism. A mass of Latin words falls upon the facts like soft snow, blurring the outlines and covering up all the details. The great enemy of clear language is insincerity. When there is a gap between one's real and one's declared aims, one turns, as it were instinctively, to long words and exhausted idioms, like a cuttlefish squirting out ink. In our age there is no such thing as "keeping out of politics." All issues are political issues, and politics itself is a mass of lies, evasions, folly, hatred and schizophrenia. When the general atmosphere is bad, language must suffer.

(The excerpt is from Orwell, George. Politics and the English Language. Project Gutenberg Australia, public domain in Australia: http://gutenberg.net.au/ebooks02/0200151.txt)
