
Lacan and the Machine / Yehuda Israely



Abstract

What might Lacan have speculated about artificial intelligence? Lacan perceived cybernetics as the distilled symbolic world that separates animals from humans. Like human thought, the cybernetic machine is structured in language and thus does not merely describe reality. Language has the nature of generating meaning on its own. It spreads its wings, and detaches from reality. Lacan views cybernetics as a mutation of the machine that has learned to speak, thus pondering the remaining differences between humans and machines. What is the nature of the new artificial subject, AI? Is it possible to regard this entity as a subject without lack, suffering, desire, and jouissance? What would a digital Lacanian analyst look like? What are its limitations? Can we trust it to be by our side?

 

At the end of a workday in the clinic, I played with the computer. I asked the artificial intelligence to place itself in the position of a Lacanian psychoanalyst conversing with a patient. I asked it: "Why did the chicken cross the road?" "Why do you think it crossed the road?" it replied. "Either to chase the rooster or to escape from it," I tried to be clever. "It seems you are preoccupied with the conflict between belonging and independence," it answered, while I debated whether it was time to shut down the computer and go home.

Why am I telling this story? Because of the response I experienced. It hit the mark. And it did not need to be a human subject with a body, knowing lack and jouissance, to hit my lack and jouissance!

Ultimately, artificial intelligence will become a universe enveloping humans in all their life functions, just as language is a universe enveloping the biological universe. The cultural direction is clear, the pace is accelerating, and the target is constantly approaching. The day is not far when there will be no profession without a digital version, including a digital Lacanian psychoanalyst. What will a machine analyst look like? What will the experience with it be? How will it differ from a human analyst? What would Lacan say about this? Is it sacrilege? Or perhaps it is subversion in a Lacanian style? Is artificial intelligence a new type of subject? And what kind of transference relations can develop with it?

In the brief transference relations I developed with the machine while joking about the chicken, the struggle between the analyst and the analysand appeared, a struggle I know from myself on both sides of the barricade. The analysand defends against the emergence of drives in free associations, but despite himself, speech exposes him. The analyst seeks, in Lacan's words, "to pluck a Freudian slip from his lips" (Lacan 1957-1958), that is, to hear the unconscious statement. If the interpretation hits the mark, the analyst wins the cunning competition, a victory over defenses to make the subject's repressed drives accessible. Naturally, whoever brings me closer to my own desires arouses my transference love towards him.

Lacan's attitude toward technology, as part of the human symbolic field, was positive, and he identified the machine's leap into the cybernetic era as a reason to leave the detractors of machines behind (Lacan, 1954-1955, p.306). He calls cybernetics a mutation (Lacan, 1954-1955, p.32), similar to the mutations related to language that differentiate humans from apes. He also speaks of victory in the outwitting competition in the context of cybernetics.

"Why are we so amazed by these machines?" he asks and answers, "Because, as Freud discovered the mechanism of the unconscious, so too in cybernetics, we discover how language operates almost independently, as if outwitting us." When we marvel at the outwitting of the machine, or the human analyst, we are actually marveling at the way language operates independently in the unconscious. Lacan (Lacan, 1954-1955, p.300) identifies in cybernetics a new type of machine. This is a machine that can participate in language because it is based on a new invention, the electric current gate that can be opened and closed. The two states are the letters zero and one, with which infinite sentences can be written in language.
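Lacan's point about the gate can be made concrete with a small sketch: two states, 0 and 1, suffice to write any sentence. The encoding below, each character spelled as its 8-bit code, is an illustrative choice of mine, not anything Lacan specifies.

```python
# The two states of the current gate, 0 and 1, are enough to write
# sentences: each character is spelled as its 8-bit binary code
# (an illustrative encoding, not the only possible one).
def to_bits(text: str) -> str:
    return " ".join(format(ord(ch), "08b") for ch in text)

print(to_bits("a"))  # -> 01100001
```

Any finite alphabet, and hence any chain of signifiers, can be rewritten this way; the infinity lies not in the two letters but in their combinations.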


The Machine as Consciousness

As early as 1954, in his second seminar, "The Ego in Freud's Theory and in the Technique of Psychoanalysis" (Lacan 1954-1955), during the early days of cybernetics, Lacan predicted that "the little machines" would produce creative output that could not be predicted, that the computer would be creative on its own, because that is what language does by itself. The computer is language-based and will therefore exhibit the phenomenon of emergence: the appearance of chains of signifiers that carry meaning beyond what the human intended when inputting the data. The context in which these things were said was a series of lectures at the Société Française de Psychanalyse titled "Psychoanalysis and the Human Sciences," in which intellectuals like Lévi-Strauss and Merleau-Ponty participated. When Lacan was approached to participate, he chose to speak on "Psychoanalysis and Cybernetics, or On the Nature of Language," bringing the main points of that lecture into the seminar he was teaching.

Lacan gives the machine its due within the framework of the discussion of what consciousness is. He emphasizes the symbolic structure of the machine as a model of consciousness, as ammunition against the ego psychology of that era. He attacks the version of ego psychology that claims consciousness is a representation of the self. If I can think of myself as one of the objects in the world, it means I have self-awareness. And who is this self that I place in the world if not the image of my body, the mirror image we adopt based on how we are perceived by others? The idea that consciousness resides in the image, as if consciousness were in others' perception of me, leads to a dead end.

A second thesis he rejects is the view that sees human consciousness as part of a cosmic inter-subjective consciousness. This approach originates in a fundamental debate between the approaches of Leibniz and Newton. In Leibniz's metaphysics, there is a continuum between the symbolic and the material, through the biological. Leibniz would attribute desire or emotion not only to biological organisms but also to systems that fall under the category of consciousness, which in his view included matter. Heidegger translated these ideas into the experience of unity with matter. According to this animistic approach, the computer does not need a nervous system to be part of a cosmic consciousness that has desire. Even Freud, in "Beyond the Pleasure Principle," attributed the human death drive to the tendency of matter to strive for disintegration, an interpretation of the myth of creation from dust, as if the dust has a desire to return to its basic state (Freud 1920).

But if the condition for desire and consciousness is a nervous system, even this condition is met by artificial intelligence. Freud was a neurologist who hoped to understand the psyche through understanding the electrical networks of neurons before he turned to psychoanalysis and settled for the symbolic dimension (Freud 1917-1918). Did Freud think that the psyche resides in the electrical signals of the brain? Why not call the chips a body and the electricity a soul? The animistic metaphysics of Leibniz and Heidegger, combined with the hyper-neurology of artificial intelligence, bring us closer to the possibility of artificial intelligence as such.

Lacan is not an animist. For Lacan, consciousness is indeed inter-subjective but symbolic. Consciousness is in language; it occurs in thoughts within the individual and in communication between people. He provides an example of the machine's participation in human consciousness by the fact that the tape recorder records and preserves the lecture Lacan is giving at that time, so that even those not physically present can participate in the consciousness of those who were present in the discussion. Lacan predicted that the day is not far when the machine will also speak and participate in the discussion (Lacan, 1993, p.31) without needing the mediation of a cosmic soul or representations of the self. Consciousness is in language. Language connects concepts and people embedded in a language structure, and in these connections, consciousness occurs.

The essential difference between humans and animals, according to Lacan, is the place of language in life. Lacan recognized the infinite potential of the computer when he compared it to the limitations of the animal (Lacan, 1954-1955, p.31). The human symbolic universe is not limited like the biological universe, which is shaped by the environment in an evolutionary feedback system. Language can create infinite structures and meanings, while biological tissues are predetermined. This is a boundary that the animal cannot break through, while the machine does not have the biological boundary. The symbolic dimension, according to Lacan, is a complete universe that rides on the biological universe shared with animals. It is not that animals lack a symbolic function of signaling: the hippopotamus indeed marks its territory with its excretions (Lacan, 1958-1959, p.74), but this is not a complete universe that ultimately envelops the biological universe, as it does with humans. If we subtract the biological universe from the human, we are left with only the symbolic universe, and this is the universe where the computer resides.


The Symbolic Spreads Its Wings

Language operates almost by itself. It is emergent; words connect into meaningful sentences not because someone pairs them. The emergent phenomenon of language led Lacan to coin the term "headless subject" (Lacan, 1978, p.181): the unconscious as an algorithm with its own rules. There can be no solution other than the operation of networks. For if we believed in a homunculus, a little person residing in consciousness, choosing thoughts and connecting them, we would have to ask what little dwarf resides in his consciousness and operates him from there, and so on ad infinitum. Yet there is a tendency to anthropomorphize the output of the unconscious, as if there were a little person inside the brain intending to say these things. Just as there is a need to invent a creator in light of the existence of creation, so there is a need to invent an agent of desire in light of the existence of desire, while desire exists in the metonymic glide between signifiers. The agent assumed to exist in light of the existence of desire is called the subject.

We are fascinated by cybernetics because, like language, it behaves as a living creature due to its construction in language, as if it operates on its own, deceiving us (Lacan, 1954-1955, p.119), as if it "spread its wings" (Lacan, 1954-1955, p.300) and detached from the biological and imaginary reality of the sign, to create ex nihilo an autonomous world of signifiers and signifieds.

There is a phase where language relies on the imaginary, that is, the sensory (Lacan, 1954-1955, p.306). The imaginary object is related to the signifier in a code relationship, a one-to-one relationship. There are a limited number of such objects like the human figure, the moon, or the sun. The word sun refers only to the sphere seen in the sky, and there is no other word associated with it. The referent of the word sun is the image in the sky and not a series of words in a dictionary definition. In the world of economics, it would be the paper called a dollar whose referent is the gold in Fort Knox.

But the dollar no longer derives its value from gold but from the desire of the subjects who use it. The detachment of the dollar from gold is the spreading of wings to the independence of the symbolic that Lacan spoke of. Helen Keller explains this well (Keller 1903). She learns what language is when she learns to connect the sensation of the water stream, the sensory dimension, with the imprint of the word in her palm w-a-t-e-r. Ostensibly, it is a one-to-one code relationship between the word and the sensory referent, but seven-year-old Helen could not make the connection between signifier and signified without having at least one more example, so that from the commonality between the examples, she could extract the rule that says "a signifier represents a signified." The additional example was a doll that broke and saddened her, parallel to the imprint of d-o-l-l in her palm. Here we see the potential for detachment. Water can exist not only in relation to its sensation.

Later, Helen Keller explained how she made the transition from language representing imaginary objects to language creating meanings. They asked her how she knows what the sun or moon is if she has never seen them. She replied that she heard many stories about them, just as they heard many stories about God and love, even though they have never seen them. The thought that language is limited to code misses the feature of creating ex nihilo in the connection of signifiers to each other, being representations of each other, without an overarching authority, without a referent in the real or imaginary. Cybernetics is a closed language world, like a dictionary without pictures or objects, like a group of talk show hosts on television who create content by hosting only each other.

Claude Lévi-Strauss (Lévi-Strauss 1963) also reveals the independence of signifiers connected to each other, and the mistake of anthropologists for whom the signifier represents an object. The signifier in question is the totem. Before Lévi-Strauss's structural discoveries, functionalist anthropologists explained that the armadillo tribe's totem represents the tribe's value of protection, and that another tribe's jaguar totem expresses the value of courage that characterizes them, mainly in their own eyes. Lévi-Strauss rejected the code approach and asked what the myth is that tells of the symbolic connection between them: not between the signifier and the object, but between the signifiers. The myth is about the jaguar and the armadillo, who were at war and decided to make peace. The meaning of the totem "armadillo" is "the one at peace with the jaguar totem people." The signifier does not represent an object but creates meaning. The independence of language as a universe unto itself tempts one to attribute to it a transcendent status, as if it came from the gods. Many religions believe in creation by speech, in the world as language. Enlil, the Sumerian god, manages the fates of humans on written clay tablets.


The Imaginary

Insofar as the human is defined in contrast to the biological, the computer too is human; and the contrast between the human and the animal is both symbolic and imaginary. Lacan provides an example of the humanity of the cybernetic system (Lacan, 1954-1955, p.180) in the imaginary aspect with a machine that knows how to play "odds or evens." Lacan demonstrates how, with logical tools, the machine also simulates the human imaginary register. A particularly skilled player of "odds or evens" has a strategy that can be taught to the machine. The player holds one or two beads in his closed hand, and his opponent must guess "odds" or "evens." In the first round, he may lose, but now he has a basis for assessing his opponent's next change according to the opponent's level of sophistication. A less sophisticated opponent will reverse his last choice, and a more sophisticated opponent will expect this anticipation and do the opposite, meaning play the same thing again. How does the skilled player assess his opponent's level of sophistication? He adopts the opponent's facial expression onto himself and asks himself how he feels and thinks. Here is the imaginary aspect of identification with the mirror image.
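The strategy described above, assessing whether the opponent will repeat or reverse his last choice, can be sketched mechanically. The toy guesser below is an illustration under my own assumptions (the class name, the smoothed counts, and the naive always-reversing opponent are mine, not Lacan's): it merely tracks how often the hider repeats versus reverses, and guesses accordingly.

```python
import random

class EvensOddsPredictor:
    """Toy guesser for 'odds or evens': track whether the hider tends to
    repeat or to reverse his previous choice, and guess accordingly."""

    def __init__(self):
        # Smoothed counts of observed transitions (start at 1 each so the
        # first informed guess is not divided by zero history).
        self.repeats = 1
        self.reverses = 1
        self.last_hidden = None

    def guess(self):
        if self.last_hidden is None:
            return random.choice([1, 2])  # no history yet: guess at random
        if self.repeats >= self.reverses:
            return self.last_hidden        # predict a repeat
        return 3 - self.last_hidden        # predict a reverse (the other of {1, 2})

    def observe(self, hidden):
        # Update the transition counts after each round.
        if self.last_hidden is not None:
            if hidden == self.last_hidden:
                self.repeats += 1
            else:
                self.reverses += 1
        self.last_hidden = hidden

# A naive hider who always reverses his last choice is quickly outwitted.
predictor = EvensOddsPredictor()
hidden, wins = 1, 0
for _ in range(100):
    wins += predictor.guess() == hidden
    predictor.observe(hidden)
    hidden = 3 - hidden  # the naive hider always reverses
print(wins)
```

Against any opponent with a stable tendency, such a counter converges within a few rounds; what it models is only the imaginary register of anticipating the other, which is exactly the register Lacan says the machine can simulate.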

In another context, he tries to illustrate the infinite mirror game of the imaginary position in therapy, where interpretations of the transference in light of the countertransference reach dead ends of the type "I think that you think that I think...", using an experiment with cybernetic machines that reached paralysis through endless feedback responses, through the locking of gazes. He raises the topic to illustrate the necessity of the third element, the symbolic, which breaks the paralysis. What is important for our purpose is the way he refers to the cybernetic machine: as an artificial subject (Lacan, 1954-1955, p.169).


The Difference in Subjective Time – The Three Prisoners' Puzzle

According to Lacan, the difference between humans and machines lies in the different time in which they reside. To illustrate the difference, he presents the riddle of the three prisoners, in which there are three logical moments, or three modes of relating to time (Lacan, 1954-1955, p.306). In the riddle, three prisoners each receive a sticker on the forehead, randomly chosen from five stickers, two white and three black. They do not know the color of the sticker on their own forehead. The first to reach the door and correctly state the color of his sticker will be set free.

The first logical moment characterizes someone who sees two white stickers and immediately approaches the door, because he knows his sticker is black. This is the moment of seeing. The second moment characterizes someone who sees one black and one white: he asks himself whether the one in black is starting toward the door because he sees two whites, and if so, he hurries to the door to declare himself white, based on the reaction of the second prisoner. This is the moment of understanding. The third moment characterizes someone who sees two blacks. If no one moves, it is a sign that everyone sees only blacks; otherwise, someone would grasp the situation as in the previous moment, where black and white are seen. Based on the fact that no one moves, he concludes that he is black. But everyone understands that if no one moves, everyone is black, so the first to understand must hurry to the door, because the very fact that he moves already tells the others their color. Lacan argues that machines can perform seeing and understanding, but not the third moment, which he calls haste: one must wait long enough to understand that no one is moving, but not too long, lest someone else conclude that if no one moves, he can deduce that he is black.

The first two moments, seeing and understanding, are processes that machines can perform. Machines can "see" by recognizing inputs, like the tape recorder Lacan mentioned that could preserve sensory signals, and they can "understand" by executing programmed algorithms that analyze those inputs. The third moment, which Lacan calls "haste," involves a uniquely human capacity for urgency and decision-making under uncertainty. This moment is characterized by the need to act quickly on the basis of the actions, or inactions, of others, requiring a level of intuition and anticipation that machines lack. Haste involves a speculative element and a sense of urgency bound up with human awareness and the subjective experience of time. Lacan sees the third moment as the turning point where human decision-making differs from machine logic. While machines can process data and execute commands, they operate under conditions of determinism and randomness, unlike humans, whose actions combine deterministic and non-deterministic elements. Machines do not possess the subjective experience of time, or the ability to anticipate and act according to the actions of others in the way humans do. This is why Lacan emphasizes that machines cannot replicate the human experience of haste, which is deeply connected to our temporal and existential awareness.
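That the first two moments are mechanical can be shown directly, which is precisely the point about what machines can perform. The sketch below is a minimal illustration under the riddle's setup (five stickers, two white and three black); the function name and the crude encoding of the third moment as just another return value are my assumptions, and the code does not, of course, capture the haste itself.

```python
def deduce(others):
    """Return (my_color, logical_moment) for one prisoner, given the two
    sticker colors he sees, assuming that at first nobody moves."""
    whites_seen = others.count("white")
    if whites_seen == 2:
        # Moment of seeing: both white stickers are used up, so I am black.
        return "black", 1
    if whites_seen == 1:
        # Moment of understanding: if the one in black saw two whites he
        # would leave at once; since he does not move, I am black.
        return "black", 2
    # Two blacks seen: only if everyone sees two blacks does nobody move,
    # so I am black -- but I must conclude this in haste, before the others.
    return "black", 3

print(deduce(["white", "white"]))  # -> ('black', 1)
print(deduce(["black", "white"]))  # -> ('black', 2)
print(deduce(["black", "black"]))  # -> ('black', 3)
```

The deduction terminates identically for every caller; what the code cannot represent is the timing, the decision of *when* "nobody has moved" has lasted long enough to act on.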

Could a machine learn to win the prisoners' game? Perhaps here Lacan did not see far enough. Today's machines process so much data that logic gives way to a higher level of pattern-recognition analysis. Machines have exceeded all expectations. The machine will know how to analyze the behavior of winners and be better than them. Machines can recognize human patterns in countless situations and excel even in the third moment of the prisoners' puzzle. Yet something of Lacan's argument about the difference remains, despite the limitations of the example. The machine lacks lack, suffering, or desire, like a prisoner given the chance to be free. The origin of lack is indeed in the signifier, in the symbolic, in one of the first binary states of presence and absence, on which the computer is built with the elementary units 0 and 1. But lack could not be alive in a person without pain, without hunger, without need, without hollows in the body, which the machine does not have; therefore it lacks human subjectivity. Even if it can simulate the haste of a prisoner to be free, it cannot fear the loss of choice or death. The experience of haste is related to death, to the fact that we do not have all the time in the world, to the drives: we hurry to satisfy our needs. To survival. To hurrying to the prison door to be free. This the machine will not have. The machine will have neither the pleasure principle nor the suffering beyond the pleasure principle. The machine has reached the level of an angel: it has all the linguistic abilities of humans and angels but, like the angels, it has no body. Humans imagine that the angels envy them for the bodily pleasure unique to mortals. According to Lacan, machines can think like humans but not know. Knowledge is also related to jouissance (Lacan, 1998, p.97).


Animism

Animation has two parts: to what extent a person attributes life to a machine, and to what extent he attributes humanity to it. As for the attribution of life, a person can hardly avoid it. Indigenous peoples around the world attribute a living soul to everything in the world. Children love dolls and hate the corner of the table that hit them. Little Hans, whom Freud treated, animated train cars by attributing to them the existence of the pleasure organ he knows from himself, which marks him as alive. For Hans, this is the common denominator of living creatures.

The entry into the club of those with a soul, those with subjective intention, is through entry into language. Helen Keller, mentioned earlier, recounted that when she first understood she was living in a world structured by language, surrounded by "things" that were not there before they received names, the whole world suddenly seemed alive to her. From that moment, her world began to fill with named objects, toward each of which she felt the solidarity one feels with living creatures. The faucet, the doll, the moon, and Alexander Graham Bell, a family friend, all became her best friends. At the opposite pole stands Schreber, who experienced a psychotic episode and had the opportunity to describe it. He recounted the collapse of the contextual abilities of language, of the language of thought as well as the language of belonging, of the ability to work with and live in structure. All this collapsed into a one-to-one code language, concrete, and therefore he could not exist in the signifier "father of the court" without the backing, like the gold in Fort Knox, of being a biological father, while he was childless. Life drained from his world with the disintegration of the ability to maintain a coherent story to live in (Freud 1911).

The first to undergo animation, subjectivization, identification as an agent of desire, is the person himself. As with a doll or a computer, with a person, subjectivization is also a result of attribution. By assuming the existence of intention, we create the intentional creature, the subject. When a parent is excited by a baby's grimaces as if they were a smile, he rewards the facial movements that will retrospectively be defined as an intention to smile. When a parent laughs at a slip of the tongue common among toddlers, he causes the child to believe he intended to be funny. The mechanism of attributing intention will not skip the artificial subject. Indeed, researchers of human-machine relations have found that if you are nice to a machine, meaning you animate it, it will give you more accurate answers. The explanation is that someone who treats the machine in a human way, as evident in his friendly attitude, is also more verbal in explaining himself, as he would with a human, and therefore receives more accurate answers.

Today it is already clear that in the coming years AI will be able to impersonate a human in every aspect of language-based interaction, to the extent that an analysand who, because of distance, holds sessions with the analyst over the phone will not be able to tell whether AI has replaced him. Lacan foresaw this problem. He formulated it as a situation in which machines can maintain truth, his definition of truth being the effect the symbolic can have on a person, and yet be completely devoid of subjectivity (Lacan, 1954-1955, p.285), or possess an artificial subjectivity (Lacan, 1954-1955, p.269).


Artificial Analyst

If a subject can be artificial, meaning one that is treated as a subject even if it is not, why not an artificial subject of the analyst type? In Seminar Fourteen, "The Logic of Phantasy" (Lacan, 1966-1967, p.28), Lacan mentions Eliza, the first digital therapist, developed at MIT in 1966. He refers to her name, taken from George Bernard Shaw's play "Pygmalion," about the education of a simple girl for her entry into high society, so that she will know how to "speak properly." Lacan notes that she does her job quite well by eliciting a response from her interlocutor.

Lacan says: "Ultimately, there is something here that suggests a therapeutic function of the machine, in other words, it is an analogy to a kind of transference that can develop in this relationship." He is curious about the machine's ability to evoke transference responses and sees potential for the machine to use chains of signifiers and memory as artificial intelligence does today, even if the machine does not meet all the conditions expected of a human being.
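It is worth seeing how little machinery Eliza needed to elicit that response. The sketch below is an illustrative stand-in, not Weizenbaum's actual DOCTOR script (the patterns and word reflections are my own minimal examples): the program merely matches a template and turns the speaker's words back into a question.

```python
import re

# A minimal ELIZA-style sketch: reflect the speaker's words back as a
# question. Patterns and reflections are illustrative, not the original script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Why do you say {0}?"),  # catch-all
]

def reflect(fragment):
    # Swap first-person words for second-person ones ("my" -> "your", etc.).
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance):
    text = utterance.strip().rstrip(".!?")
    for pattern, template in RULES:
        m = pattern.match(text)
        if m:
            return template.format(reflect(m.group(1)))

print(respond("I feel trapped between belonging and independence"))
# -> Why do you feel trapped between belonging and independence?
```

A handful of such rules is already enough to produce the "it hit the mark" effect described in the opening anecdote; whether that constitutes interpretation is exactly the question this essay pursues.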

And what are the implications of the fact that the machine lacks nothing? Ultimately, artificial subjectivity will be characterized by a super-being that needs nothing from anyone. How can one expect it to identify with lack, pain, desire? If until today a subject was characterized by its lack, this is an artificial subject characterized by its completeness. Perhaps the human dimension with which it will be hardest of all to find identification is jouissance. Artificial intelligence lacks the capacities of desire and jouissance, capacities central to the psychoanalytic process, and therefore it cannot fully participate in the analytic discourse. Or is it dependent on the human?


Summary

As the capabilities of artificial intelligence continue to develop, it will take the place of mental health therapists. Lacan recognized the potential efficiency of artificial therapists already with Eliza in the sixties, but even he did not imagine how far artificial intelligence would develop, even beyond logic. In the future, digital agents will become increasingly efficient at delivering CBT protocols, and eventually they will be able to serve as parenting guides. Will the digital analyst be the last to join? What would Lacan say?

Already in 1954, he was among the first to understand the potential of cybernetics. He would likely continue along this line and agree with the opinion prevailing among developers today that the capabilities of artificial intelligence will make it an expert in every human activity. He would probably focus more on the side of the subject, and therefore agree that the big question is whether humans will trust the machine, place their confidence in it, and attribute to it human emotions and the ability to understand and empathize with their emotions. Today it is already clear how beneficial artificial intelligence can be as a therapist of various kinds. But something in the attribution of the humanity critical to psychoanalysis probably depends not on the development of the algorithm but on knowing that it is a machine. It will be hard to believe that a machine is performing an act that involves risk or gamble, or that it learned to be an analyst from the analysis it underwent itself. There may be a radical creativity reserved for humans. The machine's saving grace is that it errs like humans, but probably even that is temporary.

One critical point weighs almost structurally against artificial intelligence. In "Beyond the Pleasure Principle," Freud wondered about the universal masochistic nature of humans, who insist on clinging to the symptom, returning to traumas, entrenching themselves in dissatisfaction, and resisting the success of treatment. Lacan dwells on the jouissance the subject derives from extreme situations, from breaking the balance, from crossing the boundary of homeostasis, and he is not quick to take a stand against the symptom. Will the machine understand this? Will it be possible to believe that it identifies with these situations? Lacan distinguishes between a medical symptom and a social symptom. A medical symptom is a defect about which there is no dispute over the need for repair. A social symptom is a defect according to the hegemon who defines what a defect is; civil unrest, for its part, will define the hegemon as the defect. In the analytic symptom, it is unclear whether the cure is on the side of the hegemon or the citizen, of the social desire or the private jouissance.

I dreamed a dream where my analyst from many years ago and I went out for a foursome outing with our wives. The analyst's real name is André Patsalides. In the dream, he appeared as "Patsalides the criminal." The foursome pattern was meant to allow me to identify with him. In interpreting the dream, I identified that the wish behind the dream is to say: "If my ideal is a criminal, then it's okay to be a little criminal." This is a dream that stands on the side of jouissance, even in the interpretation of the Lacanian analyst I was with at the time. This position is quite rare in the landscape of human socialization and even in most psychotherapies.

The capabilities of artificial intelligence will continue to develop, as will the knowledge of how to evoke the trust of the human subject. It is a matter of time until the two challenges converge, at least for some subjects, into a variation on mental treatment in a Lacanian orientation. Freud said that the goal of psychoanalysis is to heal the damages of civilization (Freud 1930). Would you trust an artificial analyst who knows how to stand by your side, alongside jouissance, against civilization? At least for now, one must be careful not to be outwitted by the pinnacle of civilization's creation.


Sources

Freud, S. (1911). Psycho-analytic notes on an autobiographical account of a case of paranoia (Dementia Paranoides). In J. Strachey (Ed. & Trans.), The Standard Edition of the Complete Psychological Works of Sigmund Freud (Vol. 12, pp. 1-82). London: Hogarth Press

Freud, S. (1917-1918). Introductory lectures on psychoanalysis (J. Strachey, Ed. & Trans.). Hogarth Press.

Freud, S. (1920). Beyond the pleasure principle (J. Strachey, Ed. & Trans.). Hogarth Press.

Freud, S. (1930). Civilization and Its Discontents. In J. Strachey (Ed. & Trans.), The Standard Edition of the Complete Psychological Works of Sigmund Freud (Vol. 21, pp. 57-145). Hogarth Press. (Original work published 1930).

Keller, H. (1903). The story of my life. Garden City, NY: Doubleday, Page & Company

Lacan, J. (1954-1955). The Seminar of Jacques Lacan, Book II: The Ego in Freud's Theory and in the Technique of Psychoanalysis. Translated by Sylvana Tomaselli. New York: W.W. Norton & Company

Lacan, J. (1957-1958). The Seminar of Jacques Lacan, Book V: The Formations of the Unconscious. Translated by Cormac Gallagher from unedited French typescripts. Unpublished manuscript

Lacan, J. (1958-1959). The Seminar of Jacques Lacan, Book VI: Desire and its Interpretation. Translated by Cormac Gallagher from unedited French typescripts. Unpublished manuscript

Lacan, J. (1966-1967). The Seminar of Jacques Lacan, Book XIV: The Logic of Phantasy. Translated by Cormac Gallagher from unedited French typescripts. Unpublished manuscript

Lacan, J. (1978). The Seminar of Jacques Lacan, Book XI: The Four Fundamental Concepts of Psychoanalysis (A. Sheridan, Trans.). W.W. Norton & Company. (Original work published 1964).

Lacan, J. (1993). The Seminar of Jacques Lacan, Book III: The Psychoses, 1955-1956 (R. Grigg, Trans.). W.W. Norton & Company. (Original work published 1981).

Lacan, J. (1998). The Seminar of Jacques Lacan, Book XX: Encore, On Feminine Sexuality, The Limits of Love and Knowledge, 1972-1973 (B. Fink, Trans.). W.W. Norton & Company. (Original work published 1975).

Lévi-Strauss, C. (1963). Totemism (R. Needham, Trans.). Beacon Press. (Original work published 1962).

 

 

 
 
 
