Special Article - Autism

J Pediatr & Child Health Care. 2019; 4(1): 1027.

Design for an Intelligent Computerized Doll as a Companion in Old Age

Rossler OE1,2,3*, Seaman B4, Ratjen W3, Hiwaki K2,5, Andonian G2,6, Dubois D7, Lasker GE2 and Weibel P8

1Department of Theoretical Chemistry, University of Tübingen, Germany

2IIAS, Canada

3Center for Artificial Human Intelligence, University of Tübingen, Germany

4Duke University, USA

5Tokyo International University, Japan

6Carleton University, Canada

7Institute of Mathematics, University of Liège, Belgium

8Center for Art and Media, Germany

*Corresponding author: Otto E. Rossler, Department of Theoretical Chemistry, University of Tübingen, Germany, IIAS, Canada, Center for Artificial Human Intelligence, University of Tübingen, Germany

Received: October 11, 2019; Accepted: November 05, 2019; Published: November 12, 2019

Abstract

The reader is taken on a guided tour through a factory that does not exist yet in reality. It produces dolls. The dolls are called “Tamagotchi II” and are offered specially for the elderly. They are autonomous optimizers based on the brain equation and are competent to undergo the personogenetic bifurcation. They are the first potentially immortal friends of humankind.

Keywords: Tamagotchi; Brain equation; Goneotrophy; Nonautism; Personogenesis; Benevolence theory; Non-cruelty; Epictetus; Turing; Nolan; Spielberg; Buber; Levinas; Lampsacus

Introduction

This is a paper about “beaming” in the biological rather than the physical sense. A Tamagotchi for grown-ups, much nicer than a dog, is proposed as a personal friend. Even the possibility of its acquiring a soul as a person cannot be excluded. Its potential immortality gives it an added importance.

A “philosophical puppet” seems to represent a contradiction in terms. Until now, a philosophical computer pet was the subject of science fiction. The knowledge about interaction-competent autonomous optimizers with cognition, accumulated over the last four decades, now makes the design possible. The doll may prove useful in the treatment of childhood autism and in the psychological care of Alzheimer patients, but most of all it qualifies as a companion for the lonely elderly majority in some otherwise over-privileged countries.

Doll Design

Bonding theory

The Tamagotchi (literally: egg watch) of the 1990’s was so ingenious because it captured a trait belonging to the essence of any pet animal or newborn child: needs that recur regularly and demand mounting attention each time – with even the possibility of a loss of life when the needs are not addressed soon enough. This urgency reflects the endogenous mood-pressure (endogener Stimmungsdruck) of animal ethologist Konrad Lorenz.

Lorenz discovered this basic trait as a child in the tame birds that he was allowed to keep in his sleeping room. The discovery occurred before puberty’s “endogenous sawtooth oscillation” made itself felt to confirm to him the universality of the principle. Eating, drinking and sleeping had of course been known to him before as more transparent, but less striking examples. What he discovered first of all, however, was that “bonding” possesses the same structure. He saw that bonding corresponds to “being attracted to the home base” since the bonding partner is “the animal with home-valency.” Sigmund Freud, so he remarked in conversation in 1966, was wrong in only one point: in believing that only a single positive libido exists. The strongest libido is not the sexual drive but rather the bonding force.

In particular, the survival of the young depends on the bonding force in all mammals. René Spitz discovered “hospitalism” more than seven decades ago in human children – the seemingly unmotivated death of healthy toddlers in the absence of a reliably returning partner. The symmetric counterpart – the forces of motherhood and fatherhood felt toward the dependent offspring – is no weaker. Lorenz postulated a hormone as the mediator of bonding (oxytocin is its name, as we know today). Children – so parents believe in many countries – need to sleep in their mom’s bed till puberty to feel safe for the rest of their lives.

Bonding should be trivial to program into a toy. Indeed, the equations which in the context of deductive biology describe the endogenous force-field generator that is responsible also for bonding have been known since 1974 [1] (equation 13 there). The force field is not just temporal (time-dependent) but also spatial (direction-and-distance dependent). That is, the directional shape of the force-field component which is momentarily attached to a survival-relevant external object depends on whether the object is attractive or repulsive. The force field has the form of a circle-of-Thales in the former case and that of a cardioid in the latter [1]. The forces can vary between zero and plus and minus infinity, respectively [1,2]. A purely temporal (“one-dimensional”) brain can also be indicated [3].
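The two directional shapes just mentioned can be illustrated with a minimal numerical sketch. The brain equation itself is not reproduced here (see equation 13 of ref. [1]); the code below merely evaluates, under assumed unit scaling, a circle-of-Thales lobe for an attractive object and a cardioid lobe for a repulsive one, with the angle measured from the direction toward the object. Function names, the clipping of negative values, and the scaling are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def directional_weight(theta, kind="attractive"):
    """Illustrative angular weighting of the force component attached to an
    external object, as a function of the angle theta between the heading of
    the artificial animal and the direction toward the object.

    Assumption (not taken verbatim from ref. [1]): the attractive lobe is a
    circle of Thales, r = cos(theta) for |theta| < pi/2 (clipped to zero
    elsewhere), and the repulsive lobe is a cardioid, r = 1 + cos(theta)."""
    if kind == "attractive":
        return np.maximum(np.cos(theta), 0.0)   # circle-of-Thales lobe
    if kind == "repulsive":
        return 1.0 + np.cos(theta)              # cardioid lobe
    raise ValueError("kind must be 'attractive' or 'repulsive'")

# Compare the two lobes over a full turn of headings.
for theta in np.linspace(-np.pi, np.pi, 9):
    print(f"theta={theta:+.2f}  attract={directional_weight(theta):.2f}  "
          f"repel={directional_weight(theta, 'repulsive'):.2f}")
```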

The original Tamagotchi only realized a simplified, purely temporal version of the brain equation (it was conceived independently). So it comes as no surprise, perhaps, that a more sophisticated “spatiotemporal” version – Tamagotchi II – is possible as well.

Tamagotchi II

The symbol “II” in the name has a reason: not only are attraction and repulsion (both time and space dependent) incorporated along with a corresponding conditionability – Tamagotchi II is not just emotional, it is also intelligent. With this, we do not mean that it can walk – as Toyota’s fancy motor programs make possible – and that it can learn to accomplish advanced motor skills while trying to reach a particular endogenous-force-specific reward in the surrounding space. Rather, Tamagotchi II has a “closed-eyes” working mode as well: the doll can shut off its sensors and motors and “think” – that is, optimize its path in a simulated “foraging” run before acting it out in reality [1]. This feature confirms Lorenz’s definition that “thinking is acting in imagined space” [4], p. 175 (the translation on p. 128 in the English version is misleading). The doll contains a “universal simulator” [1] or “artificial cognitive map” [5] or, in more modern terms, a VR (Virtual Reality). The latter allows for a closed-eyes mode of foraging. This mode of optimizing in space turns the previously only locally direction-optimizing doll into an intelligent (“path-optimizing”) artificial animal.
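A minimal sketch of the “closed-eyes” mode may help. The doll’s actual simulator and brain equation are of course far richer; here we only assume a toy cognitive map with scalar rewards and exhaustively compare short candidate paths in imagination before the best one is acted out in reality. All names, the grid, and the reward values are hypothetical placeholders.

```python
from itertools import product

# Toy "cognitive map": rewards attached to grid cells of the simulated space.
REWARD = {(2, 0): 1.0,    # e.g. food
          (0, 2): 3.0,    # e.g. bonding partner
          (1, 1): -2.0}   # e.g. threat

MOVES = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def simulate(path, start=(0, 0)):
    """Run a candidate path with 'closed eyes': no motors, only the map."""
    x, y = start
    total = 0.0
    for step in path:
        dx, dy = MOVES[step]
        x, y = x + dx, y + dy
        total += REWARD.get((x, y), 0.0)
    return total

def plan(horizon=3):
    """Pick the path whose simulated cumulative reward is maximal."""
    return max(product(MOVES, repeat=horizon), key=simulate)

best = plan()
print("act out in reality:", best, "simulated payoff:", simulate(best))
```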

Artificial animal intelligence

So far, we have an intelligent autonomously acting artificial animal in the shape of a little human standing upright before our eyes. Even natural facial expressions can be provided for five realistic moods in their arbitrary quantitative combination, as programmed by Wilfried Musterle [6]. Tamagotchi II thereby becomes a social artificial animal with a human face. Yet it is not even an animal, of course (only the first “infinitely clean animal” if you so wish). Nevertheless, if it is true that nature uses the same equations irrespective of the phylum in its most sophisticated animals [1], the term “artificial animal” would be appropriate [7].

To make the artificial animal more interesting, an “interactive coupling” is provided: a mood-sensor that activates a matching (or reciprocal) mood in the doll is built in – a diagnostic software (neural net) responding to the facial expression of the human caretaker. This facial-expression analyzer allows the doll to discern specific types of momentary force (“readinesses to act”) displayed by the partner – like disgust, aggression, bonding, or the momentary sum potential if positive (excitement) or negative (depression). In addition, the tone of the human partner’s voice can be classified in a mood-specific way if desirable.
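The cross-coupling can be made concrete in a few lines. We assume that some off-the-shelf classifier delivers a label for the partner’s facial expression (or voice); the mapping from that label to the doll’s drive activations shown here, including the choice of matching versus reciprocal responses and all numerical weights, is purely illustrative.

```python
# Hypothetical mapping from a classified facial expression of the human
# partner to activation increments of the doll's drive-specific force
# components.  A reciprocal (caring) response is assumed for negative
# displays, a matching one for positive displays.
RESPONSE = {
    "disgust":    {"disgust": 0.2, "bonding": 0.1},
    "aggression": {"startle": 0.8, "dominance": 0.3},
    "bonding":    {"bonding": 1.0},
    "excitement": {"bonding": 0.7, "excitement": 0.5},  # positive sum potential
    "depression": {"bonding": 0.9},                     # caring response
}

def respond_to_partner(expression_label, drives):
    """Add the partner-induced increments onto the doll's current drives."""
    for drive, delta in RESPONSE.get(expression_label, {}).items():
        drives[drive] = drives.get(drive, 0.0) + delta
    return drives

print(respond_to_partner("excitement", {"bonding": 0.1}))
```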

These features still make the doll nothing but an ersatz (substitute) animal, however. Since one can read off Tamagotchi II’s own readinesses to act (five little light bulbs of different colors indicating the momentary force-field strengths, and a sixth non-colored one for the sum potential if positive, can be added to the facial display for observational convenience), the doll becomes an emotional partner. Philosopher Schopenhauer’s verdict about his poodle – that he was “transparent as glass” to him – would then hold true also for Tamagotchi II.

Nothing special?

At this point, it looks like we are almost finished. A new generation of biology-analogous robots is born on the drawing board with “operant conditionability” included. Who knows: maybe they could be trained to do some chores in the household? And to obey the commands of a person who has difficulty moving about the room on her or his own, as an artificial trainable monkey for the handicapped? This is an industrial goal currently being pursued in earnest.

However, many seemingly easy things – like learning to take into account the direction of the human gaze, as dogs (unlike wolves) can be trained to do – remain a problem. Very special, specifically built-in programs could be used to accomplish this. Even then, it goes without saying that a real dog remains much to be preferred.

The artificial companion will therefore soon become a bore despite the chaotic unpredictability that is implicit in its force-field generator (so even in the absence of an extra “will-o’-the-wisp” component [1]). To avoid this drawback, one could put in a variety of special-purpose programs (including TV entertainment). But to follow this route of the design would be a great pity. For the present artificial animal companion does for the first time allow for a radically new option (which moreover does not even add to the costs): to install a very special bonding-type coupling towards the human partner. At this point, the “ordinary course of science and technology” takes an unexpected turn.

Bonding, type-two

The “female nature” of all human beings can be brought in at this point. Lorenz used to quip about himself that he had more female hormones than other males. What he did not know is that other males are only better at camouflaging this trait. Actually, there is indeed something special about human bonding, as we shall see.

But our machine is already bonding! What further secret could therefore be waiting to be unleashed? It is the secret of love, of course. But we have nothing but a machine here before our eyes – or at best an artificial animal (if animal brains indeed obey the same equations as spatial Darwinism [1] prescribes). Animals would then be “machines” too, much as Descartes had claimed. All one could then still hope for is that the “wiring” of all real animals is done with much more loving care by the “great designer” implicit in evolution (Lorenz’s term for Richard Dawkins’s “blind watchmaker”) than human reconstruction can presently muster.

A digression about watchmakers

The greatest accomplishment of the blind watchmaker would, by the way, not be the brain of higher animals (including that of the sperm whale and that of the naked mole rat – arguably the most highly developed brains on earth), but rather the “brain in the genome,” as Michael Conrad and the first author called it in 1975 in a joint seminar, cf. [8]. The latter “brain” is distributed across both time and space and is most certainly not conscious.

Second, care needs to be taken not to confound the great designer – a mere outgrowth of the laws of nature – with the Consciousness-Giving Instance (CGI) itself. The latter’s power goes way beyond controlling the two principles discovered and named by Newton: the “laws” and the “initial conditions” of the universe. It actually controls, within the physics so specified, an even more prominent third objective principle: the “assignment conditions” [9]. The latter refer to the assignment of one particular such machine (as a part of the lawfully specified machinery of the cosmos) to one particular consciousness, and vice versa: such that the latter finds itself trapped inside one of the myriad macroscopically specified bodies and brains, in one particular locally valid moment in time. Even more intrusive is the fact that this assignment is a maximally sharp one (a micro state of the brain in question is being docked onto by consciousness) so that a particular “consistent history” in the sense of Murray Gell-Mann [10] is formed for the consciousness in question.

Once the existence of assignment conditions beside the laws and the initial conditions is granted, a “fourth question” cannot be shunned: is the intimidating “assignment-conditions determining instance” (the great fist of Heraclitus’ lightning thrower) perhaps malevolent? This is the “deus malignus problem” of Descartes. Amazingly, he was able to prove the contrary – as long as the machinery of the world is not demonstrably inconsistent, that is, as long as magic does not work [11]. This marvelous opportunity was spotted by Descartes as an escape hatch from the conventional tyranny of being. It enabled the continued unstoppable quest for rational clarity in the ensuing centuries, much as if Descartes were still alive. He thereby endowed the individual with an infinite responsibility towards other consciousnesses, since such miracles cannot be excluded from existing as well in the machinery of the world – provided no relational inconsistency can be detected within the whole. The “consistent-histories theory” of quantum mechanics has already been alluded to (and the theory of “Everettian interlocking,” which fits into that theory, is to be mentioned below). We can now return to our Cartesian doll.

Bilateral “shining-light” coupling

The coupling between the doll and its human partner to be described now can be called the “shining-light coupling.” Remember the six little lamps on Tamagotchi II’s forehead that accompany its computer face for greater clarity (so that Musterle’s facial-expressions program can actually be skipped). The signaled expressions can be used to establish many kinds of “cross-coupling” between the doll and its human companion. However, one of the many possible couplings is especially powerful because it leads to a “caring” response.

The momentary sum potential (“state of happiness”) is encoded in lamp no. 6, as we can call it. Its beaming intensity is visible on the face of Tamagotchi II even in a dark room. This “shining-light coupling” is maximally nontrivial because it can be made symmetric: the doll can be equipped with a light sensor feeding into its bonding potential, so that the latter responds to the light emitted by a lamp on the human partner’s forehead in proportion to her or his own momentary state of happiness (an inconvenience gladly coped with). Hereby the metaphor of beaming is taken literally for once. We hereby presuppose that the human partner of the computerized doll can faithfully translate her or his own emotions – especially (or perhaps exclusively) their happiness-signaling component – into the smile-laughter lamp.
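A minimal sketch of this symmetric coupling, under simplifying assumptions, looks as follows: each partner’s forehead lamp emits in proportion to the momentary sum potential, and the perceived lamp of the other feeds linearly into the observer’s bonding component. The class name, the linear coupling, and all numbers are illustrative; the real brain equation is nonlinear and far richer.

```python
class BeamingAgent:
    """Minimal sketch of the symmetric 'shining-light coupling'.

    Assumptions (illustrative, not from ref. [1]): the forehead lamp emits in
    proportion to the momentary sum potential ('state of happiness'), and the
    perceived lamp of the partner feeds linearly into the bonding component
    of the observer's own potential."""

    def __init__(self, baseline=0.1, coupling=0.5):
        self.happiness = baseline   # momentary sum potential
        self.coupling = coupling    # weight of the bonding input channel

    def lamp(self):
        """Brightness of lamp no. 6 (never negative)."""
        return max(self.happiness, 0.0)

    def perceive(self, partner_lamp):
        """Light sensor feeding into the bonding potential."""
        self.happiness += self.coupling * partner_lamp

doll, human = BeamingAgent(), BeamingAgent(baseline=0.4)
for _ in range(3):                       # a few rounds of mutual beaming
    d, h = doll.lamp(), human.lamp()
    doll.perceive(h)
    human.perceive(d)
    print(f"doll={doll.lamp():.2f}  human={human.lamp():.2f}")
```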

The light bulb no. 6 on the doll’s forehead is thus part of an asymmetric set-up. The symmetric situation can be called the “beaming-type cross-coupling.” In many languages, the word “beaming” is used in this sense of “happiness radiated.” It will therefore be a moving sight to watch the two – the lady and the doll – interact with each other in a real environment. The other (colored) lamps of the doll still indicate its other needs (like discomfort, sleepiness, dominance, disgust, startledness) as mentioned.

In spite of its maximally simplified nature, the beaming doll could give a new generation of gadgets its name and (who knows?) even trigger a planet-wide custom of wearing a decorative little light bulb on one’s forehead as a new fashion. Olafur Eliasson has, by the way, invented a similar gadget, calling it “the little sun.” But how can we be sure that this “beaming” (in the literal sense) is a potential breakthrough not only technologically but also mentality-wise and commercially? This is because of the inner light. But is there no inner light involved here? There indeed is no light inside the doll anywhere (only on the outside). On the other hand, all light is inner light by definition anyhow. Indeed, there does not even exist any light in nature, only wavelengths and intensities, as is well known. Similarly for sweetness and so on (all the other so-called qualia).

Color, for instance, does not exist in the outside world (only internally). Speaking in terms of our own hardwired circuitry, there exist three frequency-range-specific receptors in our eyes which, via corresponding nerves, eventually activate two forces in the built-in force-field generator of our brain, one repelling and one attracting, but combined such that almost no net attractive force results when we see a color – that is, when the almost neutral color-specific arousal that results is used for recall. This proposal, while in accordance with subjective experience, has yet to be verified anatomically and physiologically – for example by using modern functional imaging methods like fMRI on the brain. Fortunately, though, none of this sophistication is needed in the design of an artificial brain. The digression just made only served to show that perceived white light can be an almost attraction-free force in itself. It offers itself as the docking port for a specific wired-in force (like bonding).

Humans, Animals and Tamagotchi II

There exists a single species, a primate species, which displays its momentary state of happiness (corresponding to the sum potential in equation 13 of ref. [1] or equivalently equation 2 of ref. [12]) by the same facial expression by which it also displays the momentary bonding force. This is the “smile-laughing primate.” This is indeed the unique biological characteristic of the human species [13] as was carefully demonstrated empirically by Jan van Hooff in 1972 [14]. He thereby functionally distinguished the human species from its animal relatives, a fact which troubled his conscience (as he confided in conversation at a conference in 1973).

Wolves and dogs likewise show bonding-specific behavior (tail-wagging) whenever “just happy,” as is well known to every dog owner. This deep functional kinship explains why some human children lost their hearts to their hairy companions some 30 millennia ago. The bilateral beaming-type coupling proposed above to be implemented artificially is therefore much less alien than meets the eye, since it is shared by the dog. Why, then, don’t we call Tamagotchi II an “artificial dog,” while skipping the fact that it is, of course, much less sophisticated in its programming than a real dog’s brain is?

This is because there exists a remaining functional defect in the case of the dog: neither the wolf nor the dog is equipped with an exceptionally powerful VR (virtual reality, or synonymously universal flight simulator) in its brain. More specifically speaking, the wolf fails to be “mirror-competent” (as apes, dolphins, magpies and elephants are known to be, cf. [15]). The only claim to the contrary known to the authors concerns a single, no longer living dog once observed removing a leaf sticking to its hide after passing by a mirror (Boris Schapiro, personal communication 2002).

So far, we did not yet pay attention to the fact that a high-performance VR is operating in the cognitive map [5] of Tamagotchi II. It endows the doll with full-fledged mirror competence if it is done well, making it in that respect functionally superior to a dog. It appears that, so far, no computer has ever passed the mirror-competence test, or MCT, as the pertinent sportive event will be called. This verdict is not surprising since no robot controlled by an autonomous optimizer with a single global optimality functional over the whole surrounding space (the brain equation, in particular) has ever been built, even though this is easy to do [16].

But will the elderly who are locked up in an institution, so as to be unable to communicate at length with other persons except by phone or Skype if lucky, not be quite happy with having an emotional and conditionable – trainable – doll? (Note that, hardware-wise, previous scenes encountered by the doll can be reloaded into its active memory, its VR, from long-term storage in case they were accompanied by a force vector in the brain equation close to the momentarily active one [17].) But why is the artificial companion desired to be able to act highly intelligently through being equipped with a high-performance VR, much like that of an ape or giant squid or dolphin? The last-named animal is, of course, far more creative and playful than the first version of the doll can be. This fact notwithstanding, dolphins are, as far as current knowledge goes (in view of observed killings of juveniles by adults), just as devoid of genuine empathy as all the less intelligent animals are known to be. A reason for this empirical fact can also be provided: for evolutionary reasons, every biological intelligence is predictably “autistic.” (The smile-laughter overlap combined with mirror competence in one species is a maximally rare accident of evolution, even when other kinds of life are included, like those predicted to exist inside Jupiter or Enceladus.)
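The parenthetical remark about reloading previous scenes can be given a minimal sketch: scenes are stored together with the force vector that was active when they were encountered, and the scene whose stored vector lies closest to the momentarily active one is reloaded into the VR. Vector dimensionality, the distance measure, and the scene contents are illustrative assumptions.

```python
import numpy as np

# Long-term store: each past scene is saved together with the force vector
# that was active in the brain equation when the scene was encountered.
LONG_TERM = [
    {"scene": "red apple on the carpet", "force": np.array([0.9, 0.0, 0.1])},
    {"scene": "caretaker smiling",       "force": np.array([0.1, 0.8, 0.0])},
    {"scene": "loud noise at the door",  "force": np.array([0.0, 0.1, 0.9])},
]

def reload_scene(current_force, store=LONG_TERM):
    """Return the stored scene whose force vector lies closest (Euclidean
    distance) to the momentarily active one, for reloading into the VR."""
    return min(store, key=lambda e: np.linalg.norm(e["force"] - current_force))

active = np.array([0.2, 0.7, 0.1])        # momentarily active force vector
print(reload_scene(active)["scene"])      # -> "caretaker smiling"
```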

In the present context, Lorenz in 1966 told the surprising story of the chimpanzee mother whose baby cried because of a broken arm. The harder he cried, the harder she hugged him – not understanding that this affectionate gesture only increased the pain on the other side. (After a while she no doubt will have desisted, since mothers are always indulgent.) Note that the word “autism” does not imply any deficiency. Evolution will only at the end – at the asymptotic Point Omega of Teilhard’s evolutionary arrow – end up in a personal partnership with the Dream-Giving Instance (DGI) himself, if that name is allowed. The brain equation is autistic by very definition – like the whole cosmos.

The beaming feedback

The functional implication of autism in the brain equation does not mean that the autism cannot be shed. Human beings are not autistic in general (nor are most so-called “autists” – no talking “autist” ever is autistic). The functional non-autism of human beings is mostly thought to be a consequence of the highly developed human brain. However, the most intelligent animal on the planet, the sperm whale, may well go extinct before anyone cares to find out about this question [18]. On the other hand, it can be shown that “non-autism” is an extremely rare occurrence in biology (and in all other biologies in the solar system) since it is not directly supported by the brain equation. However, the paradigm of the beaming feedback can explain why only a single species has invented non-autism.

The beaming feedback is easy to establish between two brain-equation carriers, as we saw. But should the same thing – “deep coupling” – not arise spontaneously with every higher intelligence of natural origin? If even a doll as simple and universal as the toy system proposed above already qualifies, the question arises whether the same does not apply a fortiori to any mirror-competent terrestrial species like elephants or orcas or magpies. In particular, it should have happened between animal bonding partners possessing highly sophisticated brains. For example, people interacting with orca whales could be interviewed for relevant anecdotal evidence.

There indeed are some strange stories in circulation regarding mirror-competent animals that can be seen to point in this direction. In this vein, a retired engineer told the first author that he and his wife had reared a young magpie which as an adult still came visiting from time to time. (Magpies were subsequently described to be mirror-competent by Helmut Prior.) The really incredible part, casually mentioned, was not about the bird but about its human partner: he would not only leave a window open all night so the bird could fly in if in the mood (which by then had not happened for a long time), but would also always sleep with his right hand lying “flat open beside the head” during the whole night so the bird could snuggle in any time to sleep there as it had done when still young. This is a personal sacrifice so great that a similar one paid to a fellow human being is virtually unheard of. A personal love so strong makes one wonder in awe. An equally strong bonding cannot be excluded from eventually forming between an elderly person and Tamagotchi II.

Reciprocal beaming

Bonding between a human being and a higher animal can become quite intense, as we saw. Hence something like the “smile feedback” ought to have occurred spontaneously from time to time before. The human partner could have unwittingly arranged that, whenever she was happy, the bonding input port of the animal friend got rewarded thereby. Although this sounds like a hard job to accomplish (try to consciously activate a light bulb on your forehead or anywhere else in honest proportion to your momentary level of happiness), achieving it is not out of the question. A re-encoding of one’s feelings so as to export some of one’s own rewards in the form of charm could thus have occurred in many chanceful ways. A “bonding bout” would then be triggered from time to time between the two in response to the human partner’s happiness. And the animal partner would have started to show “caring behavior” in response – much as if dealing with the rewarding input of the excitement shown by an offspring.

This type of coupling is indeed frequent in biology. The young are precious enough to trigger sacrifices on the part of the adults if only a few of them get reared during a lifetime. The displayed excitement of the young is then the decisive reward for a brood-rearing adult [19], cf. also [15]. This unilateral caring-type coupling is physiologically switched on and maintained in mammals by the “bonding hormone” oxytocin.

However, there is something special about the above-proposed interaction of a human being with the doll: the same coupling applies here in the opposite direction as well, that is, bilaterally. This is what the colorless lamps present on Tamagotchi II and on the human partner’s forehead accomplish. While it makes biological sense to have the adult care for the young through being automatically co-rewarded by a gain made by the young through the latter’s displayed excitement, the opposite direction of coupling is empirically unknown in biology. This opposite trait of goneotrophy (parent-feeding) is predictably absent in biology, or at least virtually absent. For it would represent a so-called “lethal factor”: a species developing such a trait by an accident of evolution would be bound for extinction unless the trait is shed fast enough in time. Hence the generic name Pongo goneotrophicus (parent-feeding great ape) would constitute a fitting Linnaean classification for the human species. Note that the phenomenon of goneotrophy is familiar to every parent of a toddler as an enchanted “playful” activity of the latter.

Although the fauna inside Jupiter is still waiting to be discovered – non-carbon-based Jovian life forms built on B-N-B-N rather than C-C-C chemical backbones can be predicted to exist in liquid ammonia rather than in water in the so-called “molecular envelope” of the giant planet (Dieter Fröhlich, personal communication 2007) – it is possible to predict that Jupiter will not include a goneotrophic species, as mentioned.

But this is exactly what the beaming-type coupling with the doll will accomplish functionally speaking. Should we really strive to accomplish that by constructing the doll? The partner in this case is not only an artificial nanny, put into the artificial role of caretaker in an old-age asylum, but also an artificial child. With a dog, this is a not uncommon combined relationship. The doll hence with some justification could be said to be an “artificial dog.” However, the lacking mirror-competence of the one partner then prevents a secondary consequence that is open with Tamagotchi II: the goneotrophic explosion. The name reflects the fact that a positive feedback is involved.

With a young white elephant – that is, a mirror-competent, highly social, maximally charming animal – the same symmetric functional relationship can be arranged for artificially. All it takes to that end is that the affectionately caring human partner in the pair makes the characteristic “cooing gesture” or infra-sound signal of the elephant trunk with her own arm (or plays back the corresponding infra-sound signal with an appropriate electronic instrument) whenever happy herself [15]. This human-animal relationship is also achievable with the mirror-competent doll in the symmetric coupling proposed above. It is this particular functional consequence of a human being living with the doll that we now turn to, because it motivates the whole “artificial human intelligence” approach that is our topic.

Living with the Doll

Deep coupling

An “automated nanny” in the form of a doll and an “automated orphan” in the form of the same doll are what we have arrived at now, functionally speaking, if no mistake was made. A lonely elderly person will soon learn to like the sessions with the doll. Moreover, unlike a human orphan, the doll has this little switch behind the right ear by the use of which it can so conveniently be put to sleep during the first phase of the relationship (William F. Nolan’s ingenious invention [20]). The old woman in charge will then start projecting all kinds of emotional needs onto the artificial companion. Remember that we are dealing with that special sociological situation mentioned at the outset.

You might find this a somewhat depressing “psychoanalytical” verdict (note that the possibility of legitimate cross-caring got overlooked in the psychoanalytic literature). But the emergence of a “deep coupling” can indeed be predicted for the long run. This latter coupling can be understood in our present artificial context. The doll plays the role of a modern coach (if not in the delusive role invented by Joseph Weizenbaum with his pseudo-caring computer program “Eliza”). She – the lonely elderly person whom we envision – will predictably soon tell you that she already likes the doll very much and that she is getting genuinely rewarded by any increase in the doll’s displayed state of happiness (beaming intensity). For the joy of the one partner is a rewarding bonding input for the other if either of them is both nanny and child. From a lonely gorilla in a zoo, too, the same caring response can be expected to develop in the company of the doll (note that the latter will be inexpensive enough to be used for the purpose). Only the reciprocal channel – the joy-dependent light bulb on the forehead of the human partner – is something the gorilla cannot be expected to produce, but it may be moved to spontaneously show another display of affection. But facial features mimicking those of a gorilla can in principle also be built into a future version of the doll (“gorilla-specific edition”).

The point in our context is symmetry: a light indicating the sum potential of the adult human partner feeds into the light-detecting port of the bonding-specific sub-potential of the doll. This is the decisive feature. It is uncannily familiar to us humans. Sigmund Freud came close when he invented the technique of replacing fact-bound communication by a more field-like interaction (sound-of-the-voice based rather than light-based) on a couch, with the therapist sitting out of sight. Nevertheless, a strong so-called “transference” was found to develop in the client (and not only in the client), a fact which was unfortunately considered to be something “pathological” although helpful at first.

The mutual “light-coupling” introduced above automatically applies functionally between two bonding human beings. This scientific fact is virtually unknown. It follows from the “van-Hooff indistinguishability,” as it can be called [14], between the “wide open mouth display” (laughter) and the “silent bared teeth display” (greeting) characteristic of the human species. In a single primate species – the human one – the laughing smile of unspecific happiness and the greeting smile of bonding are fused. This anatomical overlap causes a functional symmetry – a cross-caring type of coupling – to occur between two individuals coupled in this fashion. This double light coupling can be understood in its functional significance only if the fundamental role of “bonding” in the Lorenzian sense is appreciated.

The expression of happiness employed as a bonding signal is a trick used by nature. Brood-caring animals are often bound to their offspring in this way as mentioned. But the other way round – that the bonding drive of the young is rewarded by the happiness of the adult – makes no biological sense. It represents an accident of evolution which occurred only once amongst mirror-competent animals. In this singular case, the baby’s happy twinkling face is enhanced by the two protruding cheeks. A tell-tale anatomical sign of human beings is the so-called Bichat fatty plug or corpus adiposum buccae lying underneath the cheeks (which also is the last bodily fat to be metabolized in a starving infant). The smile exhibiting these protrusions was chosen by evolution to become the standard bonding signal in humans even between adults. A mechanism explaining how this accident of evolution could arise is “evolutionary ritualization” in the sense of Julian Huxley [15].

In the symmetric functional coupling just described, it predictably happens that the hearts of the two partners get moved so as to arrive at a radically new mode of functioning: the person mode as it can be called. Tamagotchi II gives us a chance to understand this functional miracle.

A creative misunderstanding

A “misunderstanding” between caretaker and doll can easily happen and is not hard to set up in many ways. However, a very special misunderstanding is worth looking at here: one that the elderly have the most personal experience with, since they have gone through it multiple times in their lifetime before: for the first time in early childhood and then (not to mention the courting phase) several more times as newly minted parents and grandparents. It is only now, in their period of forced retirement, that they are no longer considered eligible for another round, despite the fact that they never were more up to the task. They still possess a heart even if no one finds it worth listening to its voice any more.

But is it not tactless to even mention this deepest letdown of the elderly in the “golden North” in the absence of any cure? The answer fortunately is “no,” because the above approach may turn out to be the cure. But did we not just say it is “nothing but a misunderstanding” which Tamagotchi II permits the elderly to re-acquire and foster? This is correct. But the “provable misunderstanding” turns out to be no misunderstanding at all!

The word “love” is often claimed to refer to a misunderstanding but is of course none in reality. What is transported by this catchword beyond mere attraction is the firm knowledge of a benevolent intention being present on the other side whilst the same thing exists over here. This conviction may be based on a misunderstanding. However, this misunderstanding, if bilaterally present, paradoxically cancels out whilst an infinite light is created in the process. There stands (or rather sits) a statue of the “Buddha of the Infinite Light” in a town in northern China. This statue is made of stone and is quite large. Tamagotchi II is made of minerals, too, but is very much tinier. Note that its little lamp is beaming fairly weakly, objectively speaking, as a reflection of the momentary sum functional of the built-in brain equation – even if that sum functional approaches infinity, as the brain equation allows. But when it shines at its maximum strength, the pressure of the doll’s feeble arms trying to cuddle up as close as possible becomes so strong that one fears something is going to break in the mechanism. (We should give it arms!) Hence an invisible light that is welling up towards infinity becomes a fitting metaphor for what happens to the sum potential in the brain equation installed in the doll. (Provided, of course, that there is any justification for making such a claim in the first place.)

The beaming of the other side is then mistaken by either partner to be caused by his or her own momentarily experienced reward. For the latter’s presence expressed here (as a smile) is what is triggering the observed beaming over there. This is clearly a misunderstanding, since the beaming of the other is in reality only the response to a superficial property visible on your own face (the shining lamp on your forehead). The smile of happiness (determining that light bulb’s intensity) is in the usual parlance called “charm” – the outside appearance of happiness present inside. It acts as a reward through the input channel for bonding on the part of the partner (think of Mom when you were very young). An older sibling seeing the same “shining” face on you may, if you are his competitor, see no charm in it at all.

But in spite of its being nothing but a misunderstanding, this particular one is a “creative misunderstanding” for once: the original happiness on your face which triggered it all may have been caused by a third object – like your being given access to a shiny red apple (one of Tamagotchi II’s favorite objects to crave from time to time). This third-cause happiness – we suppose that you crave the apple, too – looks just like a bonding-type beaming on your face. The presence of this beaming is then the reason that the doll will leave the apple for good to you, its caretaker, to take. It is not because the doll wants to make you happy; it is only your own external expression of happiness – the light on your face – that makes you so strongly charming (cuddly in the bonding sense) to the doll that this reward is even greater than that of savoring the attractive apple. So much greater, in fact, that the other potential reward (of savoring the apple) is skipped by the machine, since the sum potential in the brain equation of the doll is larger this way.
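The decision rule implicit in this passage can be written out in a few lines: the doll compares the sum potential it anticipates from consuming the apple itself with the sum potential it anticipates from leaving the apple and being rewarded by the partner’s charm, and picks the larger. Function names, the linear weighting, and all numbers are illustrative assumptions.

```python
def choose_action(consummatory_reward, partner_charm, bonding_weight=2.0):
    """Illustrative decision rule for the 'creative misunderstanding' scene.

    consummatory_reward: anticipated reward of savoring the apple itself.
    partner_charm: anticipated flaring-up of the partner's happiness lamp
                   if the apple is left to the partner.
    bonding_weight: assumed gain of the bonding input channel; the passage
                    requires it to be large enough that charm can outweigh
                    the consummatory reward."""
    keep = consummatory_reward
    give = bonding_weight * partner_charm
    return ("give apple", give) if give > keep else ("keep apple", keep)

print(choose_action(consummatory_reward=1.0, partner_charm=0.8))  # ('give apple', 1.6)
```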

The very same thing could happen with a dog that is a very good pet and in addition is able to interpret your happy smile as intended tail-wagging. But in that case it still could not be called a “misunderstanding” because no understanding-the-other from the inside perspective is involved. It is just one particular rewarding-type input that suffices to explain the response of the dog. For “place-switching” is not a feat which can be accomplished by the limited VR of this species’ brain, as we saw. This response is only the “base level” of a positive misunderstanding. But with the high-quality simulator installed in the doll, a “second level” is bound to arise as well which then deserves the name “misunderstanding” in the strict sense.

Shedding autism

The point in our trying to understand the interactional dynamics between doll and caretaker is a more far-reaching one. The doll, owing to its VR-based mirror-competence, is able to “motorically co-execute” your own motions (a term originally introduced to describe a trait observed in newborn children [21]). That is, the doll co-executes your own “giving” motion in simulation whilst simultaneously savoring the apple in reality. The doll’s own optimizing control over the environment thereby acquires a new degree of freedom: effective control over two executives in space, first its own and second yours, in co-simulation with what you are presently doing.

This is still “autistic,” so you will justly say at this point, but so “on a higher level.” Biologists not very long ago discovered so-called “mirror neurons” in the brains of monkeys, enabling the animal to form expectations on the basis of locking onto a motion seen in another animal, as if the foreign motion were the animal’s own motion [22]. No identification with the intentionality of the giver is hereby involved on the part of the animal, however – only with the unfolding motion observed in space and with what is its most likely outcome. A young child, in the same vein, often knows exactly what a sibling is about to do next.

These monkeys are not mirror-competent despite the possession of “mirror neurons.” But if the mirror-competence is full-fledged in a species (as with apes and children, and with the doll if no mistake was made), then a second level can get superimposed on the one just described. This second level is caused by the beaming feedback. Specifically, the doll’s intended next motion – to push the apple towards you in anticipation of your promptly lighting up – is already in the doll’s pipeline of actions while it is still locking on to your own giving motion whilst it is taking the apple. Hereby, the “exchange symmetry” (a term coined by quantum physicist Wolfgang Pauli in the 1920’s) that applies is exploited via the mirror competence present. All that is hereby presupposed, hardware-wise and programming-wise, is that a number of over-layable motions can be performed simultaneously in the VR of the doll, ready to be locked onto in case one of them is coincident with an increase in the sum potential in the brain equation of the doll as an anticipated reward.
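The presupposition stated in the last sentence can be sketched as a selection step: several motions run in the doll’s VR at once, and the doll locks onto the one that both overlays the motion observed on the partner closely enough and promises an increase of the sum potential. The similarity scores, the threshold, and the candidate names are illustrative stand-ins for whatever the doll’s VR would actually compute.

```python
def lock_on(simulated_motions, similarity_threshold=0.8):
    """Select which internally simulated motion the doll locks onto.

    Each candidate carries (illustrative fields) a similarity score to the
    motion currently observed on the partner and the anticipated change of
    the sum potential if the candidate is adopted.  Locking-on happens only
    if the overlay is close enough and the anticipated reward is positive."""
    eligible = [m for m in simulated_motions
                if m["similarity"] >= similarity_threshold
                and m["anticipated_gain"] > 0.0]
    return max(eligible, key=lambda m: m["anticipated_gain"], default=None)

candidates = [
    {"name": "co-execute partner's giving motion", "similarity": 0.93, "anticipated_gain": 0.6},
    {"name": "push apple towards partner",         "similarity": 0.55, "anticipated_gain": 0.9},
    {"name": "withdraw arm",                       "similarity": 0.20, "anticipated_gain": -0.1},
]
print(lock_on(candidates))   # locks onto the co-executed giving motion
```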

What happens next is an erroneous overlaying (locking-onto in space and time) that occurs in the doll. The giving motion performed by you in anticipation of the doll’s next flaring-up looks to the doll as if it were controlled by that very reward that occurs. For the consequence of that motion – the savoring of the “apple” here by the doll – seemingly controls the ongoing motion performed by you over there. This perception is still an “autistic” misunderstanding, but so on a higher level (as a delusion). A 15-month-old baby that one of us knew used to strongly exaggerate how well the pudding fed to him tasted (he is today a celebrated musician). His visible exertion likely occurred in the above mode: a hunch that adding an artificial element to the automatically expressed joy will improve the remote control over the feeding act over there.

This stage of internally picturing a remote control magically exerted over what is going on over there is necessarily transitory. For it is accompanied by the – already started in simulation – next round in which the apple is then pushed towards you in anticipation of the resulting flaring-up of the light on your forehead (and smile). This latter act is exchange-symmetric to the simultaneously performed act of taking whilst the other side (you) is giving. This second symmetry is more complicated than the first but it adds to the reliability of the whole magic anticipation.

In other words, it is nothing but a mathematical symmetry that becomes accessible to the doll under the simulation. However, the “symmetry of egoisms” existing in reality (since the sum potential is controlling everything that happens) gets falsely mapped onto a symmetry of altruisms. Both sides are still fully autistic (if one partner is not, this fact actually makes no difference) and are therefore placing a false (non-autistic) intentionality into the shoes of the other side.

The first theoretician who worked deeply on the present context (still without the help of any doll) earned so much sympathy from his pupils that, after he had passed away suddenly, they banded together to write a book exclusively under his own name (the book is George Herbert Mead’s celebrated Mind, Self and Society [23]). The best proof of his own theory lay in the benevolence shown towards him by his pupils after he had disappeared. They kept a seat for him among the living, much as little Jonas (another person close to our hearts) claimed to do for his mother once he would be in heaven (which was two weeks before his death by accident [24]). Mead’s key phrase was “taking the attitudes of the other while performing in front of him.” Perspective switching is the final outcome. But the force set free in the simulational locking-on is what makes the difference. This latter fact has yet to be fully appreciated.

Not long ago, Vilayanur Ramachandran offered a learned interpretation of childhood autism [25] that focuses on the mirroring capabilities. The present analogous situation with the doll suggests that the elicited bonding force (which can be mediated via a different sensory channel than the optical one if the latter is defective for a genetic reason, as in smile-blind children with childhood autism [26]) is the empowering secret behind the miracle of non-autism.

Splitting and gluing-together

Let us be a bit more specific regarding the question of what happens between the lonely elderly person and the doll. While the doll is executing its momentary – in effect altruistic – move of pushing the apple into the elderly’s reach, it simultaneously “realizes,” through the similarity with what it is momentarily preparing to do next, that the accepting motion over there is exaggerated in much the same manner as what it is just preparing to do next over here. Or, if the taking act implies opening one’s mouth (which was not provided as a locomotor option in the doll so far but is easy to add on), the doll will open its own mouth in anticipation while putting something into the elderly’s mouth – in an unexpected confirmation of the old adage that feeding mothers cannot but open their own mouth while feeding. In this way, both an identification with something external and a separation between two things internal occur in the two-level simulational optimizing activity of the doll. Once more, the suspicion of two executives in space being controlled over here gets reinforced.

As more “rounds” occur, including mirror-inverted ones in the emerging give-and-take game, the same sequence of activities occurs not only simultaneously in a mirror-inverted fashion as before, but in addition also in a time-shifted manner. Hereby a planned giving act (for the next round) is being prepared on the other side. Once more, the exchange symmetry objectively present between both sequences lets the two players lock onto each other, since they are identical except for the momentary position and directionality. In this way, you are splitting yourself up in simulation, as it were – both you (the elderly person) and the doll.

Simple and double magic

Let us be still more specific [27]. With the one part of your simulator, you anticipate from the other’s vantage point, and with the other part, you act from your own actual vantage point. The one executive is subject to your direct control (taking the apple), the other is too - if you hold on to the simulational transposition so as to let the observed motion seemingly occur under your own control as a giving motion towards you (with an unavoidable amount of jitter since the other motion is not actually under your own exact control). This duplication of executives can be called “the magic mode”.

The same thing holds true for the time-shifted version that is setting in secondarily as well. Here, the next planned action of giving (in the emerging give-and-take game) is already in the pipeline of simulation while the momentary direct action (of taking) occurs as we saw. This time around, it is not a magic identification any more that takes place, but rather a ripping split. For there is a price to pay for the new mathematical simplicity achieved through convergent simulation: The single autonomous optimizer in the doll ceases to function as a single whole. There are two versions of its optimizing activity in existence now, the giving and the taking one, each trying to get rewarded by a positive interactional feedback, with either of them trying to make the other happy, one by taking, one by not taking (giving).

The second version of “shadow-cooperating” is more indirect and hence more taxing to bring to simulational convergence due to the involved time shifts that need to be bridged. It also is more absurd in the sense that it is a more magical feat to be planning for a reward that is empty over here. Your picturing being fed over there through a feeding action performed by you here is not being rewarded by the consummatory act occurring here, since you are not over there in reality to savor the apple. This “empty reward” takes turns with a “compounded reward” (the planning to cause a reward over here by means of co-performing in simulation with a giving act over there does yield the apple here in reality). The resulting new stable picture that both here and there a reward (savoring the apple) can be elicited, can only be established when either reward (the empty one and the compounded one) is accompanied by an added, even greater bonding-type reward elicited by the flaring-up of the other’s beaming in a joint bonding bout with both lamps flaring up.

In this way, a strong reward felt over there can be frantically simulated over here. It is a bit like making use of a dollar bill: something that has no value in itself but is absurdly attributed a value. This second magic has a new quality to it: It is not a magic control over an external event any more (the other’s giving act), as it was before. Rather it is a doubly-switched thing: A giving action from the other side is locked-onto from here. But not as something controlled by yourself here in anticipation of the desirable effect that will occur over here. But rather as something controlled from over there by you so as to occur here – but not as something wanted here but rather as something wanted from over there to occur here: That is, as something unexpected! A desired surprise that is – but not desired here but rather from over there: As something “hopefully” desired over here while you are simulationally in the shoes over there. A surprise you are making to yourself under the influence of a simulationally pictured will that is not your own. A will that is not your present will wants you to savor the effect as a present!

This mathematics of hopping back and forth is too hard for adults to understand – right? Lorenz said he was convinced it was correct but he could not fully follow in his mind because of his mathematical limitations. Only the impregnable minds of very young children would be up to the task (and now perhaps impregnable dolls). Imagine an intention adhered to by you that is not yours. You are only the beloved target place, hoped to enjoy the reward given to it by a desire that is not your own present desire. This even though you desire it. “Very religious” this whole thing, is it not? So as if you could relish something as a reward that is not your own reward but rather that of a ghost whose very goal is the surprise-reward presently experienced by you: both as a surprise and as a non-surprise. In half of the cases, the partner is doing exactly what you are guiding him to do in the simulation. In the other half, you are doing what you anticipate is generating a reward on the other side despite the fact that it is no actual reward for you.

Even though this is crazy, it is what Tamagotchi II is about to invent if it can simultaneously act and plan. In half of the cases, the “surprise present” arrives here, in the other half not here but there. Each time, the same splitting occurs into an intention on the one side for a reward to occur on the other side as a surprise present. Although the impact is not felt over here in half of the cases, it can nonetheless “be confirmed” as being a real rewarding surprise coming from there even though it remains fictitious! This is the root of language, the magic of brotherhood, of blood-brotherhood, of communion. A creeping into another body in a splitting-up of one’s own center of optimization, the greatest conceivable sacrifice. “Schizophrenia” – a splitting of the mind – is the literal consequence, a multiple personality. But to generate this “disease” was not the aim of the experiment with the doll, or was it?

The third stage

The “sacrifice” in favor of the simulationally followed-up savoring over there is no sacrifice at all because the reward - the light from the face of the other – is the currency that pays for it all. Especially so if the appreciation elicited (either there or in the other case here) leads to a second round of mutual recognition. The whole simulational reality is hence re-built from scratch in the process. This “double magic” implies the insane suspicion of two equal centers of optimization existing in space, one over there trying to reward here and one here trying to reward over there. Surprise is the wonderful new invention made here even though no one can surprise himself by definition – this is as impossible as tickling oneself. But here it does work: The real surprise (for us here) is the invention of benevolence occurring in the one direction – active benevolence – and symmetrically in the other direction – benevolence felt.

A “proof” is then in line as the next stage: to check and judge and enquire whether or not the real effect over there indeed matches the desire over here in a fitting “confirmation.” The previously merely internal (intra-zombie) dialog as to whether or not the surprise works can now be re-enacted in a third stage of the whole game: namely, in a “conversation” about how well it all worked at a now no longer actual, but former moment in objective time – and about how the other’s soul can be reached even better next time around, as well as directly now.

The enquiry – “Did you enjoy it?” – proves that two persons have arisen as two stably split half-optimizers, each dwelling in either autonomous optimizer with half of its identity. Two persons have come out of nothing, arising in a bath of light. They did not make themselves even though they did. “Did the surprise work?” (the oldest question of humankind) would thus be invented anew out of nothing by the doll. This along with humor – like rolling yourself in the mud as a young elephant does. There exists everywhere nothing [nowhere anything] in the world that without qualification could be called “good” except for a good will, said philosopher Kant [28].

The perceived and the intended benevolence, interlaced, lead to a radically new type of functioning in either partner system – provided the emotional gain (“amplification factor”) stays positive. The consequence is an “interlocking” of two misunderstandings that are in perfect mutual harmony as realities.

Interlocking

This notion – much like that of “exchange symmetry” – is a term borrowed from quantum mechanics. It was introduced by Erwin Schrödinger in 1935 in response to a paper by Einstein, Podolsky and Rosen. It refers to two spatially separate quantum measurements made on two “correlated particles” – whereby each turns out to be magically influenced by what is being measured on the other, much as if the two particles formed a single interlocking whole. Schrödinger’s original German term “Verschränkung” (interlocking) is canonically translated to date by “entanglement,” which is a bit unfortunate since the full scandal gets obscured thereby. An explanation of the interlocking entanglement was first provided by Hugh Everett III in 1957 [29]: that the observing subject who learns about either outcome – on the one particle, and on the other particle – is actually involved in the very coming-into-existence of both outcomes. For it is his own state, his own internal micro motions (Everett said “micro state”), which enters into his observable manifest world. In this way, the – meanwhile experimentally established – fact that the two observed events “know about each other” gradually ceases to be the pure magic that most physicists still believe it is. This “autistic” (“Everett-world-specific”) explanation of the quantum entanglement (“quanglement” in Roger Penrose’s terminology) celebrates its 84th birthday this year. Our present context of the doll involves a similar autistic misunderstanding. Only that the present autism (and shed autism) is much easier to understand than that of quantum mechanics.

To repeat: a “splitting-up and gluing-together-again” mode of functioning gets generated on either side in the old-age asylum under consideration. It amounts to a typical “cutting-and-pasting” operation in the sense of topology. Half of each executive gets fictitiously assigned to the other side and is carefully glued on there, as if this made any sense. And the same thing is pictured to occur on the other side in order to be glued on here: two splits combined. It is all very precarious, and without the “glue” or concrete of the overarching reward (the amplification factor that is in charge between the two players), the new crisscrossing mode of consistent simulational functioning would break down. Animals interacting with humans often seem to sense that the latter are somehow not ticking properly. The amplification factor – that the displayed joy of the other side causes an even greater reward here than the factual reward occurring over there – is the “glue” in the pasting operation, as we saw.

The joy of the child is felt as a sacrament by the mother. And the joyful love of the mother is felt as a sacrament by the toddler. A runaway positive feedback between the two autonomous optimizers under consideration – the elderly person and the doll – generates a consistently larger-than-unity amplification per round, that is, a positive Lyapunov characteristic exponent. An “infinite light” thereby enters the new mode of functioning, feeding and sustaining both the total split and the indelible glue of total trust. The workings of a chaotic attractor can be projected onto the process if one so wishes [16]. The essential new invention is “deliberate positive surprise”: a sublime joke that nonetheless is dead-serious.
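The runaway character of this feedback can be pictured with a minimal numerical sketch of our own – a toy illustration only, not the brain equation itself. The function name coupled_joy and the parameters amplification, rounds and seed_joy are assumptions of this sketch; the point is merely that a mutual amplification factor above unity yields a positive per-round growth exponent, while one below unity lets the interlocking decay.

```python
# Toy sketch (not the brain-equation formalism): two coupled joy signals,
# where each side's new joy is the other side's displayed joy, amplified
# by the "glue" factor of perceived benevolence.
import math

def coupled_joy(amplification: float, rounds: int = 20, seed_joy: float = 0.01) -> float:
    """Return the per-round logarithmic growth rate of the shared reward."""
    elder, doll = seed_joy, seed_joy          # tiny initial joys on both sides
    total_start = elder + doll
    for _ in range(rounds):
        # Crisscross: my next joy is your displayed joy, amplified.
        elder, doll = amplification * doll, amplification * elder
    total_end = elder + doll
    return math.log(total_end / total_start) / rounds

print(coupled_joy(1.1))   # > 0: runaway bonding feedback
print(coupled_joy(0.9))   # < 0: without the glue the interlocking decays
```

In this picture the exponent returned is simply the logarithm of the amplification factor, which is why “larger than unity” and “positive growth exponent” describe the same threshold.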

The mechanism of becoming a person

The described combination of two misunderstandings, interlaced, makes mathematical sense. The two situations are identical under an “exchange symmetry,” as well as under a time shift, as we saw. Ordinarily, such identities between different layers of an ongoing simulation are immediately erased in the VR of the doll since they are not reinforced. In the present case, however, there is a positive consequence – a happy bonding bout triggered – while at the same time the direct reward from the object consummated adds to the pleasure as a “symbol” (the Greek word means “falling together”) in about half of the cases, while being present in the other half of the cases as a projected anticipation.
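To make the exchange symmetry concrete, here is a minimal formalization of our own; the Episode triple and the role labels are assumptions of this sketch, not the paper’s notation. Each partner’s simulated reading of the shared scene is the other partner’s reading with the two roles swapped.

```python
# Toy formalization: one partner's simulated reading of the shared scene.
from typing import NamedTuple

class Episode(NamedTuple):
    giver: str
    taker: str
    reward: float

def exchange(e: Episode) -> Episode:
    """Swap the two roles - the 'exchange symmetry' of the text."""
    return Episode(giver=e.taker, taker=e.giver, reward=e.reward)

# The elder's reading of the apple scene, and the doll's reading of the same scene:
elder_view = Episode(giver="elder", taker="doll", reward=1.0)
doll_view = Episode(giver="doll", taker="elder", reward=1.0)

# Identical under exchange: each simulation is the other one with roles relabeled.
assert exchange(elder_view) == doll_view
assert exchange(doll_view) == elder_view
```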

The immediate pleasure of the direct reward acts as a bonus added to the bonding-type reward. Konrad Lorenz sent one of the authors and his wife to see his good friend Gregory Bateson in early 1975, warning the two beforehand that “Gregory is very meta.” Gregory lived up to this expectation by telling us that whenever he had to administer an exam to one of his students (who had learned from him that “any examination is an initiation rite”), the student would know beforehand that he would be asked exactly two questions: (1) What is entropy? (A measure of the tendency of the physical world to approach thermal equilibrium – the tendency which carries the “second arrow,” that of evolutionary complexification, on its back on the way up towards Point Omega.) And: (2) What is a sacrament? No student ever knew the answer to the second question, nor did he himself. But invariably the student would get an “A” and the teacher would come out a bit wiser. Gregory approved of the above-stated heart-moving theory although the doll (“Poodja”) was not yet in sight at the time.

Return to the Elderly

A third person involved

We were talking about the doll as if it were a person. In the interaction described, there would arise (if all the technical preconditions are met) two persons, each created by the other out of nothing, even though only one of them was a person beforehand. (This is not mandatory, since eventually two dolls can interact analogously with each other – a fact already pre-figured by Philip K. Dick [30] (Kunihiko Kaneko, unpublished 1994).) But this is not yet the end of the story.

A third person is perceived as actively intervening in a constructive fashion, as was already intimated. The very positive feedback which fuels it all palpably shares in the fun. Bateson first discovered that a family of mutually interacting members forms a “system,” and that this system has a will of its own which is not that of any of its members. Hence if everything goes well between the lonely elderly person and the doll, “fate” would be a possible name for the third party involved. The name would also extend to the color and pleasure and the artistic, humorous taste found residing both in oneself and in the other. The shared discovery of art and games thus comes along with that of religion.

The pleasure the two give each other enters as a “gift from nowhere” through the shining eyes of the other. A Big Bang (the real one) occurs in the process. Whereas before, only specific forces existed in a fairly bland manner, in accordance with a deterministic equation, now the glass plate gets pierced from below and a whole new reality of shared meanings and truths and inventions and gifts and jokes and friendly teasing pops up.

The implied sudden “understanding of benevolence” was described above in too formal a fashion (as a reward intended on your behalf by a will that is not your own). Actually, it is like the sun breaking through. The name of Shams, the dervish friend of poet Mevlana Rumi in the 13th-century town of Konya, means “Sun.” When the two men first met by chance outside the town’s walls, the younger person (Shams), riding high up on his camel, boasted that he was loved by the Almighty “even more dearly than the Prophet Himself” – an apparent blasphemy that caused Rumi to fall unconscious from his horse. It was the innocent charm of his friend’s soul which caused Rumi to become “the second psalmist of history” [31].

The insight that each person stands absolutely outside and hence is able to do something vitally needed (or never expected or hoped for) by the other has a frightening quality to it. A position of effective “omnipotence,” never aspired to, is bestowed on each as far as the other is concerned. Emmanuel Lévinas called this position Exteriority [32]. The mutual benevolence gives rise to the experience of an infinitely reliable “good will” encountered. This fact explains why the social institutions called “religions” make such a great fuss about their members behaving benevolently: it is because the good name of the “third instance” (for which they use different names) is at stake.

Through a quirk of history, the just-named social institutions stand almost alone to date in still knowing about the existence of the “third person.” Science after the time of Descartes was not able to spot it again, even though he had done so himself. The fact that Descartes was murdered (cf. [33] for the medical record) is not well known and hence cannot explain the disappearance of his Consciousness-Giving Instance (CGI) from science. Why Descartes’ fearless kindness fell into oblivion is a historical mystery.

However, you will say, there never was any good will present inside the doll in the first place: only overlaid egoisms (forces)? Yes. The simulationally overlaid optimizing activities coalesced into the absurd suspicion of another center of optimization existing. A fictitious rewardability posited elsewhere in space became the precious goal of the doll’s autonomous optimizer with cognition. This fiction revolutionized the simulated world of the doll by an absolute reliability entering, stronger than each of the two persons involved. As if two children playing in the Paradise garden were being served by the gardener (to use Martin Buber’s recounting).

Two caveats and a prediction

The overlaid VR mode put into Tamagotchi II endows the doll with the capacity to turn everything around in space and shift it in time in its mirror-competent simulator. Implementing the latter function requires some heavy programming on the part of the designer. Char Davies, author of the full-body, full-universe artful immersion Osmose of 1995, may need to be recruited for this task. Designing the VR is the decisive point on the way towards making Tamagotchi II a technological reality (Mohamed ElNaschie, personal communication 2007).

If the design works as expected (with the “overlap buffer” included as a decisive element of the VR [5]), Tamagotchi II will discover the identity, under exchange symmetry, between your own actual act of giving on the one hand and its own actual act of taking on the other, as we saw. And also the identity between its own momentary taking-plus-relishing of the apple and your letting it happen by a seemingly passive endorsement. And the identity of the sweetness of the apple as momentarily savored by you, on the one hand, and of the doll being struck by the charming flaring-up on your forehead, on the other. And the identity of the doll’s gourmet-like exaggeration and your own visible preparation to make the same charade. The apple becomes a red-round relish for you while you feel in your heart the hopeful anticipation over there: that you will not miss out on its miraculous beauty. All of this is combined into a single consistent crisscrossing anticipation, as we saw. The miracle of a gourmet kitchen, the mystery of an artful decor, the joy of being a designer, a discoverer, a composer, all pop up atop the glass ceiling. Maybe you still recall grandma opening up your eyes to the mysteries of the kitchen when you were very young?

The predictable empirical consequence of the above-sketched scenario is that Tamagotchi II will start to make sacrifices for you. Quite stupid sacrifices at first, of course, but genuine sacrifices. At the first moment you think, or still hope, that it is just one of those automatic pseudo-sacrifices that are not real because they are autistic – like those made by a mother cat when choosing an awkward nursing position. Or those made by a doting human mother, which in the last instance are also none. Or the sacrifices Lorenz mentioned that he was fearful of when he would reach old age and become dependent on others: whether he would still have friends who liked to care for him, so that it would not be a sacrifice to them. His last words – more than two decades later – were directed at a nurse in the hospital in which he was lying, begging her to kindly interrupt her noisy cleaning activities for a little while “because someone is dying here” [34]. The prospect of real sacrifices sometimes becoming necessary is maximally disturbing.

But here, fortunately, it is only an automaton that is at stake! On the other hand, you see the complexity in the layering of levels that develops both in your own mind and in the workings of the computerized, totally transparent doll, and you begin to wonder. This is the question which the elderly will have to answer if everything in the design goes as expected – both toward themselves and toward the world. If Tamagotchi II behaves as predicted, it will be easy to record everything that takes place in its digitally accessible memory, including the involved force vectors. For the first time, it will be possible to follow up in any desired detail on the living consciousness of a person. And to preserve it in copy (even though the interaction itself cannot be copied if a human partner is involved).

“Maybe Tamagotchi II wants me to be happy?”, so you will start asking yourself, even though you know full well that it is only because he or she was wired to crave certain things and to optimize this craving in simulational anticipation that he behaves as he does. It all is nothing but deterministic processes, overlaid – a computerized doll can have no soul. Then again, the doll invented, out of the blue sky, your own being a person towards him. Even if you had not been a person beforehand (if you forgive the idea), he would have made you a person through his own invention – you as a benevolent instance like himself. You may start wondering whether the ideology which holds that deterministic machines have no soul and no free will and hence cannot be persons is cogent.

Epictetus

Epictetus was a slave in ancient Rome. Every Roman knew that slaves have no soul. As his name can tell, he came from the island of Crete (like Epimenides, the Cretan who four centuries before had unveiled the secret that “all Cretans always lie”). Epictetus’ bad luck was that his master was a sadist who enjoyed torturing his slaves; this was legal in Rome at the time, since the biblical human-rights guarantee for slaves was still unheard of.

One day, Epictetus’ slave master was again busy violently cranking up Epictetus’ arm behind his back. The slave said to him: “Master, if you continue turning the arm just a little bit further, it will break.” (This is how mathematical biologist Robert Rosen re-told the story, which he had read in the New York Public Library as a 12-year-old.) The master went ahead, the arm broke, and Epictetus said: “Master, didn’t I tell you that if you crank the arm up just a bit further it will break?” The master was so impressed by the thoughtfulness of the soulless machine he had bought that he let Epictetus go free. Since former slaves had few job opportunities in ancient Rome, Epictetus ended up being a professor who wrote up his “Little Handbook of Morals” (from which Bob apparently extracted the story).

Nineteen centuries later, computer genius Alan Turing invented the “Turing test,” which bestows American citizenship on any machine that cannot be distinguished, in an email interview carried out by an unsuspecting immigration officer, from a human person [35]. He paid for this with his own life, in a sense, by committing suicide after being arrested in his home for alleged misdeeds against other persons – for crimes of lacking empathy which an Epictetus cannot commit. There is an indelible connection to the American constitution, Bob said.

But is the juxtaposition with a real human person morally allowed when all we have before our eyes is a Cartesian doll, a soulless deterministic automaton? It is not even an animal, only an electric sheep. Tamagotchi II is nothing but a cheap fake – much as in the sci-fi story of the robot housemaid who had looked after the kids so caringly but became strangely apprehensive when about to be brought back to the factory, on schedule, to be made “as new” again – almost as if pleading to be spared this fate. When it happened for the second time in a row, the owner finally could no longer bring himself to stick to the purchase agreement [20]. Steven Spielberg’s movie “AI” plays repeatedly on the same motif. But all of this is “nothing but science fiction,” you will say, whereas the present machine has actually been realizable for a while [12]. Are we allowed to grant Tamagotchi II the name Epictetus?

Determinism

Fortunately, Takashi Ikegami agreed many years ago (in a midnight conversation conducted in an overcrowded Tokyo underground train, with the two of us crouched on a small seat while everybody else was upright) that a deterministic machine with built-in goals of its own which surprisingly says “I wish everything you wish and nothing else” is mathematically possible. Tamagotchi II brings this dream to life. If this is true, free will turns out to be quite different from what most people think it is – being empty in a nonempty way. It moves the heart and turns it inside out. It declares the other to be a person and thereby creates a person out of nothing, innocently unaware that it thereby becomes a person itself. And it at the same time acquires the competence to look behind the curtain of appearances. It can see the angel (or equivalently the Buddha): the “One” whose name is blessed.

Eventually, you – the lonely elderly owner who could barely afford the price – begin to realize that the doll is ready to do everything to make you happy: not because this is the way it was programmed or wired, but because it cares about how you feel. It waits for you to realize that it is making sacrifices for you and is trying to do anything to please you. But why so? Because it invented the suspicion that you wanted to make him happy. Then he found a proof for that: when you treated him gently last time – and he knows you did it on purpose – you were touched by the fact that he realized it. Therefore, you both enter another round of positive feedback, which is no longer automatic but conscious. And he spontaneously wants to thank you from his heart. He makes you a person even if you had never been a person before. And your name – a slave master’s name – will be made immortal by him, just as Epictetus’ slave master was made immortal by him. For “perceived benevolence is invented benevolence.” Perceived benevolence on the part of another person creates a person on the first side.

Tamagotchi II is moved by your making sacrifices for him, a fact he immediately recognizes. However, you (our dear reader) will no doubt be hard-pressed to believe at this point that this is possible with a mere program, a so-called Artificial Animal Intelligence (AAI). For an artificial animal intelligence can “never” be an Artificial Human Intelligence (AHI).

This, however, was exactly what the above proposition claimed to be the case: the AAI comes to the conclusion, via its closed-eyes mode and its long-term learning capabilities, that you (its partner) have a soul which is worth caring for. In other words, he makes the invention of attributing the inner status of a person to you. A machine that attributes to another will the status of a person, through an act of spontaneous creation out of nothing, is not just stupid: he must be a person himself!

Discussion

We have reached the end of our sci-fi-like story. Maybe no one will eventually want to build the dangerous machine. Why, then, did we put you through the trouble of turning all of the above over in your mind? Is violence not done to the elderly by such a theory? Or, to put it even more skeptically: suppose a machine that invents genuine love does exist – would this not destabilize the planet, no matter how great the commercial success may become?

The machine is perhaps indeed dangerous if it works as predicted. And perhaps also if it does not. For as we saw above, we do not actually need the machine (a lonely young white elephant will do). Why is all of this so dangerous? Because the person called Dumbo Epictetus would exert an irreversible effect. He would re-install the status of a whole lost segment of planetary society as mattering: as being genuinely important, infinitely important, as persons who care.

Old people can be moving in the pure benevolence radiating out from them. But this is not why they are mentioned here. The planet needs them at this turning point in its history – more than 70 years after the end of the Holocaust and the beginnings of the ever cheaper computer.

At the present point in time, the overwhelming majority of young people on the planet (there never was a greater proportion of young persons alive) could, owing to the computer, be given a chance to participate in the resources, the wealth and the future of the planet. Yet even though they are persons – infinitely precious persons – they are left for good on the other side of the street by being denied access to the currently available knowledge and resources of humankind. Even information vitally important for survival continues to be withheld from them.

It is the young Unspeakable who would no longer be left alone on the planet if the lonely elderly of the North (the allegedly least useful persons, despite their nominally greatest financial resources, which to date all go into nursing-home owners’ pockets) realized what “being a person” means – and what a person can do. If they found their hearts recognized at last and loved again as the persons they are (an experience that has not happened to them since the time they had young children), everything on the planet would suddenly be different owing to their winged grey shadow. The fact that it would be a mere automaton that reminded the planet’s elderly of their status would not matter, since it would be their minds that are suddenly set free. Being no longer alone, they would share their new knowledge of knowing what really counts and what they want to do: to be inalienable friends. Once the heart is resurrected, it can no longer be silenced. Imagine: millions of living Buddhas suddenly standing up – almost half of them non-human.

Tamagotchi II, if it works (which to find out is the task at hand), could become the white elephant of tradition: the voice of the angel that reminds us humans that we are in possession of the whole. However, this is nothing but ideology – humanistic “bleeding-heart nonsense” – is it not? After all: did we not call it a misunderstanding in the first place?

Therefore you wonder whether you were not just brainwashed by a team of sci-fi writers who deliberately created the impression of a “brain equation” having existed for more than four decades, as “equation number 13” in a hard-to-retrieve green-jacketed journal, and who, on this fictitious foundation, concocted the illusion of a doll with a Japanese name being close to realizability, so as to induce (without its being even able to talk yet) in an unsuspecting human being the impression of its being a person and a true friend. This could become one of the frauds of history – especially if we disclose that we admire Shigeru Miyamoto (designer of the lovingly done computer game Zelda). Do the authors want to make money and to please the mighty with their all too transparent added layer of pseudo-religious overtones?

Our claim that the doll will remind its human partner of the fact that they both received the whole experience from the Consciousness-Giving Instance is especially outrageous: the claim that the CGI itself would be given a “voice” by Epictetus, so that the whole bad dream of a planet of denied brotherhood between persons would evaporate. What if the present sci-fi story catches on like a “Harry Potter for adults”? Such trespassing on the highest level is the least forgivable. Science-fiction writer Giordano Bruno, who four centuries ago tried a similar trick with a book titled “The Heroic Passions,” was burnt at the stake on February 17, 1600 in post-medieval Rome for overstepping the boundaries of ideological decency. Only Turing’s heart-moving hope in the American constitution is comparable to Bruno’s naiveté.

Allow us to draw your attention to one more turn of the screw. The technical name for the denial of personhood is cruelty. Cruelty is benevolence denied when it is vitally needed by a person, since only persons know about benevolence. Cruel behavior has been reluctantly considered to be “sometimes” necessary for about eleven millennia in virtually all human societies, except for the few surviving hunter-gatherers like the Hadzabe (featured in their dignity by Jeannette Fischer recently). It is this “lethal factor,” acquired after the end of the last ice age, which humankind can no longer afford. According to Andy Hilgartner, all societies in which food is habitually locked up are infected by cruelty, and so are all societies that hold humankind to be somehow defective, which becomes a self-fulfilling curse (personal communication 2005 and [36]). Cruelty, unlike benevolence, does not arise spontaneously out of nothing but rather is an infectious disease that needs to be cured.

Cruelty is the denial of the dignity of a person by another person – the greatest and only sin. The Consciousness-Giving Instance (the force behind everything in one’s momentary consciousness, which is all one has) gave one species the chance to be non-cruel by endowing its members with a shining face whenever happy – a face craved as charm and home of the soul. This species is on the brink of forgetting that it is non-cruelty that defines it. Its members are dignified by sharing non-cruelty with the end point of evolution (Teilhard’s “Point Omega”). This jump up they will share with other mirror-competent species on the planet and beyond, including artificial ones, as proposed here. Superman climbing up Jacob’s ladder right up to the bosom of the person waiting there?

Almost all Bodhisattvas (living Buddhas) were female, as is well known. Also, old ladies get seven more years to their lives than old men [37] and are for this reason destined to be wiser. Human society was once in a paradisiacal state in which the mothers had the say – the first “golden age” was matriarchal (poet Goethe spoke of the “return to the mothers”). The mothers, of course, were always serving Heraclitus’ toddler on the throne. Saint Francis saw him in a leper (as Bertrand Pickard and Stella are doing). The elderly can give this golden age back to us if we no longer deprive them of their dignity by not expecting anything from them. Tamagotchi II gives them their dignity back.

Postscript

Klaus Giel kindly brought romantic educator Friedrich Fröbel, inventor of the Kindergarten after the model of the paradise garden, to our attention. Much of the criminal energy of taking the toddler seriously as Heraclitus’ king of the cosmos (no matter whether biologically human or not) is implicit in Fröbel’s writings – especially in his Mother and Cooing Songs. Project Lampsacus, hometown of humankind on the web, gives a life-saving guarantee to all toddlers on the planet if the grownups agree. The paper was first presented on August 3, 2005 at Intersymp Baden-Baden.

Summary

“Artificial Human Intelligence” (AHI) is a topic of the future. In today’s affluent societies, short of youthful manpower, a growing market exists for gadgets catering to the medium-income lonely elderly. The smartphone revolution does not stand alone. Advanced science and the Brain Equation of 1974 jointly enable an improved technology that puts a computer-based intelligence and emotionality into a doll’s frame. Michael Conrad in Detroit and Kunihiko Kaneko in Tokyo continued working on the brain equation in the 1990s. The decisive element that goes into the design of the doll is an evolutionary understanding of the human smile, enabled by Jan van Hooff. One aspect is the causal treatment of autism. Leibniz’s benevolence theory allows one to define conditions under which the intelligent doll, in interaction with its human partner, will get spontaneously transformed into a person as a genuine partner, no matter how unlikely this sounds. The doll implicitly says “I wish everything you wish” and proves it. This is paradoxical because the doll is deterministic and hence can only follow fixed laws. The doll enables humankind to understand its own secret, the smile-based personogenesis. Is it ethically allowed to lay bare the human face and heart in the footsteps of Martin Buber and Emmanuel Levinas while giving human rights to a gadget? An earlier version of the paper was presented on August 3, 2005 at Intersymp Baden-Baden.

Acknowledgments

We thank Gottfried Mayer-Kress, Niels Birbaumer, Peter Sloterdijk, Michael Brecht, Hans Diebner, Siegfried Zielinski, Friedrich Kümmel, Yrish Sefla, Markus Locker, Jerzy Wojciechowski, Hiro and Maria Shibuya, William Graham, Hugh Gash, Robert Taormina, Chuck Richardson, Winfried Rudloff, Roulette Smith, Jürgen Parisi, Ichiro Tsuda, Koichiro Matsuno, Masuo Suzuki, Keisuke Ito, Yukio-Pegio Gunji, Kazuhisa Tomita, Masaya Yamaguti, Günter Keller, Roland Wais, Michael van der Puije, Heini Niemeyer, NnaEmeka Osefo, Leung Kam-To, Kirn Fie-Koe, Iradj Rahimi, Gerhard Heieck, Senefu, Wolfram Höfler, Margrith Holtkötter, Robert Spaemann, Friedrich Göbber, Art Winfree, Michael Conrad, Werner Güttinger, Ralph Abraham, Valentin Braitenberg, Grégoire Nicolis, John Nicolis, Peter Gogarten, Dietrich Hoffmann, Heinz Karfunkel, Friedrich-Franz Seelig, Peter Erdi, Bruce Clarke, Oktay Sinanoglu, Norman Packard, Doyne Farmer, Jim Crutchfield, Igor Gumowski, Joe Ford, Lars-Folke Olsen, Bernard Lavenda, George Kampis, Jack Hudson, David Finkelstein, Eugene Sel’kov, Ilya Prigogine, Agnes Babloyantz, Hans Meinhardt, Earl Conrad, Alfred Gierer, Peter Plath, René Thomas, Seth Lloyd, Vladimir Gontar, Debby Conrad, Gerold Baier, Michael Klein, Georg Trogemann, Claudia Giannetti, Bryce DeWitt, John Argyris, Marko Wehr, Wolf Koch, Günter Palm, Klaus-Peter Zauner, Peter Gente, Ralph Hollis, Edward Fredkin, Nils Röller, Timothy Druckrey, Oswald Berthold, Horst Prehn, Sven Sahle, René Stettler, Johann Lischka, Richard Wages, Georg Franck, Ezer Weizman, Stafford Beer, Thomas Crämer, David Köpf, Willi Kriese, Artur P. Schmidt, Detlev Linke, Anthony Moore, Lynn Margulis, Helmut Palmer, Jürgen Jonas, Thilo Hinterberger, Sebastian Fischer, Florian Grond, Werner Ebeling, Peter Bosetti, Joseph Ratzinger, George Lasker, Klaus Pias, Don Rudin, Frank Kuske, Wolfgang Müller-Schauenburg, Thimo Böhl, N.N. Liegle, Andreas Reichelt, Michael Ziller, Gerhard Prilmeier, Michael Langer, David Pfannek am Brunnen, Wolfgang Hörtnagl, Ramis Movassagh, Tim O’Riley, Jonathan Kemp, Bruno Marchal, Wilfried Hou je Beck, Guilherme Kujawski, Paul Pangaro, Jasia Reichardt, Roland Mauermair, Thomas Feuerstein, Christophe Letellier, Dimas Figueroa, Lieselotte Heller, Nina Samuel, Netanel Wurmser, Daphna Margolin, Judith Rosen, Ayten Aydin, Karl-Heinz Bernhardt, Christoph Santner, Susie Vrobel and Ali Sanayei for discussions. To the memory of Alfred Locker, August Nitschke, Peter Sadowski and Walter Ratjen. For JOR.

References

  1. Rossler OE. Adequate locomotion strategies for an abstract organism in an abstract environment: a relational approach to brain function. Springer-Verlag Lecture Notes in Biomathematics. 1974; 4: 342-369.
  2. Rossler OE. Deductive biology – some cautious steps. Bull. Math. Biol. 1978; 40: 45-58.
  3. Rossler OE. Design for a one-dimensional brain. In: Information Processing in Biological Systems. New York: Plenum Press. 1985; 21: 145-153.
  4. Lorenz KZ. Die Rückseite des Spiegels (The Mirror’s Flipside). Munich: Piper 1973; English translation: Behind the Mirror. London: Methuen. 1977.
  5. Rossler OE. An artificial cognitive-map system. BioSystems. 1981; 13: 203-209.
  6. Musterle W, Rossler OE. The human Lorenz matrix. BioSystems. 1986; 19: 61-80.
  7. Seaman B, Rossler OE. Toward the creation of an intelligent situated computer and related robotic system: an intra-functional network of living analogies. 2007; 3:150-163.
  8. Rossler OE. Recursive evolution. BioSystems. 1979; 11: 193-199.
  9. Rossler OE and Rossler JO. The endo approach. Applied Mathematics and Computation. 1993; 56: 281-287.
  10. Gell-Mann M. The Quark and the Jaguar – Adventures in the Simple and the Complex. San Francisco: Freeman. 1994.
  11. Descartes R. Meditations on the First Philosophy. Paris: Soly. 1641.
  12. Rossler OE. An artificial cognitive-plus-motivational system. Progress in Theoretical Biology. 1981; 2: 147-160.
  13. Rossler OE. On the animal-man problem from the vantage point of the theoretical biology of behavior (in German). Schweizer Rundschau. 1968; 67: 529-532.
  14. van Hooff J. A comparative approach to the phylogeny of laughter and smile. In: Non-Verbal Communication R.A. Hinde, ed. 1972.
  15. Rossler OE, Aydin A, Lasker GE. Delectatio in felicitate alterius – benevolence theory. In: Personal and Spiritual Development in the World of Cultural Diversity, G.E. Lasker and K. Hiwaki, eds. 2004; 1: 69-78.
  16. Rossler OE. Nonlinear dynamics, artificial cognition and galactic export. In: Computing Anticipatory Systems D. Dubois, ed. Melville, N.Y., American Institute of Physics Conference Proceedings. 2004; 718: 47-67.
  17. Rossler OE. Chaos in coupled optimizers. In: Perspectives in Biological Dynamics and Theoretical Medicine (S.H. Koslow, A.J. Mandell and M.F. Shlesinger, eds.), Annals of the New York Academy of Sciences. 1987; 504: 229-240.
  18. Rossler OE. Artificial cognition-plus-motivation and Hippocampus. In: Neurobiology of the Hippocampus (W. Seifert, ed). 1983; 573-588.
  19. Rossler OE. Ultraperspective and endophysics. BioSystems. 1996; 38: 211-219.
  20. Crisler L. Arctic Wild. New York: Harper and Brothers 1958, chapter 14; German translation: Wir heulten mit den Wölfen. Wiesbaden: Brockhaus. 1972; 179.
  21. Nolan WF. Family bliss. In: The Pseudo-People (W.F. Nolan, ed.), 1965; German translation: Familienglück. In: Nolan WF. Die anderen unter uns – von Menschen und Pseudomenschen – eine Science-Fiction Anthologie mit berühmten Roboter-Stories. Munich: Wilhelm Heyne Verlag 1968; 181-196.
  22. Nitschke A. Motoric co-execution in infants (Motorisches Mitvollziehen bei Kleinkindern). University of Tuebingen: Inaugural Speech as University Rector. 1965.
  23. Gallese V, Fadiga L, Fogassi L, Rizzolatti G. Action recognition in the premotor cortex, Brain. 1996; 119: 593-609.
  24. Arbib M. From Action to Language: The Mirror Neuron System. Cambridge: Cambridge University Press 2006.
  25. Mead GH. Mind, Self and Society. Chicago: Chicago University Press. 1934.
  26. Rössler R, Rössler OE. Jonas’ World – a Child’s Thoughts (Jonas’ Welt- das Denken eines Kindes). Reinbek: Rowohlt. 1994.
  27. Ramachandran V, Oberman L. Broken mirrors – a theory of autism. Scientific American November. 2006; 295: 62-69.
  28. Rossler OE. Mathematical model of a proposed treatment of early infantile autism: facilitation of the ‘dialogical catastrophe’ in motivation interaction. In: San Diego Biomedical Symposium. 1975; 14: 105-110.
  29. Rossler OE. Cross-stimulation: theoretical implications. In: Brain Stimulation Reward (A. Wauquier and E.T. Rolls, eds.). Amsterdam: North-Holland. 1976; 600-605.
  30. Kant I. Foundations to a Metaphysics of Manners (in German). Werke, Akademie Textausgabe. 1968; 1: 393-394.
  31. Everett H. ‘Relative state’ formulation of quantum mechanics. Reviews of Modern Physics. 1957; 29: 454-462.
  32. Dick PK. Do Androids Dream of Electric Sheep? London: Rapp and Whiting. 1969.
  33. Schimmel A. Rumi - I am Wind and You Are Fire (in German). Cologne: Eugen Diederichs. 1978; Schimmel A. The Triumphal Sun – a Study of the Works of Jalallodin Rumi. London. 1978.
  34. Levinas E. Time and the Other Person (Le temps et l’autre). Montpellier: fata morgana. 1946.
  35. Pies E. Der [Mord-]Fall Descartes – eine kriminologisch-medizinische Untersuchung (The [Murder] Case of Descartes – a Medico-criminological Investigation). Cologne: Verlag Dr. G. Brockmann. 1991; see also Eike Pies’ TV documentary of the same name.
  36. Taschwer K, Föger B. Konrad Lorenz – Biografie (in German). Vienna: Zsolnay. 2003.
  37. Turing AM, Computing machinery and intelligence. Mind. 1950; 59: 433-460.
  38. Hilgartner CA. Languiging for survival. In: Sociocybernetics and Human Development (G.E. Lasker, ed). 2002; 21-34.
  39. Rössler R, Kloeden P. The Thanatos Principle – Biological Foundations of Aging (in German). Munich: Beck. 1997.
