Uncanny valley

From Wikipedia, the free encyclopedia

The uncanny valley is a hypothesis in the field of aesthetics which holds that when human features look and move almost, but not exactly, like natural human beings, they cause a response of revulsion among some human observers. The "valley" refers to the dip in a graph of observers' comfort level plotted as a function of a subject's human likeness: comfort rises as the subject approaches a healthy, natural human appearance, drops sharply when the likeness is close but imperfect, and recovers at full realism. Examples can be found in the fields of robotics[1] and 3D computer animation,[2][3] among others.

Etymology

The term was coined by the robotics professor Masahiro Mori as Bukimi no Tani Genshō (不気味の谷現象) in 1970.[4] The hypothesis has been linked to Ernst Jentsch's concept of the "uncanny" identified in a 1906 essay "On the Psychology of the Uncanny".[5][6][7] Jentsch's conception was elaborated by Sigmund Freud in a 1919 essay entitled "The Uncanny" ("Das Unheimliche").[8]

Hypothesis

Mori's original hypothesis states that as the appearance of a robot is made more human, some human observers' emotional response to the robot will become increasingly positive and empathic, until a point is reached beyond which the response quickly becomes that of strong revulsion. However, as the robot's appearance continues to become less distinguishable from that of a human being, the emotional response becomes positive once again and approaches human-to-human empathy levels.[9]

This area of repulsive response aroused by a robot with appearance and motion between a "barely human" and "fully human" entity is called the uncanny valley. The name captures the idea that an almost human-looking robot will seem overly "strange" to some human beings, will produce a feeling of uncanniness, and will thus fail to evoke the empathic response required for productive human-robot interaction.[9]

Theoretical basis

Hypothesized emotional response of human subjects is plotted against anthropomorphism of a robot, following Mori's statements. The uncanny valley is the region of negative emotional response towards robots that seem "almost human". Movement amplifies the emotional response.[10]
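Mori's curve is qualitative, but its general shape can be illustrated with a toy function. The sketch below is a minimal, hypothetical model: the dip position, width, and depth are invented parameters for illustration only, not values from Mori's paper. Affinity rises with human likeness, plunges into a valley just short of full realism, recovers at full realism, and the dip deepens when the figure moves.

```python
import math

def affinity(likeness, moving=False):
    """Hypothetical affinity toward an entity as a function of human
    likeness (0 = fully mechanical, 1 = fully human).

    The dip position, width, and depth below are invented for
    illustration; Mori's 1970 paper gives no numerical values."""
    dip_center = 0.85   # the valley sits just short of full realism
    dip_width = 0.07
    dip_depth = 1.5 if moving else 0.8   # movement amplifies the response
    baseline = likeness  # affinity otherwise grows with likeness
    valley = dip_depth * math.exp(
        -((likeness - dip_center) ** 2) / (2 * dip_width ** 2))
    return baseline - valley

# A moving figure dips deeper into the valley than a still one,
# and affinity recovers once likeness reaches full realism.
```

Plotting this function over the interval [0, 1] reproduces the familiar valley shape: for the moving case the curve even dips below zero near the valley floor, matching Mori's depiction of outright negative affect.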

A number of theories have been proposed to explain the cognitive mechanism underlying the phenomenon:

  • Mortality salience. Viewing an "uncanny" robot elicits "an innate fear of death and culturally supported defenses for coping with death's inevitability.... [P]artially disassembled androids...play on subconscious fears of reduction, replacement, and annihilation: (1) A mechanism with a human facade and a mechanical interior plays on our subconscious fear that we are all just soulless machines. (2) Androids in various states of mutilation, decapitation, or disassembly are reminiscent of a battlefield after a conflict and, as such, serve as a reminder of our mortality. (3) Since most androids are copies of actual people, they are doppelgängers and may elicit a fear of being replaced, on the job, in a relationship, and so on. (4) The jerkiness of an android's movements could be unsettling because it elicits a fear of losing bodily control."[13]
  • Pathogen avoidance. Uncanny stimuli may activate a cognitive mechanism that originally evolved to motivate the avoidance of potential sources of pathogens by eliciting a disgust response. "The more human an organism looks, the stronger the aversion to its defects, because (1) defects indicate disease, (2) more human-looking organisms are more closely related to human beings genetically, and (3) the probability of contracting disease-causing bacteria, viruses, and other parasites increases with genetic similarity."[12][14] Thus, the visual anomalies of android robots and animated human characters have the same effect as those of corpses and visibly diseased individuals: the elicitation of alarm and revulsion.[citation needed]
  • Sorites paradoxes. Stimuli with human and nonhuman traits undermine our sense of human identity by linking qualitatively different categories, human and nonhuman, by a quantitative metric, degree of human likeness.[15]
  • Violation of human norms. The uncanny valley may "be symptomatic of entities that elicit a model of a human other but do not measure up to it".[16] If an entity looks sufficiently nonhuman, its human characteristics will be noticeable, generating empathy. However, if the entity looks almost human, it will elicit our model of a human other and its detailed normative expectations. The nonhuman characteristics will be noticeable, giving the human viewer a sense of strangeness. In other words, a robot stuck inside the uncanny valley is no longer being judged by the standards of a robot doing a passable job at pretending to be human, but is instead being judged by the standards of a human doing a terrible job at acting like a normal person. This has been linked to perceptual uncertainty and the theory of predictive coding.[17][18]
  • Religious definition of human identity. The existence of artificial but humanlike entities is viewed by some as a threat to the concept of human identity, as constructed in the West and the Middle East. This is particularly the case with the Abrahamic religions (Christianity, Judaism, and Islam), which emphasize human uniqueness.[19] An example can be found in the theoretical framework of psychiatrist Irvin Yalom. Yalom explains that humans construct psychological defenses in order to avoid existential anxiety stemming from death. One of these defenses is "specialness", the irrational belief that aging and death as central premises of life apply to all others but oneself.[20] The experience of the very humanlike "living" robot can be so rich and compelling that it challenges humans' notions of "specialness" and existential defenses, eliciting existential anxiety. The creation of human-like, but soulless, beings is considered unwise; the golem in Judaism is a well-known example. Like anthropomorphic robots, a golem may be created with good intentions, but its absence of human empathy and spirit can lead to disaster.[citation needed]
  • Conflicting perceptual cues. It is hypothesized that the uncanny valley effect is associated with a more general psychological phenomenon in which negative affect is produced by the activation of conflicting cognitive representations. Perceptual tension occurs when there are conflicting cues to category membership, such as when a humanoid figure has robotic movement, which is then experienced as psychological discomfort (i.e., "eeriness"). This is supported by converging lines of evidence, including behavioural and neuroimaging studies, and mathematical modelling. First, Burleigh and colleagues demonstrated that faces at the midpoint between human and non-human categories produced a level of reported eeriness that diverged from an otherwise linear model relating human-likeness to affect.[21] Second, Saygin et al. found increased neural activity in the parietal cortex when participants were viewing a category-inconsistent robot (i.e., a robot with a human-like appearance and robotic movement), as compared to when they were viewing a category-consistent robot (i.e., a robot with a robot-like appearance and robotic movement) or a human, and suggested that this activity indicated increased prediction error due to perceptual conflict.[17] Moore used a Bayesian mathematical model to provide a quantitative account of perceptual conflict.[22] This model not only predicts the shape of Mori's hypothesized curves, but may also allow predictions to be made for a range of social situations in which conflicting perceptual cues might give rise to negative reactions.[citation needed]
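The categorical-conflict account lends itself to a toy Bayesian illustration. The sketch below is an assumption-laden simplification in the spirit of Moore's model, not his actual equations: two Gaussian likelihoods stand in for the "robot" and "human" categories, and posterior entropy, which peaks at the category boundary, stands in for perceptual conflict. The means, spread, and equal priors are all illustrative choices.

```python
import math

def gauss(x, mu, sigma):
    """Gaussian probability density."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (
        sigma * math.sqrt(2 * math.pi))

def conflict(likeness):
    """Posterior uncertainty (entropy, in bits) over two perceptual
    categories, 'robot' (centered at 0) and 'human' (centered at 1),
    for a stimulus at the given point on a 0-to-1 human-likeness
    scale.  Equal priors and the 0.35 spread are illustrative."""
    p_robot = gauss(likeness, 0.0, 0.35)
    p_human = gauss(likeness, 1.0, 0.35)
    posterior = p_robot / (p_robot + p_human)
    entropy = 0.0
    for p in (posterior, 1.0 - posterior):
        if p > 0:
            entropy -= p * math.log2(p)
    return entropy

# Uncertainty, and on this account discomfort, peaks midway between
# the categories, where the stimulus gives conflicting cues.
```

Under these assumptions a stimulus that is clearly a robot or clearly a human is classified with near certainty, while one straddling the boundary maximizes categorization uncertainty, mirroring the negative affect attributed to "almost human" entities.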

Research

One study conducted in 2009 examined the evolutionary mechanism behind the aversion associated with the uncanny valley. A group of five monkeys was shown three images: two different 3D monkey faces (one realistic, one unrealistic) and a real photo of a monkey's face. The monkeys' eye-gaze was used as a proxy for preference or aversion. Since the realistic 3D monkey face was looked at less than either the real photo or the unrealistic 3D monkey face, this was interpreted as an indication that the monkeys found the realistic 3D face aversive, or otherwise preferred the other two images. As the uncanny valley hypothesis would predict, more realism can lead to less positive reactions, and the study suggests that neither human-specific cognitive processes nor human culture is needed to explain the effect. In other words, the aversive reaction to near-realism may be evolutionary in origin.[23]

As of 2011, researchers at University of California, San Diego and California Institute for Telecommunications and Information Technology are measuring human brain activations related to the uncanny valley.[24][25] In one study using fMRI, a group of cognitive scientists and roboticists found the biggest differences in brain responses for uncanny robots in parietal cortex, on both sides of the brain, specifically in the areas that connect the part of the brain’s visual cortex that processes bodily movements with the section of the motor cortex thought to contain mirror neurons. The researchers say they saw, in essence, evidence of mismatch or perceptual conflict.[17] The brain "lit up" when the human-like appearance of the android and its robotic motion "didn’t compute". Ayşe Pınar Saygın, an assistant professor from UCSD, says "The brain doesn’t seem selectively tuned to either biological appearance or biological motion per se. What it seems to be doing is looking for its expectations to be met – for appearance and motion to be congruent."[26][27][28]

Viewer perception of facial expression and speech and the uncanny valley in realistic, human-like characters intended for video games and film is being investigated by Tinwell et al. (2011).[29] Tinwell et al. (2010) also consider how the uncanny may be exaggerated for antipathetic characters in survival horror games.[30] Building on the body of work already undertaken in android science, this research aims to build a conceptual framework of the uncanny valley using 3D characters generated in a real-time gaming engine, analysing how the cross-modal factors of facial expression and speech may exaggerate the uncanny. Tinwell et al. (2011)[31] have also introduced the notion of an unscalable "uncanny wall": the suggestion that a viewer's ability to detect imperfections in realism will keep pace with new technologies for simulating realism.

In computer animation

A number of films that use computer-generated imagery to show characters have been described by reviewers as giving a feeling of revulsion or "creepiness" as a result of the characters looking too realistic.  Examples include:

  • According to roboticist Dario Floreano, the animated baby in Pixar's groundbreaking 1988 short film Tin Toy provoked negative audience reactions, which first led the film industry to take the concept of the uncanny valley seriously.[32][33]
  • Several reviewers of the 2004 animated film The Polar Express called its animation eerie.  CNN.com reviewer Paul Clinton wrote, "Those human characters in the film come across as downright... well, creepy.  So The Polar Express is at best disconcerting, and at worst, a wee bit horrifying." [34]  The term "eerie" was used by reviewers Kurt Loder[35] and Manohla Dargis,[36] among others. Newsday reviewer John Anderson called the film's characters "creepy" and "dead-eyed", and wrote that "The Polar Express is a zombie train." [37]  Animation director Ward Jenkins wrote an online analysis describing how changes to the Polar Express characters' appearance, especially to their eyes and eyebrows, could have avoided what he considered a feeling of deadness in their faces.[38]
  • In a review of the 2007 animated film Beowulf, New York Times technology writer David Gallagher wrote that the film failed the uncanny valley test, stating that the film's villain, the monster Grendel, was "only slightly scarier" than the "closeups of our hero Beowulf’s face... allowing viewers to admire every hair in his 3-D digital stubble." [3]
  • In the 2010 film The Last Airbender, the character Appa, the flying bison, has been called "uncanny".  Geekosystem's Susana Polo found the character "really quite creepy", noting "that prey animals (like bison) have eyes on the sides of their heads, and so moving them to the front without changing rest of the facial structure tips us right into the uncanny valley".[39]

By contrast, at least one film, 2011's The Adventures of Tintin, was praised by reviewers for avoiding the uncanny valley despite its animated characters' realism. Critic Dana Stevens wrote, "With the possible exception of the title character, the animated cast of Tintin narrowly escapes entrapment in the so-called 'uncanny valley.'"[40] Wired Magazine editor Kevin Kelly wrote of the film, "we have passed beyond the uncanny valley into the plains of hyperreality."[41]

Design principles

A number of design principles have been proposed for avoiding the uncanny valley:

  • Design elements should match in human realism. A robot may look uncanny when human and nonhuman elements are mixed.[42] For example, a robot with a synthetic voice and a human being with a human voice have each been found to be less eerie than a robot with a human voice or a human being with a synthetic voice.[6] For a robot to give a more positive impression, its degree of human realism in appearance should match its degree of human realism in behavior.[43] If an animated character looks more human than its movement suggests, this gives a negative impression.[44] Human neuroimaging studies also indicate that matching appearance and motion kinematics is important.[17][45][46]
  • Reducing conflict and uncertainty by matching appearance, behavior, and ability. In terms of performance, if a robot looks too appliance-like, people will expect little from it; if it looks too human, people will expect too much from it.[43] A highly human-like appearance leads to an expectation that certain behaviors will be present, such as humanlike motion dynamics. This likely operates at a subconscious level and may have a biological basis. Neuroscientists have noted that "when the brain's expectations are not met, the brain...generates a 'prediction error'. As human-like artificial agents become more commonplace, perhaps our perceptual systems will be re-tuned to accommodate these new social partners. Or perhaps, we will decide 'it is not a good idea to make [robots] so clearly in our image after all.'"[17][46][47]
  • Human facial proportions and photorealistic texture should only be used together. A photorealistic human texture demands human facial proportions, or the computer generated character can fall into the uncanny valley. Abnormal facial proportions, including those typically used by artists to enhance attractiveness (e.g., larger eyes), can look eerie with a photorealistic human texture. Avoiding a photorealistic texture can permit more leeway.[48]

Criticism

A number of criticisms have been raised concerning whether the uncanny valley exists as a unified phenomenon amenable to scientific scrutiny:

  • Good design can lift human-looking entities out of the valley. David Hanson has criticized Mori's hypothesis that entities approaching human appearance will necessarily be evaluated negatively.[49] He has shown that the uncanny valley that Karl MacDorman and Hiroshi Ishiguro[50] generated – by having participants rate photographs that morphed from humanoid robots to android robots to human beings – could be flattened out by adding neotenous, cartoonish features to the entities that had formerly fallen into the valley.[49]
  • The uncanny appears at any degree of human likeness. Hanson has also pointed out that uncanny entities may appear anywhere in a spectrum ranging from the abstract (e.g., MIT's robot Lazlo) to the perfectly human (e.g., cosmetically atypical people).[49] Capgras syndrome is a relatively rare condition in which the sufferer believes that people (or, in some cases, things) have been replaced with duplicates. These duplicates are rationally accepted to be identical in physical properties, but the irrational belief is held that the "true" entity has been replaced with something else. Some sufferers of Capgras syndrome claim that the duplicate is a robot. Ellis and Lewis argue that the syndrome arises from an intact system for overt recognition coupled with a damaged system for covert recognition, which leads to conflict over an individual being identifiable but not familiar in any emotional sense.[51] This supports the view that the uncanny valley could arise due to issues of categorical perception that are particular to the manner in which the brain processes information.[46][52]
  • The uncanny valley is a heterogeneous group of phenomena. Phenomena labeled as being in the uncanny valley can be diverse, involve different sense modalities, and have multiple, possibly overlapping causes, which can range from evolved or learned circuits for early face perception[48][53] to culturally-shared psychological constructs.[54] People's cultural backgrounds may have a considerable influence on how androids are perceived with respect to the uncanny valley.[55]
  • The uncanny valley may be generational. Younger generations, more used to CGI, robots, and such, may be less likely to be affected by this hypothesized issue.[56]

Similar effects

An effect similar to the uncanny valley was noted by Charles Darwin in 1839:

The expression of this [Trigonocephalus] snake’s face was hideous and fierce; the pupil consisted of a vertical slit in a mottled and coppery iris; the jaws were broad at the base, and the nose terminated in a triangular projection. I do not think I ever saw anything more ugly, excepting, perhaps, some of the vampire bats. I imagine this repulsive aspect originates from the features being placed in positions, with respect to each other, somewhat proportional to the human face; and thus we obtain a scale of hideousness.

—Charles Darwin, The Voyage of the Beagle[57]

A similar "uncanny valley" effect could, according to the ethical-futurist writer Jamais Cascio, show up when humans begin modifying themselves with transhuman enhancements (cf. body modification), which aim to improve the abilities of the human body beyond what would normally be possible, be it eyesight, muscle strength, or cognition.[58] So long as these enhancements remain within a perceived norm of human behavior, a negative reaction is unlikely, but once individuals supplant normal human variety, revulsion can be expected. However, according to this theory, once such technologies gain further distance from human norms, "transhuman" individuals would cease to be judged on human levels and instead be regarded as separate entities altogether (this point is what has been dubbed "posthuman"), and it is here that acceptance would rise once again out of the uncanny valley.[58] Another example comes from "pageant retouching" photos, especially of children, which some find disturbingly doll-like.[59]

A study by Princeton University researchers involving macaques demonstrates that the uncanny valley phenomenon is not limited to humans.[60]

Use in the media

In the 2008 30 Rock episode "Succession", Frank Rossitano explains the uncanny valley concept, using a graph and Star Wars examples, to try to convince Tracy Jordan that his dream of creating a pornographic video game is impossible. He also references the computer-animated film The Polar Express.[61]

The 1977 Doctor Who serial "The Robots of Death" describes a mental illness called "Grimwade's Syndrome" or "robophobia": a condition where the lack of body language from humanoid robots provokes in certain people the feeling that they are "surrounded by walking, talking dead men."

Notes

  1. ^ "The Truth About Robotic's Uncanny Valley - Human-Like Robots and the Uncanny Valley". Popular Mechanics. 2010-01-20. Retrieved 2011-03-20. 
  2. ^ When fantasy is just too close for comfort - The Age, June 10, 2007
  3. ^ a b Digital Actors in ‘Beowulf’ Are Just Uncanny - New York Times, November 14, 2007
  4. ^ Kawaguchi, Judit (10 March 2011). "Robocon founder Dr. Masahiro Mori". Words To Live By. Japan Times. p. 11. Archived from the original on 2011-03-13. Retrieved 2014-08-14. "Mori's influence on the world of robotics is immeasurable. His classic hypothesis, "The Uncanny Valley," published in 1970, is still a key work defining robotic design." 
  5. ^ Jentsch, E. (25 Aug. 1906). Zur Psychologie des Unheimlichen, Psychiatrisch-Neurologische Wochenschrift 8(22), 195-198.
  6. ^ a b Mitchell et al., 2011.
  7. ^ Misselhorn, 2009
  8. ^ Freud, S. (1919/2003). The uncanny [das unheimliche] (D. McLintock, Trans.). New York: Penguin.
  9. ^ a b Mori, M. (1970/2012). The uncanny valley (K. F. MacDorman & N. Kageki, Trans.). IEEE Robotics & Automation Magazine, 19(2), 98–100. doi:10.1109/MRA.2012.2192811
  10. ^ MacDorman, 2005.
  11. ^ Green, MacDorman, Ho, Koch, 2008.
  12. ^ a b Rhodes, G. & Zebrowitz, L. A. (eds) (2002). Facial Attractiveness: Evolutionary, Cognitive, and Social Perspectives, Ablex Publishing.
  13. ^ MacDorman & Ishiguro, 2006, p. 313.
  14. ^ MacDorman, Green, Ho, & Koch, 2009, p. 696.
  15. ^ Ramey, 2005.
  16. ^ MacDorman & Ishiguro, 2006, p. 303.
  17. ^ a b c d e Saygin, A. P., Chaminade, T., Ishiguro, H., Driver, J., & Frith, C. (2012). "The Thing That Should Not Be: Predictive Coding and the Uncanny Valley in Perceiving Human and Humanoid Robot Actions". Social Cognitive and Affective Neuroscience 7(4): 413–422. doi:10.1093/scan/nsr025. 
  18. ^ UCSD News. "Your Brain on Androids". 
  19. ^ MacDorman, K. F., Vasudevan, S. K., & Ho, C.-C., 2009.
  20. ^ Yalom, Irvin D. (1980) "Existential Psychotherapy", Basic Books, Inc., Publishers, New York
  21. ^ Burleigh, T. J., Schoenherr, J. R., & Lacroix, G. L. (2013). Does the uncanny valley exist? An empirical test of the relationship between eeriness and the human likeness of digitally created faces. Computers in Human Behavior, 29(3), doi: 10.1016/j.chb.2012.11.021.
  22. ^ Moore, R. K. (2012). A Bayesian explanation of the ‘Uncanny Valley’ effect and related psychological phenomena. Nature Scientific Reports, 2, doi:10.1038/srep00864.
  23. ^ MacPherson, Kitta (2009-10-13). "Monkey visual behavior falls into the uncanny valley". Princeton University. Retrieved 2011-03-20. 
  24. ^ "Science Exploring the uncanny valley of how brains react to humanoids". 
  25. ^ Ramsey, Doug (2010-05-13). "Nineteen Projects Awarded Inaugural Calit2 Strategic Research Opportunities Grants". UCSD. Retrieved 2011-03-20. 
  26. ^ Kiderra, Inga. "YOUR BRAIN ON ANDROIDS". UCSD. 
  27. ^ Robbins, Gary. "UCSD exploring why robots creep people out". San Diego Union Tribune. 
  28. ^ Palmer, Chris. "Exploring "The thing that should not be"". Calit2. 
  29. ^ Tinwell, A., et al. (2011). "Facial expression of emotion and perception of the Uncanny Valley in virtual characters". Computers in Human Behavior. 
  30. ^ Tinwell, A., et al. (2010). "Uncanny Behaviour in Survival Horror Games". Journal of Gaming and Virtual Worlds. 
  31. ^ Tinwell, A. et al. (2011). "The Uncanny Wall". International Journal of Arts and Technology. 
  32. ^ Dario Floreano. "Bio-Mimetic Robotics". 
  33. ^ EPFL. http://moodle.epfl.ch/mod/resource/view.php?inpopup=true&id=41121
  34. ^ "Polar Express a creepy ride". CNN.com. Nov 10, 2004. Retrieved Nov 21, 2011. 
  35. ^ Loder, Kurt (November 10, 2004). "'The Polar Express' Is All Too Human". MTV. 
  36. ^ Dargis, Manohla (November 10, 2004). "Do You Hear Sleigh Bells? Nah, Just Tom Hanks and Some Train". The New York Times. 
  37. ^ Anderson, John (November 10, 2004). "'Polar Express' derails in zombie land". Newsday. 
  38. ^ The Polar Express: A Virtual Train Wreck (conclusion), Ward Jenkins, Ward-O-Matic blog, December 18, 2004
  39. ^ Polo, Susana (June 20, 2010). "New Airbender TV Spot: Appa's Creepy Face". Geekosystem. Retrieved December 11, 2012. 
  40. ^ Stevens, Dana. "Tintin, So So". Slate. Retrieved 25 March 2012. 
  41. ^ Kelly, Kevin. "Beyond the Uncanny Valley". The Technium. Retrieved 25 March 2012. 
  42. ^ Ho, MacDorman, Pramono, 2008.
  43. ^ a b Goetz, Kiesler, & Powers, 2003.
  44. ^ Vinayagamoorthy, Steed, & Slater, 2005.
  45. ^ Saygin, A.P., Chaminade, T., Ishiguro, H. (2010) The Perception of Humans and Robots: Uncanny Hills in Parietal Cortex. Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 2716-2720).
  46. ^ a b c Saygin et al., 2011.
  47. ^ Gaylord, Chris. "Uncanny Valley: Will we ever learn to live with artificial humans?". Christian Science Monitor. 
  48. ^ a b MacDorman, Green, Ho, & Koch, 2009.
  49. ^ a b c David Hanson, Andrew Olney, Ismar A. Pereira & Marge Zielke (2005). Upending the Uncanny Valley. PROCEEDINGS OF THE NATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE, 20, p. 1728-1729.
  50. ^ MacDorman & Ishiguro, 2006, p. 305.
  51. ^ Ellis, H., & Lewis, M. (2001). Capgras delusion: A window on face recognition. Trends in Cognitive Science, 5(4), 149-156.
  52. ^ Pollick, F. In Search of the Uncanny Valley. Analog communication: Evolution, brain mechanisms, dynamics, simulation. Cambridge, MA: MIT Press: The Vienna Series in Theoretical Biology (2009)
  53. ^ MacDorman & Ishiguro, 2006
  54. ^ MacDorman, Vasudevan & Ho, 2008.
  55. ^ Bartneck Kanda, Ishiguro, & Hagita, 2007.
  56. ^ "Is the 'uncanny valley' a myth?". io9.com. 2013-09-03. Retrieved 2013-09-04. 
  57. ^ Charles Darwin. The Voyage of the Beagle. New York: Modern Library. 2001. p. 87.
  58. ^ a b Jamais Cascio, The Second Uncanny Valley
  59. ^ viz. "Pageant retouching". University of Texas. Retrieved 2011-03-20. 
  60. ^ Steckenfinger, S. A., & Ghazanfar, A. A. (2009). Monkey visual behavior falls into the uncanny valley. Proceedings of the National Academy of Sciences, 106(43), 18362–18366.
  61. ^ Michael Neal (April 25, 2008). "Succession". Yahoo! TV. 

References

Bartneck, C., Kanda, T., Ishiguro, H., & Hagita, N. (2007). Is the Uncanny Valley an Uncanny Cliff? Proceedings of the 16th IEEE RO-MAN 2007, Jeju, Korea, pp. 368–373. doi:10.1109/ROMAN.2007.4415111
Burleigh, T. J., Schoenherr, J. R., & Lacroix, G. L. (2013). Does the uncanny valley exist? An empirical test of the relationship between eeriness and the human likeness of digitally created faces. Computers in Human Behavior, 29(3), 759-771, doi: 10.1016/j.chb.2012.11.021.
Chaminade, T., Hodgins, J. & Kawato, M. (2007). Anthropomorphism influences perception of computer-animated characters' actions. Social Cognitive and Affective Neuroscience, 2(3), 206-216.
Cheetham, M., Suter, P., & Jancke, L. (2011). The human likeness dimension of the "uncanny valley hypothesis": behavioral and functional MRI findings. Front Hum Neurosci 5, 126.
Goetz, J., Kiesler, S., & Powers, A. (2003). Matching robot appearance and behavior to tasks to improve human-robot cooperation. Proceedings of the Twelfth IEEE International Workshop on Robot and Human Interactive Communication. Lisbon, Portugal.
Green, R. D., MacDorman, K. F., Ho, C.-C., & Vasudevan, S. K. (2008). Sensitivity to the proportions of faces that vary in human likeness. Computers in Human Behavior, 24(5), 2456–2474.
Ho, C.-C., MacDorman, K. F., & Pramono, Z. A. D. (2008). Human emotion and the uncanny valley: A GLM, MDS, and ISOMAP analysis of robot video ratings. Proceedings of the Third ACM/IEEE International Conference on Human-Robot Interaction. March 11–14. Amsterdam.
Ishiguro, H. (2005). Android science: Toward a new cross-disciplinary framework. CogSci-2005 Workshop: Toward Social Mechanisms of Android Science, 2005, pp. 1–6.
MacDorman, K. F. (2005). Androids as an experimental apparatus: Why is there an uncanny valley and can we exploit it? CogSci-2005 Workshop: Toward Social Mechanisms of Android Science, 106-118. (An English translation of Mori's "The Uncanny Valley" made by Karl MacDorman and Takashi Minato appears in Appendix B of the paper.)
MacDorman, K. F. (2006). Subjective ratings of robot video clips for human likeness, familiarity, and eeriness: An exploration of the uncanny valley. ICCS/CogSci-2006 Long Symposium: Toward Social Mechanisms of Android Science. July 26, 2006. Vancouver, Canada.
MacDorman, K. F. & Ishiguro, H. (2006). The uncanny advantage of using androids in cognitive science research. Interaction Studies, 7(3), 297-337.
MacDorman, K. F., Vasudevan, S. K., & Ho, C.-C. (2009). Does Japan really have robot mania? Comparing attitudes by implicit and explicit measures. AI & Society, 23(4), 485-510.
MacDorman, K. F., Green, R. D., Ho, C.-C., & Koch, C. (2009). Too real for comfort: Uncanny responses to computer generated faces. Computers in Human Behavior, 25, 695-710.
Misselhorn, C. (2009). Empathy with inanimate objects and the uncanny valley. Minds and Machines, 19(3), 345-359.
Mitchell, W. J., Szerszen, Sr., K. A., Lu, A. S., Schermerhorn, P. W., Scheutz, M., & MacDorman, K. F. (2011). A mismatch in the human realism of face and voice produces an uncanny valley. i-Perception, 2(1), 10–12.
Moore, R. K. (2012). A Bayesian explanation of the ‘Uncanny Valley’ effect and related psychological phenomena. Nature Scientific Reports, 2, doi:10.1038/srep00864.
Mori, M. (1970/2012). The uncanny valley (K. F. MacDorman & N. Kageki, Trans.). IEEE Robotics & Automation Magazine, 19(2), 98–100. doi:10.1109/MRA.2012.2192811 See also http://spectrum.ieee.org/automaton/robotics/humanoids/an-uncanny-mind-masahiro-mori-on-the-uncanny-valley
Mori, M. (1970). Bukimi no tani. Energy, 7(4), 33–35. (Originally in Japanese)
Mori, M. (2005). On the Uncanny Valley. Proceedings of the Humanoids-2005 workshop: Views of the Uncanny Valley. 5 December 2005, Tsukuba, Japan.
Pollick, F. E. (forthcoming). In search of the uncanny valley. In Grammer, K. & Juette, A. (Eds.), Analog communication: Evolution, brain mechanisms, dynamics, simulation. The Vienna Series in Theoretical Biology. Cambridge, Mass.: The MIT Press.
Ramey, C.H. (2005). The uncanny valley of similarities concerning abortion, baldness, heaps of sand, and humanlike robots. In Proceedings of the Views of the Uncanny Valley Workshop, IEEE-RAS International Conference on Humanoid Robots.
Saygin, A.P., Chaminade, T., Ishiguro, H., Driver, J. & Frith, C. (2011) The thing that should not be: Predictive coding and the uncanny valley in perceiving human and humanoid robot actions. Social Cognitive Affective Neuroscience, 6(4).
Saygin, A.P., Chaminade, T., Ishiguro, H. (2010) The Perception of Humans and Robots: Uncanny Hills in Parietal Cortex. In S. Ohlsson & R. Catrambone (Eds.), Proceedings of the 32nd Annual Conference of the Cognitive Science Society (pp. 2716–2720). Austin, TX: Cognitive Science Society.
Seyama, J., & Nagayama, R. S. (2007). The uncanny valley: Effect of realism on the impression of artificial human faces. Presence: Teleoperators and Virtual Environments, 16(4), 337-351.
Tinwell, A., Grimshaw, M., Abdel Nabi, D., & Williams, A. (2011) Facial expression of emotion and perception of the Uncanny Valley in virtual characters. Computers in Human Behavior, 27(2), pp. 741-749.
Tinwell, A., Grimshaw, M., & Williams, A. (2010) Uncanny Behaviour in Survival Horror Games. Journal of Gaming and Virtual Worlds, 2(1), pp. 3-25.
Tinwell, A., Grimshaw, M., & Williams, A. (2011) The Uncanny Wall. International Journal of Arts and Technology, 4(3), pp. 326-341.
Vinayagamoorthy, V. Steed, A. & Slater, M. (2005). Building Characters: Lessons Drawn from Virtual Environments. Toward Social Mechanisms of Android Science: A CogSci 2005 Workshop. July 25–26, Stresa, Italy, pp. 119–126.

External links

  • Zysk, W., Filkov, R., Feldmann, S. (2013). Bridging the uncanny valley – From 3D humanoid Characters to Virtual Tutors. The Second International Conference on E-Learning and E-Technologies in Education, ICEEE2013, Lodz University of Technology, Sept. 23-25, 2013. ISBN 978-1-4673-5093-8
  • Your Brain on Androids UCSD news release about human brain and the uncanny valley.
  • Massimo Negrotti Study on the reality of artificial objects.
  • Humanoids-2005 Workshop: Views of the Uncanny Valley, held in Tsukuba, Japan, near Tokyo, on December 5, 2005.