Three Faces of Function: How Robot Design Choices Shape Human Trust and Acceptance

Robotics companies face choices about their robots’ features. Should a robot be humanlike? Abstract? Does it need a face at all? Every design choice tells a story about the robot’s intended relationship with humans. From the deliberately non-facial screen displays of industrial robots to the childlike proportions of companion robots, not to mention the uncannily realistic features of entertainment androids, these design decisions reflect fundamentally different approaches. Different approaches to earning human trust and acceptance. Different uses. Different expectations. Which approaches actually work, and in which contexts, will shape future commercial success and our relationships with robots at work and in the home.
The Industrial Approach: Trust Through Transparency
Figure AI: The Humanoid Toaster?
Figure AI's approach to robot faces seems to reflect a pretty utilitarian philosophy. These robots are being designed to achieve tasks: to ‘bring a general purpose humanoid to life’, with the emphasis on purpose. This has undoubtedly influenced the company’s attitude to facial design.
Figure's humanoids don't have a prominent "face", i.e. they lack the fundamental elements humans use to recognise faces (two eyes, a nose, and a mouth arranged in the characteristic human pattern). Instead, the robot has a face-shaped screen on which graphic shapes display and oscillate: a deliberate avoidance of human facial features. This conscious design choice positions the robot as a sophisticated tool rather than a social entity. This robot is there to complete tasks, not to be your friend. Much as you trust a microwave, toaster or automated hoover to do its job, you are being encouraged to see Figure's robot as clearly a machine for completing tasks. It shouldn’t surprise you with unexpected behaviours or emotional responses. It does its job. And it is clearly a machine.
The Trust-Intelligence Trade-off
There are interesting trade-offs in human-robot interaction. Studies show that robots perceived as highly human-like may be more likely to be trusted by individuals (e.g. Cominelli et al., 2021), but anthropomorphism can also lead to higher expectations and greater disappointment when robots fail (e.g. the PMC study on cognitive anthropomorphism).
Significantly, research also suggests that in some conditions, technological entities can be perceived as rational actors that, without hidden motivations and agendas, make more sensible and unselfish decisions than humans (Oksanen et al. 2020). This finding supports Figure's design philosophy: by clearly signaling their mechanical nature, these robots may actually gain trust by avoiding the complex emotional expectations that come with human-like appearance.
The trade-off appears to be that while obviously mechanical robots may be perceived as less intelligent or socially capable, they could be trusted more consistently for specific tasks. Users don't expect them to disagree, make jokes, or have opinions. They simply expect reliable performance.
Applications and Acceptance
Figure's robots are designed for industrial and home-help applications where task completion is paramount. The minimal facial design serves several purposes:
- Reduces anthropomorphic expectations: Users don't expect human-like reasoning or emotional responses
- Maintains focus on functionality: The robot's purpose is clearly functional rather than social
- Maintains professional distance: Appropriate for environments where emotional attachment isn't desired
- Minimises uncanny valley effects: By avoiding human-like features entirely, it sidesteps the potential psychological discomfort of near-human appearance
This approach may be particularly effective in contexts where trust is critical but relationship-building is secondary. By clearly signaling their artificial nature through minimal facial design, Figure's robots might help users maintain appropriate trust levels: trusting the robot's technical capabilities while avoiding any over-trust that can occur when robots appear more human-like.
The Companion Approach: Trust Through Emotional Design
Cartwheel: Engineering Affection
Cartwheel Robotics has taken the opposite approach, deliberately designing their robots to trigger positive emotional responses through carefully chosen physical characteristics. Their prototype "Yogi" embodies what founder Scott LaValley calls "toddler proportions": rounded lines, a large head relative to body size, and even "a little chubby" appearance.
This design strategy draws directly from ethological research on "baby schema", identified by ethologist Konrad Lorenz. These features - large head, big eyes, rounded features, and smaller body proportions - reliably trigger caregiving responses in humans across cultures (Lorenz, 1943).
The Psychology of Childlike Design
The scientific evidence supporting Cartwheel's approach appears significant. Research has shown that people experience high levels of trustworthiness toward robots with baby schema features, particularly large eyes (Song, Luximon, and Luximon, 2021).
The childlike proportions serve multiple psychological functions:
- Reduces perceived threat: Small, rounded forms signal harmlessness
- Triggers nurturing responses: Evolutionary mechanisms designed to protect human children extend to child-like robots
- Builds emotional bonds: The appearance encourages users to form protective, caring relationships
- Increases forgiveness: Research shows humans are more forgiving of robots after they make mistakes when the robots appear childlike or vulnerable
Design for Companionship
Every aspect of Cartwheel's design reinforces the companionship goal:
Proportional choices: The robot is sized like a child, not an adult, making it non-threatening and approachable. LaValley explicitly notes: "I don't see a robot when I see Yogi; I see a character."
Material and texture choices: Soft, rounded edges and warm colors contribute to the friendly appearance, contrasting sharply with the angular, metallic aesthetics typical of industrial robots.
Facial features: While not attempting photorealistic human features, the face incorporates recognisable elements (eyes, expressions) arranged in ways that read as friendly and approachable rather than alien or mechanical.
This approach recognises that for companion robots, building an emotional connection may be more important than projecting competence or efficiency. The design accepts that some people might initially perceive the robot as less capable, but gains compensating advantages in acceptance and engagement.
The Realistic Approach: Trying to Cross the Uncanny Valley
Hanson Robotics and Ex-Robots: The Quest for True Likeness
The third design philosophy pursues the most technically challenging and psychologically complex goal: creating robots that look and act genuinely human. Companies like Hanson Robotics (with Sophia) and China's Ex-Robots represent this approach, using advanced materials and dozens of motors to create facial expressions that try to mimic human appearance and behavior.
Hanson's Sophia features over 30 motors that create what researchers identify as 62 "human-like" facial expressions using proprietary Frubber skin material. Ex-Robots' creations use silicone masks with tiny motors throughout the head to achieve different expressions.
The Uncanny Valley Challenge
This approach faces the most significant psychological hurdle: the uncanny valley. While Hanson Robotics' research claims to "strongly contravene the 'uncanny valley' theory" (Hanson, Olney, Pereira, & Zielke, 2005), critics, including Facebook's AI head Yann LeCun, have been pretty scathing, calling robots like Sophia "BS" puppets.
And whilst the scientific evidence on the uncanny valley remains mixed, there is certainly evidence that should make us wary. For example, studies show that perceived life-likeness decreases significantly for android faces within 100-500 milliseconds of viewing, suggesting our brains rapidly detect and reject near-human but imperfect faces (Wang et al., 2020). Other research suggests the uncanny valley may be more pronounced in certain cultures or age groups, with responses varying significantly between individuals.
Applications and Limitations
The hyper-realistic approach is primarily being pursued for specific applications:
Entertainment and media: Hanson’s Sophia has become a global media personality, appearing on major talk shows and earning UN titles, demonstrating that uncanny valley effects may be less problematic in performance contexts.
Healthcare companionship: Projects like LovingAI explore whether AI agents can "offer an experience of unconditional love to humans," addressing deep psychological needs for connection.
Psychological research: Ex-Robots envisions applications in "psychological counseling and health," including "auxiliary treatment and preliminary screening for emotional and psychological disorders".
However, the approach faces significant limitations:
- Uncanny valley risks: Even sophisticated robots may trigger discomfort in many users
- Inflated expectations: Human-like appearance creates expectations of human-level intelligence and reasoning
- Ethical concerns: Questions can arise about deception, emotional manipulation, and appropriate boundaries for artificial relationships
Design Implications and Evidence
The Trust-Capability Spectrum
So, what can we learn from the current approaches out there? The evidence suggests that robot face design sits on a spectrum, with distinct potential trade-offs:
Mechanical/Functional Design (e.g. Figure):
- Advantages: Higher task-focused trust, clearer expectations, reduced uncanny effects
- Disadvantages: Limited social connection, potentially perceived as less intelligent
- Potentially best for: Industrial applications, task-focused interactions, professional environments or home environments in a functional capacity
Stylised character/Friendly Design (Cartwheel):
- Advantages: Strong emotional bonds, high forgiveness for errors, perceived as safe and approachable
- Disadvantages: May be seen as less competent for serious tasks
- Potentially best for: Home companionship, care, educational applications, therapeutic contexts
Realistic/Human-like Design (Hanson/Ex-Robots):
- Advantages: Potential for deep emotional connections, effective for entertainment and media
- Disadvantages: High risk of uncanny valley effects, inflated expectations, ethical concerns
- Potentially best for: Research platforms, entertainment, specialised therapeutic applications
Let’s Not Forget Cultural and Contextual Factors
Before leaving the specifics and rounding up, we should not forget that research reveals significant cultural differences in robot acceptance, with Japanese participants showing less discomfort with human-like robots compared to American participants. This might reflect broader cultural attitudes toward technology and anthropomorphism.
These cultural differences suggest that optimal robot face design may vary significantly across different markets and applications, with no universal "best" approach. How this might change over time as robots become more “normal” is unknown.
The Future of Robot Face Design
The evidence points toward increasing sophistication in matching robot design to specific applications and user needs. Rather than pursuing a single approach to robot faces, successful developers appear to be:
- Clearly defining intended relationships: Industrial tools, emotional companions, or curiosities require different design approaches
- Understanding user expectations: Managing the gap between appearance and capability
- Considering cultural contexts: Adapting designs for different cultural attitudes toward anthropomorphism
- Accepting trade-offs: Recognising that no single design optimizes for all desired outcomes
Conclusion: Design as Destiny?
The choice of how to design a robot's face is ultimately a choice about what kind of relationship that robot will have with humans. Figure's screen-based approach optimises for reliable task performance and clear expectations. Cartwheel's childlike design prioritises emotional connection and long-term companionship. Hanson's and Ex-Robots' pursuit of realistic faces aims for entertaining, deeply human-like bonds, accepting significant risks in the attempt to achieve a different kind of relationship between humans and artificial beings.
The evidence suggests that all three approaches might succeed, but only when the design choices align with the robot's intended function and the user's needs and expectations. The future of robotics may not be about finding the "perfect" robot face, but about developing the wisdom to choose the right face for each specific human need.
As we potentially stand on the threshold of widespread adoption, these design choices will shape not just the success of individual products, but the core relationships between humans and humanoid robots. Whether we end up with trustworthy tools, beloved companions, or uncanny artificial beings depends largely on the design decisions being made today. One face at a time.