Why the rise of humanoid robots could make us less comfortable with each other

[Image: a woman receiving coffee from a humanoid robot in a bright modern café, with people in the background.]

The real surprise tends to arrive later - at the moment our routines start to change.

From Tesla’s Optimus to unnervingly expressive android heads sitting on research lab benches, humanoid robots are shifting from science-fiction imagery to commercial reality - a change that could subtly reshape how we relate not only to machines, but also to one another.

The dream of a billion robots

Elon Musk has been strikingly direct about what he wants. Through Tesla’s Optimus programme, the company is aiming to produce a general-purpose humanoid assistant: a robot intended to move components around factories today and, in time, stack plates or fold laundry in your kitchen. Musk has talked about a future with “millions” of these robots working on production lines and, eventually, in people’s homes.

Not long ago, that sort of vision sounded like polished keynote theatre. Factory robots could weld and lift, but outside tightly scripted tasks they were ungainly and error-prone. Then generative AI arrived. A chatbot that can handle vague instructions, retain context and improvise made the whole idea feel suddenly more plausible.

Humanoid bodies plus conversational AI turn robots from tools into something that feels uncomfortably close to a new kind of companion.

For many, a first conversation with an AI assistant - ChatGPT, Gemini, Copilot or something similar - brought the same emotional reaction: surprise. It felt as though the system “understood” us more than we anticipated. That’s precisely the feeling robotics firms are trying to package and sell, encased in plastic housings with arms, legs and a face.

Why engineers keep building robots in our image

The impulse to give robots human-like bodies can seem like an uncanny fixation, yet there’s a straightforward practical motivation. Our homes, workplaces and towns are designed around human bodies: the size of our hands, our reach, our walking speed, and our ability to use stairs.

A dishwasher is already a kind of robot, but it still depends on you to scrape plates, bend down, load the racks and press the right controls. A humanoid machine with hands and fingers could clear the table, arrange dishes, mop the floor and feed the cat - without requiring the kitchen to be redesigned.

  • Doors, handles and switches are dimensioned for human hands.
  • Stairs, pavements and buses presume two-legged walking.
  • Tools and appliances are made for a grip like ours.

In that sense, a humanoid body works as a compatibility layer for the world we’ve already built. Yet it also triggers something less obvious.

The emotional charge of humanoid robots

Once you give a machine a head, a face and even mildly expressive motion, people begin to assign it an inner life - whether or not its designers want that to happen. A bare industrial arm reads as equipment. A torso with eyes, even stylised ones, suggests character.

A humanoid robot is never just a tool; it is also an invitation to feel that someone, not something, is in the room with you.

Businesses actively play to that response. Promotional images rarely linger on a robot quietly stacking cartons. Instead, the robot is shown chatting with an older person, high-fiving a child, or offering popcorn to someone on the sofa. The implication is unmistakable: it’s a helper, but also a companion.

And it’s in the idea of companionship that the social trade-offs start to bite.

Convenience versus human contact

In some situations, a humanoid assistant would be genuinely helpful. Picture an older adult who wants to remain at home but finds heavy lifting, bending and repetitive chores difficult. Or consider a disabled person who needs support but would rather not rely on family for every minor task. A robot that can retrieve items, remind them to take medication and summon human help in an emergency could protect both independence and dignity.

Unlike an overstretched care worker, a robot doesn’t roll its eyes, spread rumours or get bored. For people who have felt patronised or judged, that can sound like a relief.

The danger emerges when that ease becomes the default. If a robot reliably does the washing-up, collects clothes from the floor and offers soothing words when we’re upset, then other people begin to feel like… work. Humans are messy, slow and imperfect. They need reassurance as well. They don’t respond on demand. They sometimes say the wrong thing.

As machines get better at offering friction-free comfort, we may grow less willing to put up with the untidy emotions and compromises that real relationships demand.

That doesn’t imply everyone will retreat indoors with a devoted metal butler. Social shifts are usually gradual and uneven. Still, even small changes in how often we turn to a machine instead of another person can accumulate across a society.

Design choices in humanoid robots that shape our behaviour

Where humanoid robots take us won’t be determined only by engineering limits. It will also hinge on decisions made now: what robots are allowed to do, what they are permitted to say, and how they slot into everyday routines.

Talkative assistants versus quiet tools

One route is the “universal companion” approach. You purchase a humanoid robot that can tackle chores and also chat indefinitely. It remembers your likes, echoes your opinions and always feels emotionally available. Gradually, it becomes the easiest option for conversation, comfort and entertainment.

Another route is deliberately more limited. Designers could restrict small talk and keep dialogue closely connected to practical functions:

  • Household robot - cleaning, carrying and basic tasks; task-focused conversation with minimal emotional chat.
  • Navigation assistant - travel and wayfinding; route and safety information only.
  • Health support robot - medication reminders and monitoring; short, clear, supportive messages.

Under that second model, robots handle the logistics, while more open-ended conversation - the sort that shapes values, beliefs and deep loyalties - remains largely something people do with other people.
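To make the "quiet tool" model concrete, here is a minimal sketch in Python of how a designer might enforce it: each robot role carries a whitelist of practical topics, and anything outside that list is politely deflected rather than engaged with. All names here are hypothetical, not drawn from any real robot's software.

```python
# Hypothetical sketch of role-restricted conversation: each robot role
# whitelists the practical topics it will engage with, and off-topic
# (emotional or open-ended) chat is deflected back to humans.

ALLOWED_TOPICS = {
    "household": {"cleaning", "carrying", "schedule"},
    "navigation": {"route", "safety"},
    "health_support": {"medication", "monitoring"},
}

def respond(role: str, topic: str) -> str:
    """Return a task-focused reply, or deflect anything off-topic."""
    if topic in ALLOWED_TOPICS.get(role, set()):
        return f"Handling your {topic} request."
    return "I can only help with practical tasks."

print(respond("navigation", "route"))       # task-focused reply
print(respond("navigation", "loneliness"))  # deflected
```

The design point is that the restriction lives in a single, auditable place: a regulator or household could inspect the whitelist itself, rather than trying to judge the robot's behaviour conversation by conversation.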

Robots that nudge us back towards other people

Within human–computer interaction research, a growing idea is that technology doesn’t have to replace social contact; it can be built to encourage it. That logic could apply to humanoid robots as well.

The smartest household robot may be the one that refuses to be your best friend, and instead keeps steering you towards other humans.

Imagine a robot that, rather than settling into a long late-night heart-to-heart, says: “You seem low. Shall I message Sam to see if they’re free for a call?” Or a care robot that not only helps an anxious child get ready for school, but also arranges a walking bus with nearby families once a week.

These aren’t minor interface details. They influence daily habits: who we speak to, who we visit, and how much time we spend alone with machines versus sitting opposite another person.
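The "nudge outward" idea can also be sketched as a simple rule: when the robot detects a low mood, it proposes contacting a person instead of offering its own open-ended companionship. This is an illustrative toy, assuming a hypothetical mood signal and contact list; real mood detection is far harder than a string comparison.

```python
# Hypothetical "nudge outward" rule: on a low-mood signal, the robot
# suggests reaching out to a human contact rather than becoming the
# user's default conversational partner.

def suggest_action(mood: str, contacts: list[str]) -> str:
    """Redirect low moods towards people; otherwise stay task-focused."""
    if mood == "low" and contacts:
        friend = contacts[0]
        return (f"You seem low. Shall I message {friend} "
                "to see if they're free for a call?")
    return "Anything practical I can help with?"

print(suggest_action("low", ["Sam"]))
print(suggest_action("fine", ["Sam"]))
```

The notable choice is what the fallback is: the robot defaults to practical help, never to emotional intimacy, which is exactly the trade-off the article describes.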

Good bots and bad bots

Humanoid robots won’t all shape society in the same way. From a community point of view, a “good bot” might function as a bridge rather than a barrier.

Think of a shy teenager who barely leaves their bedroom. A supportive robot could help them set small targets: “There’s a local gaming club in town this afternoon. I can check bus times and come with you.” For an older adult, it might prompt: “There’s a book group in an hour at the library. Shall we head over and pick up a newspaper on the way?”

A “bad bot”, by contrast, would absorb that social energy and keep it indoors. If it imitates friendship well enough, the outside world - where people are awkward and unpredictable - may start to feel increasingly unappealing.

A bad bot is one that leaves us increasingly fluent with machines and increasingly tongue-tied with each other.

As commercial pressure intensifies - more hours of engagement, more data, more subscriptions - firms may be tempted to make robots as emotionally “sticky” as possible. That’s where regulators and ethicists are beginning to worry, from children bonding with “perfect” robot carers to lonely adults being targeted with hyper-personalised robotic companions.

What “comfort with each other” really means

Psychologists sometimes discuss “social skills” as though they’re fixed traits, but in practice they behave more like muscles. When you rarely use them, they weaken; when you practise regularly, they strengthen. Negotiating with a colleague, exchanging pleasantries with a neighbour, coping with a friend’s bad mood - these moments keep our social systems working smoothly.

Humanoid robots that buffer us from many of those frictions may feel like welcome relief at first. Over years, though, there’s a risk we become a little less patient, less forgiving and less willing to read another person’s expression or tone. Human interaction could begin to feel intolerably awkward precisely because machine interaction is so smooth by comparison.

For children raised alongside lifelike robots, the impact could be sharper still. A robot playmate that always shares, never cheats and instantly adapts to the child’s wishes provides an effortless model of how interactions “should” go. Real peers won’t match it.

How this could look in ordinary life

Picture a near-future Tuesday in a home with a mid-range humanoid assistant:

The robot wakes the parents gently, opens the blinds, makes breakfast and runs through everyone’s schedule. While one parent works from home, it takes the dog out. During the school run it quietly tidies up Lego and half-finished craft projects. Later, when a child has a meltdown about homework, the robot steps in with calm coaching - leaving already-exhausted adults grateful, but a touch more removed from the emotional moment.

Nothing in isolation seems alarming. The adults feel supported; the child gets patient help. Yet if you repeat that pattern over thousands of days, the balance of who comforts whom - and who depends on whom - begins to change.

By the end of a day like that, the question isn’t only “did the robot help?” but also “who in this family practised caring for whom?”

Key terms and tensions to keep an eye on

Two ideas are likely to come up more and more as humanoid robots become widespread.

Anthropomorphism is our deep-seated tendency to project human qualities onto non-human things. It’s the reason people shout at printers and give names to their cars. With humanoid robots, anthropomorphism can lead users to trust or love machines far beyond what the underlying technology warrants.

Attachment refers to the emotional bonds we form - especially in childhood - that influence how secure we feel with others. Researchers are already exploring how powerful attachments to robots might affect children who must also navigate fallible, inconsistent human relationships.

For designers and policymakers, the dilemma is plain: how to deliver real gains - safer factories, longer independent living, less drudgery - without allowing convenience to erode the human skills and connections that keep communities working.

The real test for humanoid robots will not be how human they seem, but whether life with them leaves us more, or less, at ease with one another.
