Humans are the most socially advanced of all species and draw on this ability in everyday situations, yet it is not clear what skills a robot must acquire in order to interact with people appropriately. Robots fall into two broad categories: those controlled by an operator, and autonomous robots, which act on the basis of their own decisions and are not controlled by a human (Matarić, 2007). Duffy (2003) states that a human-centric machine such as a social robot requires human traits, as these provide a comprehensible and predictable interface in social situations. Traditionally, autonomous robots have been used in areas where little human interaction is required, in roles such as sweeping minefields, search and rescue, and exploring other planets (Breazeal, 2004). Recently, however, there has been growing interest in how these robots can interact socially with humans. According to Bartneck and Forlizzi (2004), a social robot is: “an autonomous or semi-autonomous robot that interacts and communicates with humans by following the behavioral norms expected by the people with whom the robot is intended to act”.
What Is It to Be Social?
The social environment is the environment created by humans, as distinct from the natural environment: society as a whole, especially in its relation to the individual. Duffy et al. (1999) define a social agent as one capable of “interactive, communicative behavior”. They also argue that social intelligence requires more than interaction combined with intelligence: social agents must form and develop relationships in order to be socially intelligent. The high levels of communication and cooperation needed for humans and robots to work together require that robots have a sound understanding of social convention. This would enable greater acceptance of robots in human societies and would be a more valuable attribute than a realistic human appearance. A robot can be believable if it appears to have social intelligence (Bates, 1994). Breazeal argues that a robot is socially intelligent if it adheres to a human’s social model of it, even if the robot is less sophisticated than a human; dogs, for example, have social intelligence but are less refined than humans. What matters about a robot is how socially competent it is, not what it looks like.
Many experts believe that social robots should resemble humans as closely as possible, and work has been conducted to determine which characteristics make robots appear most human-like. DiSalvo et al. (2002) analyzed 48 robots and surveyed people about how human-like those robots appeared. They focused principally on the head of the robot, as this is suggested to be the main locus of human-robot interaction (mirroring human-human interaction). The presence of facial features such as a nose, eyelids and a mouth was found to significantly increase the perception of humanness, as were the dimensions of the head and the total number of features. However, most of the robots were not rated as being very human-like.
How Do Autonomous Robots Learn?
In order for autonomous robots to learn about their environment, they must be able to comprehend and act on the feedback around them; this is how they can adapt their behavior to interact competently. One example of a robot that uses information from its social interactions to produce more socially acceptable behavior is Data (Knight et al., 2011). Data is a joke-telling robot that observes audience reactions to learn which jokes get the most laughs and adapts its joke-telling accordingly. Breazeal and Scassellati (2002) suggest that robots could also learn by imitating humans, much as an infant does. For instance, Robota is a robot doll that can learn associations between keystrokes and the gestures they correspond to. Robota can sense and mimic a limited number of gestures of a human teacher, who presses a key to correspond with each gesture. After learning these associations, Robota can be presented with a new series of keystrokes and will perform the associated series of actions.
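Knight et al. do not spell out the learning rule Data uses, but the behavior described above can be illustrated with a simple bandit-style sketch: the robot keeps a running score for each joke based on observed laughter, mostly tells the highest-scoring joke, and occasionally tries another. The joke names, the epsilon parameter and the laughter score below are assumptions made for illustration, not details from the source.

```python
import random

class JokeSelector:
    """Minimal epsilon-greedy sketch: pick jokes, update scores from audience laughter."""

    def __init__(self, jokes, epsilon=0.1):
        self.jokes = list(jokes)
        self.epsilon = epsilon                    # chance of trying a random joke
        self.counts = {j: 0 for j in self.jokes}  # how often each joke was told
        self.scores = {j: 0.0 for j in self.jokes}  # running mean laughter per joke

    def pick(self):
        # Mostly exploit the best-rated joke, occasionally explore a random one.
        if random.random() < self.epsilon:
            return random.choice(self.jokes)
        return max(self.jokes, key=lambda j: self.scores[j])

    def update(self, joke, laughter):
        # laughter: observed audience reaction in [0, 1], e.g. from a microphone level.
        self.counts[joke] += 1
        n = self.counts[joke]
        self.scores[joke] += (laughter - self.scores[joke]) / n  # incremental mean


selector = JokeSelector(["pun", "knock-knock", "one-liner"])
joke = selector.pick()
selector.update(joke, laughter=0.7)  # reward measured from the audience reaction
```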
How Do Humans Develop Emotions, and How Does This Help to Build Emotional Robots?
For the development of robots, the first important step in addressing the problem of robot emotions is understanding how humans develop emotions. Around the age of two, when toddlers start to speak, they begin to learn the names for their internal emotional states. For instance, the word ‘happy’ refers to a feeling of pleasure and contentment with the way things are going; a general sense of enjoyment of and enthusiasm for life. As we get older, we use these labels and the expressive behaviors associated with them to convey our internal states and to recognize emotion in others. All of this emotional expression and perception happens quickly, involuntarily and subconsciously, conveying a great deal of information in a concise way.
The idea behind developmental robotics is to create robots that learn behaviors in the same way human children do: the robot is given a learning model and then exposed to an environment that stimulates the training of that model, for example through interactions with a human caregiver. One such example is Kismet, the first robot designed explicitly to engage people in natural and expressive face-to-face interaction. Interactions with Kismet tend to take the form of a human playing the part of a caregiver while Kismet acts like an infant; Kismet is inspired by the way a human infant learns from and interacts with its carer (Breazeal, 2000). Infants show emotions to convey their internal state, and this regulates the caregiver’s behavior so as to satisfy the infant’s drives. Kismet can show the six primary emotions (joy, surprise, sorrow, fear, disgust and anger). The range of emotional expressions available to Kismet is limited, but they are convincing enough to generate sympathy among the humans who interact with it. Breazeal invites people to play the part of a parent with Kismet on a daily basis. When left alone, Kismet looks sad, but when it detects a human face it smiles, inviting attention. If the carer moves too fast, a look of fear warns that something is wrong. People who play with Kismet cannot help but respond sympathetically to these simple forms of emotional behavior.
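Kismet’s actual architecture couples perception, drives and an emotion system; purely as an illustration of the stimulus-to-expression behavior described above (sad when alone, smiling at a face, fearful at an overwhelming stimulus), the following sketch assumes a toy rule-based mapping. The function name, inputs and threshold are invented for this example.

```python
from enum import Enum

class Expression(Enum):
    SAD = "sad"
    SMILE = "smile"
    FEAR = "fear"

def choose_expression(face_detected: bool, stimulus_speed: float,
                      fast_threshold: float = 1.0) -> Expression:
    """Map simple perceptual cues to an expression, mirroring the behaviors described above."""
    if stimulus_speed > fast_threshold:
        return Expression.FEAR   # an overwhelming stimulus warns the caregiver to back off
    if face_detected:
        return Expression.SMILE  # a face satisfies the social drive and invites interaction
    return Expression.SAD        # left alone, the unmet social drive shows as sadness

# Example: no face in view and nothing moving -> the robot looks sad to solicit attention.
print(choose_expression(face_detected=False, stimulus_speed=0.0))
```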
Why Give Emotions to Robots?
Giving emotions to robots could be useful for a variety of reasons. For instance, it would be much easier and more enjoyable to interact with emotional robots than with today’s unemotional machines. Imagine that your robot could recognize what emotional state you were in each time you worked with it, perhaps by reading your facial expression. If the robot detected that you were in a bad mood, then rather than simply helping you with your daily activities, it might tell you a joke to cheer you up. It could be much more productive to work with robots that are emotionally intelligent enough to understand our mood than with today’s dumb machines.
Why Is Internal Motivation Required in Robots?
In psychology, an activity is characterized as intrinsically motivated when there is no apparent reward except the activity itself. People seek out and engage in such activities for their own sake, not because they lead to an extrinsic reward; the person seems to derive enjoyment directly from practicing the activity (Kaplan and Oudeyer, 2007). Children seem intrinsically motivated to manipulate, explore, test and learn, and they look for activities and situations that provide such learning opportunities. One such example is Robin (a contraction of ‘Robot Infant’), an autonomous robot toddler designed to support self-management in children with diabetes (Cañamero and Lewis, 2017). Robin helps children improve their confidence and skills in managing their own diabetes. Robin has internal needs and motivations: for example, he gets hungry and searches for food around him, he gets tired and wants to sleep, and whenever he feels lonely he wants a human to hug or touch him. Children playing with Robin help to satisfy his social and physical needs and help him learn in the way that they themselves learn new things. A robot with internal motivation is thus able to explore its environment autonomously, driven not by predefined tasks but by an incentive to seek out situations in which learning happens efficiently.
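Cañamero and Lewis describe Robin’s behavior in terms of internal, homeostatic needs rather than in code; as a minimal sketch only, assuming a simple need-driven selection loop (the need names, decay rates and behaviors below are invented for illustration), the idea might look like this:

```python
import random

# Hypothetical needs: 1.0 means fully satisfied; each need decays over time.
NEEDS = {"energy": 1.0, "food": 1.0, "social": 1.0}
DECAY = {"energy": 0.02, "food": 0.03, "social": 0.05}
BEHAVIORS = {"energy": "sleep", "food": "search for food", "social": "seek a hug"}

def step(needs):
    # Needs decay each time step; the most depleted need becomes the current motivation.
    for name in needs:
        needs[name] = max(0.0, needs[name] - DECAY[name] * random.uniform(0.5, 1.5))
    most_urgent = min(needs, key=needs.get)
    return BEHAVIORS[most_urgent]

needs = dict(NEEDS)
for t in range(5):
    print(t, step(needs), {k: round(v, 2) for k, v in needs.items()})
```

Interaction with a child would then map onto replenishing a need (a hug restores “social”, feeding restores “food”), which is what gives the robot a reason to seek people out rather than waiting for commands.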
Conclusion
Robots are on the verge of becoming social and are finding new roles in society, such as helping to develop the social skills of autistic children and assisting older people with health monitoring or walking. Conveying emotions is a useful skill for robots, both to influence the behavior of humans and to help the robots themselves learn about social situations. If a human and a robot are working together and the human feels bad, but the robot does not notice and carries on in the same way, the human will become frustrated. The robot should recognize that the human feels bad and have the internal motivation to cheer them up. We don’t just ask each other ‘What is this or what is that’, but ‘What do you think and how do you feel about it’. This is the beauty of human interaction. Anyone can look up a definition of a relationship or a marriage on Wikipedia, but if I ask how you feel about it, that is what’s valuable to me.