Question:
Do computers have their own personality and emotions?
2006-11-27 22:39:10 UTC
Today's modern PCs have become faster and more efficient. Computers now have the ability to process information even when the computer is shut down. Do you suppose today's computers have a "limited" personality with emotions? Can our normal computers of today actually get mad at the user and do things back as a way to retaliate?
Twelve answers:
2006-11-27 22:40:56 UTC
They are electrical, not emotional. You have seen too many movies.
Mike M
2006-11-28 01:11:49 UTC
I agree with most of the other answers here: they definitely have a "personality", in the sense that a rickety old car might develop a "personality". Emotions govern how we act and respond to one another as humans, and at this point there is no artificial intelligence program I've heard of that can accurately and comprehensively recreate human-like emotional responses.



That said, though, I do believe that as AI research and development continues, it's almost inevitable that such a point may be reached. I don't know if you've seen or read "Chobits", but there may very well be a day when computers are humanoid machines which interact with "biological" humans just like we interact with each other, but possess the added logical, recollective, and communicative abilities of a computer.
redcoat7121
2006-11-27 22:42:19 UTC
No, not really. Some are junk and others work well. Just like the old saying, garbage in - garbage out, if you have good working hardware and the software "you" need then the computer will provide for your needs. Now, get enough bad sectors or lost clusters then your computer is going to start giving you a hard time...Keep it maintained and your computer will be as "happy" as you will be! Cheers....
supermonkey081
2006-11-27 22:41:26 UTC
If so then I think my computer must be suicidal because it has given up on life. =( Haha. Good theory though.
Oscar
2006-11-27 22:48:31 UTC
Computer Emotions and Mental Software*

Randolph M. Nesse
The University of Michigan

Whether computers can experience emotions or not is a murky matter that I will leave to others willing to brave the darkness at the intersection of epistemology and consciousness. Whether computers have special states that correspond functionally to emotions in organisms is, however, an important and tractable question. Asking it in the light of evolutionary theory can help us to understand the functions and dysfunctions of human emotions.

To meet the challenges of different tasks, many machines must assume different states, one corresponding to each task. A combination telescope/microscope that has just been used to examine an insect must be adjusted before it can focus on a distant bird. A Swiss army knife that has just been used to open a can must have blades folded in and out before it can whittle a branch. As computers developed, people quickly realized that a general-purpose calculating machine was not very useful. Computers were, therefore, either built for one purpose (like switching telephone calls) or were designed with the capacity for taking on several states, each of which facilitated carrying out a different task. It now seems natural to use one program for word processing, another for drawing, another for doing statistics, and yet another for communicating. Each program sets the input devices, the processing and memory characteristics of the machine, and the appearance of the screen in ways that facilitate carrying out a specific kind of task.

Many of the benefits from the capacity to run different programs come from concentrating limited processing capacity on one task, but even infinite processing resources would not make separate programs obsolete. If a screen displays a spreadsheet grid, it cannot simultaneously provide a blank slate for drawing. If a port is sending data to a printer, it cannot simultaneously access the internet. Each specialized task requires different settings, if only because of the constraints of communication with the operator and other devices.

The human mind is a product of natural selection that controls physiology and behavior in ways that maximize the individual's inclusive Darwinian fitness. Individuals whose brains are wired in ways that help them to cope effectively with various threats and opportunities have a selective advantage, while the genes of other individuals are gradually eliminated. These threats and opportunities are somewhat consistent. Certain situations that contain substantial fitness challenges have occurred so often in the course of evolution that natural selection has shaped specialized states that adjust various aspects of the organism to cope especially well with these challenges.

*Direct correspondence to Dr. Randolph M. Nesse, The University of Michigan, C-440 Med-Inn Bldg., Ann Arbor, MI 48109-0840 (e-mail nesse@umich.edu).

Some of these states are routine. Lack of nutrition arouses hunger; excess heat, sweating; cold, shivering; and tissue damage, pain. Other states are more complex and involve a whole suite of adaptations; we call these states emotions. For instance, attack by a predator arouses the emotion of panic. In this state, rapid heartbeat, sweating, and fast breathing facilitate flight; aversive anxiety motivates the wish to flee; and cognitive preoccupation with escape plots a route. After successful escape, specialized learning motivates avoidance of the situation in which the attack took place.

While escape from a predator is an exemplar for anxiety, we humans have faced many other dangers, some of which have shaped other subtypes of anxiety. A potential loss of status or friends arouses social anxiety. Stranger anxiety is useful, especially for children. The dangers posed by wasps and snakes are avoided thanks to phobic fear. The dangers our children face are mitigated by our fears for their safety. Internal wishes to do something that violates a social or internal norm arouse a more insidious anxiety that deters us from actions that would threaten valuable relationships. Other situations have shaped aversive emotions different from anxiety. Loss of a loved one causes grief. A threat to a mate's fidelity arouses sexual jealousy. Efforts that are not paying off arouse frustration. Lack of opportunity causes boredom. The very aversiveness of these emotions is part of their utility.

Fortunately, our ancestors also experienced opportunities, so we also have capacities for positive emotions. Encountering a food bonanza, finding a better shelter, winning a competition, completing a task, having sex, watching a child succeed: such events arouse positive feelings of joy, pride, satisfaction, and pleasure. Why do we have capacities for more aversive than positive emotions? It is because our environment has contained more kinds of threats than opportunities. There are no neutral emotions, because natural selection shaped special states only to cope with threats or opportunities.

Our emotions are pleasurable or aversive, but a computer has no preference for which program it runs. Why not? Leaving aside the problem of self-awareness, the computer is fundamentally different from the mind in that the computer only carries out tasks specified by the designer, while the mind has been shaped to carry out many specialized tasks in the service of another, larger goal: maximizing the individual's inclusive fitness. Computers cannot reproduce, so this saves them from the competitions that cause most human pleasure and suffering. It is possible, however, to imagine a computer that would "prefer" some tasks to others. Imagine a computer with something like a governor, a device that monitors the temperature of its CPU and balks whenever the excess workload might shorten its lifespan. If computers were shaped by selection for those models with long life-spans, such a device might evolve and might provide a crude analog of the aversive character of some human emotions. More likely, however, no computer would have such a characteristic. The decrement in performance at crucial times would be unacceptable. Designers would demand higher performance, despite the shortened life of the computer, so the governor would be eliminated. Sadly, this is exactly why we humans and other organisms age and die. Our bodies could be made to go on and on, but the competition for reproduction shapes mechanisms that give slight advantages now, despite inevitable costs later. There are no perfect machines, mechanical or biological, just bundles of compromises.
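Nesse's hypothetical governor can be sketched as a simple control loop. This is only an illustration of the idea in the paragraph above; the temperature thresholds and the linear throttling formula are invented for the example, not taken from the article.

```python
# Illustrative sketch of the hypothetical CPU "governor": a device that
# monitors temperature and balks (throttles work) when excess workload
# threatens the machine's lifespan. Thresholds are assumptions.

def governor_throttle(temp_c: float, safe_c: float = 70.0, max_c: float = 90.0) -> float:
    """Return a workload multiplier in [0, 1]: full speed below safe_c,
    linearly 'balking' as temperature approaches max_c."""
    if temp_c <= safe_c:
        return 1.0
    if temp_c >= max_c:
        return 0.0
    return 1.0 - (temp_c - safe_c) / (max_c - safe_c)

# The aversive-emotion analogy: performance drops exactly when demands rise.
for temp in (60, 75, 85, 95):
    print(temp, round(governor_throttle(temp), 2))
```

As the article predicts, a designer optimizing for peak performance would simply delete this function, trading longevity for speed.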

This brings us to emotional disorders. If a variety of different situations arouse emotions that adjust our minds and bodies to cope with various challenges, then much emotional discomfort is normal and useful. Thus, using drugs to interfere with these bad feelings might well compromise the individual's ability to cope (although a signal detection analysis shows that inexpensive emotions will often be aroused by false alarms, even by a normal regulation system). Evolutionarily novel circumstances can arouse emotions in situations where they are useless or harmful. Pornography, for instance, arouses desires that make people dissatisfied with their daily lives. Riding in an airplane arouses anxiety for many people, but the anxiety does nothing to increase safety.

In my practice of psychiatry, patients often say, "My emotions are so conflicted!" I used to think that there was some single core to an individual's emotions, some central truth that, if discovered, would resolve such conflicts. Sometimes there is. And people often experience distortions of their emotions that psychotherapy can set straight. In many cases, however, the original statement is exactly right: conflicting emotions are being aroused simultaneously. She loves him, but also fears him. He wants to make her happy, but becomes jealous whenever she is with another man. She wants the new job, but she also wants to continue the friendship with her coworker. He experiences desire and guilt. She wants to live up to her ideals, but she also would like to kill the person who hurt her child. Many such conflicts are exactly what they seem: different urges competing in our minds as we squirm and suffer.

Sometimes such conflicts become so extreme that the whole system crashes. Excess and conflicting demands cycle into a positive feedback loop of arousal or a state of frozen despair. Novel aspects of our environment are especially prone to push the system into abnormal states. These states are mediated by neurochemical changes, but those changes are, in many cases, secondary manifestations of higher-level conflicts. Our minds fail just as computers do: because of software problems, hardware problems, or complex interactions between the two. While cybernetic ideas were first applied to psychiatric problems decades ago, the link with evolution was not yet made. Now that we have a rudimentary sense of what the emotions are for and how they evolved, we have a new opportunity to better understand emotions and emotional disorders.

night hawk
2006-11-27 22:43:28 UTC
No, it's all about the A.I. in the computer and how it's programmed. Trust me, computers do not feel or act in any way unless programmed to.
kevin
2006-11-27 22:47:13 UTC
No but probably some day.
2006-11-27 22:42:19 UTC
Based on how they function, one could argue that they have personality.

Emotions, No!
♥Riley's Mom♥
2006-11-27 22:46:13 UTC
I doubt it
2006-11-27 22:40:33 UTC
Personality, yes.



Emotions, no.
Ben
2006-11-27 22:45:06 UTC
no not really...
sriram
2006-11-27 22:56:06 UTC
Well,

Integrating Models of Personality and Emotions into Lifelike Characters

Elisabeth André, Martin Klesen, Patrick Gebhard, Steve Allen, and Thomas Rist
DFKI GmbH, Stuhlsatzenhausweg 3, D-66123 Saarbrücken, Germany
{andre, klesen, gebhard, allen, rist}@dfki.de

Abstract. A growing number of research projects in academia and industry have recently started to develop lifelike agents as a new metaphor for highly personalised human-machine communication. A strong argument in favour of using such characters in the interface is the fact that they make human-computer interaction more enjoyable and allow for communication styles common in human-human dialogue. Our earlier work in this area has concentrated on the development of animated presenters that show, explain, and verbally comment on textual and graphical output on a window-based interface. Even though first empirical studies have been very encouraging and revealed a strong affective impact of our Personas [23], they also suggest that simply embodying an interface agent is insufficient. To come across as believable, an agent needs to incorporate a deeper model of personality and emotions, and in particular directly connect these two concepts.

Introduction

The German Research Centre for Artificial Intelligence (DFKI) recently started three new projects to advance our understanding of the fundamental technology required to drive the social behaviour of agents. This initiative has been timed to catch the current wave of research and commercial interest in the field of lifelike characters (see [1]) and affective user interfaces (see [24]).

The Presence project uses an internal model of the agent's (and possibly the user's) affective state to guide the conversational dialogue between agent and user. The second project features an Inhabited Market Place in which personality traits are used to modify the character roles of virtual actors in interactive presentations. The i3-ese project Puppet promotes the idea of a virtual puppet theatre as an interactive learning environment to support the development of a child's emotional intelligence skills. Although all three projects rely on a more or less similar approach towards modelling emotions and personality traits, there are variations with regard to the underlying user-agent(s) relationship(s), the factors that influence an agent's emotional state, and the way emotions and personality traits are made observable. The following sections provide short overviews of the three projects and discuss their affective nature in more detail.


Basic Concepts

One of the first challenges we must face when attempting to use affect within our architectures is to recognise that the term does not refer to a well-defined class of phenomena clearly distinguishable from other mental and behavioural events. Affect is used within the literature to describe the class of motivational control states which result from valenced reactions to objects and events; these include emotions, mood, and arousal. Therefore the only generalisation we can really make about affect is that it must contain at least the two attributes of activation and valence. The different classes of affective states can be further differentiated by duration, focus, intensity, and expression/effect: emotions tend to be closely associated with a specific event or object and have a short duration, whereas mood is more diffuse and of longer duration. For the purposes of this paper, we will define personality as "the complex of characteristics that distinguishes an individual or a nation or group; especially the totality of an individual's behavioural and emotional characteristics", and emotion as "affect that interrupts and redirects attention (usually with accompanying arousal)" [21].

Although there is no consensus on the nature or meaning of affect, existing theories and models of personality and emotion can still play a useful role in enhancing user-agent interaction, even though they do not capture the affective phenomena in their entirety. As a starting point for our work, we have taken the Five Factor Model (FFM) [13] of personality, and the Cognitive Structure of Emotions model (OCC - Ortony, Clore and Collins) [15]. These models are readily amenable to the intentional stance, and so are ideally suited to the task of creating concrete representations/models of personality and emotions with which to enhance the illusion of believability in computer characters.

Emotions: The OCC model of emotions provides a classification scheme for common emotion labels based on a valenced reaction to events and objects in the light of agent Goals, Standards, and Attitudes. The OCC model is a model of causation, and will be used within both Presence and Puppet to determine the affective state of the character in response to events in the environment (see also [6] and [18]).
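The core OCC idea, appraising events against goals to produce a valenced reaction, can be illustrated with a toy sketch. This is not the authors' code; the goal names, weights, and the desirability formula are assumptions made for the example.

```python
# Toy OCC-style appraisal: an event's effects are weighed against the agent's
# goals; the sign of the resulting desirability yields a common emotion label.
goals = {"sell_car": 0.9, "be_liked": 0.5}  # goal -> importance (hypothetical)

def appraise(event_effects: dict) -> str:
    """event_effects maps a goal name to its impact in [-1, 1]."""
    desirability = sum(goals.get(g, 0.0) * impact
                       for g, impact in event_effects.items())
    if desirability > 0:
        return "joy"
    if desirability < 0:
        return "distress"
    return "neutral"

print(appraise({"sell_car": 0.8}))                      # goal advanced
print(appraise({"sell_car": -0.5, "be_liked": -0.2}))   # goals thwarted
```

A fuller OCC implementation would also appraise events against Standards (giving emotions like pride or reproach) and Attitudes (liking or disliking objects), but the goal-based branch above is the one the Presence project focuses on.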

Personality: The FFM is a purely descriptive model, with the five dimensions (Extraversion, Agreeableness, Conscientiousness, Neuroticism, and Openness) being derived from a factor analysis of a large number of self- and peer reports on personality-relevant adjectives. The descriptive nature of the FFM gives us an explicit model of the character's personality and, in turn, allows us to concentrate on using the affective interface to directly express those traits (which offers the interesting possibility of attempting to recreate the character's personality traits from an analysis of the emergent social interaction). Furthermore, as we are focusing on social interactions, we can concentrate on the traits of extraversion (sociable vs. misanthropic; outgoing vs. introverted; confidence vs. timidness) and agreeableness (friendliness vs. indifference to others; a docile vs. hostile nature; compliance vs. hostile non-compliance), although we will also use neuroticism (adjustment vs. anxiety; level of emotional stability; dependence vs. independence) to control the influence of emotions within our characters.
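One way to read "using neuroticism to control the influence of emotions" is as a scaling factor on appraisal intensity. The sketch below is a minimal illustration of that reading, not the DFKI implementation; the field names, value ranges, and modulation formula are all assumptions.

```python
# Minimal sketch: FFM social traits as a record, with neuroticism scaling
# how strongly an appraised emotion is actually felt. All numbers invented.
from dataclasses import dataclass

@dataclass
class Personality:
    extraversion: float   # -1.0 (introvert) .. 1.0 (extravert)
    agreeableness: float  # -1.0 (hostile)  .. 1.0 (friendly)
    neuroticism: float    # -1.0 (stable)   .. 1.0 (anxious)

def felt_intensity(raw_intensity: float, p: Personality) -> float:
    """Neuroticism amplifies, emotional stability dampens, the raw appraisal."""
    return max(0.0, min(1.0, raw_intensity * (1.0 + 0.5 * p.neuroticism)))

calm = Personality(extraversion=0.8, agreeableness=0.6, neuroticism=-0.8)
tense = Personality(extraversion=0.8, agreeableness=0.6, neuroticism=0.8)
print(felt_intensity(0.6, calm), felt_intensity(0.6, tense))
```

The same event thus produces a much weaker felt emotion in the emotionally stable character than in the anxious one, which matches the role the paper assigns to the neuroticism dimension.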

In addition to generating affective states, we must also express them in a manner easily interpretable by the user (who in the case of the Puppet project will be young children). Personality and emotions can be conveyed in various ways. According to empirical studies, extravert characters use more direct and powerful phrases than introvert characters [8], speak louder and faster [20], and use more expansive gestures [9]. Furthermore, the rendering of dialogue acts depends on an agent's emotional state. Effective means of conveying a character's emotions include acoustic realisation, body gestures, and facial expressions (see [5]). While these studies seem directly applicable to anthropomorphic agents like the Presence Persona, it is not clear to what extent they apply to animals with anthropomorphic features such as the characters in the Virtual Puppet theatre.

In all three projects, personality and emotions are used as filters to constrain the decision process when selecting and instantiating the agent's behaviour. For instance, we might define specific behaviours for extravert characters in a certain emotional state. However, there are other (affective) states we would like to convey that are not simply the result of an affective appraisal (as in the OCC model), or easily derived from personality traits, e.g. fatigue, boredom, and hunger. To model these states, we will mimic our character's active body state with motivational drive mechanisms to provide the affective input signals.
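The "traits and emotions as filters" idea above can be sketched as a simple behaviour-selection step. The behaviour names, the rule format, and the thresholds are hypothetical, invented to show the filtering mechanism rather than reproduce any project's rule base.

```python
# Sketch of personality/emotion used as filters on behaviour selection:
# each candidate behaviour declares the trait and emotional state it fits.
behaviours = [
    {"name": "initiate_small_talk", "min_extraversion": 0.3,  "emotion": "joy"},
    {"name": "terse_reply",         "min_extraversion": -1.0, "emotion": "anger"},
    {"name": "apologise",           "min_extraversion": -1.0, "emotion": "distress"},
]

def select(extraversion: float, current_emotion: str) -> list:
    """Keep only behaviours compatible with the trait and emotional state."""
    return [b["name"] for b in behaviours
            if extraversion >= b["min_extraversion"]
            and b["emotion"] == current_emotion]

print(select(0.7, "joy"))    # extravert in a good mood
print(select(-0.5, "joy"))   # introvert: small talk is filtered out
```

Note that filtering is constraint satisfaction, not generation: the planner still proposes the candidate behaviours, and the affective state merely prunes the options, which is how the paper describes all three projects.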

The Role of Affect in Presence

The Presence project will use lifelike characters as virtual receptionists / infotainers / accompanying guides for visitors to the German Research Centre for Artificial Intelligence (DFKI). Here we will explore the hypothesis that using an explicit affective model (of both agent and user) to guide the presentation strategies used in the human-agent conversational dialogue will (a) create a more natural and intuitive user interface (by tailoring the conversation to an individual person); (b) provide the user with an engaging and enjoyable experience; and (c) enhance the believability of virtual characters.

The Presence project addresses a number of specific problem areas: (a) flexible integration of multiple input (speech, mouse, keyboard and touch-screen) and output (text, pictures, videos and speech) devices; the architecture must be intelligent enough to adapt to the different affective modes of communication in the different application domains, e.g. no speech inflection in the remote domain; (b) the development of a high-level descriptive language for character definition, based on personality traits, to allow easy customisation of the agent; (c) the combination of computational models of personality and emotion with planning techniques to guide the interaction of a lifelike character presenting material to visitors both locally and/or remotely over the world wide web; and (d) exploring the possibility of tailoring the agent-user interaction to an individual user by inferring the user's affective state (see also [3]).

Application Domain

One of the major design considerations of the project is the requirement for a flexible architecture capable of coping with multiple application domains. Domains are defined by the scope of information they cover and the general interaction behaviour of the lifelike character. Domains are furthermore decomposed into hierarchically structured dialogue topics (e.g. Welcome/Small-talk, Helpdesk, Info/Tour) which guide the conversational thread. Our domains are:

· Receptionist (Entrance Hall): Here the Presence system will run on an info-terminal within the DFKI entrance hall, and will welcome visitors, business partners, and students to the institute. The virtual receptionist will answer questions on a wide range of dialogue topics covering news, research projects, and people within the DFKI. The receptionist will also be capable of initiating conversations and informally introducing the various dialogue topics. The actual conversational thread will be guided by both the user responses and the modelled affective state of the agent.

Fig. 1. Screenshot of the Presence Prototype.


· Infotainment (Remote): The remote infotainment domain will utilise the same underlying conversational modes as the local receptionist domain. However, the constraints imposed by the restricted bandwidth of the internet will place an additional requirement on the system to make best use of the available input and output resources. Our system must therefore be intelligent enough to cope with a varying communication channel with the user, i.e. one in which the affective channel of speech inflection may no longer be available.

· Guide (Portable): Within the portable application domain, the system's primary role will be to guide the user through the building, i.e. the user can ask the system how to reach a lab or office. However, instead of remaining a passive guide, the Persona system will take advantage of the infra-red channel of a palm computer to provide a low-bandwidth link to the server, thus allowing the system to update the user with DFKI internal news, or to signal the beginning of lectures, meetings or talks. The portable domain will provide a real challenge to convey affective information in such an impoverished environment.

Emotions and Personality

The Persona system model extends the PPP animated presentation agent architecture developed at DFKI [2] with a number of new features, most notably enhanced input and output modalities for affective communication, and an Affective Reasoning Engine for affective state recognition and generation. In line with recent research in affective computing [4], [16] and [22], we use two affective information processing channels (see Fig. 2). Primary emotions (e.g. being startled, frozen with terror, or sexually stimulated) are generated using simple reactive heuristics, whereas secondary emotions are generated by the deliberative Affective Reasoning Engine according to the OCC model. Sloman introduces the additional class of tertiary emotions as secondary emotions which reduce self-control, but these will not be implemented in our initial prototype.

Fig. 2. Conceptual view of the Presence system which deals with affective states.

Emotions and personality traits weave an intricate web within the Persona system. Emotions are primarily used to determine the Persona's short-term affective state, expressed through the system's affective output channels as gestures and speech inflection. However, emotions are also able to directly influence the choice of phrase within the conversational dialogue, and even dynamically adjust (within a pre-set range) the Persona's personality trait values. Likewise the Persona's personality traits: (a) help to bias the motivational profile of the agent, and thus determine the importance of the agent's high-level goals (to which events are compared during the emotion generation process); and (b) steer the conversational dialogue goals; extravert characters are more likely to initiate small-talk.

[Fig. 2 diagram labels: Input Classification & Primary Appraisal; Reactive Module (1: reactive affective response, reflexes); Deliberative Module (2: goal-driven affective response, planned reaction); Mind Model; World Model; Plan Library; Dialogue History; Action Events (User, System, Body States); Control Sequences; Output Generation; Presentation Data; Animations.]

The Persona's affective reasoning process is roughly modelled on the "Emotion Process" described in [7]. The Input Classification and Primary Appraisal component classifies incoming action events and appraises them with respect to agent concerns stored within the mind model and the dialogue history. After classification, filtered events are passed to the behaviour module as response requests through the two parallel affective information processing channels. The reactive module handles the Persona's immediate reactions to user or system events, whereas the deliberative module produces more controlled reactions (in particular, it is responsible for determining the contents and the structure of the dialogue contributions).

The rendering of the Persona's actions is performed by the Output Generation module, which generates and co-ordinates speech, facial expressions and body gestures. So that the output generator can account for the Persona's affective state, the deliberative and reactive modules annotate the actions to be executed with appropriate mark-ups (in addition to personality traits and emotion label terms, e.g. happy, sad, fear, we will also use the general emotional dimensions of Valence and Arousal).

To model the Persona's personality, we use the social dimensions of extraversion, agreeableness, and neuroticism. We will initially model the Persona as an extravert, agreeable and emotionally balanced character since, as [11] point out, people tend to prefer others based on the match and mismatch to their own personality (even though the exact relationship is still unclear and empirical studies have led to conflicting results). Among other things, this means that the Persona will tend to take the initiative in a dialogue, will be co-operative, and will remain patient if the user asks the same question over and over again (although the latter case could indicate that the Persona is failing in her goal to be helpful).

We currently focus on goal-based emotions, whereby events are evaluated with respect to their desirability for the user's and/or the Persona's goals. We will also attempt to infer the user's affective state and use it to create a more sympathetic character. This can be done indirectly by monitoring system deficiencies, such as errors of the speech recognition component or the inaccessibility of information servers. Alternatively, we can try to derive it directly from the syntactic and semantic form (use of affective phrases) and the acoustic realisation (talking speed, volume, etc.) of the user's utterances.

The Role of Affect in The Inhabited Market Place

The objective of the Inhabited Market Place is to investigate sketches, given by a team of lifelike characters, as a new form of sales presentation. The basic idea is to communicate information by means of simulated dialogues that are observed by an audience. The purpose of this project is not to implement a more or less complete model of personality for characters, such as a seller and a customer. Rather, the demonstration system has been designed as a test-bed for experimenting with various personalities and roles.

Application Domain

As suggested by the name, the Inhabited Market Place is a virtual place in which seller agents provide product information to potential buyer agents. For the graphical realisation of the emerging sales dialogues, we use the Microsoft Agent package [14], which includes a programmable interface to four predefined characters: Genie, Robby, Peedy and Merlin. To enable experiments with different character settings, the user has the possibility of choosing three out of the four characters and assigning roles to them. For instance, he or she may have Merlin appear in the role of a seller or buyer. Furthermore, he or she may assign to each character certain preferences and interests (see Fig. 3).

Fig. 3. Dialog for character settings.

The system has two operating modes. In the first mode, the system (or a human author) chooses the appropriate character settings for an audience. The second mode allows the audience to test various character settings itself. Figure 4 shows a dialogue between Merlin as a car seller and Genie and Robby as buyers. Genie has uttered some concerns about the high running costs, which Merlin tries to play down. From the point of view of the system, the presentation goal is to provide the observer, who is assumed to be the real customer, with facts about a certain car. However, the presentation is not just a mere enumeration of the plain facts about the car. Rather, the facts are presented along with an evaluation under consideration of the observer's interest profile.

Emotions and Personality

As in the other two projects, we follow a communication-theoretic approach and view the generation of simulated dialogues as a plan-based activity. We are investigating two approaches: a centralised approach in which the system acts as a screenwriter who produces a script for the actors of a play, and a distributed approach in which the single agents have their own goals which they try to achieve.

Fig. 4. Car sales dialogue example.

We follow the FFM, but model only the dimensions of extraversion and agreeableness, which seem to be the most relevant dimensions for social interaction (see [11]). In contrast to Presence, the Inhabited Market Place does not start from a

fixed set of personality features, but allows the user to design the agents’ personality

him- or herself. We decided to model the same emotional dimensions as in the

Presence project, namely Valence and Arousal.
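As an illustration of this reduced representation, the two retained FFM trait dimensions and the Valence/Arousal emotion axes could be modelled as follows. This is a minimal sketch, assuming each dimension is normalised to [-1, 1]; all class and field names are ours, not the system's:

```python
from dataclasses import dataclass


def _clip(x: float) -> float:
    """Keep a dimension inside the assumed [-1, 1] range."""
    return max(-1.0, min(1.0, x))


@dataclass
class Personality:
    extraversion: float   # FFM dimension retained for social interaction
    agreeableness: float  # FFM dimension retained for social interaction


@dataclass
class EmotionalState:
    valence: float  # negative .. positive
    arousal: float  # calm .. excited

    def nudge(self, dv: float, da: float) -> "EmotionalState":
        # Changes to the emotional state stay inside the dimension bounds
        return EmotionalState(_clip(self.valence + dv), _clip(self.arousal + da))
```

Keeping the traits static while the emotional state is repeatedly nudged mirrors the usual distinction between stable personality and transient emotion.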

In the Presence project, the agent addresses the user directly as if it were a face-to-face conversation between human beings. There is no direct communication between

the user and the agents in the Inhabited Market Place. Consequently, an agent’s

emotional state is not influenced by the user, but by the behaviour of other artificial

agents, and in particular by their dialogue contributions. To account for this, the

speech acts of the dialogue participants are evaluated by the characters in terms of

their role, personality traits and individual goals. The goals in particular determine the


desirability of events, for example, a buyer will be displeased if he is told that a

relevant attribute of a car (e.g. power windows) is missing for a dimension that is

important to him (e.g. comfort). In contrast to Presence, we do not explicitly model

the user’s emotional state. While the agent in the Presence project aims at positively

influencing the user’s emotional state, the agents in the Inhabited Market Place do not

necessarily care for each other, i.e. an agent may intentionally make a dialogue

contribution which has a negative impact on the affective state of another agent.

Furthermore, the emotional responses of different agents to the same event may

differ.
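The goal-dependent appraisal of speech acts described above can be sketched as follows. The dictionary-based representation and all names are our illustrative assumptions, not the system's actual data structures:

```python
def appraise(speech_act, goals):
    """Valence change caused by an incoming speech act (illustrative).

    goals: dict mapping value dimensions (e.g. 'comfort') to importance 0..1
    speech_act: dict with the 'dimension' it touches and a 'polarity'
                (+1 = desired attribute present, -1 = attribute missing)
    """
    importance = goals.get(speech_act["dimension"], 0.0)
    return speech_act["polarity"] * importance


# A buyer who values comfort is displeased (negative valence delta) when
# told that power windows are missing:
buyer_goals = {"comfort": 0.9, "sportiness": 0.1}
delta = appraise({"dimension": "comfort", "polarity": -1}, buyer_goals)
```

Because each agent carries its own goal weights, the same speech act can yield different emotional responses for different agents, as the text notes.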

Personality in the Inhabited Market Place is essentially conveyed by the choice of

dialogue acts and the semantic and syntactic structure of an utterance. Emotions in

this scenario are expressed by facial expressions and the syntactic structure of an

utterance. Since the Microsoft Agent Programming tool does not allow for

intonational mark-ups, we do not convey emotions by acoustic realisation in this

scenario.
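As a toy illustration of conveying personality and emotion through the surface form of an utterance (the templates and thresholds are invented for this sketch; the actual system selects dialogue acts and syntactic structures):

```python
def realise(core, extraversion, valence):
    """Vary an utterance's surface form by personality and emotional state.

    Illustrative only: a single string is decorated rather than generated
    from dialogue acts and syntactic templates.
    """
    if extraversion > 0.0:
        # Extravert speakers: direct, emphatic phrasing
        text = f"Listen, {core}!"
    else:
        # Introvert speakers: hedged, tentative phrasing
        text = f"Well, perhaps {core}."
    if valence < 0.0:
        # A negative emotional state adds an affective marker
        text = "Unfortunately... " + text
    return text
```

Note that, as in the scenario described, all of the variation here is lexical and syntactic; no acoustic realisation is involved.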

First informal system tests with the Inhabited Market Place revealed some interesting aspects concerning how well personality traits must match a character's surface characteristics. Subjects tended to believe that Merlin was more interested in

comfort and safety while they expected that Robby was more interested in the

technical details of a car. This also confirms our earlier empirical studies that showed

that reusing the look and voice of characters for different roles is only possible to a

certain extent.

The Role of Affect in Puppet

The objective of the Puppet project is to develop and investigate the value of a new

virtual reality environment, the Virtual Puppet Theatre (VPT), based on a theoretical

framework of “learning through externalisation” [19]. Deploying user-controlled

avatars and synthetic characters in the child’s own play production, the children have

to distinguish and master multiple roles in their interaction with the system, e.g. that

of a director, an actor and an audience with the main activities producing, enacting

and reflecting, respectively. Within this process the children should gain a basic understanding of how different emotional states can change or modify a character's

behaviour and how physical and verbal actions in social interaction can induce

emotions in others. These emotional intelligence skills are important for us with

respect to the early learning goals: “social role decentring” and theory of mind. Our

approach is similar to [10] which allows children to direct a puppet’s mood, actions

and utterances in interactive story-making and to [12] where children may induce

some changes in their characters' emotional states besides selecting a character's actions.


Application Domain

For our first prototype (VPT1), developed for children aged 5-6, we decided

to model a farmyard as a co-habited virtual world, in which the child’s avatar (e.g. the

farmer) and a set of synthetic characters (pigs, cows, etc.) can interact with each

other. Figure 5 shows a screenshot of an early version using different drawings for

each character, which will be transformed to full 3D models at a later stage. Our

characters are designed to exhibit both physical and verbal behaviour. We do not try

to model “real” animals but make them more cartoon-like instead.

Fig. 5. 2D Prototype of the farmyard scenario.

For the communication between the avatar and a character we will use a simple

speech-act based dialogue model and a set of pre-recorded utterances. The agents are

equipped with virtual sensors and effectors which connect them to the 3D virtual environment, and are controlled by an agent architecture that integrates deliberative (goal-driven) and reactive (data-driven) planning. To foster the above-mentioned emotional skills we provide two distinct interfaces which the child can use to control a character's behaviour: a body control interface, which gives full control over the movement of the selected character, and a mind control interface, which allows the child to change the character's emotional state, thus biasing its behaviour in some direction without specifying the actual motion pattern. Similar to the system described by [10]


we separate the high-level behaviour planning and affective reasoning (the “mind”)

from the animation planning and control modules (the “body”). The first is done by

the agent architecture as described in the next section and the latter lies within the

responsibility of the 3D virtual environment.

Emotions and Personality

The agent architecture used for the high-level behaviour planning and affective

reasoning consists of a knowledge base, a plan library, an interpreter and an intention

structure. The knowledge base is a database that contains the world model (the

“beliefs”) of a character. The plan library is a collection of plans that an agent can use

to achieve its goals and the intention structure is an internal model of the current goals

(the “desires”) and instantiated plans (the “intentions”) of that agent. Within this

architecture we can have multiple active goals and multiple plans for each goal.

Conflict resolution, plan selection, instantiation and execution are handled by the

interpreter (see Fig. 6).

Fig. 6. Agent architecture for VPT1.
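A minimal sketch of such an architecture, reduced to a plan library keyed by goal, a belief dictionary, and an intention list handled by a trivial interpreter. All names, and the priority-order conflict resolution, are our simplifications rather than the actual VPT1 implementation:

```python
class Agent:
    """Toy BDI-style agent: beliefs, plan library, intention structure."""

    def __init__(self, beliefs, plan_library):
        self.beliefs = beliefs            # world model / mind model / body states
        self.plan_library = plan_library  # goal -> list of candidate plans
        self.intentions = []              # instantiated plans awaiting execution

    def select_plan(self, goal):
        # Plan selection: first plan whose precondition holds in the beliefs
        for plan in self.plan_library.get(goal, []):
            if plan["precondition"](self.beliefs):
                return plan
        return None

    def step(self, goals):
        # Trivial conflict resolution: goals are tried in priority order and
        # only the first applicable plan is instantiated per cycle
        for goal in goals:
            plan = self.select_plan(goal)
            if plan is not None:
                self.intentions.append(plan["action"])
                break
        # Execute (here: simply return) one pending intention per cycle
        return self.intentions.pop(0) if self.intentions else None
```

For example, an "eat" goal guarded by a precondition that food is visible would instantiate a plan whose action moves the character towards the food.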

There are two types of input which can influence an agent’s behaviour planning

and affective reasoning: percepts and events from the virtual environment; and user

input from the mind control interface. The first is used to update the agent’s world

model (which also influences the affective reasoning process) and the second to

directly change its affective state encoded in the mind model. To increase the lifelikeness

of a character we also introduce body states (fatigue, boredom, hunger)

which are represented within the knowledge base and regularly updated by the

interpreter. The body states act as motivational drives that impel the agent into action

by activating the appropriate behaviour (sleeping, playing, eating and drinking),

which is then carried out in a character-specific way (if a dog is hungry it will go to

the farmhouse whereas a cow will start grazing). The output of the planning processes

is an action specification for the virtual environment. It contains appropriate mark-ups

(e.g. for the facial expression) taking into account the current emotional state.
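The drive mechanism can be sketched as follows; the growth rate, the threshold, and the drive-to-behaviour mappings are illustrative assumptions, not values from the system:

```python
# Hypothetical drive-to-behaviour mapping
DRIVE_BEHAVIOUR = {"hunger": "eating", "fatigue": "sleeping", "boredom": "playing"}


def update_and_select(body_states, growth=0.1, threshold=0.8):
    """One interpreter tick: grow all drives, then pick a behaviour.

    Returns the updated states and the behaviour activated by the strongest
    drive, or None if no drive has crossed the threshold yet.
    """
    updated = {k: min(1.0, v + growth) for k, v in body_states.items()}
    strongest = max(updated, key=updated.get)
    if updated[strongest] >= threshold:
        return updated, DRIVE_BEHAVIOUR[strongest]
    return updated, None


# The selected behaviour is then realised in a character-specific way,
# e.g. via a hypothetical lookup such as:
EATING_STYLE = {"dog": "go_to_farmhouse", "cow": "graze"}
```

The separation between the abstract behaviour ("eating") and its character-specific realisation mirrors the dog/cow example in the text.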

As a starting point for the mind model capturing the personality and affective states

of our characters we use the OCC model. In Puppet it is particularly important that we

can express these states in a manner easily interpretable by young children. We

therefore decided to model the emotion types Anger, Fear, Happiness and Sadness


based on evidence suggesting their universality [5] and the fact that there are distinctive facial expressions which can be interpreted properly by children aged 4-8

[17]. Emotions are primarily conveyed by facial expressions and the selection of

appropriate sounds (a cat will purr if it is happy and hiss if it is angry). They are either

computed by emotion-generating rules according to the OCC model or directly manipulated by the child through the mind control interface (see Fig. 6).
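A rough sketch of OCC-style emotion-generating rules for the four modelled emotion types, with the mind control override taking precedence. The event representation and the rule conditions are simplified assumptions of ours:

```python
EMOTIONS = ("anger", "fear", "happiness", "sadness")


def emotion_from_event(event, goals):
    """OCC-flavoured appraisal reduced to the four modelled emotion types.

    event: dict with an 'outcome', optionally the 'agent' responsible and a
           'threat' flag; goals: set of desirable outcomes. All simplified.
    """
    if event["outcome"] in goals:
        return "happiness"   # desirable event
    if event.get("agent") is not None:
        return "anger"       # undesirable act attributed to another agent
    if event.get("threat"):
        return "fear"        # prospect of an undesirable event
    return "sadness"         # undesirable event, no one to blame


def current_emotion(event, goals, override=None):
    # Direct manipulation via the mind control interface takes precedence
    return override if override in EMOTIONS else emotion_from_event(event, goals)
```

The resulting emotion would then drive the facial expression and sound selection described above.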

As in the other two projects described in this document we adopt the FFM and reduce it to the dimensions of extraversion and agreeableness because they determine to a large extent how an agent will behave in social interactions ([11]). In addition we

specify for each character a set of preferences (e.g. the dog likes bones) and long-term goals. Most characteristics are tailored to each character to give them unique pseudo-personalities. This means that we can only partially rely on the same high-level behaviour to convey personality features (e.g. greet another character and start playing if you are extraverted and agreeable) and that we have to devise character-specific ones otherwise.

Puppet offers a variety of user-agent relationships. In “enacting

mode” the child uses an avatar to interact with other characters in the scene. This is

similar but not identical to Presence where the user interacts with the Persona through

a set of input devices. In the second mode, the child plays the role of an audience by observing the interaction of two or more autonomous agents; this has its equivalent in the

Inhabited Market Place where the user observes a dialogue performed by a team of

characters. However there is a third distinct user-agent relationship in Puppet, namely

that of the child being a director, i.e. controlling the behaviour of all characters in the

scene. This is similar to make-believe play with physical puppets during childhood in

which the child takes on a series of roles. The difference is that the animals in our

scenario are semi-autonomous, i.e. they take directions (e.g. the child can force the puppets to do or say something, or change their affective states) that bias but do not completely specify their behaviour. How an agent will (re)act in a specific situation

also depends on its internal body states and personality features. This mode could

provide valuable insights because we can observe when and how the children change

the emotional state of a character, something that is not so easy to infer in

conventional make-believe play.

The psychologists from Sussex will evaluate a group of 5-6 year old children

before and after they played with the VPT. We hope that their findings will validate

our assumption that the children's emotional intelligence skills were improved by constructing simple models of the virtual puppets' minds. It will also be interesting to

see how the ability to take the subjective view of different characters (as an actor) and

to direct their behaviour (in the role of a director) will increase their understanding of

the dynamics in social interactions, especially how emotions influence these

interactions.


Conclusions

The three projects described in this paper use personality and emotions to

emphasise different aspects of the affective agent-user interface: (a) Presence uses

affect to enhance the believability of a virtual character and produce a more natural

conversational manner; (b) the Inhabited Market Place uses affect to tailor the roles

of actors in a virtual market place; and (c) Puppet uses affect to teach children how

the different emotional states can change or modify a character’s behaviour and how

physical and verbal actions in social interactions can induce emotions in others.

As this broad range of application areas demonstrates, affect has an important role

to play in user-agent interactions. However, affective user interfaces are still in their

infancy, and much work is still needed to bring all the pieces of the jigsaw together.

To use affect effectively, it must be an all-encompassing component of the system -

from graphic design to system architecture to application contents.

Acknowledgements

