Affect Control Processes (Bayesian Affect Control Theory)
Overview

Affect Control Theory (ACT) arises from a tradition of symbolic interactionism in sociology. Bayesian Affect Control Theory (BayesACT for short) generalises ACT by introducing explicit notions of uncertainty and utility. BayesACT accounts for the dynamic fluctuation of identity meanings for self and other during interactions, elucidates how people infer and adjust meanings through social experience, and shows how stable patterns of interaction can emerge from individuals' uncertain perceptions of identities. BayesACT can be used in an active inference framework, yielding policies of action in which social prescriptions are anticipatory: they both guide (create) and are guided by sensory inputs. BayesACT has been applied in an intelligent tutoring system, a social dilemma game player, an assistant for persons with Alzheimer's disease, and in sentiment analysis. We've got more projects on the go, so check back for updates!
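
For readers unfamiliar with the underlying machinery: ACT represents identities and behaviours as three-dimensional Evaluation-Potency-Activity (EPA) sentiment vectors, and measures how far an event pushes situational impressions away from cultural expectations as a "deflection". The snippet below is a minimal sketch of that core quantity; the EPA numbers are invented for illustration (not taken from any ACT dictionary), and real implementations compute the transient impression with culture-specific impression-formation equations rather than assuming it is given.

```python
import numpy as np

# Illustrative EPA (Evaluation, Potency, Activity) sentiment vectors.
# These numbers are made up for the example; real values come from
# ACT sentiment dictionaries gathered in survey studies.
fundamentals = {
    "tutor":   np.array([1.5, 1.4, -0.2]),
    "student": np.array([1.2, 0.2,  0.8]),
}

def deflection(fundamental, transient):
    """ACT deflection: squared Euclidean distance between the culturally
    given (fundamental) sentiment and the situational (transient) impression."""
    return float(np.sum((np.asarray(fundamental) - np.asarray(transient)) ** 2))

# Suppose an event leaves a transient impression of the tutor that is
# less good and less potent than the fundamental sentiment.
transient_tutor = [0.9, 1.0, 0.1]  # illustrative post-event impression
print(deflection(fundamentals["tutor"], transient_tutor))
# Low deflection: the event confirms cultural meanings; high deflection
# motivates actors to act so as to restore those meanings.
```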

Videos
Demonstration videos for the 2021 paper "Citizens, Madmen and Children".
You can watch a series of video lectures introducing Affect Control Theory and Bayesact here.
Watch my talk at the Vector Institute on affective computing: see the talk video and a shorter interview video.
Also see the instructional and simulation videos below.
You can watch Areej Alhothali's talk at NAACL/HLT 2015 on sentiment analysis using ACT.
You can watch the 2017 ACT Conference videos here or here.
Code
Papers
More details can be found by reading the following papers:
Older content below here (may be out of date)
Instructional Videos
The following is a playlist of four videos showing how to use the Bayesact interactive simulator. The first is a short introductory video. The second is a rather long video showing a bayesact simulation that agrees with an interact simulation. The third and fourth videos show simulations of an agent that does not know the affective identity of the person it is interacting with. More detailed descriptions of the four videos are below the video frame.

Here is a description of the videos in the playlist. You can skip to the one you want by clicking on "playlist" at the top left of the video frame above.
  1. This screencast shows a basic simulation of a 'tutor' and 'student' in Bayesact and gives an overview of what the output is.
  2. This screencast shows an example of using the interact java applet alongside the bayesact python simulator. The bayesact simulator is set up to mimic the computations of interact as closely as possible; because bayesact doesn't take any shortcuts or make approximations, this requires using a large number of samples (10,000) and a very small observation noise. The first 5 minutes of this video also show how to set up a basic simulation in interact.
  3. Simulation of a Bayesact agent with affective identity of "tutor" interacting with a "student", where the bayesact agent does not know this affective identity to start with. Through interactions with the student, the bayesact "tutor" learns that this agent is something like a "student". Interact is used to simulate the actions of the student. It takes bayesact only 2 iterations to figure out the student's identity, as these two identities are fairly close (a simplified sketch of this kind of identity filtering appears after this list).
  4. Simulation of a bayesact agent with identity "salesman" interacting with another agent (the "client") who is a "robber", but the bayesact agent does not know this. Through interactions with the robber, the bayesact "salesman" learns that this agent is something like a "robber". Interact is used to simulate the actions of the robber. It takes about 8 iterations for the bayesact agent to figure this one out, as the two identities are fairly dissimilar (and will normally result in high-deflection interactions).
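
The identity learning in videos 3 and 4 is, at heart, Bayesian filtering over the other agent's EPA identity. The following is a deliberately simplified sketch of that idea, not the actual bayesact code: it assumes observations of the other's behaviour are the true identity corrupted by Gaussian noise, and runs a plain particle filter with the sample count and small observation noise mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

true_identity = np.array([1.2, 0.2, 0.8])  # something "student"-like (illustrative)
n_samples, obs_sd = 10_000, 0.1            # many samples, small observation noise

# Broad prior over the other agent's EPA identity.
particles = rng.normal(0.0, 2.0, size=(n_samples, 3))

for step in range(8):
    # Observe a behaviour reading assumed to be centred on the true identity.
    obs = true_identity + rng.normal(0.0, obs_sd, size=3)

    # Reweight particles by the Gaussian likelihood (log domain for stability).
    log_w = -0.5 * np.sum((particles - obs) ** 2, axis=1) / obs_sd**2
    weights = np.exp(log_w - log_w.max())
    weights /= weights.sum()

    # Resample, then jitter to avoid sample degeneracy.
    idx = rng.choice(n_samples, size=n_samples, p=weights)
    particles = particles[idx] + rng.normal(0.0, 0.02, size=(n_samples, 3))

    print(f"step {step}: estimated identity = {particles.mean(axis=0).round(2)}")
```
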
Simulation Videos
The following videos show simulations of BayesAct agents. These were done using the bayesactsim.py program distributed with the bayesact code above (see therein for instructions on replicating these simulations). Each simulation has two agents interacting with each other: an agent and a client. In each video, you will see the following:
  • Magenta squares + red triangle: agent self identity (the triangle is the mean)
  • Cyan squares + blue triangle: client self identity
  • Red squares: agent's estimate of the client's identity
  • Blue squares: client's estimate of the agent's identity
Additionally, in some videos you will see text labels naming a nearby identity from the affect control theory databases. These are:
  • Magenta label: most common label for agent self identity
  • Cyan label: most common label for client self identity
  • Red label: most common label for agent's estimate of client's identity
  • Blue label: most common label for client's estimate of agent's identity
Ideally, the blue label should match the magenta label, and the red label should match the cyan label (a toy plotting sketch using this colour scheme follows).
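
As a rough idea of how such a display can be generated, here is a toy matplotlib sketch in the same colour scheme. It is not the plotting code from bayesactsim.py, and the sample clouds are synthetic stand-ins for the real identity samples.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

def cloud(centre, sd=0.3, n=200):
    """Synthetic stand-in for a set of EPA identity samples."""
    return rng.normal(centre, sd, size=(n, 3))

agent_self   = cloud([1.5, 1.4, -0.2])   # magenta squares (+ red triangle mean)
client_self  = cloud([1.2, 0.2,  0.8])   # cyan squares (+ blue triangle mean)
agent_of_cli = cloud([1.0, 0.3,  0.6])   # red squares
cli_of_agent = cloud([1.3, 1.2,  0.0])   # blue squares

fig, ax = plt.subplots()
# Show the Evaluation (x) vs Potency (y) dimensions of each sample set.
ax.plot(*agent_self[:, :2].T,   "ms", ms=3, label="agent self identity")
ax.plot(*client_self[:, :2].T,  "cs", ms=3, label="client self identity")
ax.plot(*agent_of_cli[:, :2].T, "rs", ms=3, label="agent's estimate of client")
ax.plot(*cli_of_agent[:, :2].T, "bs", ms=3, label="client's estimate of agent")
ax.plot(*agent_self[:, :2].mean(axis=0),  "r^", ms=12)  # agent mean
ax.plot(*client_self[:, :2].mean(axis=0), "b^", ms=12)  # client mean
ax.set_xlabel("Evaluation")
ax.set_ylabel("Potency")
ax.legend()
plt.show()
```
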
Fixed identities (unknown to each other)
These are two agents with rather fixed ideas about their own identities trying to figure out the identity of the other: 10 experiments with different identities for agent and client, 150 steps per experiment, with 500 samples.

This example has:
  • beta_a = beta_c = 0.001
  • beta_a (proposal) = 0.01
  • beta_c (proposal) = 0.1
  • average id deflection for the agent is 0.03 ± 0.04 and for the client is 0.04 ± 0.03 (see the sketch below for one way such figures can be computed)
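
The deflection figures above are reported as mean ± standard deviation over the 10 experiments. One plausible way to compute such a summary (the exact definition of "id deflection" in the bayesact code may differ) is sketched below, with placeholder per-run values:

```python
import numpy as np

def id_deflection(estimated_mean, true_id):
    """One possible definition: squared distance between an agent's mean
    identity estimate and the other agent's true EPA identity."""
    return float(np.sum((np.asarray(estimated_mean) - np.asarray(true_id)) ** 2))

# Placeholder per-experiment values standing in for the 10 runs of
# 150 steps / 500 samples; in practice each value would come from
# id_deflection() applied at the end of a run.
per_run = np.array([0.01, 0.03, 0.02, 0.09, 0.01, 0.12, 0.02, 0.00, 0.01, 0.02])
print(f"average id deflection: {per_run.mean():.2f} ± {per_run.std():.2f}")
```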

Effects of Environment noise (fixed identities)
For two identities "lady" and "shoplifter", with no environment noise:

For two identities "lady" and "shoplifter", with environment noise: zero-mean Gaussian noise with variance 0.5:

For two identities "lady" and "shoplifter", with environment noise: zero-mean Gaussian noise with variance 1.0:

For two identities "lady" and "shoplifter", with environment noise: zero-mean Gaussian noise with variance 5.0:

For two identities "tutor" and "student", with environment noise: zero-mean Gaussian noise with variance 0.5:

For two identities "tutor" and "student", with environment noise: zero-mean Gaussian noise with variance 1.0:

Clueless agent
Now the client knows its own identity (magenta squares, with the red triangle as the mean) but does not know the identity of the agent (red squares are its estimate). The agent doesn't know anything (blue squares are its estimate of the client's identity; cyan squares + blue triangle are its estimate of its own identity). In some cases, we see the two agents learning each other's identities, so the agent actually decides on an identity for itself! The parameter settings, echoed in the configuration sketch after the list, were:

  • client:
    • alpha_value=0.1
    • gamma_value=1.0
    • beta_value_agent=0.001
    • beta_value_client=0.001
    • beta_client_init=2.0 (does not know agent identity at start)
    • beta_agent_init=0.01 (knows its own identity at start)
  • agent:
    • alpha_value=0.1
    • gamma_value=1.0
    • beta_value_agent=0.001
    • beta_value_client=0.001
    • beta_client_init=2.0 (clueless at start)
    • beta_agent_init=2.0 (does not know its own identity at start)
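
For anyone replicating this with bayesactsim.py, the configuration amounts to something like the following sketch. The keyword names are copied from the lists above, but the actual invocation syntax may differ between versions of the code:

```python
# Hypothetical configuration mirroring the parameter lists above; the
# actual bayesactsim.py options may be spelled differently.
client_params = dict(
    alpha_value=0.1,
    gamma_value=1.0,
    beta_value_agent=0.001,
    beta_value_client=0.001,
    beta_client_init=2.0,    # broad prior: does not know agent identity at start
    beta_agent_init=0.01,    # tight prior: knows its own identity at start
)
agent_params = dict(
    alpha_value=0.1,
    gamma_value=1.0,
    beta_value_agent=0.001,
    beta_value_client=0.001,
    beta_client_init=2.0,    # broad prior: clueless about the client at start
    beta_agent_init=2.0,     # broad prior: unsure even of its own identity
)
```
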
Shape Shifter
Finally, the client shifts identities at a speed of 0.25, but remains stationary at each target location for 40 steps. The agent id was [0.32, 0.42, 0.64], while the client ids were [-1.54, -0.38, 0.13] and [1.31, -2.75, -0.09].
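
The shifting identity can be reproduced with a simple target-chasing rule: move toward the current target EPA point at 0.25 units per step (assuming that is what "speed" means here), and remain at a target for 40 steps once it is reached. A sketch using the identity vectors quoted above:

```python
import numpy as np

agent_id = np.array([0.32, 0.42, 0.64])
targets = [np.array([-1.54, -0.38, 0.13]),
           np.array([1.31, -2.75, -0.09])]

speed, dwell = 0.25, 40  # EPA units per step (assumed), steps to hold at a target

def shape_shift(start, targets, speed, dwell):
    """Yield the client identity at each step: walk straight toward each
    target at the given speed, then sit at it for `dwell` steps."""
    pos = np.asarray(start, dtype=float).copy()
    for target in targets:
        while np.linalg.norm(target - pos) > speed:
            pos = pos + speed * (target - pos) / np.linalg.norm(target - pos)
            yield pos.copy()
        pos = target.copy()
        for _ in range(dwell):  # stationary at the target location
            yield pos.copy()

trajectory = list(shape_shift(targets[0], targets, speed, dwell))
print(len(trajectory), "steps; final identity:", trajectory[-1].round(2))
```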