Making a virtual agent interact with a human participant can be seen as a control problem, where the human player (HP) is the reference system, and the virtual player (VP) is the system to be controlled.
The HP's movement is captured by a motion sensor, from which position and velocity at the next time step ($y$ and $\dot{y}$, respectively) are estimated and fed into a feedback control loop.
The controller generates the control input $u$ from the mismatch in position and velocity between HP and VP, and from a desired motor signature $\dot{\sigma}$ which enables the VP to exhibit the kinematic features of a given human player (not necessarily the one employed as the reference system). The signal $u$ is then fed to the system modelling the dynamics of the virtual agent, from which its position and velocity ($x$ and $\dot{x}$, respectively) are finally obtained.

Specifically, let $x \in \mathbb{R}$ be the state variable representing the position of the virtual player (VP). The system describing its behavior is given by:

$\ddot{x} \left(t\right) = f\left(x\left(t\right), \dot{x}\left(t\right) \right) + u\left(t\right)$

where $f$ represents the inner dynamics of the VP when not connected to any other agent, $\dot{x}$ and $\ddot{x}$ represent velocity and acceleration of the VP, and $u$ is the control signal that models its coupling with another agent.
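
As a concrete illustration, here is a minimal Python sketch of how this second-order model could be integrated numerically within the loop described above. Forward Euler, the step size, and all function and variable names are assumptions made for the example, not taken from a specific implementation.

```python
import numpy as np

def simulate_vp(f, u, x0, xdot0, dt=0.01, T=10.0):
    """Integrate the VP model x'' = f(x, x_dot) + u(t, x, x_dot) with forward Euler.

    f : inner dynamics, callable(x, x_dot) -> float
    u : control signal, callable(t, x, x_dot) -> float
    (Names, interfaces and the integration scheme are illustrative assumptions.)
    """
    n = int(round(T / dt))
    t = np.linspace(0.0, n * dt, n + 1)
    x, xdot = np.zeros(n + 1), np.zeros(n + 1)
    x[0], xdot[0] = x0, xdot0
    for k in range(n):
        xddot = f(x[k], xdot[k]) + u(t[k], x[k], xdot[k])  # VP acceleration
        x[k + 1] = x[k] + dt * xdot[k]
        xdot[k + 1] = xdot[k] + dt * xddot
    return t, x, xdot
```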

Inner dynamics

  1. Harmonic oscillator, a linear system given by:

    $f\left(x,\dot{x}\right)=-\left( a\dot{x}+bx \right)$

    where $a$ and $b$ are the viscous damping coefficient and the spring constant, respectively.

  2. HKB equation, a nonlinear system given by:

    $f\left(x,\dot{x}\right)=-\left( \alpha x^2+\beta \dot{x}^2 - \gamma \right)\dot{x} - \omega^2 x$

    where $\alpha$, $\beta$ and $\gamma$ characterize the nonlinear damping term, while $\omega$ is related to the oscillation frequency.

  3. Double integrator, a system without inner dynamics (a code sketch of all three options follows this list):

    $f\left(x,\dot{x}\right)=0$
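
The three inner-dynamics options can be written as simple Python functions returning $f$; the factory-style helpers below and their names are illustrative, not taken from a reference implementation.

```python
def harmonic_oscillator(a, b):
    """f(x, x_dot) = -(a*x_dot + b*x); a: viscous damping coefficient, b: spring constant."""
    return lambda x, x_dot: -(a * x_dot + b * x)

def hkb(alpha, beta, gamma, omega):
    """HKB oscillator: f(x, x_dot) = -(alpha*x^2 + beta*x_dot^2 - gamma)*x_dot - omega^2*x."""
    return lambda x, x_dot: -(alpha * x**2 + beta * x_dot**2 - gamma) * x_dot - omega**2 * x

def double_integrator():
    """No inner dynamics: f(x, x_dot) = 0."""
    return lambda x, x_dot: 0.0
```

Any of these can be passed as the `f` argument of the simulation sketch given earlier.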

Control signal

  1. PD control, a linear control law given by

    $u=K_{p}\left(y-x\right) + K_{\sigma} \left( \dot{\sigma} - \dot{x} \right)$

    where $y$ is the position of the other agent coupled to the VP, $\dot{\sigma}$ is the VP's desired motor signature (a velocity trajectory), and $K_p$ and $K_{\sigma}$ are two control gains.

  2. Adaptive control, a nonlinear control law.
    • When the VP acts as a follower, it is given by

      $u=\left[\psi + \chi \left( x-y \right)^2 \right]\left( \dot{x} -\dot{y} \right) - C e^{-\delta \left( \dot{x} - \dot{y} \right)^2} \left(x-y\right)$

      with
      $\dot{\psi} = -\frac{1}{\psi}\left[ \left( x-y \right)\left( \dot{x}-\dot{y} \right) + \left( x-y \right)^2 \right]$

      $\dot{\chi} = -\frac{1}{\chi} \left( \dot{x}-\dot{y} \right) \left[ f\left(x, \dot{x} \right) + u \right]$

      where $y$ and $\dot{y}$ are the position and velocity of the other agent coupled to the VP, $C$ and $\delta$ are control parameters, and $\psi$ and $\chi$ are adaptive parameters.

    • When the VP acts as a leader, it is given by

      $u=\lambda \left( \left[\psi + \chi \left( x-\sigma \right)^2 \right] \left( \dot{x} -\dot{\sigma} \right) - C e^{-\delta \left( \dot{x} - \dot{\sigma} \right)^2} \left(x-\sigma\right) \right) + \left(1- \lambda\right) K \left( y-x \right)$

      where $\lambda:= e^{-\delta \left| x-y \right|}$, $K$ is a control parameter, $\sigma$ and $\dot{\sigma}$ are the desired position and velocity profiles (motor signature) that allow the VP to generate spontaneous motion, and all other quantities are as defined above (minimal code sketches of both control laws follow this list).
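
A minimal sketch of the PD law (item 1 above), written against the simulation sketch given earlier; the closure interface, with $y$ and $\dot{\sigma}$ supplied as functions of time, is an assumption made for illustration.

```python
def pd_control(Kp, Ksigma, y, sigma_dot):
    """u = Kp*(y - x) + Ksigma*(sigma_dot - x_dot).

    y, sigma_dot : callables of time giving the HP position and the desired
    motor-signature velocity (assumed interface, for illustration only).
    """
    def u(t, x, x_dot):
        return Kp * (y(t) - x) + Ksigma * (sigma_dot(t) - x_dot)
    return u
```

For instance, `simulate_vp(hkb(1.0, 1.0, 1.0, 1.0), pd_control(1.0, 1.0, y, sigma_dot), x0=0.0, xdot0=0.1)` would couple an HKB-driven VP to a recorded HP trajectory `y`; all parameter values here are arbitrary placeholders.

The adaptive law (item 2 above) couples $u$ with the evolution of the adaptive parameters $\psi$ and $\chi$, so it is easiest to sketch as a single integration step. Again, forward Euler and all names are assumptions, and the initial values of $\psi$ and $\chi$ must be nonzero for the adaptation laws to be well defined.

```python
import numpy as np

def adaptive_follower_step(state, y, y_dot, f, C, delta, dt=0.01):
    """One forward-Euler step of the adaptive follower law.

    state = (x, x_dot, psi, chi); y, y_dot are the HP position and velocity
    at the current step. (State layout, names and the integration scheme
    are assumptions made for this sketch.)
    """
    x, x_dot, psi, chi = state
    e, e_dot = x - y, x_dot - y_dot
    u = (psi + chi * e**2) * e_dot - C * np.exp(-delta * e_dot**2) * e
    x_ddot = f(x, x_dot) + u                # VP model: x'' = f + u
    psi_dot = -(e * e_dot + e**2) / psi     # adaptation of psi
    chi_dot = -e_dot * x_ddot / chi         # adaptation of chi, with x_ddot = f + u
    return (x + dt * x_dot, x_dot + dt * x_ddot,
            psi + dt * psi_dot, chi + dt * chi_dot)

def adaptive_leader_u(x, x_dot, y, sigma, sigma_dot, psi, chi, C, delta, K):
    """Leader control input: tracking of the motor signature (sigma, sigma_dot),
    blended with a corrective term towards the HP position y via lambda."""
    lam = np.exp(-delta * abs(x - y))
    e, e_dot = x - sigma, x_dot - sigma_dot
    u_signature = (psi + chi * e**2) * e_dot - C * np.exp(-delta * e_dot**2) * e
    return lam * u_signature + (1.0 - lam) * K * (y - x)
```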
 

Group information

Department of Electrical Engineering and Information Technology
University of Naples Federico II
Via Claudio 21, 80125, Naples, Italy

Department of Engineering Mathematics
University of Bristol, Merchant Venturers Building, Woodland Road, Clifton, BS8 1UB, UK