Using RVO2-3D Library

Structure of RVO2-3D Library

A program performing an RVO2-3D Library simulation has the following global structure.

#include <iostream>
#include <vector>

#include <RVO.h>

// Goal positions of the agents; maintained outside of the simulator.
std::vector<RVO::Vector3> goals;

// Declarations of the functions discussed below.
void setupScenario(RVO::RVOSimulator* sim);
void updateVisualization(RVO::RVOSimulator* sim);
void setPreferredVelocities(RVO::RVOSimulator* sim);
bool reachedGoal(RVO::RVOSimulator* sim);

int main()
{
  // Create a new simulator instance.
  RVO::RVOSimulator* sim = new RVO::RVOSimulator();

  // Set up the scenario.
  setupScenario(sim);

  // Perform (and manipulate) the simulation.
  do {
    updateVisualization(sim);
    setPreferredVelocities(sim);
    sim->doStep();
  } while (!reachedGoal(sim));

  delete sim;
}

To use RVO2-3D Library, the user includes RVO.h and first creates an instance of RVO::RVOSimulator. The process then consists of two stages. The first stage is specifying the simulation scenario and its parameters. In the above example program, this is done in the function setupScenario(...), which we discuss below. The second stage is performing the simulation itself.

In the above example program, simulation steps are taken until all agents have reached their predefined goals. Prior to each simulation step, we set the preferred velocity of each agent, i.e., the velocity the agent would take if no other agents were around, in the function setPreferredVelocities(...). The simulator computes the actual velocities of the agents, following the preferred velocities as closely as possible while guaranteeing collision avoidance. During the simulation, the user may want to retrieve information from it, for instance to visualize the simulation. In the above example program, this is done in the function updateVisualization(...), which we discuss below. It is also possible to manipulate the simulation while it is running, for instance by changing the positions, radii, velocities, etc. of the agents.

Setting up the Simulation Scenario

A scenario to be simulated can be set up as follows. A scenario consists of a set of agents, which can be specified manually. Agents may be added at any time before or during the simulation. The user may also want to define goal positions for the agents, or a roadmap to guide the agents around obstacles. This is not done by RVO2-3D Library, but needs to be taken care of in the user's external application.

The following example creates a scenario with eight agents exchanging positions.

void setupScenario(RVO::RVOSimulator* sim) {
  // Specify global time step of the simulation.
  sim->setTimeStep(0.25f);

  // Specify the default parameters for agents that are subsequently added:
  // neighborDist, maxNeighbors, timeHorizon, radius, maxSpeed.
  sim->setAgentDefaults(15.0f, 10, 10.0f, 2.0f, 2.0f);

  // Add agents, specifying their start position.
  sim->addAgent(RVO::Vector3(-50.0f, -50.0f, -50.0f));
  sim->addAgent(RVO::Vector3(50.0f, -50.0f, -50.0f));
  sim->addAgent(RVO::Vector3(50.0f, 50.0f, -50.0f));
  sim->addAgent(RVO::Vector3(-50.0f, 50.0f, -50.0f));
  sim->addAgent(RVO::Vector3(-50.0f, -50.0f, 50.0f));
  sim->addAgent(RVO::Vector3(50.0f, -50.0f, 50.0f));
  sim->addAgent(RVO::Vector3(50.0f, 50.0f, 50.0f));
  sim->addAgent(RVO::Vector3(-50.0f, 50.0f, 50.0f));

  // Create goals (simulator is unaware of these).
  for (size_t i = 0; i < sim->getNumAgents(); ++i) {
    goals.push_back(-sim->getAgentPosition(i));
  }
}

See the documentation of the class RVO::RVOSimulator for a full overview of the functionality to specify scenarios.
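
Per-agent parameters can also be specified explicitly rather than taken from the defaults, and they can be changed after an agent has been added. The following is a minimal sketch, assuming the parameter-taking addAgent(...) overload and the setAgentMaxSpeed(...) setter listed in the RVO::RVOSimulator documentation; the function name addCustomAgent is purely illustrative.

// Sketch: add one agent with explicit parameters (assumed order: position,
// neighborDist, maxNeighbors, timeHorizon, radius, maxSpeed) and adjust a
// parameter of that agent afterwards.
void addCustomAgent(RVO::RVOSimulator* sim) {
  const size_t agentNo = sim->addAgent(RVO::Vector3(0.0f, 0.0f, 0.0f),
                                       15.0f, 10, 10.0f, 1.5f, 3.0f);

  // Individual parameters of an existing agent can be changed at any time.
  sim->setAgentMaxSpeed(agentNo, 2.0f);
}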

Retrieving Information from the Simulation

During the simulation, the user can extract information from the simulation, for instance for visualization purposes, or to determine termination conditions. In the example program above, visualization is done in the function updateVisualization(...). Below we give an example that simply writes the position of each agent to the standard output in each time step. The termination condition is checked in the function reachedGoal(...). Here we give an example that returns true if all agents are within one radius of their goals.

void updateVisualization(RVO::RVOSimulator* sim) {
  // Output the current global time.
  std::cout << sim->getGlobalTime() << " ";

  // Output the position for all the agents.
  for (size_t i = 0; i < sim->getNumAgents(); ++i) {
    std::cout << sim->getAgentPosition(i) << " ";
  }

  std::cout << std::endl;
}

bool reachedGoal(RVO::RVOSimulator* sim) {
  // Check whether all agents have arrived at their goals.
  for (size_t i = 0; i < sim->getNumAgents(); ++i) {
    if (absSq(goals[i] - sim->getAgentPosition(i)) > sim->getAgentRadius(i) * sim->getAgentRadius(i)) {
      // Agent is further away from its goal than one radius.
      return false;
    }
  }
  return true;
}

Using functions similar to the ones used in this example, the user can access information about other parameters of the agents, as well as the global parameters. See the documentation of the class RVO::RVOSimulator for an exhaustive list of public functions for retrieving simulation information.
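
As an illustration, the sketch below prints some additional state for a single agent. It assumes the getters getAgentVelocity(...) and getAgentRadius(...) of RVO::RVOSimulator in addition to getAgentPosition(...) used above; the function name printAgentState is purely illustrative.

// Sketch: print position, velocity, and radius of one agent.
void printAgentState(RVO::RVOSimulator* sim, size_t agentNo) {
  std::cout << "agent " << agentNo
            << ": position = " << sim->getAgentPosition(agentNo)
            << ", velocity = " << sim->getAgentVelocity(agentNo)
            << ", radius = " << sim->getAgentRadius(agentNo) << std::endl;
}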

Manipulating the Simulation

During the simulation, the user can manipulate the simulation, for instance by changing the global parameters, or by changing the parameters of the agents (potentially causing abruptly different behavior). It is also possible to give the agents a new position, which makes them jump through the scene. New agents can be added to the simulation at any time.

See the documentation of the class RVO::RVOSimulator for an exhaustive list of public functions for manipulating the simulation.
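
As an illustration, the following sketch changes the global time step, gives one agent a new position, and adds a new agent in the middle of the simulation. It assumes the setAgentPosition(...) setter of RVO::RVOSimulator alongside setTimeStep(...) and addAgent(...) used above; the function name manipulateSimulation is purely illustrative.

// Sketch: manipulate an ongoing simulation.
void manipulateSimulation(RVO::RVOSimulator* sim) {
  // Change a global parameter of the simulation.
  sim->setTimeStep(0.1f);

  // Give agent 0 a new position; it jumps through the scene.
  sim->setAgentPosition(0, RVO::Vector3(0.0f, 0.0f, 100.0f));

  // Add a new agent with default parameters, and add a goal for it as well,
  // since goals are maintained outside the simulator.
  sim->addAgent(RVO::Vector3(100.0f, 0.0f, 0.0f));
  goals.push_back(RVO::Vector3(-100.0f, 0.0f, 0.0f));
}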

To provide global guidance to the agents, the preferred velocities of the agents can be changed ahead of each simulation step. In the above example program, this happens in the function setPreferredVelocities(...). Here we give an example that simply sets the preferred velocity of each agent to the unit vector towards the agent's goal (i.e., the preferred speed is 1.0).

void setPreferredVelocities(RVO::RVOSimulator* sim) {
  // Set the preferred velocity for each agent.
  for (size_t i = 0; i < sim->getNumAgents(); ++i) {
    if (absSq(goals[i] - sim->getAgentPosition(i)) < sim->getAgentRadius(i) * sim->getAgentRadius(i)) {
      // Agent is within one radius of its goal; set preferred velocity to zero.
      sim->setAgentPrefVelocity(i, RVO::Vector3());
    } else {
      // Agent is far away from its goal, set preferred velocity as unit vector towards agent's goal.
      sim->setAgentPrefVelocity(i, normalize(goals[i] - sim->getAgentPosition(i)));
    }
  }
}
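
When the agents are positioned perfectly symmetrically, identical preferred velocities can keep them in symmetric, deadlock-like configurations. A common remedy is to perturb each preferred velocity by a tiny random amount after it has been set. The following is a minimal sketch of this technique; it assumes the getAgentPrefVelocity(...) getter of RVO::RVOSimulator, and the function name perturbPreferredVelocities is purely illustrative.

#include <cstdlib>

// Sketch: add a tiny random perturbation to each preferred velocity to
// break perfect symmetry between the agents.
void perturbPreferredVelocities(RVO::RVOSimulator* sim) {
  for (size_t i = 0; i < sim->getNumAgents(); ++i) {
    const float eps = 0.0001f;
    const RVO::Vector3 noise(
        eps * (std::rand() / static_cast<float>(RAND_MAX) - 0.5f),
        eps * (std::rand() / static_cast<float>(RAND_MAX) - 0.5f),
        eps * (std::rand() / static_cast<float>(RAND_MAX) - 0.5f));
    sim->setAgentPrefVelocity(i, sim->getAgentPrefVelocity(i) + noise);
  }
}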

Example Programs

RVO2-3D Library is accompanied by one example program, which can be found in the $RVO_ROOT/examples directory. The example is named Sphere, and contains the following demonstration scenario:

Sphere: A scenario in which 812 agents, initially distributed evenly on a sphere, move to the antipodal position on the sphere.