0:12
After describing what agents and multi-agent systems are,
we'll now say a few words about how to implement them in a computer system.
Of course, agent-based models and
multi-agent systems are interesting only when implemented on a computer,
because in general there is no way of solving them analytically.
So there are several ways of implementing them; in fact you can use
almost any approach in the language you want, and examples exist in almost all languages.
It's always possible to implement them.
But some are a better match than others, and I think that for agents
an object-oriented design can be a really good match,
because there is a correspondence, an equivalence, between the concepts of
agent-based modeling and the concept of an object in object-oriented programming.
1:15
For example, your model may contain several types of agents.
If you have an ant model, you can have ants of different castes,
like worker ants, soldier ants, the queen, and so on.
So you can see a class in object-oriented
programming as a kind of agent.
And then every class has to be instantiated,
so you can see each instance of the class as simply one agent:
you have one instance per agent.
If you need to spawn an agent while the model runs, you just instantiate the class.
Another really important concept of object orientation
is encapsulation, with private and public members.
And here it's interesting,
because the private members hold the internal state of the agent:
the memory, the part that makes it a kind of black box from the outside.
The behavior is a public method, or a collection of public methods:
it is how the agent reacts to stimuli.
So here, in Java, is an example of an agent.
You have a class for agents, and, this being Java,
you have to initialize the fields using a constructor.
The class carries an ID; it is good advice
to give every one of your agents a distinct ID,
because that allows you to debug, to track agents, and to extract a single trajectory.
2:58
Then you have the private members that describe the state,
and a behavior function,
which divides into a perception routine and an action.
Of course, for every model, what counts as a perception
and what counts as an action
will be different; it really depends on what you are trying to model.
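The structure just described can be sketched as a small Java class; note that the names here (Ant, energy, the toy perceive and behave logic) are hypothetical placeholders, not the actual code from the slides.

```java
// Sketch of an agent class: a distinct ID, private internal state,
// and public behavior methods. All concrete names are illustrative.
public class Ant {
    private final int id;    // distinct ID per agent: helps debugging and tracking
    private double energy;   // private internal state: a black box from outside

    public Ant(int id, double energy) {   // fields are set in the constructor
        this.id = id;
        this.energy = energy;
    }

    public int getId() { return id; }

    // Perception: what this agent extracts from the environment.
    public double perceive(double foodNearby) {
        return foodNearby - energy;
    }

    // Behavior: a public method mapping a percept to an action.
    public String behave(double percept) {
        return percept > 0 ? "forage" : "rest";
    }
}
```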
3:19
Given that, we can now see how to tie all the pieces together.
Because, again, going back, it's quite easy to define each agent this way:
your knowledge of the field will
help you fill in the details and choose what the state should be.
That part is quite easy.
But then you have to put all the objects together and make them evolve,
and there are two basic approaches:
we can update the system in an asynchronous or
a synchronous way, as with cellular automata, for example.
4:11
Here is the general form of the code, a kind of universal update loop,
for asynchronous and for synchronous updates.
So first, asynchronous.
Asynchronous means that at every time iteration,
I take the agents one after the other and update each in turn.
Once I have finished updating everybody, I jump to the next iteration.
Going into a bit more detail:
first we need to initialize the system,
so here we have an init function which returns a list of agents
and an initial environment.
4:48
The variable time holds the initial time, and
we define a max time as a stop condition.
There could be other stop conditions, of course, but here
we continue to iterate until the time reaches the max time.
At each iteration, we consider every agent individually.
First, we compute the perception that the agent receives,
knowing the environment and the list of all the other agents;
we call it p. Then we call the agent's behavior on this perception and
obtain an action.
Finally, we update the environment using this action.
5:30
So the environment is updated, and we go to the next agent,
compute the perception for that agent, and so on.
When we have iterated over all the agents, we increment the time
(often toward continuous time, but other solutions are possible),
and if the time is still smaller than the max allowed time,
we start again and go through all the agents.
That approach seems natural, but we have to pay attention here,
because we always consider the agents in the same order, and that can introduce
a bias where some agents are favored because their decisions take place first.
So one way to overcome that is, instead of taking the agent list
always in the same order, to shuffle the agent list at each iteration.
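The asynchronous loop with per-iteration shuffling might look like this in Java; the Agent class with its toy perceive and behave logic is a placeholder assumption, and the environment is reduced to a single number for brevity.

```java
// Asynchronous update: agents are handled one after the other, each one
// seeing the environment as already modified by the agents before it.
// Shuffling the list every iteration removes the ordering bias.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class AsyncModel {
    static class Agent {
        final int id;
        double state;                                         // private internal state
        Agent(int id) { this.id = id; }
        double perceive(double env) { return env - state; }   // placeholder percept
        double behave(double p)     { return p > 0 ? 1 : -1; } // placeholder action
    }

    static double run(List<Agent> agents, double env, int maxTime, long seed) {
        Random rng = new Random(seed);
        for (int t = 0; t < maxTime; t++) {
            Collections.shuffle(agents, rng);   // new agent order each iteration
            for (Agent a : agents) {
                double p = a.perceive(env);     // sees the partially updated environment
                double action = a.behave(p);
                env += action;                  // environment updated immediately
            }
        }
        return env;
    }
}
```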
But the problem is that some phenomena are really synchronous,
because they are operating at the same time,
and another way to implement the loop is with a synchronous scheme.
Here the code looks simpler, but it isn't necessarily simpler in practice.
6:43
Here we again initialize the system and receive a list of agents and
an environment; the time works the same way.
We loop until we reach the max time.
The difference is that every step is done on all the agents at the same time,
instead of computing the same steps agent by agent.
First, we compute all the perceptions: from
the environment and the agents, we get a list of perceptions
which has the same length as the list of agents, and every element of this list is
what the agent at that index perceives of the environment.
Once we have all the perceptions, we can use a function, call it allBehaviors.
7:32
It calls the behavior function of every agent on its perception,
and returns a list of actions: the actions that all the agents want
to apply to the environment.
So now we have to update the environment,
and the environment is updated using the list of all actions.
So if there are conflicts, because two agents want to do the same thing,
like moving to the same spatial position,
the conflict is handled at this level:
we can define a rule specifying how all such conflicts must be resolved.
And then we increment the time in the same way as we did in the asynchronous update.
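A minimal Java sketch of this synchronous scheme, with placeholder agent logic and a scalar environment; note the three separate phases, and that conflict resolution would live in the final update phase.

```java
// Synchronous update: all perceptions are computed from the SAME environment
// snapshot, then all behaviors, then one environment update from the full
// action list (this is where conflicting actions would be arbitrated).
import java.util.ArrayList;
import java.util.List;

public class SyncModel {
    static class Agent {
        double state;                                          // placeholder state
        double perceive(double env) { return env + state; }    // placeholder percept
        double behave(double p)     { return p >= 0 ? 1 : -1; } // placeholder action
    }

    static double run(List<Agent> agents, double env, int maxTime) {
        for (int t = 0; t < maxTime; t++) {
            // Phase 1: all perceptions, one per agent, same environment snapshot.
            List<Double> percepts = new ArrayList<>();
            for (Agent a : agents) percepts.add(a.perceive(env));
            // Phase 2: all behaviors, pairing each agent with its percept.
            List<Double> actions = new ArrayList<>();
            for (int i = 0; i < agents.size(); i++)
                actions.add(agents.get(i).behave(percepts.get(i)));
            // Phase 3: one update from the whole action list; conflicts
            // between actions would be resolved here by a model-specific rule.
            for (double act : actions) env += act;
        }
        return env;
    }
}
```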
Another interesting aspect of the synchronous update
is that it is easier to parallelize.
Take the behavior phase, for example: you have
a list of perceptions and a list of agents.
You pair the two lists, putting the elements side by side,
and then call the behavior function of
every agent on its corresponding perception.
And that you can do in parallel, because the calls don't depend on each other.
8:51
It's the same for computing the perceptions:
you can do that in parallel. But
if you have to handle the agents one after the other,
as in the asynchronous case, you cannot,
because each update depends on the previous ones and you would get conflicts.
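For instance, with Java parallel streams the behavior phase can be mapped over all percepts independently; the behave function here is an invented placeholder.

```java
// The behavior phase of a synchronous update has no cross-agent
// dependencies, so it can run on all cores with a parallel stream.
import java.util.List;
import java.util.stream.Collectors;

public class ParallelBehaviors {
    static double behave(double percept) { return percept * 2; } // placeholder

    static List<Double> allBehaviors(List<Double> percepts) {
        // Each element is independent; order is preserved in the result.
        return percepts.parallelStream()
                       .map(ParallelBehaviors::behave)
                       .collect(Collectors.toList());
    }
}
```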
9:04
So the synchronous update is interesting for that. But
again, in lots of systems where actions are slow or uncommon,
it is not realistic to pretend that everything happens at the same time.
You have to choose depending on the problem that you have.
9:22
Another really interesting choice is whether
you want a Lagrangian or a Eulerian approach.
Usually people use the Lagrangian one.
It means that every agent has a location and is aware, in some sense,
of its location.
9:41
Since we know the location of every agent, we can then compute,
for example, whether two agents are in contact or
whether two agents are inside a communication radius:
what every agent can see of its environment.
A typical representation for that is a list of agents,
where every agent has an ID, a position in 2D or 3D, and
whatever other state variables it may have.
It's quite common to see that.
The problem is that it can be really slow, because naively,
if you have to compute, say, the interaction topology,
knowing the communication radius of every agent,
you will have to consider every pair of
agents to determine whether they are close enough to communicate.
That means something on the order of n squared operations, and
that can be really bad if you have a huge number of agents.
But we can get something sub-quadratic by using
a specialized data structure.
So for example a k-d tree, which recursively splits the space
11:02
along the different axes.
It usually costs O(n log n) to
build the k-d tree out of n points.
But once you have one, you can modify it in O(log n),
which is quite small, and do range searches,
like finding all the agents within the radius of an agent of interest,
in roughly O(sqrt(n)) in two dimensions, which helps you a lot in large systems.
There are other interesting structures, like R-trees, which we won't detail here,
but if you have a realistic system with lots of agents and
you need to be fast, you have to think about the data structure you will use.
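As an illustration, here is a compact (toy, not production-grade) 2D k-d tree in Java with a radius search; the build re-sorts at every level, which is simple to write but slightly worse than the O(n log n) bound mentioned above.

```java
// Toy 2D k-d tree: build by recursively splitting on alternating axes,
// then answer "all points within radius r of q" without scanning every pair.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class KdTree {
    static class Node {
        double[] pt; Node left, right; int axis;
        Node(double[] pt, int axis) { this.pt = pt; this.axis = axis; }
    }

    static Node build(List<double[]> pts, int depth) {
        if (pts.isEmpty()) return null;
        int axis = depth % 2;                            // alternate x / y splits
        pts.sort(Comparator.comparingDouble(p -> p[axis]));
        int mid = pts.size() / 2;                        // median as the split point
        Node n = new Node(pts.get(mid), axis);
        n.left  = build(new ArrayList<>(pts.subList(0, mid)), depth + 1);
        n.right = build(new ArrayList<>(pts.subList(mid + 1, pts.size())), depth + 1);
        return n;
    }

    static void radiusSearch(Node n, double[] q, double r, List<double[]> out) {
        if (n == null) return;
        double dx = n.pt[0] - q[0], dy = n.pt[1] - q[1];
        if (dx * dx + dy * dy <= r * r) out.add(n.pt);
        double diff = q[n.axis] - n.pt[n.axis];
        // Always descend into the half-space containing q; visit the other
        // half only if the splitting plane is closer than r (pruning).
        radiusSearch(diff <= 0 ? n.left : n.right, q, r, out);
        if (Math.abs(diff) <= r)
            radiusSearch(diff <= 0 ? n.right : n.left, q, r, out);
    }
}
```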
An alternative is to use a Eulerian approach.
In that case the environment is a regular grid of cells,
a bit like a cellular automaton, but
every cell contains a list of agents, like the list I showed you earlier.
The agents don't have to know their exact position, because we can pretend that
they are all roughly at the same spot: every cell is a spot, and all the agents
in a cell form something like a well-stirred mixture.
So it's fast to compute the interactions and communication as before,
because you can say, first, that every agent can communicate with every
other agent in the same cell, since they are in contact.
For
longer-range communication you can include the neighboring cells, and so on.
Movement is easy:
the agent just jumps to the adjacent cell in the correct direction.
And it allows nice parallelism.
It allows you to compute the interactions really fast,
and to couple the model with other models, like cellular automata or finite differences.
Of course, you will lose some
spatial precision, because in this approach the agent
necessarily has a discrete position instead of a continuous one.
So the physics may not be correct if you have, say, a mechanical dynamics.
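A sketch of this grid-of-cells representation in Java; the names and the flat one-list-per-cell layout are illustrative choices.

```java
// Eulerian representation: the environment is a regular grid, and each cell
// holds the list of agent IDs currently inside it. Agents in the same cell
// are treated as co-located ("well-stirred"), so contact queries are cheap.
import java.util.ArrayList;
import java.util.List;

public class Grid {
    final int w, h;
    final List<List<Integer>> cells;   // one agent-ID list per cell

    Grid(int w, int h) {
        this.w = w; this.h = h;
        cells = new ArrayList<>();
        for (int i = 0; i < w * h; i++) cells.add(new ArrayList<>());
    }

    int index(int x, int y) { return y * w + x; }

    void place(int agentId, int x, int y) { cells.get(index(x, y)).add(agentId); }

    // Movement is just switching cells in the chosen direction.
    void move(int agentId, int x, int y, int nx, int ny) {
        cells.get(index(x, y)).remove(Integer.valueOf(agentId));
        cells.get(index(nx, ny)).add(agentId);
    }

    // Contact neighbors are simply the other agents in the same cell;
    // for a longer communication radius, include adjacent cells too.
    List<Integer> contacts(int agentId, int x, int y) {
        List<Integer> out = new ArrayList<>(cells.get(index(x, y)));
        out.remove(Integer.valueOf(agentId));
        return out;
    }
}
```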
Of course you can combine both approaches.
The variable time is also really interesting to consider.
You don't have to use continuous time; most of the time
you can instead use a discrete-event scheme:
14:19
every time an event happens, you find the concerned agent and
give the event as a percept to that agent.
The agent reacts by changing its state and pushing new events.
So you can use ABMs inside a discrete-event system without any problem.
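A minimal sketch of such an event-driven loop in Java, using a priority queue ordered by event time; the "reaction" that schedules a follow-up event 1.5 time units later is an invented placeholder for the agent's behavior.

```java
// Discrete-event loop: events are ordered by time, each one is delivered as
// a percept to the concerned agent, and the reaction may push new events.
import java.util.Comparator;
import java.util.PriorityQueue;

public class EventLoop {
    record Event(double time, int agentId) {}

    // Count how many events are delivered before the time horizon; each
    // reaction schedules a follow-up event 1.5 time units later (placeholder).
    static int run(double horizon) {
        PriorityQueue<Event> queue =
            new PriorityQueue<>(Comparator.comparingDouble(Event::time));
        queue.add(new Event(0.0, 0));              // initial event for agent 0
        int delivered = 0;
        while (!queue.isEmpty()) {
            Event e = queue.poll();                // earliest event first
            if (e.time() > horizon) break;         // stop condition
            delivered++;                           // deliver percept to e.agentId()
            queue.add(new Event(e.time() + 1.5, e.agentId())); // agent pushes new event
        }
        return delivered;
    }
}
```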
14:42
So we have discussed some concerns
about how to implement agent-based models.
Of course, if you want to understand better how to do it,
the best way is to try to implement a system yourself,
or to take an existing agent-based modeling framework and
look at its source code to see how they did it.