A brief introduction to the principles of particle swarm optimization by Rajorshi Mukherjee. This presentation has been compiled from various sources (not my own work), and references are provided in the bibliography section for further reading. It was prepared as a submission for our college subject Soft Computing.
Introduction
Particle Swarm Optimization (PSO) is a swarm intelligence algorithm developed by James Kennedy and Russell Eberhart in 1995. It is a robust stochastic optimization technique based on the movement and intelligence of swarms; PSO applies the concept of social interaction to problem solving.
Introduction contd.
The algorithm maintains a number of particles that move around the search space looking for the best (optimum) value. The particles are given initial positions, initial velocities, and a few learning constants at the start. Each particle then moves through the space, partly at random, and adjusts its trajectory according to its own experience and the experience collected from other particles.
Introduction contd.
Each particle is swarming toward the optimum. Each particle is moving, and hence has a velocity. Each particle remembers the position where it achieved its best result so far (its personal best). This alone would not be enough, however: particles also need help from the rest of the swarm in figuring out where to search.
Details
The main motivation for this form of algorithm came from real-life examples. Swarming is a natural phenomenon, and swarming helps groups of animals get work done better.
Details contd.
A classic real-life example is the flocking of birds, as modelled by Craig Reynolds in his 1987 "boids" simulation. He described three main rules governing the birds' behaviour:
Separation – each bird is an individual and avoids colliding with other birds.
Alignment – the birds move in the same general direction.
Cohesion – the birds do not stray from the flock and try to stick together.
Details contd.
In this method each particle keeps track of its own position and of its neighbours' positions. This knowledge is then used to find better positions (optimizing the solution); the method combines self-experience with social experience. In every iteration each particle keeps track of its best position so far, known as pbest.
Details contd.
In every iteration each particle also keeps track of the best position found by its neighbourhood, or by the whole swarm, in the search for the optimum solution. This is known as gbest. The main concept of PSO is that, in each iteration, every particle is accelerated towards its pbest and the gbest locations with a randomly weighted acceleration.
Details contd.
Each particle dynamically adjusts its travelling speed according to the flying experience of itself and its colleagues. Each particle modifies its position according to:
its current position,
its current velocity,
the distance between its current position and pbest,
the distance between its current position and gbest.
Algorithm
The PSO algorithm may be written in pseudocode as follows.
Algorithm parameters:
P: population (swarm) of agents (particles).
pi: position of agent ai in the solution space.
f: objective function.
vi: velocity of agent ai.
V(ai): neighbourhood of agent ai (fixed).
Algorithm contd.
initialize each particle in P with a random position p and velocity v;
repeat
  for each particle p in P do
    if f(p) is better than f(pbest) then pbest = p;
  end
  gbest = best pbest in P;
  for each particle p in P do
    v = v + c1*rand()*(pbest - p) + c2*rand()*(gbest - p);
    p = p + v;
  end
until stopping criteria are met
end algorithm
Algorithm contd.
Particle update rule:
p = p + v
with
v = v + c1*rand()*(pbest - p) + c2*rand()*(gbest - p)
where
p: particle position
v: particle velocity
c1: cognitive learning constant
c2: social learning constant (continued on next slide)
Algorithm contd.
pbest: best position found by the particle
gbest: best position found by the swarm
rand(): a uniform random number in [0, 1]
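The update rule on the previous slides can be illustrated with a minimal one-dimensional sketch in Python. The numeric values and the choice c1 = c2 = 2.0 are assumptions for illustration (c1 = c2 = 2 is a common textbook setting); the variable names follow the slides.

```python
import random

# Assumed illustrative values for a single particle in one dimension.
c1, c2 = 2.0, 2.0        # cognitive and social learning constants
p, v = 3.0, 0.5          # current position and velocity
pbest, gbest = 2.0, 1.0  # personal best and swarm best positions

# Velocity update: accelerate towards pbest and gbest with random weights.
v = v + c1 * random.random() * (pbest - p) + c2 * random.random() * (gbest - p)
# Position update: move by the new velocity.
p = p + v
```

Because rand() is re-drawn for each term, every particle follows a slightly different pull towards pbest and gbest on every iteration.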
Algorithm contd.
Here c1 is the cognitive learning constant; its value determines how much importance is given to learning from the particle's own experience. c2 is the social learning constant; this parameter sets the importance of learning from the experience of others. vi is the velocity of a particle of the swarm; this value is very important: too high a value will make the system unstable, while too low a value will make the algorithm very slow.
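A common way to keep the velocity in a useful range, not shown on the slides but standard PSO practice, is to clamp each velocity component to a maximum magnitude vmax (the function name and values here are illustrative):

```python
def clamp_velocity(v, vmax):
    """Clamp a velocity component to the range [-vmax, vmax]."""
    return max(-vmax, min(vmax, v))

# Components that exceed the limit are cut back; others pass through.
clamp_velocity(7.3, 4.0)   # -> 4.0
clamp_velocity(-9.0, 4.0)  # -> -4.0
clamp_velocity(0.25, 4.0)  # -> 0.25
```

Choosing vmax trades off the two failure modes described above: a small vmax slows the search, a large one lets the swarm overshoot and become unstable.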
Algorithm contd.
1. First, create a population of agents (particles) to make a swarm, uniformly distributed over the search space X.
2. Evaluate the position of each particle according to the objective function.
3. If a particle's current position is better than its previous best position, update it.
4. Determine the best particle (according to the particles' previous best positions).
Algorithm contd.
5. Update the particles' velocities.
6. Move the particles to their new positions.
7. Go to step 2 until the stopping criteria are met.
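The steps above can be put together into a short, self-contained Python sketch. Minimizing the Rastrigin benchmark is an assumed example, and the parameter values (swarm size, iteration count, vmax, c1 = c2 = 2) are illustrative choices, not prescribed by the slides:

```python
import math
import random

def rastrigin(x):
    """Rastrigin benchmark function (global minimum 0 at the origin)."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def pso(f, dim=2, n_particles=30, iters=200, c1=2.0, c2=2.0, vmax=0.5):
    # Step 1: swarm uniformly distributed over the search space, random velocities.
    pos = [[random.uniform(-5.12, 5.12) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[random.uniform(-vmax, vmax) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        # Steps 2-3: evaluate each particle and update its personal best.
        for i in range(n_particles):
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]
        # Step 4: the best personal best so far becomes gbest.
        best_i = min(range(n_particles), key=lambda i: pbest_val[i])
        if pbest_val[best_i] < f(gbest):
            gbest = pbest[best_i][:]
        # Steps 5-6: update velocities (clamped to [-vmax, vmax]) and move.
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] += (c1 * random.random() * (pbest[i][d] - pos[i][d])
                              + c2 * random.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, vel[i][d]))
                pos[i][d] += vel[i][d]
    return gbest, f(gbest)

best, best_val = pso(rastrigin)
```

Since gbest is only ever replaced by a better position, the returned value never gets worse across iterations, which is what makes the stopping criterion in step 7 easy to apply.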
Real Life Applications
Nature is the best teacher. Ant colonies thrive thanks to this phenomenon. Birds also use swarm intelligence to survive, and fish exhibit the same kind of behaviour. In computer science, PSO is used to optimize functions.
Real Life Applications contd.
These are some benchmark functions commonly used to test optimization methods:
Griewank
Rastrigin
Rosenbrock
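For reference, the three benchmarks can be written down directly; these are the standard textbook definitions:

```python
import math

def griewank(x):
    """Griewank function: global minimum 0 at the origin."""
    s = sum(xi * xi for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1.0

def rastrigin(x):
    """Rastrigin function: global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def rosenbrock(x):
    """Rosenbrock function: global minimum 0 at (1, ..., 1)."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))
```

All three are hard for simple optimizers: Griewank and Rastrigin are highly multimodal, while Rosenbrock has a narrow curved valley, which is why they are popular stress tests for PSO.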
Conclusion
In all, we can use particle swarm optimization to find optimum solutions to many problems. The main constraint to keep in mind is that the velocity should take an appropriate value: if it is too low the method will be too slow, and if it is too high the method will become unstable.
Bibliography
A presentation on Swarm Intelligence – From Natural to Artificial Systems (adapted from various sources).
A presentation on PSO by Maurice Clerc.
The Particle Swarm Optimization Algorithm by Andry Pinto, Hugo Alves, Inês Domingues, Luís Rocha, Susana Cruz.