Mathware & Soft Computing 7 (2000), 1-15
Adaptive Search Heuristics for
The Generalized Assignment Problem
Helena Ramalhinho Lourenço, Daniel Serra
DEE, Universitat Pompeu Fabra, Barcelona, Spain
{helena.ramalhinho, daniel.serra}@econ.upf.es
Abstract
The Generalized Assignment Problem consists of assigning a set of tasks to
a set of agents with minimum cost. Each agent has a limited amount of a
single resource and each task must be assigned to one and only one agent,
requiring a certain amount of the agent’s resource. We present the application
of hybrid approaches based on a MAX-MIN Ant System (MMAS) and a greedy
randomized adaptive search procedure (GRASP) to the generalized assignment
problem. The MMAS heuristic can be seen as an adaptive sampling algorithm
that takes into consideration the experience gathered in earlier iterations of the
algorithm. Moreover, these heuristics are combined with local search and tabu
search heuristics to improve the search. Several
neighborhoods are studied, including one based on ejection chains that
produces good moves without increasing the computational effort. We present
computational results of a comparative analysis of the two adaptive
heuristics, followed by concluding remarks and ideas on future research in
generalized assignment related problems.
Keywords: Generalized assignment, local search, GRASP, tabu search, ant
colony optimization.
1 Introduction
The Generalized Assignment Problem (GAP) considers the minimum cost assignment of
n jobs to m agents such that each job is assigned to one and only one agent subject to
capacity constraints on the agents. The GAP has applications in areas like computer and
communication networks, location problems, vehicle routing and machine scheduling.
Our initial interest in this problem arose from two real applications in the area of health
services, resource assignment problems and pure integer capacitated plant location
problems.
The aim of this paper is to present hybrid algorithms consisting of adaptive construction
heuristics and subsequent application of local search to solve the GAP, and the
respective computational results. The basic elements are extracted from a specific Ant
Colony Optimization algorithm, the MAX-MIN Ant System, Stützle (1997, 1998a, 1998b),
Stützle and Hoos (1999), and from the GRASP algorithm, Feo and Resende (1995), Resende and
Ribeiro (2001). These heuristics can be embedded in a general framework with three
steps. In the first step, a solution is constructed by a randomized construction heuristic;
in the second step, a local search (either a descent local search or a tabu search) is applied
to improve this initial solution; the last step consists in updating a set of
parameters which guide the construction process. The three steps are repeated until a
stopping criterion is verified. The choices made in each step lead to different heuristic
methods.
The paper is organized as follows. First, we present the GAP and a review of the
methods proposed to solve it. Second, we describe the general framework of the adaptive
construction heuristics. Third, we focus on local search methods, describing the descent
local search and the tabu search. Fourth, the computational experiments are described
with the objective of evaluating the proposed framework and its components. Finally, we
conclude with general remarks on this work and directions of future research.
2 The generalized assignment problem
The GAP consists in assigning at a minimum total cost a set of tasks to a set of agents,
with limited resource capacity. Each task must be assigned to one and only one agent
requiring a certain amount of the agent’s resource. For an extended review of this well-
known problem we refer to Martello and Toth (1990), Cattrysse and Van Wassenhove
(1992) and Chu and Beasley (1997). Fisher, Jaikumar and Van Wassenhove (1986)
proved that the problem is NP-complete. Moreover, the problem of deciding if there
exists a feasible solution is NP-hard, Sahni and Gonzalez (1976). Osman (1995) presented
a survey of many real-life applications.
The Generalized Assignment Problem can be formulated as an integer program, as
presented below. We use the following notation: I is the set of tasks (i = 1,...,n); J is the
set of agents (j = 1,...,m); bj = resource capacity of agent j; aij = resource needed if task i
is assigned to agent j; cij = cost of task i if assigned to agent j. The variables are: xij = 1
if task i is assigned to agent j, and 0 otherwise. Assume that aij ≤ bj for all i, j, and that
Σi=1..n aij > bj for every agent j.
min f(x) = Σj=1..m Σi=1..n cij xij                                  (1)
s.t.  Σi=1..n aij xij ≤ bj,      j = 1,...,m                        (2)
      Σj=1..m xij = 1,           i = 1,...,n                        (3)
      xij ∈ {0,1},               i = 1,...,n; j = 1,...,m           (4)
Constraints (2) are the resource capacity of the agents, constraints (3) guarantee that each
task is assigned to one agent, and constraints (4) are the integrality constraints.
Several exact algorithms for the GAP have been proposed by Ross and Soland (1975),
Martello and Toth (1990), Fisher, Jaikumar and Van Wassenhove (1986), Guignard and
Rosenwein (1989), and Karabakal, Bean, and Lohmann (1992). Recently, Savelsbergh
(1997) presented a branch-and-cut algorithm that employs both column generation and
branch-and-bound to obtain the optimal solution to a set partitioning formulation of the
GAP. The author mentioned that problems with 20 agents and 50 jobs can be solved
in 210 to 1160 seconds. Also, several heuristics have been proposed to solve the
GAP. Amini and Racer (1994) presented a variable-depth-search heuristic motivated by
the work of Lin and Kernighan (1973) on the Traveling Salesman Problem. They offered
a rigorous statistical analysis of the solution methods. Trick (1992) proposed an LP
approximation algorithm. Cattrysse, Salomon and Van Wassenhove (1994) formulated
the problem as a Set Partitioning Problem and proposed a heuristic based on column
generation techniques. Osman (1995) presented a comparison of the algorithms based on
tabu search and simulated annealing techniques. Wilson (1997a) presented a genetic
algorithm to restore feasibility to a set of near-optimal solutions, and then to improve the
best solution found by local search. Wilson also presented a simple dual algorithm for
the GAP, Wilson (1997b). Chu and Beasley (1997) also presented a genetic algorithm
for the GAP that tries to improve feasibility and optimality simultaneously. Laguna,
Kelly, González-Velarde, and Glover (1995) proposed a tabu search algorithm based on
ejection chains for an extension of the GAP, the multilevel generalized assignment
problem. A variable depth search algorithm has been recently presented by Yagiura,
Yamaguchi, and Ibaraki (1999) and Yagiura, Yamaguchi, and Ibaraki (1998).
3 Adaptive search heuristic
This section presents the general framework and the main aspects of the adaptive
heuristics proposed to solve the GAP. These adaptive heuristics are based on two
metaheuristic approaches to solve combinatorial optimization problems: the Greedy
Randomized Adaptive Search Procedure (GRASP), Feo and Resende (1995), Resende
and Ribeiro (2001), and the MAX-MIN Ant System (MMAS), Stützle
(1997, 1998a, 1998b), Stützle and Hoos (1999), a specific algorithm falling into the
framework of the Ant Colony Optimization metaheuristic, Dorigo and Di Caro (1999).
Both techniques include a step where a local search method is applied. The local search
methods are presented in the next section, since they present some innovative aspects.
General Framework
The adaptive heuristics proposed to solve the GAP can be described in a general
framework comprising three steps that are repeated iteratively until some stopping
criterion is met. In step 1, a solution is generated using a randomized construction
heuristic; in step 2, a local search is applied to the solution constructed in step 1; and
finally, in step 3, if there are parameters, they are updated following a certain criterion.
The objective of the above method is to put together in the same framework recently
proposed metaheuristics which consist in, first, generating a starting solution for a local
search using some randomized adaptive construction heuristic, then possibly updating
some parameters based on the incumbent solution and continuing this process until some
stopping criterion is met. Randomized adaptive construction heuristics are used because
a simple (deterministic) construction heuristic can generate only one initial solution,
whereas randomizing the construction allows many different initial solutions. The
same objective could, in principle, be achieved by restarting a local search from random
initial solutions; yet, computational experience on a wide variety of combinatorial
optimization problems has shown that starting from random solutions does not give very
good performance, and that it is outperformed by randomized construction heuristics (RCH)
which take into account the contribution to the objective function value. An RCH
guides its solution construction using information recently seen in good solutions,
Glover (2000).
Other metaheuristics like Iterated Local Search (ILS), Lourenço, Martin and Stützle
(2001), Scatter Search, Glover(1998), and Memetic Algorithms (MA), Moscato (1999)
fit into the above framework. They generate new initial solutions for a subsequent
improvement by local search. Yet, the main differences from GRASP and ACO
algorithms are as follows: in the case of the ILS, the initial solutions are generated by
modifying a current solution through a kick move; in the case of Memetic Algorithms
and Scatter Search, the initial solutions are obtained by manipulating a population of
previously seen solutions.
In particular, the two approaches considered in the present work are the greedy
randomized adaptive search procedure (GRASP) and a specific version of the Ant
Colony Optimization algorithms, known as the MAX-MIN Ant System. Both of these
methods will be discussed in detail in the next sections.
The first step of the proposed framework is based on a greedy heuristic, which
constructs a solution as follows: at each iteration, an unassigned task is chosen following
a priority function; next, an agent is chosen among the agents that can perform this task,
and the task is assigned to it. This procedure is repeated until all tasks have been
assigned to an agent. We propose two heuristics for this first step: A Greedy
Randomized Adaptive Heuristic (GRAH) and an Ant System Heuristic (ASH). The main
difference between the basic greedy heuristic and the two heuristics, GRAH and ASH, is
the method of selecting the agent to whom the previously chosen task is assigned. For
the basic greedy heuristic, the choice is deterministic and based on the cost function. For
the GRAH, the choice is probabilistically biased according to an adaptive priority function,
which does not depend on the solutions seen in previous iterations of the general
framework but only on the choices made in the previous iterations of the GRAH itself. For the
ASH, the choice is also probabilistically biased, based on desirability values, which are
adapted in a reinforcement style of learning during the run of the algorithm and reflect
prior performance.
For the second step, two local search algorithms are proposed, a descent local search and
a tabu search approach. The third step is only applied if there are parameters that have to
be updated at the end of each iteration, which is the case for the ant system heuristic.
Each of the proposed metaheuristics differs in the way choices are made at each step of
the above framework.
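To make the control flow of this framework concrete, the following Python sketch shows the three steps as a loop. The callables construct, improve, update, evaluate and is_feasible are placeholders introduced here only for illustration; the original implementation was coded in Fortran and its internals are not reproduced.

```python
def adaptive_search(instance, construct, improve, update, evaluate,
                    is_feasible, n_iterations):
    """Three-step framework (sketch): randomized construction, local search,
    parameter update, repeated until the iteration limit is reached."""
    best, params = None, None               # params: e.g. pheromone trails (ASH/MMAS only)
    for _ in range(n_iterations):
        x = construct(instance, params)         # step 1: randomized construction heuristic
        x = improve(instance, x)                # step 2: descent local search or tabu search
        params = update(instance, x, params)    # step 3: identity for GRAH/GRASP
        if is_feasible(instance, x) and (
                best is None or evaluate(instance, x) < evaluate(instance, best)):
            best = x
    return best
```

Each concrete method in the paper is obtained by plugging a specific construction heuristic, improvement method and update rule into this loop.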
We will admit unfeasible solutions with respect to capacity constraints, i.e. the total
resource required by the tasks assigned to some agents may exceed their capacity.
Unfeasible solutions will be penalized in the objective function. The main reason for
admitting unfeasible solutions is that, for some solutions close to the optimal one, the
capacity of the agents will be almost completely taken up, therefore any neighbor
obtained by interchanging or reassigning tasks will be unfeasible or a “bad” neighbor.
Allowing these extra solutions usually provides escape routes out of local optima. This
approach is quite common in implementations of metaheuristics, and is often very
effective, Glover and Laguna (1997).
The penalty function is chosen as follows:

f'(x) = Σj=1..m Σi=1..n cij xij + α Σj=1..m max{ 0, Σi=1..n aij xij − bj },

where α > 0 is a parameter,
representing the cost of using one unit of extra capacity. If a solution is not feasible, the
second term will be positive and therefore the search will look for a feasible solution.
The parameter α can be increased during the run to penalize unfeasible solutions, and
drive the search to feasible ones.
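As a small illustration, the penalty function can be computed as follows; representing a solution as a list assign, where assign[i] is the agent of task i, is an assumption made only for this sketch.

```python
def penalized_cost(c, a, b, assign, alpha):
    """f'(x) = assignment cost + alpha * total capacity excess over all agents.

    c[i][j]: cost of task i on agent j; a[i][j]: resource needed;
    b[j]: capacity of agent j; assign[i]: agent currently holding task i.
    """
    n, m = len(assign), len(b)
    cost = sum(c[i][assign[i]] for i in range(n))
    load = [0.0] * m
    for i in range(n):
        load[assign[i]] += a[i][assign[i]]
    excess = sum(max(0.0, load[j] - b[j]) for j in range(m))
    return cost + alpha * excess
```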
Greedy Randomized Adaptive Procedure
The first method proposed to solve the GAP is based on the greedy randomized adaptive
search procedure (GRASP). GRASP is an iterative randomized sampling technique, with
two phases. The first phase consists of a greedy randomized adaptive heuristic that
constructs an initial solution. It is called adaptive because the greedy function takes into
account previous decisions made during the construction of a solution when considering
the next choice. The second phase consists of an improvement phase, which usually
corresponds to a local search method. As can be seen, the GRASP fits into the general
framework proposed in the previous section. GRASP has been successfully applied to
many combinatorial optimization problems, Resende and Ribeiro (2001). For a list of
several GRASP applications see the web site of Resende:
http://www.research.att.com/~mgcr/.
In this section, we will only describe the greedy randomized adaptive heuristic, i.e. the
first phase. The local search will be explained in the next section.
At each iteration of the Greedy Randomized Adaptive Heuristic (GRAH), one task is
assigned to an agent. The heuristic finishes when all tasks have been assigned. The
GRAH can be described as follows:
1. Let Sj = ∅ for all j = 1,...,m (Sj is the set of tasks assigned to agent j).
2. Construct the restricted candidate list (RCL) of agents for each task i, Li = { j : cij ≤ cmax }, where cmax is a parameter that limits the dimension of the RCL (if cmax = max_{i,j} cij, all agents will be included).
3. Consider any order of the tasks, for example i = 1,...,n. Let i = 1.
4. While (not all tasks have been assigned) repeat:
4.1. Choose randomly an agent j* from Li. Each agent j in the list Li has the following probability of being chosen: pij = (bj / aij) / Σ_{l∈Li} (bl / ail), which depends on the resource of agent j and the resource needed by task i.
4.2. If there is available capacity, task i is assigned to agent j*: Sj* = Sj* ∪ {i}. Otherwise, the task is assigned to the first agent with spare capacity. If all agents are fully occupied, the assignment is random.
4.3. Let i = i + 1 and, if Σ_{i∈Sj*} aij* > bj*, remove j* from every list. Repeat step 4 until all tasks are assigned. (Note that the capacity constraint can be violated.)
5. Let xij = 1 if i ∈ Sj, and xij = 0 otherwise. Calculate the value of the penalty function for the solution, f'(x).
In this first step of the framework, we are looking for a feasible solution. Therefore, we
are interested in assigning a task to an agent if this task uses a small amount of the
agent’s resource, i.e. aij / bj is small. For each task, we order the agents in decreasing
order of pij. The task can be assigned to any agent following the probability function pij,
if there is available capacity. If not, the task is assigned to the first agent in the above
order with spare capacity. If all agents are fully occupied, an agent is randomly selected
with uniform probability and the task is assigned to this agent. Note that the solution
obtained can be unfeasible with respect to the capacity constraints. Note also that the
construction heuristic does not take into consideration solutions seen previously during
the run of the general framework, only the choices made previously within the greedy
heuristic.
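A minimal sketch of the GRAH construction follows, under the same illustrative solution representation as before (assign[i] = agent of task i); the data layout and helper names are assumptions made for this sketch, not the authors’ code.

```python
import random

def grah_construct(c, a, b, c_max):
    """Greedy Randomized Adaptive Heuristic (sketch; illustrative data layout).

    c[i][j]: assignment cost, a[i][j]: resource needed, b[j]: agent capacity.
    Returns assign[i] = agent chosen for task i (capacities may end up violated)."""
    n, m = len(c), len(b)
    load, assign = [0.0] * m, [None] * n
    rcls = [[j for j in range(m) if c[i][j] <= c_max] or list(range(m))
            for i in range(n)]                            # restricted candidate lists L_i
    removed = set()                                       # agents already over capacity
    for i in range(n):                                    # any task order, here 1..n
        rcl = [j for j in rcls[i] if j not in removed] or list(range(m))
        probs = [b[j] / a[i][j] for j in rcl]             # p_ij proportional to b_j / a_ij
        j_star = random.choices(rcl, weights=probs)[0]
        if load[j_star] + a[i][j_star] > b[j_star]:
            # no spare capacity at j*: first agent with room, in decreasing order of p_ij
            spare = sorted((j for j in rcl if load[j] + a[i][j] <= b[j]),
                           key=lambda j: -b[j] / a[i][j])
            j_star = spare[0] if spare else random.choice(rcl)
        assign[i] = j_star
        load[j_star] += a[i][j_star]
        if load[j_star] > b[j_star]:
            removed.add(j_star)                           # drop overloaded agent from the lists
    return assign
```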
The Ant Colony Optimization
The GRASP procedure described in the previous section is a multi-start local search, but,
instead of considering a random initial solution, a greedy randomized heuristic is used to
try to find better initial solutions than random ones. A different way of generating initial
solutions is followed by the Ant Colony Optimization (ACO) paradigm, Colorni, Dorigo
and Maniezzo (1991a,1991b), Dorigo, Maniezzo, and Colorni (1996), Dorigo and Di
Caro (1999). ACO is a cooperative algorithm inspired by the foraging behavior of real
ants. Ants lay down some quantity of an aromatic substance, known as pheromone, on
their way to food and on their way back to the nest. Ants choose to follow a pheromone
trail with a probability proportional to the pheromone trail intensity. Research on real
ants has shown that such trail-following behavior allows the ants to identify the shortest
path between a food source and their nest, Goss, Aron, Deneubourg, and Pasteels (1989).
ACO is inspired by this behavior and tries to adapt it to the solution of combinatorial
optimization problems. The ACO approach associates pheromone trails to features of the
solutions of a combinatorial problem, and can be seen as a kind of adaptive memory of
the previous solutions. Solutions are iteratively constructed in a randomized fashion
biased by the pheromone trails left by the previous ants and possible available
information based on the problem data. In the specific ACO algorithms considered here, we are
concerned with the way the pheromone trails are updated after the construction of a
solution: the update ensures that the best solution features receive a more intense pheromone trail. Note
that the ACO metaheuristic proposed in Dorigo and Di Caro (1999) allows other features
like on-line pheromone update during solution construction, but for this paper we do not
need these features. The first ACO algorithm was an Ant System which was applied to
the TSP, Colorni, Dorigo and Maniezzo (1991a,1991b), Dorigo, Maniezzo, and Colorni
(1996). Subsequently several improvements of Ant System have been proposed and
currently one of the best performing ACO algorithms is MAX-MIN Ant System, Stützle
(1997). For more information, see the web page:
http://iridia.ulb.ac.be/dorigo/ACO/ACO.html.
The MAX-MIN Ant system (MMAS) is an adaptive sampling algorithm that takes into
consideration the experience gathered in earlier iterations of the algorithm. Moreover,
combining MMAS with local search, Stützle and Hoos (1999), Stützle (1997, 1998a)
were able to find very good solutions to the Traveling Salesman Problem, the Quadratic
Assignment Problem and the Flow-Shop Scheduling Problem. Next, we will propose an
MMAS approach with an additional local search for the GAP; MMAS can be seen as an
improvement over the original Ant System and is currently among the best performing
variants of the ACO algorithms for a wide variety of combinatorial optimization
problems, Stützle (1998a).
The pheromone trails, τij, for the GAP represent the desirability of assigning a task i to
an agent j. Initially, let τij = 1 / cij (for a GAP with a maximization objective, we set
τij = cij). The cheaper the assignment of task i to agent j is, the more desirable the
assignment.
First of all, let us explain how to construct a solution using only one ant. As suggested
by Stützle and Hoos (1999), the selection rule for the assignment of a task to an agent
can be described as follows: with some probability p0 the best choice is made;
otherwise the assignment is done according to a probability function. We designate this
heuristic as the Ant System Heuristic (ASH). The tasks are assigned to the agents in a
greedy way, as in the Greedy Randomized Adaptive Heuristic, except for step 4.1,
where the assignment is biased by the τij. A task i is assigned to a particular agent
j in the following way:
4.1.1. With probability p0, choose the agent j* with maximal value of τij.
4.1.2. With probability 1 − p0, choose the agent j* according to the following probability distribution: pj = τij / Σ_{l∈Li} τil if j ∈ Li, and pj = 0 otherwise.
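A compact sketch of this selection rule is given below; tau is the pheromone matrix initialized as described above and rcl plays the role of the restricted candidate list Li (all names are chosen here for illustration).

```python
import random

def ash_select_agent(i, tau, rcl, p0):
    """Ant System Heuristic agent selection for task i (sketch).

    With probability p0 the agent with the largest pheromone value tau[i][j]
    in the restricted candidate list is taken; otherwise an agent is drawn with
    probability tau[i][j] / sum of tau[i][l] over the candidate list."""
    if random.random() < p0:
        return max(rcl, key=lambda j: tau[i][j])          # exploit the best trail
    total = sum(tau[i][l] for l in rcl)
    return random.choices(rcl, weights=[tau[i][j] / total for j in rcl])[0]
```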
The assignment obtained by the ASH constitutes the first step in the general framework,
followed by a local search that tries to improve this initial solution. Afterwards, in the
third step of the general framework, the pheromone trails are updated using the current
local optimal solution in the following way:
τij^new = ρ τij^old + Δτij,
where ρ, 0 ≤ ρ < 1, is the persistence of the trail, i.e. 1 − ρ represents the evaporation.
The amount of pheromone deposited by the current local optimal solution is:
Δτij = Q × τmax if task i is assigned to agent j in the solution, and Δτij = 0 otherwise, where
Q = 0.01 if the solution is unfeasible, and Q = 0.05 if the solution is feasible.
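The following sketch combines the evaporation, deposit and trail-limit steps; the function name and argument layout are illustrative assumptions, and the clipping to the interval [τmin, τmax] discussed just below is included.

```python
def update_pheromone(tau, assign, feasible, rho, tau_min, tau_max):
    """MMAS-style trail update (sketch).

    tau[i][j] <- rho * tau[i][j] + delta, with delta = Q * tau_max on the
    task/agent pairs used by the current local optimum (Q = 0.05 if that
    solution is feasible, 0.01 otherwise), then clipped to [tau_min, tau_max]."""
    q = 0.05 if feasible else 0.01
    for i in range(len(assign)):
        for j in range(len(tau[i])):
            delta = q * tau_max if assign[i] == j else 0.0
            tau[i][j] = min(tau_max, max(tau_min, rho * tau[i][j] + delta))
    return tau
```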
The values of parameter Q were set by preliminary tests following the recommendations
of Stützle and Hoos (1999), and they proved to be robust. Note that, if the solution is
feasible, the pheromone trail has a bigger increment, trying to give larger probability to
feasible assignments. Moreover, τmin ≤ τij ≤ τmax for all i, j, so these limits must be
imposed if the updated pheromone falls outside this interval. Also, the amount of
pheromone deposited does not depend on the solution’s objective function value, which
differs from other ACO applications, Dorigo and Di Caro (1999), but we have found this
update rule very effective. Next, we present the local search methods for the second step
of the general framework proposed in Section 3.
4 Local search methods
In order to derive a local search method, it is necessary to define a neighborhood
structure, that associates a set of solutions N(x) with each solution x. The neighborhood
is usually obtained by specific modifications on x, called moves.
The local search starts with an initial solution and searches the neighborhood for the
solution with the lowest cost. Then, this neighbor solution would replace the current
solution if it has a lower cost than the current solution. The search continues until a
stopping criterion is verified. The algorithm returns the best solution found with respect
to the cost function.
Neighborhoods
We present two neighborhoods for the GAP, a simple shift neighborhood where a task is
reassigned to a new agent and an ejection chain neighborhood where more than one task
is reassigned to new agents. The shift neighborhood is a special case of the λ-generation
mechanism proposed by Osman (1995). The ejection chain neighborhood is based on the
work of Laguna, Kelly, González-Velarde, and Glover (1995). Other complex
neighborhoods with very good results have been proposed for this problem, we refer to
Yagiura, Yamaguchi, and Ibaraki (1999) and Yagiura, Yamaguchi, and Ibaraki (1998).
The move in the shift neighborhood consists in removing a task from one agent and
assigning it to another agent. The size of the neighborhood is n(m-1). Note that we
should start by considering removing tasks from overloaded agents. The task can be
reassigned to an agent with available capacity. The shift neighborhood can be obtained
by the following procedure, where flag indicates if a neighbor solution with better value
of the penalty function was found (flag=true) or not (flag=false); flag is initially assigned
to false in the descent local search heuristic described below.
Neighborhood (x, flag)
1. Arrange the agents in decreasing order by the amount of capacity over the maximum
allowable. Let j=1.
2. Consider any order of the tasks assigned to agent j. Let i=1 (the first task assigned
to j).
3. Remove i from the set of tasks assigned to j, Sj = Sj − {i}.
4. Assign i to another, not yet considered, agent starting with the last agent in the list
obtained in step 1, (neighbor x’ of x).
5. Calculate the value of the penalty function for x’, f’(x’). If f’(x’)<=f’(x), let x=x’,
flag=true and stop.
6. Let i = i +1 and repeat steps 3 and 4, until all tasks of j have been considered.
7. Let j=j+1, and repeat from step 2, until all agents have been considered.
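A rough Python sketch of one pass over this neighborhood follows; it mirrors the steps above, but accepts only strictly improving moves to guarantee termination (the procedure above also accepts ties), and the evaluate callable stands for the penalized cost f'.

```python
def shift_first_improvement(assign, a, b, evaluate):
    """One pass over the shift neighborhood (sketch).

    Agents are scanned in decreasing order of capacity excess; each of their
    tasks is tentatively reassigned, trying target agents with most spare
    capacity first, and the first strictly improving move is accepted."""
    n, m = len(assign), len(b)
    load = [sum(a[i][j] for i in range(n) if assign[i] == j) for j in range(m)]
    order = sorted(range(m), key=lambda j: b[j] - load[j])   # most overloaded agents first
    current = evaluate(assign)
    for j in order:
        for i in [t for t in range(n) if assign[t] == j]:
            for k in reversed(order):                        # agents with most slack first
                if k == j:
                    continue
                candidate = list(assign)
                candidate[i] = k                             # shift move: task i -> agent k
                if evaluate(candidate) < current:
                    return candidate, True                   # first improvement found
    return assign, False
```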
In preliminary tests, this neighborhood was able to obtain feasible solutions when
starting from an unfeasible one. However, the objective function values of these feasible
solutions were not very good. This led us to define a more complex neighborhood, which
we present next.
The ejection chain neighborhood is a variable depth procedure, which moves more than
one task from the current agent to a new agent. Ejection chains were introduced by
Glover (1992), and have been applied to several problems, including an extension of the
GAP, Laguna, Kelly, González-Velarde, and Glover (1995). This second neighborhood
structure is more complex than the shift neighborhood, but leads to a more powerful and
efficient search without increasing significantly the computation time.
The ejection chain neighborhood can be obtained by the application of the following two
types of moves:
Move A: Remove a task i from an agent (j), then assign task i to a different agent (w)
as in the shift neighborhood.
Move B: Remove a task i from an agent (j), then assign task i to a different agent (w).
Afterwards, remove a task k from agent w and assign task k to another agent
(different from w, but it can be agent j).
The ejection chain neighborhood of a solution can be obtained in a similar way to the
shift neighborhood, but a type B move is only applied if the type A move was
unsuccessful. Note also that the number of neighbors is of order O(n²m²). As described, a
move in the ejection chain neighborhood amounts to two shift moves performed one after
the other; hence, this neighborhood is significantly larger than the shift neighborhood.
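The sketch below shows how a type B move chains two shifts; unlike the procedure described above, this simple generator does not condition type B moves on the failure of the corresponding type A move, and all names are illustrative.

```python
def ejection_chain_neighbors(assign, n, m):
    """Enumerate ejection chain neighbors (sketch).

    Move A reassigns one task i from its agent j to another agent w; move B
    additionally ejects a task k currently on w to a third agent z (z may be j),
    giving O(n^2 m^2) neighbors in total."""
    for i in range(n):
        j = assign[i]
        for w in range(m):
            if w == j:
                continue
            move_a = list(assign)
            move_a[i] = w                                   # type A: shift task i to agent w
            yield move_a
            for k in (t for t in range(n) if t != i and assign[t] == w):
                for z in range(m):
                    if z == w:
                        continue
                    move_b = list(move_a)
                    move_b[k] = z                           # type B: eject task k from w to z
                    yield move_b
```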
Descent Local Search
We designed the descent local search method with a first-improvement strategy. This
means that the first cost-improving neighbor solution found becomes the new current
solution. This method stops at a locally optimal solution with respect to the chosen
neighborhood structure. This can be a main drawback, since it is unable to overcome a
locally optimal solution. The tabu search method presented in the next section has a
different acceptance strategy from the local search and other features designed to avoid
being trapped at a bad local optimum.
The main steps of the first-descent local search are:
1. Obtain an initial solution x (for example, using the GRAH). Let flag=false;
2. Neighborhood(x,flag);
3. If flag=false, stop (a local optimum was found), otherwise repeat step 2.
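The descent itself then reduces to a short loop, assuming the neighborhood pass is supplied as a single-argument callable (for example shift_first_improvement above, with its remaining arguments bound via functools.partial):

```python
def descent_local_search(assign, neighborhood):
    """First-improvement descent (sketch): repeatedly apply a neighborhood pass
    until it reports that no improving move exists (a local optimum)."""
    improved = True
    while improved:
        assign, improved = neighborhood(assign)
    return assign
```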
If the heuristic finishes with an unfeasible solution, apply a local search with the
following neighborhood: interchange tasks between agents considering only moves that
reduce the extra used capacity. After a feasible solution is obtained, we can apply the
same local search considering only feasible solutions, i.e. verifying the following
conditions:
Let (i, k) and (j, l) be pairs of tasks and agents, respectively, such that xij = xkl = 1. If
cil + ckj < cij + ckl, Σ_{s=1..n} asj xsj − aij + akj ≤ bj and Σ_{s=1..n} asl xsl − akl + ail ≤ bl,
then let xil = xkj = 1 and xij = xkl = 0.
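A small check function illustrating this interchange condition; load[j] denotes the current resource usage of agent j, and all names and the data layout are assumptions of this sketch.

```python
def profitable_feasible_swap(i, k, j, l, c, a, b, load):
    """Tasks i and k currently sit on agents j and l; exchanging them is accepted
    only if the total cost drops and both agents remain within capacity.
    load[j] = sum of a[s][j] over the tasks s currently assigned to agent j."""
    cheaper = c[i][l] + c[k][j] < c[i][j] + c[k][l]
    fits_j = load[j] - a[i][j] + a[k][j] <= b[j]
    fits_l = load[l] - a[k][l] + a[i][l] <= b[l]
    return cheaper and fits_j and fits_l
```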
We are now in a position to present the GRASP method, which is one of the
approaches proposed to solve the GAP:
1. While a stopping criterion is not satisfied:
1.1. Construct a solution (x) using the Greedy Randomized Adaptive Heuristic. In
the first iteration initialize the best solution, xb, as x.
1.2. Apply descent local search(x)
1.3. If x is feasible and f'(x) < f'(xb), let xb = x.
2. Return the best solution found, xb.
The next method for the GAP, also based on the general framework presented above is
the MAX-MIN Ant System procedure with Local Search (MMAS), and can be described
as follows:
1. Initialize the pheromone trails and parameters
2. While (termination condition not met)
2.1. Construct a solution x using the Ant System Heuristic. In the first iteration
initialize the best solution, xb, as x.
2.2. Apply descent local search(x).
2.3. Update the pheromone trails using the current solution x.
2.4. If x is feasible and f'(x) < f'(xb), let xb = x.
3. Return the best solution found, xb.
In both methods, the stopping criterion applied consists of a maximum number of
iterations.
Note that the main difference in the above methods is the way the initial solutions are
constructed, i.e. the first step in the general framework of the adaptive construction
heuristic. The GRASP follows a random approach by means of a random sampling over
solutions constructed in a greedy fashion, which can be seen as a diversification strategy.
Meanwhile, the MMAS constructs the initial solutions using adaptive and cooperative
memory, which can be considered as an intensification strategy.
Tabu Search
Tabu search was originally proposed by Glover (1986), and since then it has been
subject to extensive studies and applied to several optimization problems with great
success. Tabu search can be described as an intelligent search that uses memory to drive
the search out of locally optimal solutions and to find good results. For a survey of tabu
search see Glover and Laguna (1997).
Our reason for applying tabu search to the GAP was the excellent results obtained
by Osman (1995) and by Laguna, Kelly, González-Velarde, and Glover (1995) for the GAP and
for a multilevel generalized assignment problem, respectively. They also presented
computational results for the GAP and were always able to obtain the optimal solution
for test problems with 5 agents and 25 tasks in a very short time.
The basic ingredients of the applied tabu search are: the neighborhood, the tabu list and
its size, the aspiration criteria and the stopping criteria. We will now describe these
aspects in detail for the GAP.
The tabu search can be briefly described as follows:
1. Generate an initial solution x.
2. While the stopping criterion is not met do:
2.1. Generate the candidate list of moves/neighbors;
2.2. Choose the best neighbor not tabu or verifying the aspiration criteria, x’;
2.3. Update the current solution, x=x’.
3. Output the best solution found.
It can be easily observed that the ejection chain neighborhood has a large number of
neighbors. Some of them lead to very bad solutions. Therefore, to avoid spending a large
amount of time evaluating the neighborhood, a candidate list strategy is used to restrict
the number of solutions examined at each iteration of the tabu search. The candidate list
strategy implemented uses context information to limit the search to those moves that are
more likely to improve the current solution. The problem-specific candidate list strategy
can be described as follows: a task or a pair of tasks are considered for moving if one of
the following situations occurs. Let x be the current solution and x’ be a neighbor
solution of x:
• A task u is considered for moving from an agent p to an agent k if cup > cuk;
• A pair of tasks u, l is considered for moving if cup + clk > cuk + clp;
• A task u is considered for moving from an agent p to an agent k if Σ_{i=1..n} aip xip > bp and Σ_{i=1..n} aik x'ik ≤ bk, where x'ik = xik for all i ≠ u, x'up = 0 and x'uk = 1.
• A pair of tasks u, l is considered for moving if Σ_{i=1..n} aip xip > bp, Σ_{i=1..n} aik x'ik ≤ bk and Σ_{i=1..n} aiq x'iq ≤ bq, where x' is obtained from x by moving task u from agent p to agent k and task l from agent k to agent q, i.e. x'up = 0, x'uk = 1, x'lk = 0, x'lq = 1, and x'ij = xij otherwise.
A tabu attribute is related to the move of a task from one agent to another agent, i.e.
suppose a task i is assigned to an agent j in the current solution and this task is
reassigned to agent k, then for a certain number of iterations it is forbidden to assign task
i to agent j. The tabu list was implemented as an n × m matrix, such that the entry (i,j)
n × , such that the entry (i,j)
contains the iteration number where the task i was removed from agent j, therefore in the
next “tabu tenure” iterations the move “assign task i to agent j” is tabu.
An aspiration criterion is considered which overrules the tabu status of a move, if it leads
to a new best solution. Finally, the tabu search stops after a maximum number of
iterations.
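Putting the tabu matrix, the candidate list and the aspiration criterion together, a skeleton of the tabu search might look as follows; the neighbors and evaluate callables are placeholders for the restricted ejection chain neighborhood and the penalized cost f', and the code is a sketch under those assumptions rather than the authors’ implementation.

```python
def tabu_search(assign, evaluate, neighbors, n, m, tenure, max_iters):
    """Tabu search skeleton for the GAP (sketch).

    neighbors(x) yields candidate solutions (e.g. restricted ejection chain
    moves filtered by the candidate list strategy); tabu[i][j] stores the
    iteration at which task i last left agent j, so reassigning i to j is tabu
    for `tenure` iterations unless it improves on the best solution found
    (aspiration criterion)."""
    tabu = [[-tenure - 1] * m for _ in range(n)]
    best, best_val = list(assign), evaluate(assign)
    current = list(assign)
    for it in range(max_iters):
        chosen, chosen_val = None, float("inf")
        for cand in neighbors(current):
            val = evaluate(cand)
            moved = [(i, cand[i]) for i in range(n) if cand[i] != current[i]]
            is_tabu = any(it - tabu[i][j] <= tenure for i, j in moved)
            if (not is_tabu or val < best_val) and val < chosen_val:
                chosen, chosen_val = cand, val
        if chosen is None:
            break
        for i in range(n):
            if chosen[i] != current[i]:
                tabu[i][current[i]] = it        # record: task i just left this agent
        current = chosen
        if chosen_val < best_val:
            best, best_val = list(chosen), chosen_val
    return best
```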
As we have done for the descent local search, we now have two more methods for the
GAP, GRASP/TABU and MMAS/TABU, which can be described as before but with the
tabu search applied instead of the simple descent local search method.
5 Computational experiment
In this section, we will present the computational experiments and the results obtained.
We have followed the guidelines proposed by Barr, Golden, Kelly, Resende, and Stewart
(1995). The computational experiment was designed with two main objectives:
• To gain understanding of the behavior of the different proposed methods, based on
the general framework of the adaptive construction heuristics.
• To compare the two methods proposed for the first step of the general framework:
greedy randomized adaptive heuristic versus the ant system heuristic.
We would like to stress that these objectives are the most important ones, since with the
computational experiment we would like to contribute to the understanding of the
algorithms that follow the framework of the adaptive search heuristics. For example,
we would like to analyze whether taking information from previous solutions into account
in the construction phase has an impact on the results, i.e. compare the GRAH with the
ASH methods. Other questions that we would like to study are: the impact of
considering or not a complete neighborhood in the local search phase; the benefits of
using the tabu search method versus a simple local search; and the effect of allowing unfeasible
solutions. We believe that by analyzing these questions we contribute to advancing
knowledge in the field of heuristics in general, and adaptive search heuristics in
particular, by giving guidelines for the development of algorithms for real-life applications
based on the GAP model and other combinatorial optimization problems.
A secondary objective is to compare the best methods described in this work with other
techniques and methodologies proposed to solve the GAP, but the emphasis of the paper is
not to run a competition between algorithms, since, as mentioned by Hooker (1995),
“competitive testing tells us which algorithms are faster but not why”. As mentioned
before, here we try to ask which components make the algorithms faster and better
(thereby addressing the “why” question that Hooker raises).
All described methods were coded in Fortran, and were tested on a set of problems
ranging from 5 agents/15 jobs to 10 agents/60 jobs. The test problems are available from
the OR library (http://www.ms.ic.ac.uk/info.html) and have also been used by other
authors in their computational experiments, Osman (1995), Cattrysse, Salomon and Van
Wassenhove (1994), Chu and Beasley (1997). The set of test problems can be divided into
two groups: easy (gap1 to gap6) and difficult (gap7 to gap12). This set of problems is
in the maximization form of the GAP, so we have converted them into minimization form.
All numerical tests were carried out on a PC with a Pentium II processor at 166 MHz and 16 MB of RAM.
Recently, a set of problems with a larger number of tasks and agents has been published.
However, in our preliminary testing, the answers to the questions posed above did not
change when using this larger set of test problems, and the testing would be more
limited because of the longer running times. We believe the use of this larger set would
be more adequate for a competition-type paper.
The performance measures considered are:
• Solution quality measured as the percentage deviation from the optimal solution.
• Computation time: total running time and the time to find the best solution.
The factors that can influence the behavior of a method and their results are:
• Problem specific: number of agents (m); number of tasks (n); resource capacity of
the agents (bj); cost of the overloaded capacity (α).
• First Step: greedy randomized adaptive heuristic; ant system heuristic.
• Second Step: neighborhood, search strategy (descent local search and tabu search).
• Stopping criteria: number of total iterations (NTI) and the number of iterations of
the tabu search (NITB).
• Other parameters: size of the RCL, size of the tabu list (STL), ant system parameters
( min
τ , max
τ , ρ and 0
p ).
If we want to consider all the above factors, the experimentation would be quite
extensive. Thus, to minimize the computational effort, some of the above factors are
chosen a priori based on previous experiments for the GAP or on preliminary
computational results. The following parameter values for the MAX-MIN Ant System were
found to give good performance in preliminary runs: τmin = 0.1 × min_{i,j} τij,
τmax = n × max_{i,j} τij, ρ = 0.75 and p0 = ((n − m) / n) × 0.8. We have followed the guidelines of
Stützle and Hoos (1999) to obtain the values for these parameters. These values are quite
robust, and small changes did not affect the final results.
Comparison between different approaches
The main issue for these initial tests is to understand the behavior of the different
methods based on the same general framework and on different approaches. The
adaptive search heuristics considered are the following ones:
MMAS : ant system heuristic and descent local search with ejection chain
neighborhood.
GRASP: greedy randomized adaptive heuristic and descent local search with ejection
chain neighborhood. This version is a GRASP method.
ASH+TS: ant system heuristic and tabu search with restricted ejection chain
neighborhood.
GRAH+TS: greedy randomized adaptive heuristic and tabu search with restricted
ejection chain neighborhood.
ASH+LS+TS: ant system heuristic, descent local search with shift neighborhood, and
tabu search with restricted ejection chain neighborhood.
GRAH+LS+TS: greedy randomized adaptive heuristic, descent local search with shift
neighborhood, and tabu search with restricted ejection chain neighborhood.
ASH+LS+CTS: ant system heuristic, descent local search with shift neighborhood, and
tabu search with ejection chains neighborhood (search in the complete neighborhood,
best-improvement).
In the last three methods, before applying the tabu search, we apply a simple
descent local search method with shift neighborhood. Most of the solutions obtained in
the first step are unfeasible, and the descent local search with shift neighborhood usually
finds a feasible one in a short time. Therefore, if a local search method is applied before
the tabu search, this last one will start from a better solution. The comparison between
the TS alone and the LS+TS will permit us to evaluate the performance of the TS with
respect to the initial solution of the tabu method. With the last method, ASH+LS+CTS,
we will try to analyze the effect of using or not a restricted candidate list.
In this experiment, the following factors are prefixed: NTI =30, NITB=200, α=50,
STL=10. These values were set in preliminary tests, mainly to control the
running time. Also, since all the above methods performed very well on the easy test
problems, we will present the results for the subset of large size problems, gap7 (8
agents, 40 jobs) to gap12 (10 agents, 60 jobs). For each test problem, we have performed
5 runs of each of the methods.
In Table 1 we present the average percentage deviation from optimal of 5 runs for each
of the heuristics proposed. First of all, we observe that the best results were obtained by
the ASH+TS, GRAH+TS, ASH+LS+TS and GRAH+LS+TS, i.e. when the tabu search
was used in the second step of the general framework. Also, the combination of the LS
and TS improves the results, since the tabu search starts with a better solution and
rapidly finds a good locally optimal solution. MMAS and GRASP obtained the worst results
and often got stuck at a bad locally optimal solution. Therefore, using a tabu
approach enabled the search for good solutions to continue. When the tabu search
considered the complete neighborhood, ASH+LS+CTS, the quality of the solution did
not improve. So, the use of restricted candidate lists plays an important role in the
search, and helps to find good solutions in significantly less time, as can be observed from
the computation times given in Tables 3 and 4. It can also be seen that the proposed
heuristic performs very well, finding the optimal solution in many instances. For those in
which the heuristics failed to reach the optimal one, the solutions obtained are very close
to optimality. In Table 2 we present the average solution quality for each set of the test
problems. It will be seen that the ASH+LS+TS performs better than the remaining
heuristics, obtaining the optimal solution in all runs for all test problems in 4 of the 6 groups.
In Tables 3 and 4 we present the average total CPU time and the average time to find the
best solution for the 6 test problems. For all heuristics, the average and total CPU time
increase with the ratio n/m, and also with the number of tasks. For the same number of
global iterations of the general framework, the ant system heuristic (MMAS, ASH+TS,
ASH+LS+TS) always takes less time than the greedy randomized adaptive heuristic
(GRASP, GRAH+TS, GRAH+LS+TS). The computation time to find the best solution is
significantly lower than the total running time, and again the ant system heuristic finds
the best solution faster. However, there is a small difference between the ASH+TS,
ASH+LS+TS and GRAH+LS+TS with respect to the running times to find the best
solution. The explanation for this behavior is that the tabu search with the ejection chain
neighborhood efficiently finds good solutions.
To better understand the behavior of the various heuristics we present two figures where
we compare the tradeoff between solution quality and computational effort. For
simplicity we present the average results for the test problem gap8, Figure 1, and gap10,
Figure 2. It can be easily observed that the heuristics that obtain better results in terms of
solution quality and computational time are the ASH+LS+TS, ASH+TS and
GRAH+LS+TS, in approximately this order, since these dominate the remaining
ones. If we had to choose only one, our choice would be the ASH+LS+TS because it
obtains the best solutions within a reasonable computational time.
Comparison between the adaptive approaches
A second issue that we would like to answer is related to the different approaches
proposed for the first step. The greedy randomized adaptive heuristic is based on the use
of randomization to obtain initial solutions in a greedy fashion and so diversify the
search for a good solution. The other approach, based on the ant system, uses
information on the good solutions visited in previous iterations to construct a solution,
which also follows a greedy approach. We wanted to see if there is any difference
between these two approaches for the GAP. Therefore, all the factors were kept constant,
except for the two different heuristics proposed for the first step. We present the average
results for the 6 test problems when the GRAH and the ASH were used in the first step
of the general framework, and combined with the local search with ejection chain
neighborhood, Figure 3, or with local search and tabu search, Figure 4. The results are
presented by showing the tradeoff between solution quality and computational time. One
can see that when the ant system heuristic is used in the first step, the method obtains
better solutions in less time for most of the test problems.
The explanation of the difference between the ant system and the greedy randomized
adaptive heuristics is the quality of the solution obtained by these greedy heuristics. We
have observed that the solutions obtained by GRAH are very different and do not follow
a pattern. However, for the ASH the solutions obtained in the first iterations are worse or
of the same value as those obtained by the GRAH. But, as the search continues, the ant
system heuristic is able to obtain good solutions, which means less running time by the
local search method in the second step of the general framework. The behavior is
explained by the way the ASH is designed, since good solutions seen in previous
iterations are taken into account when defining the probability function for the greedy
heuristic.
To exemplify the behavior of the greedy heuristics, we present in Figure 5 the value of
the penalty function for the initial solution obtained by the GRAH and the ASH for the
instance gap9-2, using the LS+TS as the second step.
Computational Results
Finally, in this last section, we show the performance of our best methods for all the test
problems in Table 5 and compare them with other methods proposed to solve the GAP,
the metaheuristics by Osman (1995), TS6 and TS1, and Chu and Beasley (1997), Gaa
and Gab. All the results for these methods were taken from this last work. Chu and
Beasley (1997) run their experiment on a Silicon Graphics Indigo (R400, 100 MHz). The
fair comparison between methods is quite difficult since it depends on coding skill,
machine type and speed, parameter setting, etc. The experiment was design to evaluate
the contribution to good results of components of the general framework, and not to have
a comparison between algorithms. Table 5 is presented here only to give an idea of the
performance of the methods proposed with respect to which has been done in the area.
We can observe that on average the ASH+LS+TS performed better than other
approaches for these instances. This method obtains the optimal solution in all runs for
all test problems in gap12, which no other previously proposed method was able to
match. The average running time of the genetic algorithm approach for these problems
was 300 CPU seconds, Chu and Beasley (1997), and for the ASH+LS+TS was less than
150 seconds. But our main objective here is not to declare a winning method but to
understand their differences in solving different test problems. From the results obtained,
the proposed adaptive search heuristics can obtain results for the GAP that are equal to or
better than those of other methods in the literature, in shorter running times.
6 Conclusions
The main contribution of this work is the application of adaptive search heuristics to the
generalized assignment problem, based on GRASP and Ant Colony Optimization. The
general framework also has some innovative aspects, such as the combination of the MAX-
MIN Ant System (MMAS) and GRASP with tabu search techniques, and the use of
ejection chain neighborhoods.
Our computational testing showed that the hybrid approach, based on ideas from MMAS
and GRASP, combined with tabu search leads to good results within reasonable times,
and outperforms MMAS or GRASP alone. From the results, we can conclude that the
success of ASH+LS+TS relies on the combination of the tabu search and the ant system
heuristics. The fact that the ASH takes into account information from previous iterations of
the general algorithm is one of the main reasons for the success of the ASH+LS+TS.
Also, the ejection chain neighborhood and the restricted candidate list strategy play an
important role in driving the search to good solutions. We can also conclude that the ant
system heuristic presented outperforms the greedy randomized adaptive heuristic both in
terms of solution quality and total running time. The results compare favorably with
existing methods, both in terms of time taken and quality of the solution.
Further developments of this work are related to the application of the adaptive search
methods to extensions of the GAP, namely a Resource Assignment Problem and a Pure Integer
Capacitated Plant Location Problem. More work is also being done on solving more difficult
problems using a more sophisticated tabu search with diversification strategies and an ant
system with more ants. Moreover, as future research, we plan to apply the adaptive
search heuristics based on the general framework to develop solution methods for other
combinatorial optimization problems.
Acknowledgments- The research was funded by Ministerio de Ciencia y Tecnología,
Spain (BEC2000-1027). The authors are grateful to Thomas Stützle and the anonymous
referees whose comments have improved this paper. Also, the authors thank the
Fundación BBVA-CRES for the prize given to this work.
References
[1] Amini, M.M, and Racer, M. (1994). A rigorous comparison of alternative
solution methods for the generalized assignment problem, Mgmt Sci 40:
868-890.
[2] Barr, R.S., Golden, B.L., Kelly, J.P., Resende, M.G.C. and Stewart, Jr.
W,R. (1995). Designing and Reporting on Computational Experiments with
Heuristics Methods, J Heuristics, 1: 9-32.
[3] Cattrysse, D.G. and Van Wassenhove, L.N. (1992). A survey of algorithms
for the generalized assignment problem, Eur J of Opl Res 60: 260-272.
[4] Cattrysse, D.G., Salomon, M. and Van Wassenhove, L.N. (1994). A set
partitioning heuristic for the generalized assignment problem, Eur J of Opl
Res 72: 167-174.
[5] Chu, P.C. and Beasley, J.E. (1997). A genetic algorithm for the generalised
assignment problem, Comp Opns Res 24: 17-23.
[6] Colorni, A., Dorigo, M. and Maniezzo, V. (1991a). Distributed
Optimization by Ant Colonies, Proceeding of ECAL91 - European
Conference on Artificial Life: Elsevier Publishing, Paris, France, 134-142.
[7] Colorni, A., Dorigo, M. and Maniezzo, V. (1991b). The Ant System:
Optimization by a Colony of Cooperating Agents, IEEE Transactions on
Systems, Man, and Cybernetics -Part B, 26, 1, 29-41.
[8] Dorigo, M. and Di Caro, G. (1999). The ant colony optimization meta-
heuristic. In D. Corne, M. Dorigo, and F. Glover (eds), New Ideas in
Optimization, McGraw-Hill.
[9] Dorigo, M., Maniezzo, V. and Colorni, A. (1996), The ant system:
Optimization by a colony of cooperating agents, IEEE Transactions on
Systems, Man, and Cybernetics – Part B, 26(1): 29-42.
[10] Feo, T.A. and Resende, M.G.C. (1995). Greedy randomized adaptive search
procedures, Journal of Global Optimization 6: 109-133.
[11] Fisher, M., Jaikumar, R. and Van Wassenhove, L. (1986). A multiplier
adjustment method for the generalized assignment problem, Mgmt Sci 32:
1095-1103.
[12] Glover, F. (1986). Future paths for integer programming and links to
artificial intelligence, Comp Opns Res 5: 533-549.
[13] Glover, F. (1992). Ejection Chains, Reference Structures and Alternating
Path Methods for Traveling Salesman Problem, University of Colorado.
Shortened version published in Discrete Applied Mathematics, 65: 223-253
(1996).
[14] Glover, F. and Laguna, M. (1997). Tabu Search, Kluwer Academic
Publishers: Norwell, Massachusetts.
[15] Glover, F. (1998). A Template for Scatter Search and Path Relinking, in
Artificial Evolution, Lecture Notes in Computer Science 1363, J.-K. Hao, E.
Lutton, E. Ronald, M. Schoenauer and D. Snyers (Eds.), Springer-Verlag,
13-54.
[16] Glover, F. (2000). Multi-start and strategic oscillation methods – Principles
to exploit adaptive memory, in Computing Tools for Modeling,
Optimization and Simulation, edited by Manuel Laguna and José Luis
González Velarde, Kluwer Academic Publishers, 1-38.
[17] Goss, S., Aron, S., Deneubourg, J.L., and Pasteels, J.M. (1989). Self-
organized shortcuts in the Argentine Ant, Naturwissenschaften, 79: 579-
581.
[18] Guignard, M. and Rosenwein, M. (1989). An improved dual-based
algorithm to the knapsack problem, Eur J Opnl Res 27: 313-323.
[19] Hooker, J.N. (1995). Testing heuristics. We have it all wrong. J Heuristics,
1: 33-42.
[20] Karabakal, N., Bean, J.C. and Lohmann, J.R. (1992). A steepest descent
multiplier adjustment method for the generalized assignment problem.
Report 92-11, University of Michigan, Ann Arbor, MI.
[21] Laguna, M., Kelly, J.P., González-Velarde, J.L. and Glover, F. (1995).
Tabu search for the multilevel generalized assignment problem, Eur J Oper
Res 82: 176-189.
[22] Lin, S. and Kernighan, B.W. (1973), An efficient heuristic algorithm for the
traveling salesman problem, Operations Research 21: 498-516.
[23] Lourenço, H.R., Martin, O. and Stützle, T. (2001). Iterated Local Search.
To appear in State-of-the-Art Handbook of Metaheuristics, F. Glover and
G. Kochenberger, eds. Kluwer Academic Publishers.
[24] Martello, S. and Toth, P. (1990). Knapsack Problems: Algorithms and
Computer Implementations, Wiley: New York.
[25] Moscato, P. (1999). Memetic Algorithms: a short introduction, in New
Ideas in Optimization, edited by D. Corne, F. Glover, and M. Dorigo,
McGraw-Hill, 219-234.
[26] Osman, I.H. (1995). Heuristics for the generalized assignment problem:
simulated annealing and tabu search approaches, OR Spektrum 17: 211-
225.
[27] Resende, M.G.C. and Ribeiro, C.C. (2001). Greedy Randomized Adaptive
Search Procedures. To appear in State-of-the-Art Handbook of
Metaheuristics, F. Glover and G. Kochenberger, eds. Kluwer Academic
Publishers.
[28] Ross, G.T. and Soland, P.M. (1975). A branch and bound based algorithm
for the generalized assignment problem, Math Prog 8: 91-103.
[29] Sahni, S. and Gonzalez, T. (1976). P-Complete Approximation Problems. J
ACM, 23: 555-565.
[30] Savelsbergh, M. (1997). A Branch-And-Cut Algorithm for the Generalized
Assignment Problem, Opns Res 45: 6, 831- 841.
[31] Stützle, T. (1997). MAX-MIN Ant System for the Quadratic Assignment
Problem, Technical Report AIDA-97-4, FG Intellektik,TU Darmstadt,
Germany.
[32] Stützle, T. (1998a). Local Search Algorithms for Combinatorial Problems:
Analysis, Improvements, and New Applications. PhD thesis, Department
of Computer Science, Darmstadt University of Technology, Germany.
[33] Stützle, T. (1998b). An ant approach for the flow shop problem, In
Proceeding of the 6th European Congress on Intelligent Techniques & Soft
Computing (EUFIT’98), 3: 1560-1564, Verlag Mainz.
[34] Stützle, T. and Hoos, H. (1999). Max-Min Ant System and Local Search for
Combinatorial Optimization, in: S. Voß, S. Martello, I.H. Osman and C.
Roucairol (eds), Meta-Heuristics: Trends in Local Search paradigms for
Optimization, Kluwer Academic Publishers, pp. 313-329.
[35] Trick, M.A. (1992). A linear relaxation heuristic for the generalized
assignment problem, Naval Res Logist 39: 137-152.
[36] Yagiura, M., Yamaguchi, T. and Ibaraki, T. (1998). A variable depth search
algorithm with branching search for the generalized assignment problem,
Optimization Methods and Software, vol. 10, 419-441.
[37] Yagiura, M., Yamaguchi, T. and Ibaraki, T. (1999). A variable depth search
algorithm for the generalized assignment problem, in: S. Voß, S. Martello,
I.H. Osman and C. Roucairol (eds), Meta-Heuristics: Trends in Local
Search paradigms for Optimization, Kluwer Academic Publishers, 459-471.
[38] Wilson, J.M. (1997a). A genetic algorithm for the generalised assignment
problem, J Opl Res Soc, 48: 804-809.
[39] Wilson, J.M. (1997b). A simple dual algorithm for the generalized
assignment problem, J Heuristics 2(4): 303-311.
Table 1: Average percentage deviation from optimal for 5 runs of the adaptive search heuristics.
prob. m*n optimal MMAS GRASP ASH+TS GRAH+TS ASH+LS+TS GRAH+LS+TS ASH+LS+CTS
gap7-1 8*40 942 0.08% 0.04% 0.00% 0.00% 0.00% 0.00% 0.00%
gap7-2 8*40 949 0.00% 0.02% 0.00% 0.00% 0.00% 0.00% 0.00%
gap7-3 8*40 968 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00%
gap7-4 8*40 945 0.00% 0.17% 0.00% 0.00% 0.00% 0.00% 0.00%
gap7-5 8*40 951 0.00% 0.00% 0.00% 0.00% 0.00% 0.00% 0.00%
gap8-1 8*48 1133 0.32% 0.34% 0.14% 0.09% 0.05% 0.25% 0.28%
gap8-2 8*48 1134 0.07% 0.11% 0.00% 0.02% 0.00% 0.02% 0.00%
gap8-3 8*48 1141 0.210% 0.280% 0.105% 0.140% 0.070% 0.123% 0.175%
gap8-4 8*48 1117 0.143% 0.269% 0.054% 0.125% 0.000% 0.036% 0.018%
gap8-5 8*48 1127 0.248% 0.266% 0.106% 0.089% 0.089% 0.106% 0.142%
gap9-1 10*30 709 0.000% 0.028% 0.000% 0.000% 0.000% 0.000% 0.000%
gap9-2 10*30 717 0.223% 0.223% 0.000% 0.056% 0.000% 0.056% 0.056%
gap9-3 10*30 712 0.000% 0.000% 0.000% 0.000% 0.000% 0.000% 0.000%
gap9-4 10*30 723 0.111% 0.028% 0.000% 0.000% 0.000% 0.000% 0.000%
gap9-5 10*30 706 0.057% 0.000% 0.000% 0.000% 0.000% 0.000% 0.000%
gap10-1 10*40 958 0.000% 0.000% 0.000% 0.000% 0.000% 0.000% 0.000%
gap10-2 10*40 963 0.042% 0.104% 0.000% 0.000% 0.000% 0.021% 0.000%
gap10-3 10*40 960 0.229% 0.250% 0.125% 0.125% 0.063% 0.104% 0.104%
gap10-4 10*40 947 0.084% 0.169% 0.000% 0.000% 0.000% 0.000% 0.021%
gap10-5 10*40 947 0.211% 0.190% 0.063% 0.042% 0.000% 0.063% 0.042%
gap11-1 10*50 1139 0.018% 0.070% 0.000% 0.000% 0.000% 0.000% 0.000%
gap11-2 10*50 1178 0.000% 0.034% 0.000% 0.000% 0.000% 0.000% 0.017%
gap11-3 10*50 1195 0.000% 0.000% 0.000% 0.000% 0.000% 0.000% 0.000%
gap11-4 10*50 1171 0.051% 0.017% 0.000% 0.000% 0.000% 0.000% 0.000%
gap11-5 10*50 1171 0.034% 0.102% 0.000% 0.000% 0.000% 0.000% 0.000%
gap12-1 10*60 1451 0.055% 0.096% 0.000% 0.014% 0.000% 0.014% 0.014%
gap12-2 10*60 1449 0.055% 0.055% 0.000% 0.000% 0.000% 0.000% 0.000%
gap12-3 10*60 1433 0.000% 0.028% 0.000% 0.000% 0.000% 0.000% 0.000%
gap12-4 10*60 1447 0.041% 0.069% 0.000% 0.000% 0.000% 0.000% 0.000%
gap12-5 10*60 1446 0.028% 0.055% 0.000% 0.000% 0.000% 0.000% 0.028%
Average 0.077% 0.100% 0.020% 0.023% 0.009% 0.026% 0.030%
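The percentage deviations reported in Tables 1, 2 and 5 appear to be the relative gaps between the heuristic solution values and the known optimal values, averaged over the 5 runs per instance; this is our reading of the table captions, not the authors' code. A minimal sketch of that calculation in Python, with hypothetical inputs run_values and optimal, is:

    def avg_percentage_deviation(run_values, optimal):
        # Relative gap to the optimum for each run, in percent,
        # averaged over the runs (5 runs per instance in the tables).
        gaps = [100.0 * (value - optimal) / optimal for value in run_values]
        return sum(gaps) / len(gaps)

    # Example: five runs against a known optimum of 1133 (as for gap8-1).
    print(round(avg_percentage_deviation([1133, 1134, 1133, 1136, 1133], 1133), 3))

For instance, run values of 1133, 1134, 1133, 1136 and 1133 against an optimum of 1133 give an average deviation of roughly 0.071%.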
Table 2: Average percentage deviation from optimal of the adaptive search heuristics for the 6 groups of problems.
prob. m*n MMAS GRASP ASH+TS GRAH+TS ASH+LS+TS GRAH+LS+TS ASH+LS+CTS
gap7 8*40 0.017% 0.047% 0.000% 0.000% 0.000% 0.000% 0.000%
gap8 8*48 0.198% 0.251% 0.081% 0.092% 0.042% 0.106% 0.124%
gap9 10*30 0.078% 0.056% 0.000% 0.011% 0.000% 0.011% 0.011%
gap10 10*40 0.113% 0.143% 0.038% 0.033% 0.013% 0.038% 0.034%
gap11 10*50 0.021% 0.045% 0.000% 0.000% 0.000% 0.000% 0.003%
gap12 10*60 0.036% 0.061% 0.000% 0.003% 0.000% 0.003% 0.008%
Average 0.077% 0.100% 0.020% 0.023% 0.009% 0.026% 0.030%
Table 3: Average total running time in seconds of the adaptive search heuristics.
prob. m*n MMAS GRASP ASH+TS GRAH+TS ASH+LS+TS GRAH+LS+TS ASH+LS+CTS
gap7 8*40 94.0 120.7 111.1 139.5 113.7 125.3 342.0
gap8 8*48 172.0 232.3 172.8 214.5 178.9 205.9 576.1
gap9 10*30 54.6 74.9 72.8 81.3 74.0 79.2 209.6
gap10 10*40 141.4 172.2 146.8 160.2 137.9 150.3 468.3
gap11 10*50 256.6 334.6 140.8 192.9 144.1 163.5 878.9
gap12 10*60 427.0 530.4 238.5 349.9 242.9 320.4 1485.3
Average 190.9 244.2 147.1 189.7 148.6 174.1 660.0
Table 4: Average running time (in seconds) to the best solution found by the adaptive search heuristics.
prob. m*n MMAS GRASP ASH+TS GRAH+TS ASH+LS+TS GRAH+LS+TS ASH+LS+CTS
gap7 8*40 35.8 42.8 19.7 56.6 22.5 21.4 92.4
gap8 8*48 80.3 88.8 68.1 96.5 85.3 78.3 225.9
gap9 10*30 18.8 22.2 19.0 10.7 15.3 13.4 55.8
gap10 10*40 45.5 85.4 35.7 51.2 37.5 34.5 159.7
gap11 10*50 84.1 98.3 33.8 64.3 28.4 33.9 278.2
gap12 10*60 137.5 211.9 59.3 91.7 59.7 95.0 379.7
Average 67.0 91.5 39.3 61.8 41.5 46.1 198.6
Figure 1: Average percentage deviation from optimal versus time (in seconds) for the gap8 instances, for the MMAS, GRASP, ASH+TS, GRAH+TS, ASH+LS+TS, GRAH+LS+TS and ASH+LS+CTS heuristics.
Figure 2: Average percentage deviation from optimal versus time (in seconds) for the gap10 instances, for the MMAS, GRASP, ASH+TS, GRAH+TS, ASH+LS+TS, GRAH+LS+TS and ASH+LS+CTS heuristics.
Figure 3: Average percentage deviation from optimal versus time (in seconds) of the MMAS (A) and GRASP (G) heuristics for the gap7 to gap12 instances.
Figure 4: Average percentage deviation from optimal versus time (in seconds) of the ASH+LS+TS (A) and GRAH+LS+TS (G) heuristics for the gap7 to gap12 instances.
Figure 5: Initial solution value obtained by GRAH and ASH for the instance gap9-2, using LS+TS as the second step (initial solution value versus iteration number; series labeled Ant and Grasp).
Table 5: Average percentage deviation from optimal of more recent heuristics.
prob. m*n TS6 TS1 GAa GAb ASH+LS+TS GRAH+LS+TS
gap7 8*40 0.02% 0.00% 0.08% 0.00% 0.00% 0.00%
gap8 8*48 0.14% 0.09% 0.33% 0.05% 0.04% 0.11%
gap9 10*30 0.06% 0.06% 0.17% 0.00% 0.00% 0.01%
gap10 10*40 0.15% 0.08% 0.27% 0.40% 0.01% 0.04%
gap11 10*50 0.02% 0.02% 0.20% 0.00% 0.00% 0.00%
gap12 10*60 0.07% 0.04% 0.17% 0.01% 0.00% 0.00%
Average 0.08% 0.05% 0.20% 0.08% 0.01% 0.03%
TS6: Long-term Tabu Search, FA (first-admissible) selection, Osman (1995)
TS1: Long-term Tabu Search, BA (best-admissible) selection, Osman (1995)
GAa: Genetic Algorithm without heuristic operator, Chu and Beasley (1997)
GAb: Genetic Algorithm with heuristic operator, Chu and Beasley (1997)
  • 1. Mathware & Soft Computing 7 (2000), 1-15 1 Adaptive Search Heuristics for The Generalized Assignment Problem Helena Ramalhinho Lourenço1 , Daniel Serra 2 1 DEE, Universitat Pompeu Fabra, Barcelona, Spain 2 DEE, Universitat Pompeu Fabra, Barcelona, Spain {helena.ramalhinho, daniel.serra}@econ.upf.es Abstract The Generalized Assignment Problem consists of assigning a set of tasks to a set of agents with minimum cost. Each agent has a limited amount of a single resource and each task must be assigned to one and only one agent, requiring a certain amount of the agent’s resource. We present the application of a MAX-MIN Ant System (MMAS) and a greedy randomized adaptive search procedure (GRASP) to the generalized assignment problem based on hybrid approaches. The MMAS heuristic can be seen as an adaptive sampling algorithm that takes into consideration the experience gathered in earlier iterations of the algorithm. Moreover, the latter heuristic is combined with local search and tabu search heuristics to improve the search. Several neighborhoods are studied, including one based on ejection chains that produces good moves without increasing the computational effort. We present computational results of a comparative analysis of the two adaptive heuristics, followed by concluding remarks and ideas on future research in generalized assignment related problems. Keywords: Generalized assignment, local search, GRASP, tabu search, ant colony optimization. . 1 Introduction The Generalized Assignment Problem (GAP) considers the minimum cost assignment of n jobs to m agents such that each job is assigned to one and only one agent subject to capacity constraints on the agents. The GAP has applications in areas like computer and communication networks, location problems, vehicle routing and machine scheduling. Our initial interest in this problem arose from two real applications in the area of health
  • 2. H. Lourenço & D. Serra 2 services, resource assignment problems and pure integer capacitated plant location problems. The aim of this paper is to present hybrid algorithms consisting of adaptive construction heuristics and subsequent application of local search to solve the GAP, and the respective computational results. The basic elements are extract from a specific Ant Colony Optimization algorithm, MAX-MIN Ant System, Stützle (1997,1998a,1998b), Stützle and Hoos (1999), and GRASP algorithm, Feo and Resende (1995), Resende and Ribeiro (2001). These heuristics can be embedded in a general framework with three steps. In the first step, a solution is constructed by a randomized construction heuristic; in the second step, a local search is applied to improve this initial solution, a descent local search and tabu search are proposed; the last step consists in updating a set of parameters which guide the construction process. The three steps are repeated until a stopping criterion is verified. The choices made in each step lead to different heuristic methods. The paper is organized as follows. First, we present the GAP and a review of the methods proposed to solve it. Second, we describe the general framework of the adaptive construction heuristics. Third, we focus on local search methods, describing the descent local search and the tabu search. Fourth, the computational experiments are described with the objective of evaluating the proposed framework and its components. Finally, we conclude with general remarks on this work and directions of future research. 2 The generalized assignment problem The GAP consists in assigning at a minimum total cost a set of tasks to a set of agents, with limited resource capacity. Each task must be assigned to one and only one agent requiring a certain amount of the agent’s resource. For an extended review of this well- known problem we refer to Martello and Toth (1990), Cattrysse and Van Wassenhove (1992) and Chu and Beasley (1997). Fisher, Jaikumar and Van Wassenhove (1986) proved that the problem is NP-complete. Moreover, the problem of deciding if there exits a feasible solution is NP-hard, Sahni and Gonzalez (1976). Osman (1995) presented a survey of many real-life applications. The Generalized Assignment Problem can be formulated as an integer program, as presented below. We use the following notation: I is the set of tasks (i=1,..., n) ; J is the set of agents (j=1,..., m) ; bj = resource capacity of agent j ; aij = resource needed if task i is assigned to agent j; cij = cost of task i if assigned to agent j . The variables are: xij = 1, if task i is assigned to agent j; 0, otherwise. Assume that j ij b a ≤ and j n i ij b a > ∑ =1 .
  • 3. Title of the paper 3 { } ,..., 1 ; ,..., 1 , 1 , 0 ) 4 ( ,..., 1 , 1 ) 3 ( ,..., 1 , ) 2 ( . . ) ( min ) 1 ( 1 1 1 1 m j n i x n i x m j b x a t s x c x f ij m j ij j n i ij ij m j n i ij ij = = ∈ = = = ≤ = ∑ ∑ ∑∑ = = = = Constraints (2) are the resource capacity of the agents, constraints (3) guarantee that each task is assigned to one agent, and constraints (4) are the integrality constraints. Several exact algorithms for the GAP have been proposed by Ross and Soland (1975), Martello and Toth (1990), Fisher, Jaikumar and Van Wassenhove (1986), Guinard and Roseenwein (1989), and Karabakal, Bean, and Lohmann (1992). Recently, Savelsbergh (1997) presented a branch-and-cut algorithm that employs both column generation and branch-and-bound to obtain the optimal solution to a set partitioning formulation of the GAP. The author mentioned that problems with 20 agents and 50 jobs can be solved from 210 to 1160 seconds. Also, several heuristics have been proposed to solve the GAP. Amini and Racer (1994) presented a variable-depth-search heuristic motivated by the work of Lin and Kernighan (1973) on the Traveling Salesman Problem. They offered a rigorous statistical analysis of the solution methods. Trick (1992) proposed an LP approximation algorithm. Cattrysse, Salomon and Van Wassenhove (1994) formulated the problem as a Set Partitioning Problem and proposed a heuristic based on column generation techniques. Osman (1995) presented a comparison of the algorithms based on tabu search and simulated annealing techniques. Wilson (1997a) presented a genetic algorithm to restore feasibility to a set of near-optimal solutions, and then to improve the best solution found by local search. Wilson also presented a dimple dual algorithm for the GAP, Wilson (1997b). Chu and Beasley (1997) also presented a genetic algorithm for the GAP that tries to improve feasibility and optimality simultaneously. Laguna, Kelly, González-Velarde, and Glover (1995) proposed a tabu search algorithm based on ejection chains to an extension of the GAP, the multilevel generalized assignment problem. A variable depth search algorithm has been recently presented by Yagiura, Yamaguchi, and Ibaraki (1999) and Yagiura, Yamaguchi, and Ibaraki (1998).
  • 4. H. Lourenço & D. Serra 4 3 Adaptive search heuristic This section presents the general framework and the main aspects of the adaptive heuristics proposed to solve the GAP. These adaptive heuristics are based on two metaheuristic approaches to solve combinatorial optimization problems: t he Greedy Randomized Adaptive Search Procedure (GRASP), Feo and Resende (1995), Resende and Ribeiro (2001), and the Max-Min Ant System (MMAS), Stützle (1997,1998a,1998b), Stützle and Hoos (1999), a specific algorithm falling into the framework of the Ant C olony Optimization metaheuristic, Dorigo and Di Caro (1999). Both techniques include a step where a local search method is applied. The local search methods are presented in the next section, since they present some innovative aspects. General Framework The adaptive heuristics proposed to solve the GAP can be described in a general framework comprising three steps, that are repeated iteratively until some stopping criteria is met. In step 1 a solution is generated using a randomized construction heuristic; in step 2, a local search is applied to the solution constructed in step 1, and finally in step 3, if there exist parameters, they are updated following a certain criteria. The objective of the above method is to put together in the same framework recently proposed metaheuristics which consist in, first, generating a starting solution for a local search using some randomized adaptive construction heuristic, then possibly updating some parameters based on the incumbent solution and continuing this process until some stopping criterion is met. Randomized adaptive construction heuristics are used because, with a simple construction heuristics, only one initial solution can be generated and randomization of the construction heuristic allows more different initial solutions. The same objective could, in principle, be obtained by restarting a local search from random initial solutions; yet, computational experience on a wide variety of combinatorial optimization problems has shown that, using random solutions do not result in very good performance. It is also outperformed by randomized construction heuristics (RCH) which take into account the contribution to the objective function value. The RCH guides their solution construction using information recently seen in good solutions, Glover (2000). Other metaheuristics like Iterated Local Search (ILS), Lourenço, Martin and Stützle (2001), Scatter Search, Glover(1998), and Memetic Algorithms (MA), Moscato (1999) fit into the above framework. They generate new initial solutions for a subsequent improvement by local search. Yet, the main differences from GRASP and ACO algorithms are as follows: in the case of the ILS, the initial solutions are generated by modifying a current solution through a kick move; In the case of Memetic algorithms and Scatter Search, the initial solutions are obtained by manipulating a population of previously seen solutions.
  • 5. Title of the paper 5 In particular, the two approaches considered in the present work, are the greedy randomized adaptive search procedure (GRASP) and a specific version of the Ant Colony Optimization Algorithms, known as MAX-MIN Ant Systems. Both of these methods will be discuss in detail in the next sections. The first step of the proposed framework is based on a greedy heuristics, which constructs a solution as follows: at each iteration, an unassigned task is chosen following a priority function; next, an agent is chosen amount the agents that can perform this task, and the task is assigned to it. This procedure is repeated until all tasks have been assigned to an agent. We propose two heuristics for this first step: A Greedy Randomized Adaptive Heuristic (GRAH) and an Ant System Heuristic (ASH). The main difference between the basic greedy heuristic and the two heuristics, GRAH and ASH, is the method of selecting the agent to whom the previously chosen task is assigned. For the basic greedy heuristic, the choice is deterministic and based on the cost function. For the GRAH, the choice is a probabilistic bias according to an adaptive priority function, which does not depend on the solutions seen in previous iterations of the general framework but only on the choices done in the previous iterations of the GRAH. For the ASH, the choice is also a probabilistic bias based on desirability values, which are adapted in a reinforcement style of learning during the run of the algorithm and reflect prior performance. For the second step, two local search algorithms are proposed, a descent local search and a tabu search approach. The third step is only applied if there are parameters that have to be updated at the end of each iteration, which is the case for the ant system heuristic. Each of the proposed metaheuristics differs in the way choices are made at each step of the above framework. We will admit unfeasible solutions with respect to capacity constraints, i.e. the total resource required by the tasks assigned to some agents may exceed their capacity. Unfeasible solutions will be penalized into the objective function. The main reason for admitting unfeasible solutions is that, for some solutions close to the optimal one, the capacity of the agents will be almost completely taken up, therefore any neighbor obtained by interchanging or reassigning tasks will be unfeasible or a “bad” neighbor. Allowing these extra solutions usually provides escape routes out of local optima. This approach is quite common in implementations of metaheuristics, and is often very effective, Glover and Laguna (1997). The penalty function is chosen as follows: ∑ ∑ ∑∑ = = = =           − + = m j n i j ij ij m j n i ij ij b x a x c x f 1 1 1 1 , 0 max ) ( ' α where 0 > α is a parameter, representing the cost of using one unit of extra capacity. If a solution is not feasible, the second term will be positive and therefore the search will look for a feasible solution.
  • 6. H. Lourenço & D. Serra 6 The parameter α can be increased during the run to penalize unfeasible solutions, and drive the search to feasible ones. Greedy Randomized Adaptive Procedure The first method proposed to solve the GAP is based on the greedy randomized adaptive search procedure (GRASP). GRASP is an iterative randomized sampling technique, with two phases. The first phase consists of a greedy randomized adaptive heuristic that constructs an initial solution. It is called adaptive because the greedy function takes into account previous decisions made during the construction of a solution when considering the next choice. The second phase consists of an improvement phase, which usually corresponds to a local search method. As can be seen, the GRASP fits into the general framework proposed in the previous section. GRASP has been successfully applied to many combinatorial optimization problems, Resende and Ribeiro (2001). For a list of several GRASP applications see the web site of Resende: http://www.research.att.com/~mgcr/. In this section, we will only describe the greedy randomized adaptive heuristic, i.e. the first phase. The local search will be explained in the next section. At each iteration of the Greedy Randomized Adaptive Heuristic (GRAH), one task is assigned to an agent. The heuristic finishes when all tasks have been assigned. The GRAH can be described as follows: 1. Let m j Sj , , 1K = ∀ ∅ = . ( j S is the set of tasks assigned to agent j) 2. Construct the restricted candidate list (RCL) of agents for each task, i L , such that { } max : c c j L ij i ≤ = , cmax is a parameter that limits the dimension of the RCL (if ij j i c c , max max ∀ = all agents will be included). 3. Consider any order of the tasks, for example n i ,..., 1 = . Let 1 = i . 4. While (not all tasks have been assigned) repeat: 4.1. Choose randomly an agent j* from i L . Each agent in the list i L has the following probability of being chosen: i L l a b a b ij L j p i il l ij j ∈ = ∑ ∈ , , which depends on the resource of agent j and the resource needed by task i. 4.2. If there is available capacity, task i is assigned to agent {} i S S j j j ∪ = * * : * . Otherwise, the task is assigned to the first agent with spare capacity. If all agents are fully occupied, the assignment is random. 4.3. Let i=i+1 and if ∑ ∈ > * * * j S i j ij b a remove j* from any list. Repeat step 4 until all tasks are assigned. (Note that the capacity constraint can be violated).
  • 7. Title of the paper 7 5. Let otherwise. , 0 ; if 1 = ∈ = ij j ij x S i x Calculate the value of the penalty function for the solution, f’(x). In this first step of the framework, we are looking for a feasible solution. Therefore, we are interested in assigning a task to an agent if this task uses a small amount of the agent’s resource, i.e. j ij b a is small. For each task, we order the agents in decreasing order of pij. The task can be assigned to any agent following the probability function pij, if there is available capacity. If not, the task is assigned to the first agent in the above order with spare capacity. If all agents are fully occupied, an agent is randomly selected with uniform probability and the task is assigned to this agent. Note that the solution obtained can be unfeasible with respect to the capacity constraints. And also, that the construction heuristics does not take into considerations previous seen solutions during the run of the general framework, only the choices done previous within the greedy heuristic. The Ant Colony Optimization The GRASP procedure described in the previous section is a multi-start local search, but, instead of considering a random initial solution, a greedy randomized heuristic is used to try to find better initial solutions than random ones. A different way of generating initial solutions is followed by the Ant Colony Optimization (ACO) paradigm, Colorni, Dorigo and Maniezzo (1991a,1991b), Dorigo, Maniezzo, and Colorni (1996), Dorigo and Di Caro (1999). ACO is a cooperative algorithm inspired by the foraging behavior of real ants. Ants lay down in some quantity of an aromatic substance, known as pheromone, on their way to food and on their way back to the nest. Ants choose to follow a pheromone trail with a probability proportional to the pheromone trail intensity. Research on real ants has shown that such a trail following behavior allows the ants to identify shortest path between a food source and their nest, Goss, Aron, Deneubourg, and Pasteels (1989). ACO is inspired by this behavior and tries to adapt it to the solution of combinatorial optimization problems. The ACO approach associates pheromone trails to features of the solutions of a combinatorial problem, and can be seen as a kind of adaptive memory of the previous solutions. Solutions are iteratively constructed in a randomized fashion biased by the pheromone trails left by the previous ants and possible available information based on the problem data. In the specific ACO algorithms we are concerned with the way the pheromone trails are updated after the construction of a solution, by ensuring that the best features will have a more intensive pheromone. Note that the ACO metaheuristic proposed in Dorigo and Di Caro (1999) allows other features like on-line pheromone update during solution construction, but for this paper we do not need these features. The first ACO algorithm was an Ant System which was applied to the TSP, Colorni, Dorigo and Maniezzo (1991a,1991b), Dorigo, Maniezzo, and Colorni (1996). Subsequently several improvements of Ant System have been proposed and currently one of the best performing ACO algorithms is MAX-MIN Ant System, Stützle
  • 8. H. Lourenço & D. Serra 8 (1997). For more information, see the web page: http://iridia.ulb.ac.be/dorigo/ACO/ACO.html. The MAX-MIN Ant system (MMAS) is an adaptive sampling algorithm that takes into consideration the experience gathered in earlier iterations of the algorithm. Moreover, combining MMAS with local search, Stützle and Hoos (1999), Stützle (1997, 1998a) were able to find very good solutions to the Traveling Salesman Problem, the Quadratic Assignment Problem and the Flow-Shop Scheduling Problem. Next, we will propose a MMAS approach with an additional Local Search for the GAP, which can be seen as an improvement over the Ant Colony Optimization and is currently the best performance variants of the ACO algorithms for a wide variety of combinatorial optimization problems, Stützle (1998a). The pheromone trails, ij τ , for the GAP represent the desirability of assigning a task i to an agent j. Initially, let ij ij c 1 = τ (for GAP with a maximization objective, we set ij ij c = τ ). The cheaper the assignment of the task i to agent j is, the more desired is the assignment. First of all, let us explain how to construct a solution using only one ant. As suggested by Stützle and Hoos (1999), the selection rule for the assignment of a task to an agent can be described as follows: with some probability p0 the best choice is made, otherwise the assignment is done according a probability function. We designated this heuristic by Ant System Heuristic ( ASH). The tasks are assigned to the agents in a greedy way, like in the Greedy Randomized Adaptive Heuristic (except for step 4.1) where the assignment is done biased by the τij . A task i is assigned to a particular agent j in the following way: 4.1.1. With probability 0 p , choose the agent j* with maximal value of τij . 4.1.2 With probability 0 1 p − , choose the agent j* according to the following probability distribution:        ∈ = ∑ ∈ otherwise 0 if i L l il ij ij L j p i τ τ . The assignment obtained by the ASH constitutes the first step in the general framework, followed by a local search that tries to improve this initial solution. Afterwards, in the third step of the general framework, the pheromone trails are updated using the current local optimal solution in the following way: ij old ij new ij τ ρτ τ ∆ + = , where 1 0 , < ≤ ρ ρ , is the persistence of the trail, i.e. ρ − 1 represents the evaporation. The amount of pheromone deposited by the current local optimal solution is:
  • 9. Title of the paper 9    × = ∆ otherwise 0, solution in the agent to assigned is task if j i Q max ij τ τ where    = feasible. is solution the if , 05 . 0 ; unfeasible is solution the if , 01 . 0 Q The values of parameter Q were set by preliminary tests following the recommendations of Stützle and Hoos (1999), and they proved to be robust. Note that, if the solution is feasible, the pheromone trail has a bigger increment, trying to give larger probability to feasible assignments. Moreover, τ τ τ min max , , ≤ ≤ ∀ ij i j , so these limits must be imposed if the updated pheromone falls outside the above interval. Also, the amount of pheromone deposit does not depend on the solution’s objective function, which is different from others ACO applications, Dorigo, Di Caro (1999), but we have found this update rule very effective. Next, we present the local search methods for the second step of the general framework proposed in chapter 3. 4 Local search methods In order to derive a local search method, it is necessary to define a neighborhood structure, that associates a set of solutions N(x) with each solution x. The neighborhood is usually obtained by specific modifications on x, called moves. The local search starts with an initial solution and searches the neighborhood for the solution with the lowest cost. Then, this neighbor solution would replace the current solution if it has a lower cost than the current solution. The search continues until a stopping criterion is verified. The algorithm returns the best solution found with respect to the cost function. Neighborhoods We present two neighborhoods for the GAP, a simple shift neighborhood where a task is reassigned to a new agent and an ejection chain neighborhood where more than one task is reassigned to new agents. The shift neighborhood is a special case of the λ-generation mechanism proposed by Osman (1995). The ejection chain neighborhood is based on the work of Laguna, Kelly, González-Velarde, and Glover (1995). Other complex neighborhoods with very good results have been proposed for this problem, we refer to Yagiura, Yamaguchi, and Ibaraki (1999) and Yagiura, Yamaguchi, and Ibaraki (1998). The move in the shift neighborhood consists in removing a task from one agent and assigning it to another agent. The size of the neighborhood is n(m-1). Note that we should start by considering removing tasks from overloaded agents. The task can be reassigned to an agent with available capacity. The shift neighborhood can be obtained by the following procedure, where flag indicates if a neighbor solution with better value
  • 10. H. Lourenço & D. Serra 10 of the penalty function was found, (flag=true) or not (flag=false) (flag is initially assign to false in the local search heuristic, see next page). Neighborhood (x, flag) 1. Arrange the agents in decreasing order by the amount of capacity over the maximum allowable. Let j=1. 2. Consider any order of the tasks assigned to agent j. Let i=1 (the first task assigned to j). 3. Remove i from the set of tasks assigned to j, {} i S S j j − = . 4. Assign i to another, not yet considered, agent starting with the last agent in the list obtained in step 1, (neighbor x’ of x). 5. Calculate the value of the penalty function for x’, f’(x’). If f’(x’)<=f’(x), let x=x’, flag=true and stop. 6. Let i = i +1 and repeat steps 3 and 4, until all tasks of j have been considered. 7. Let j=j+1, and repeat from step 2, until all agents have been considered. In preliminary tests, this neighborhood was able to obtain feasible solutions when starting from an unfeasible one. However, the objective function values of these feasible solutions were not very good. This led us to define a more complex neighborhood, which we present next. The ejection chain neighborhood is a variable depth procedure, which moves more than one task from the current agent to a new agent. Ejection chains were introduced by Glover (1992), and have been applied to several problems, including an extension of the GAP, Laguna, Kelly, González-Velarde, and Glover (1995). This second neighborhood structure is more complex than the shift neighborhood, but leads to a more powerful and efficient search without increasing significantly the computation time. The ejection chain neighborhood can be obtained by the application of the following two types of moves: Move A: Remove a task i from an agent (j), then assign task i to a different agent (w) as in the shift neighborhood. Move B: Remove a task i from an agent (j), then assign task i to a different agent (w). After, remove a task k from agent w and assign task k to another agent (different from w, but it can agent j). The ejection chain neighborhood of a solution can be obtained in a similar way as the shift neighborhood, but a type B move is only applied if the type A move was unsuccessful. Note also that the number of neighbors is of order O(n2 m2 ). As described a move in the ejection chain neighborhood is like doing two shift moves at once, one move after another one, hence, it is significantly larger than the shift neighborhood. Descent Local Search We designed the descent local search method with a first improvement strategy. This means that, the first cost improvement neighbor solution found becomes the new current
Descent Local Search

We designed the descent local search method with a first-improvement strategy. This means that the first neighbor solution found that improves the cost becomes the new current solution. The method stops at a locally optimal solution with respect to the chosen neighborhood structure. This is its main drawback, since it is unable to escape from a locally optimal solution. The tabu search method presented in the next section has a different acceptance strategy and other features designed to avoid being trapped in a bad local optimum. The main steps of the first-descent local search are:

1. Obtain an initial solution x (for example, using the GRAH). Let flag=false;
2. Neighborhood(x, flag);
3. If flag=false, stop (a local optimum was found); otherwise repeat step 2.

If the heuristic finishes with an infeasible solution, apply a local search with the following neighborhood: interchange tasks between agents, considering only moves that reduce the excess capacity used. After a feasible solution is obtained, we can apply the same local search considering only feasible solutions, i.e. verifying the following conditions. Let $(i,j)$ and $(k,l)$ be pairs of tasks and agents, respectively, such that $x_{ij} = x_{kl} = 1$. If

$$c_{il} + c_{kj} < c_{ij} + c_{kl}, \qquad \sum_{s=1}^{n} a_{sj}x_{sj} - a_{ij} + a_{kj} \le b_j \qquad \text{and} \qquad \sum_{s=1}^{n} a_{sl}x_{sl} - a_{kl} + a_{il} \le b_l,$$

then let $x_{il} = x_{kj} = 1$ and $x_{ij} = x_{kl} = 0$.
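This swap condition can be checked directly. The small sketch below is illustrative only; the names and the data layout (0/1 assignment matrix x, cost matrix c, resource matrix a, capacities b) are ours.

```python
def improving_feasible_swap(x, c, a, b, i, j, k, l):
    """True if tasks i (on agent j) and k (on agent l) can exchange agents:
    the total cost decreases and both agent capacities are still respected."""
    if not (x[i][j] == 1 and x[k][l] == 1):
        return False
    cheaper = c[i][l] + c[k][j] < c[i][j] + c[k][l]
    load_j = sum(a[s][j] * x[s][j] for s in range(len(x)))   # current load of agent j
    load_l = sum(a[s][l] * x[s][l] for s in range(len(x)))   # current load of agent l
    fits_j = load_j - a[i][j] + a[k][j] <= b[j]              # j keeps capacity after the swap
    fits_l = load_l - a[k][l] + a[i][l] <= b[l]              # l keeps capacity after the swap
    return cheaper and fits_j and fits_l
```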
We are now in a position to present the GRASP method, which is one of the approaches proposed to solve the GAP:

1. While a stopping criterion is not satisfied:
1.1. Construct a solution x using the Greedy Randomized Adaptive Heuristic. In the first iteration, initialize the best solution $x_b$ as x.
1.2. Apply the descent local search to x.
1.3. If x is feasible and $f'(x) < f'(x_b)$, let $x_b = x$.
2. Return the best solution found, $x_b$.

The next method for the GAP, also based on the general framework presented above, is the MAX-MIN Ant System procedure with Local Search (MMAS), which can be described as follows:

1. Initialize the pheromone trails and parameters.
2. While the termination condition is not met:
2.1. Construct a solution x using the Ant System Heuristic. In the first iteration, initialize the best solution $x_b$ as x.
2.2. Apply the descent local search to x.
2.3. Update the pheromone trails using the current solution x.
2.4. If x is feasible and $f'(x) < f'(x_b)$, let $x_b = x$.
3. Return the best solution found, $x_b$.

In both methods, the stopping criterion applied is a maximum number of iterations. Note that the main difference between the above methods is the way the initial solutions are constructed, i.e. the first step in the general framework of the adaptive construction heuristic. GRASP follows a random approach, by means of random sampling over solutions constructed in a greedy fashion, which can be seen as a diversification strategy. The MMAS, in contrast, constructs the initial solutions using adaptive and cooperative memory, which can be considered an intensification strategy.
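Seen side by side, the two procedures share the same outer loop and differ only in the construction step and in whether an adaptive memory is updated. A minimal sketch of that shared skeleton, with all routine names introduced here for illustration:

```python
def adaptive_search(construct, improve, penalty, is_feasible,
                    max_iterations, update_memory=None):
    """Shared skeleton of the GRASP and MMAS procedures described above."""
    best = None
    for _ in range(max_iterations):            # stopping criterion: iteration count
        x = construct()                        # GRAH (randomized greedy) or ASH (pheromone-biased)
        x = improve(x)                         # descent local search (or tabu search)
        if update_memory is not None:
            update_memory(x)                   # MMAS only: pheromone trail update
        if is_feasible(x) and (best is None or penalty(x) < penalty(best)):
            best = x                           # keep the best feasible solution found
    return best
```

GRASP corresponds to calling this skeleton with `update_memory=None`, while MMAS passes the pheromone update described earlier.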
Tabu Search

Tabu search was originally proposed by Glover (1986), and since then it has been the subject of extensive study and applied to several optimization problems with great success. Tabu search can be described as an intelligent search that uses memory to drive the search out of locally optimal solutions and to find good results. For a survey of tabu search see Glover and Laguna (1997). Our reason for applying tabu search to the GAP was the excellent results obtained by Osman (1995) and by Laguna, Kelly, González-Velarde, and Glover (1995) for the GAP and a multilevel generalized assignment problem, respectively. They also presented computational results for the GAP and were always able to obtain the optimal solution for test problems with 5 agents and 25 tasks in a very short time.

The basic ingredients of the applied tabu search are: the neighborhood, the tabu list and its size, the aspiration criterion and the stopping criterion. We now describe these aspects in detail for the GAP. The tabu search can be briefly described as follows:

1. Generate an initial solution x.
2. While the stopping criterion is not met do:
2.1. Generate the candidate list of moves/neighbors;
2.2. Choose the best neighbor x' that is not tabu or that satisfies the aspiration criterion;
2.3. Update the current solution, x=x'.
3. Output the best solution found.

It can easily be observed that the ejection chain neighborhood has a large number of neighbors, some of which lead to very bad solutions. Therefore, to avoid spending a large amount of time evaluating the neighborhood, a candidate list strategy is used to restrict the number of solutions examined at each iteration of the tabu search. The candidate list strategy implemented uses context information to limit the search to those moves that are more likely to improve the current solution. The problem-specific candidate list strategy can be described as follows: a task or a pair of tasks is considered for moving if one of the following situations occurs. Let x be the current solution and x' a neighbor solution of x:

• A task u is considered for moving from an agent p to an agent k if $c_{up} > c_{uk}$;
• A pair of tasks u, l is considered for moving if $c_{up} + c_{lk} > c_{uk} + c_{lp}$;
• A task u is considered for moving from an agent p to an agent k if $\sum_{i=1}^{n} a_{ip}x_{ip} > b_p$ and $\sum_{i=1}^{n} a_{ik}x'_{ik} \le b_k$, where $x'_{ik} = x_{ik}$ for all $i \ne u$, $x'_{up} = 0$ and $x'_{uk} = 1$;
• A pair of tasks u, l is considered for moving if $\sum_{i=1}^{n} a_{ip}x_{ip} > b_p$, $\sum_{i=1}^{n} a_{ik}x'_{ik} \le b_k$ and $\sum_{i=1}^{n} a_{iq}x'_{iq} \le b_q$, where $x'_{ij} = x_{ij}$ for all $i \ne u, l$ and $j \ne k, p, q$, and $x'_{up} = 0$, $x'_{uk} = 1$, $x'_{lk} = 0$, $x'_{lq} = 1$.

A tabu attribute is associated with the move of a task from one agent to another, i.e. suppose a task i is assigned to an agent j in the current solution and this task is reassigned to agent k; then, for a certain number of iterations, it is forbidden to assign task i to agent j. The tabu list was implemented as an $n \times m$ matrix, such that entry (i,j) contains the iteration number at which task i was removed from agent j; therefore, for the next "tabu tenure" iterations the move "assign task i to agent j" is tabu. An aspiration criterion is considered, which overrules the tabu status of a move if it leads to a new best solution. Finally, the tabu search stops after a maximum number of iterations.
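The tabu memory is thus an n × m matrix of iteration stamps. The sketch below shows only this bookkeeping and the aspiration test; the candidate-list filtering and the ejection-chain move generation are abstracted behind `candidate_moves`, a name we introduce for illustration (it should yield pairs of a neighbor solution and the list of (task, from_agent, to_agent) reassignments that produce it). The default parameter values match the STL and NITB settings reported in the computational experiments below.

```python
def tabu_search(x, penalty, candidate_moves, num_tasks, num_agents,
                tabu_tenure=10, max_iterations=200):
    """Tabu search sketch: tabu[i][j] records the iteration at which task i was
    removed from agent j; re-assigning i to j is then forbidden for tabu_tenure
    iterations unless the move yields a new best solution (aspiration)."""
    tabu = [[-tabu_tenure - 1] * num_agents for _ in range(num_tasks)]
    best, best_val = list(x), penalty(x)
    for it in range(max_iterations):
        chosen, chosen_val, chosen_moves = None, None, None
        for neighbor, moves in candidate_moves(x):
            val = penalty(neighbor)
            is_tabu = any(it - tabu[i][to] < tabu_tenure for i, _frm, to in moves)
            if is_tabu and val >= best_val:        # aspiration: tabu moves allowed only if new best
                continue
            if chosen is None or val < chosen_val:
                chosen, chosen_val, chosen_moves = neighbor, val, moves
        if chosen is None:                         # no admissible neighbor
            break
        for i, frm, _to in chosen_moves:
            tabu[i][frm] = it                      # forbid re-assigning task i to agent frm
        x = chosen
        if chosen_val < best_val:
            best, best_val = list(chosen), chosen_val
    return best
```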
As we did for the descent local search, we now obtain two more methods for the GAP, GRASP/TABU and MMAS/TABU, which can be described as before but with the tabu search applied instead of the simple descent local search.

5 Computational experiment

In this section we present the computational experiments and the results obtained. We have followed the guidelines proposed by Barr, Golden, Kelly, Resende, and Stewart (1995). The computational experiment was designed with two main objectives:

• To gain an understanding of the behavior of the different proposed methods, based on the general framework of the adaptive construction heuristics.
• To compare the two methods proposed for the first step of the general framework: the greedy randomized adaptive heuristic versus the ant system heuristic.

We would like to stress that these objectives are the most important ones, since with the computational experiment we aim to contribute to the understanding of algorithms that follow the framework of the adaptive search heuristics. For example, we would like to analyze whether taking information from previous solutions into account in the construction phase has an impact on the results, i.e. to compare the GRAH with the ASH. Other questions we would like to study are: the impact of considering or not a complete neighborhood in the local search phase; the benefits of using the tabu search method versus a descent local search; and the effect of allowing infeasible solutions. We believe that by analyzing these questions we contribute to the knowledge of the field of heuristics in general, and of adaptive search heuristics in particular, by giving guidelines for the development of algorithms for real-life applications based on the GAP model and other combinatorial optimization problems.

A secondary objective is to compare the best methods described in this work with other techniques proposed to solve the GAP. However, the emphasis of the paper is not on running a competition between algorithms since, as mentioned by Hooker (1995), "competitive testing tells us which algorithms are faster but not why". As mentioned before, we instead try to ask which components make the algorithms faster and better (the "why" question that Hooker refers to).

All the described methods were coded in Fortran and were tested on a set of problems ranging from 5 agents/15 jobs to 10 agents/60 jobs. The test problems are available from the OR-Library (http://www.ms.ic.ac.uk/info.html) and have also been used by other authors in their computational experiments, Osman (1995), Cattrysse, Salomon and Van Wassenhove (1994), Chu and Beasley (1997). The set of test problems can be divided into two groups: easy (gap1 to gap6) and difficult (gap7 to gap12). This set of problems is stated in the maximization form of the GAP, so we have converted the instances into minimization form. All numerical tests were carried out on a PC Pentium II with 166 MHz and 16 MB RAM.

Recently, a set of problems with a larger number of tasks and agents has been published. However, in our preliminary testing, the answers to the questions posed above did not change with the use of this larger set of test problems, and the testing would be more limited because of the longer running times. We believe the use of this larger set would be more adequate for a competition-type paper.

The performance measures considered are:
• Solution quality, measured as the percentage deviation from the optimal solution.
• Computation time: the total running time and the time to find the best solution.

The factors that can influence the behavior of a method and its results are:
• Problem specific: number of agents (m); number of tasks (n); resource capacity of the agents (b_j); cost of the overloaded capacity (α).
• First step: greedy randomized adaptive heuristic; ant system heuristic.
• Second step: neighborhood; search strategy (descent local search or tabu search).
• Stopping criteria: number of total iterations (NTI) and number of iterations of the tabu search (NITB).
• Other parameters: size of the RCL; size of the tabu list (STL); ant system parameters ($\tau_{\min}$, $\tau_{\max}$, $\rho$ and $p_0$).

If we wanted to consider all the above factors, the experimentation would be quite extensive. Thus, to limit the computational effort, some of the above factors were fixed a priori based on previous experiments for the GAP or on preliminary computational results. The following parameter values for the MAX-MIN ant system were found to give good performance in preliminary runs: $\tau_{\min} = 0.1 \times \tau_{ij}$ and $\tau_{\max} = n \times \tau_{ij}$ for all $i, j$, $\rho = 0.75$ and $p_0 = 0.8 \times \frac{n-m}{n}$. We have followed the guidelines of Stützle and Hoos (1999) to obtain the values of these parameters. These values are quite robust, and small changes did not affect the final results.

Comparison between different approaches

The main issue in these initial tests is to understand the behavior of the different methods based on the same general framework but on different approaches. The adaptive search heuristics considered are the following:

MMAS: ant system heuristic and descent local search with the ejection chain neighborhood.
GRASP: greedy randomized adaptive heuristic and descent local search with the ejection chain neighborhood. This version is a GRASP method.
ASH+TS: ant system heuristic and tabu search with the restricted ejection chain neighborhood.
GRAH+TS: greedy randomized adaptive heuristic and tabu search with the restricted ejection chain neighborhood.
ASH+LS+TS: ant system heuristic, descent local search with the shift neighborhood, and tabu search with the restricted ejection chain neighborhood.
GRAH+LS+TS: greedy randomized adaptive heuristic, descent local search with the shift neighborhood, and tabu search with the restricted ejection chain neighborhood.
ASH+LS+CTS: ant system heuristic, descent local search with the shift neighborhood, and tabu search with the complete ejection chain neighborhood (best-improvement search over the complete neighborhood).

In the last three methods, before applying the tabu search, we apply a simple descent local search with the shift neighborhood. Most of the solutions obtained in the first step are infeasible, and the descent local search with the shift neighborhood usually finds a feasible one in a short time. Therefore, if a local search method is applied before the tabu search, the latter starts from a better solution. The comparison between TS alone and LS+TS allows us to evaluate the performance of the TS with respect to the initial solution given to the tabu method. With the last method, ASH+LS+CTS, we analyze the effect of using or not using a restricted candidate list.
In this experiment, the following factors were fixed: NTI=30, NITB=200, α=50, STL=10. These values were set in preliminary tests, mainly to control the running time. Also, since all the above methods performed very well on the easy test problems, we present the results only for the subset of larger problems, gap7 (8 agents, 40 jobs) to gap12 (10 agents, 60 jobs). For each test problem we performed 5 runs of each method.

In Table 1 we present the average percentage deviation from the optimum over the 5 runs for each of the proposed heuristics. First of all, we observe that the best results were obtained by ASH+TS, GRAH+TS, ASH+LS+TS and GRAH+LS+TS, i.e. when the tabu search was used in the second step of the general framework. Also, the combination of LS and TS improves the results, since the tabu search starts from a better solution and quickly finds a good locally optimal solution. MMAS and GRASP obtained the worst results, and they often got stuck at a bad locally optimal solution; using a tabu approach therefore enabled the search for good solutions to continue. When the tabu search considered the complete neighborhood, ASH+LS+CTS, the quality of the solutions did not improve. So the use of restricted candidate lists plays an important role in the search and helps to find good solutions in significantly less time, as can be observed from the computation times given in Tables 3 and 4. It can also be seen that the proposed heuristics perform very well, finding the optimal solution for many instances; for those in which the heuristics failed to reach the optimum, the solutions obtained are very close to optimality.

In Table 2 we present the average solution quality for each group of test problems. It can be seen that ASH+LS+TS performs better than the remaining heuristics, obtaining the optimum in all runs for all test problems in 4 of the 6 groups. In Tables 3 and 4 we present the average total CPU time and the average time to find the best solution for the 6 groups of test problems. For all heuristics, the average total CPU time increases with the ratio n/m, and also with the number of tasks. For the same number of global iterations of the general framework, the ant system heuristic (MMAS, ASH+TS, ASH+LS+TS) always takes less time than the greedy randomized adaptive heuristic (GRASP, GRAH+TS, GRAH+LS+TS). The computation time to find the best solution is significantly lower than the total running time, and again the ant system heuristic finds the best solution faster. However, there is only a small difference between ASH+TS, ASH+LS+TS and GRAH+LS+TS with respect to the running time to find the best solution. The explanation for this behavior is that the tabu search with the ejection chain neighborhood efficiently finds good solutions.

To better understand the behavior of the various heuristics, we present two figures comparing the tradeoff between solution quality and computational effort. For simplicity, we present the average results for the test problems gap8, Figure 1, and gap10, Figure 2. It can easily be observed that the heuristics obtaining the best results in terms of solution quality and computational time are ASH+LS+TS, ASH+TS and GRAH+LS+TS, approximately in this order, since these dominate the remaining ones.
If we had to choose only one, our choice would be ASH+LS+TS, because it obtains the best solutions within a reasonable computational time.

Comparison between the adaptive approaches

A second issue that we would like to address is related to the different approaches proposed for the first step. The greedy randomized adaptive heuristic is based on the use of randomization to obtain initial solutions in a greedy fashion, and thus diversifies the search for a good solution. The other approach, based on the ant system, uses information on the good solutions visited in previous iterations to construct a solution, which also follows a greedy approach. We wanted to see whether there is any difference between these two approaches for the GAP. Therefore, all factors were kept constant except for the two different heuristics proposed for the first step. We present the average results for the 6 groups of test problems when the GRAH and the ASH were used in the first step of the general framework, combined either with the local search with the ejection chain neighborhood, Figure 3, or with the local search and tabu search, Figure 4. The results are presented as the tradeoff between solution quality and computational time. One can see that, when the ant system heuristic is used in the first step, the method obtains better solutions in less time for most of the test problems.

The explanation of the difference between the ant system and the greedy randomized adaptive heuristics lies in the quality of the solutions obtained by these greedy construction heuristics. We have observed that the solutions obtained by the GRAH are very diverse and do not follow a pattern. For the ASH, the solutions obtained in the first iterations are worse than, or of the same value as, those obtained by the GRAH; but, as the search continues, the ant system heuristic is able to obtain good solutions, which means less running time for the local search method in the second step of the general framework. This behavior is explained by the way the ASH is designed, since good solutions seen in previous iterations are taken into account when defining the probability function of the greedy heuristic. To exemplify the behavior of the greedy construction heuristics, we present in Figure 5 the value of the penalty function for the initial solution obtained by the GRAH and the ASH for the instance gap9-2, using LS+TS as the second step.
Computational Results

Finally, in this last section we show the performance of our best methods for all the test problems in Table 5, and compare them with other methods proposed to solve the GAP: the metaheuristics by Osman (1995), TS6 and TS1, and by Chu and Beasley (1997), GAa and GAb. All the results for these methods were taken from the latter work. Chu and Beasley (1997) ran their experiments on a Silicon Graphics Indigo (R4000, 100 MHz). A fair comparison between methods is quite difficult, since it depends on coding skill, machine type and speed, parameter settings, etc. The experiment was designed to evaluate the contribution of the components of the general framework to good results, and not to provide a competition between algorithms; Table 5 is presented here only to give an idea of the performance of the proposed methods with respect to what has been done in the area.

We can observe that, on average, ASH+LS+TS performed better than the other approaches for these instances. This method obtains the optimal solution in all runs for all test problems in gap12, which no other previously proposed method was able to match. The average running time of the genetic algorithm approach for these problems was 300 CPU seconds, Chu and Beasley (1997), while for ASH+LS+TS it was less than 150 seconds. But our main objective here is not to declare a winning method but to understand the methods' differences in solving different test problems. From the results obtained, the adaptive search heuristics proposed can obtain results for the GAP that are better than or equal to those of other methods in the literature, in shorter running times.

6 Conclusions

The main contribution of this work is the application of adaptive search heuristics, based on GRASP and Ant Colony Optimization, to the generalized assignment problem. The general framework also has some innovative aspects, such as the combination of the MAX-MIN Ant System (MMAS) and GRASP with tabu search techniques, and the use of ejection chain neighborhoods. Our computational testing showed that the hybrid approach, based on ideas from MMAS and GRASP combined with tabu search, leads to good results within reasonable times and outperforms MMAS or GRASP alone.

From the results, we can conclude that the success of ASH+LS+TS relies on the combination of the tabu search and the ant system heuristics. The fact that the ASH takes into account information from previous iterations of the general algorithm is one of the main reasons for the success of ASH+LS+TS. Also, the ejection chain neighborhood and the restricted candidate list strategy play an important role in driving the search to good solutions. We can also conclude that the ant system heuristic presented outperforms the greedy randomized adaptive heuristic both in solution quality and in total running time. The results compare favorably with existing methods, both in terms of the time taken and the quality of the solutions.
Further developments of this work relate to the application of the adaptive search methods to extensions of the GAP, namely a Resource Assignment Problem and a Pure Integer Capacitated Plant Location Problem. More work is also being done on solving more difficult problems, using a more sophisticated tabu search with diversification strategies and an ant system with more ants. Moreover, as future research, we plan to apply adaptive search heuristics based on the general framework to develop solution methods for other combinatorial optimization problems.

Acknowledgments: The research was funded by the Ministerio de Ciencia y Tecnología, Spain (BEC2000-1027). The authors are grateful to Thomas Stützle and the anonymous referees, whose comments have improved this paper. The authors also thank the Fundación BBVA-CRES for the prize awarded to this work.

References

[1] Amini, M.M. and Racer, M. (1994). A rigorous comparison of alternative solution methods for the generalized assignment problem, Mgmt Sci 40: 868-890.
[2] Barr, R.S., Golden, B.L., Kelly, J.P., Resende, M.G.C. and Stewart Jr., W.R. (1995). Designing and reporting on computational experiments with heuristic methods, J Heuristics 1: 9-32.
[3] Cattrysse, D.G. and Van Wassenhove, L.N. (1992). A survey of algorithms for the generalized assignment problem, Eur J Opl Res 60: 260-272.
[4] Cattrysse, D.G., Salomon, M. and Van Wassenhove, L.N. (1994). A set partitioning heuristic for the generalized assignment problem, Eur J Opl Res 72: 167-174.
[5] Chu, P.C. and Beasley, J.E. (1997). A genetic algorithm for the generalised assignment problem, Comp Opns Res 24: 17-23.
[6] Colorni, A., Dorigo, M. and Maniezzo, V. (1991a). Distributed optimization by ant colonies, Proceedings of ECAL91 - European Conference on Artificial Life, Elsevier Publishing, Paris, France, 134-142.
[7] Colorni, A., Dorigo, M. and Maniezzo, V. (1991b). The Ant System: optimization by a colony of cooperating agents, IEEE Transactions on Systems, Man, and Cybernetics - Part B, 26(1): 29-41.
[8] Dorigo, M. and Di Caro, G. (1999). The ant colony optimization meta-heuristic. In D. Corne, M. Dorigo and F. Glover (eds), New Ideas in Optimization, McGraw-Hill.
[9] Dorigo, M., Maniezzo, V. and Colorni, A. (1996). The ant system: optimization by a colony of cooperating agents, IEEE Transactions on Systems, Man, and Cybernetics - Part B, 26(1): 29-42.
[10] Feo, T.A. and Resende, M.G.C. (1995). Greedy randomized adaptive search procedures, Journal of Global Optimization 6: 109-133.
[11] Fisher, M., Jaikumar, R. and Van Wassenhove, L. (1986). A multiplier adjustment method for the generalized assignment problem, Mgmt Sci 32: 1095-1103.
[12] Glover, F. (1986). Future paths for integer programming and links to artificial intelligence, Comp Opns Res 5: 533-549.
[13] Glover, F. (1992). Ejection chains, reference structures and alternating path methods for the traveling salesman problem, University of Colorado. Shortened version published in Discrete Applied Mathematics 65: 223-253 (1996).
[14] Glover, F. and Laguna, M. (1997). Tabu Search, Kluwer Academic Publishers: Norwell, Massachusetts.
[15] Glover, F. (1998). A template for scatter search and path relinking, in Artificial Evolution, Lecture Notes in Computer Science 1363, J.-K. Hao, E. Lutton, E. Ronald, M. Schoenauer and D. Snyers (eds), Springer-Verlag, 13-54.
[16] Glover, F. (2000). Multi-start and strategic oscillation methods - principles to exploit adaptive memory, in Computing Tools for Modeling, Optimization and Simulation, M. Laguna and J.L. González Velarde (eds), Kluwer Academic Publishers, 1-38.
[17] Goss, S., Aron, S., Deneubourg, J.L. and Pasteels, J.M. (1989). Self-organized shortcuts in the Argentine ant, Naturwissenschaften 79: 579-581.
[18] Guignard, M. and Rosenwein, M. (1989). An improved dual-based algorithm to the knapsack problem, Eur J Opl Res 27: 313-323.
[19] Hooker, J.N. (1995). Testing heuristics: we have it all wrong, J Heuristics 1: 33-42.
[20] Karabakal, N., Bean, J.C. and Lohmann, J.R. (1992). A steepest descent multiplier adjustment method for the generalized assignment problem, Report 92-11, University of Michigan, Ann Arbor, MI.
[21] Laguna, M., Kelly, J.P., González-Velarde, J.L. and Glover, F. (1995). Tabu search for the multilevel generalized assignment problem, Eur J Opl Res 82: 176-189.
[22] Lin, S. and Kernighan, B.W. (1973). An efficient heuristic algorithm for the traveling salesman problem, Operations Research 21: 498-516.
[23] Lourenço, H.R., Martin, O. and Stützle, T. (2001). Iterated local search. To appear in State-of-the-Art Handbook of Metaheuristics, F. Glover and G. Kochenberger (eds), Kluwer Academic Publishers.
[24] Martello, S. and Toth, P. (1990). Knapsack Problems: Algorithms and Computer Implementations, Wiley: New York.
[25] Moscato, P. (1999). Memetic algorithms: a short introduction, in New Ideas in Optimization, D. Corne, F. Glover and M. Dorigo (eds), McGraw-Hill, 219-234.
[26] Osman, I.H. (1995). Heuristics for the generalized assignment problem: simulated annealing and tabu search approaches, OR Spektrum 17: 211-225.
[27] Resende, M.G.C. and Ribeiro, C.C. (2001). Greedy randomized adaptive search procedures. To appear in State-of-the-Art Handbook of Metaheuristics, F. Glover and G. Kochenberger (eds), Kluwer Academic Publishers.
[28] Ross, G.T. and Soland, P.M. (1975). A branch and bound based algorithm for the generalized assignment problem, Math Prog 8: 91-103.
[29] Sahni, S. and Gonzalez, T. (1976). P-complete approximation problems, J ACM 23: 555-565.
[30] Savelsbergh, M. (1997). A branch-and-cut algorithm for the generalized assignment problem, Opns Res 45(6): 831-841.
[31] Stützle, T. (1997). MAX-MIN Ant System for the Quadratic Assignment Problem, Technical Report AIDA-97-4, FG Intellektik, TU Darmstadt, Germany.
[32] Stützle, T. (1998a). Local Search Algorithms for Combinatorial Problems - Analysis, Improvements, and New Applications. PhD thesis, Department of Computer Science, Darmstadt University of Technology, Germany.
[33] Stützle, T. (1998b). An ant approach for the flow shop problem, in Proceedings of the 6th European Congress on Intelligent Techniques & Soft Computing (EUFIT'98), 3: 1560-1564, Verlag Mainz.
[34] Stützle, T. and Hoos, H. (1999). MAX-MIN Ant System and local search for combinatorial optimization, in S. Voß, S. Martello, I.H. Osman and C. Roucairol (eds), Meta-Heuristics: Trends in Local Search Paradigms for Optimization, Kluwer Academic Publishers, 313-329.
[35] Trick, M.A. (1992). A linear relaxation heuristic for the generalized assignment problem, Naval Res Logist 39: 137-152.
[36] Yagiura, M., Yamaguchi, T. and Ibaraki, T. (1998). A variable depth search algorithm with branching search for the generalized assignment problem, Optimization Methods and Software 10: 419-441.
[37] Yagiura, M., Yamaguchi, T. and Ibaraki, T. (1999). A variable depth search algorithm for the generalized assignment problem, in S. Voß, S. Martello, I.H. Osman and C. Roucairol (eds), Meta-Heuristics: Trends in Local Search Paradigms for Optimization, Kluwer Academic Publishers, 459-471.
[38] Wilson, J.M. (1997a). A genetic algorithm for the generalised assignment problem, J Opl Res Soc 48: 804-809.
[39] Wilson, J.M. (1997b). A simple dual algorithm for the generalized assignment problem, J Heuristics 2(4): 303-311.
Table 1: Average percentage deviation from optimal for 5 runs of the adaptive search heuristics.

prob.     m*n    optimal   MMAS     GRASP    ASH+TS   GRAH+TS  ASH+LS+TS  GRAH+LS+TS  ASH+LS+CTS
gap7-1    8*40   942       0.08%    0.04%    0.00%    0.00%    0.00%      0.00%       0.00%
gap7-2    8*40   949       0.00%    0.02%    0.00%    0.00%    0.00%      0.00%       0.00%
gap7-3    8*40   968       0.00%    0.00%    0.00%    0.00%    0.00%      0.00%       0.00%
gap7-4    8*40   945       0.00%    0.17%    0.00%    0.00%    0.00%      0.00%       0.00%
gap7-5    8*40   951       0.00%    0.00%    0.00%    0.00%    0.00%      0.00%       0.00%
gap8-1    8*48   1133      0.32%    0.34%    0.14%    0.09%    0.05%      0.25%       0.28%
gap8-2    8*48   1134      0.07%    0.11%    0.00%    0.02%    0.00%      0.02%       0.00%
gap8-3    8*48   1141      0.210%   0.280%   0.105%   0.140%   0.070%     0.123%      0.175%
gap8-4    8*48   1117      0.143%   0.269%   0.054%   0.125%   0.000%     0.036%      0.018%
gap8-5    8*48   1127      0.248%   0.266%   0.106%   0.089%   0.089%     0.106%      0.142%
gap9-1    10*30  709       0.000%   0.028%   0.000%   0.000%   0.000%     0.000%      0.000%
gap9-2    10*30  717       0.223%   0.223%   0.000%   0.056%   0.000%     0.056%      0.056%
gap9-3    10*30  712       0.000%   0.000%   0.000%   0.000%   0.000%     0.000%      0.000%
gap9-4    10*30  723       0.111%   0.028%   0.000%   0.000%   0.000%     0.000%      0.000%
gap9-5    10*30  706       0.057%   0.000%   0.000%   0.000%   0.000%     0.000%      0.000%
gap10-1   10*40  958       0.000%   0.000%   0.000%   0.000%   0.000%     0.000%      0.000%
gap10-2   10*40  963       0.042%   0.104%   0.000%   0.000%   0.000%     0.021%      0.000%
gap10-3   10*40  960       0.229%   0.250%   0.125%   0.125%   0.063%     0.104%      0.104%
gap10-4   10*40  947       0.084%   0.169%   0.000%   0.000%   0.000%     0.000%      0.021%
gap10-5   10*40  947       0.211%   0.190%   0.063%   0.042%   0.000%     0.063%      0.042%
gap11-1   10*50  1139      0.018%   0.070%   0.000%   0.000%   0.000%     0.000%      0.000%
gap11-2   10*50  1178      0.000%   0.034%   0.000%   0.000%   0.000%     0.000%      0.017%
gap11-3   10*50  1195      0.000%   0.000%   0.000%   0.000%   0.000%     0.000%      0.000%
gap11-4   10*50  1171      0.051%   0.017%   0.000%   0.000%   0.000%     0.000%      0.000%
gap11-5   10*50  1171      0.034%   0.102%   0.000%   0.000%   0.000%     0.000%      0.000%
gap12-1   10*60  1451      0.055%   0.096%   0.000%   0.014%   0.000%     0.014%      0.014%
gap12-2   10*60  1449      0.055%   0.055%   0.000%   0.000%   0.000%     0.000%      0.000%
gap12-3   10*60  1433      0.000%   0.028%   0.000%   0.000%   0.000%     0.000%      0.000%
gap12-4   10*60  1447      0.041%   0.069%   0.000%   0.000%   0.000%     0.000%      0.000%
gap12-5   10*60  1446      0.028%   0.055%   0.000%   0.000%   0.000%     0.000%      0.028%
Average                    0.077%   0.100%   0.020%   0.023%   0.009%     0.026%      0.030%
Table 2: Average percentage deviation from optimal of the adaptive search heuristics for the 6 groups of problems.

prob.    m*n    MMAS     GRASP    ASH+TS   GRAH+TS  ASH+LS+TS  GRAH+LS+TS  ASH+LS+CTS
gap7     8*40   0.017%   0.047%   0.000%   0.000%   0.000%     0.000%      0.000%
gap8     8*48   0.198%   0.251%   0.081%   0.092%   0.042%     0.106%      0.124%
gap9     10*30  0.078%   0.056%   0.000%   0.011%   0.000%     0.011%      0.011%
gap10    10*40  0.113%   0.143%   0.038%   0.033%   0.013%     0.038%      0.034%
gap11    10*50  0.021%   0.045%   0.000%   0.000%   0.000%     0.000%      0.003%
gap12    10*60  0.036%   0.061%   0.000%   0.003%   0.000%     0.003%      0.008%
Average         0.077%   0.100%   0.020%   0.023%   0.009%     0.026%      0.030%
Table 3: Average total running time (in seconds) of the adaptive search heuristics.

prob.    m*n    MMAS     GRASP    ASH+TS   GRAH+TS  ASH+LS+TS  GRAH+LS+TS  ASH+LS+CTS
gap7     8*40   94.0     120.7    111.1    139.5    113.7      125.3       342.0
gap8     8*48   172.0    232.3    172.8    214.5    178.9      205.9       576.1
gap9     10*30  54.6     74.9     72.8     81.3     74.0       79.2        209.6
gap10    10*40  141.4    172.2    146.8    160.2    137.9      150.3       468.3
gap11    10*50  256.6    334.6    140.8    192.9    144.1      163.5       878.9
gap12    10*60  427.0    530.4    238.5    349.9    242.9      320.4       1485.3
Average         190.9    244.2    147.1    189.7    148.6      174.1       660.0
Table 4: Average running time (in seconds) to the best found solution of the adaptive search heuristics.

prob.    m*n    MMAS     GRASP    ASH+TS   GRAH+TS  ASH+LS+TS  GRAH+LS+TS  ASH+LS+CTS
gap7     8*40   35.8     42.8     19.7     56.6     22.5       21.4        92.4
gap8     8*48   80.3     88.8     68.1     96.5     85.3       78.3        225.9
gap9     10*30  18.8     22.2     19.0     10.7     15.3       13.4        55.8
gap10    10*40  45.5     85.4     35.7     51.2     37.5       34.5        159.7
gap11    10*50  84.1     98.3     33.8     64.3     28.4       33.9        278.2
gap12    10*60  137.5    211.9    59.3     91.7     59.7       95.0        379.7
Average         67.0     91.5     39.3     61.8     41.5       46.1        198.6
Figure 1: Average percentage deviation from optimal versus time for gap8 instances. (Plot: time in seconds on the horizontal axis, average percentage deviation on the vertical axis; one point per heuristic: MMAS, GRASP, ASH+TS, GRAH+TS, ASH+LS+TS, GRAH+LS+TS, ASH+LS+CTS.)
Figure 2: Average percentage deviation from optimal versus time for gap10 instances. (Plot: time in seconds on the horizontal axis, average percentage deviation on the vertical axis; one point per heuristic: MMAS, GRASP, ASH+TS, GRAH+TS, ASH+LS+TS, GRAH+LS+TS, ASH+LS+CTS.)
Figure 3: Average percentage deviation from optimal versus time of the MMAS (A) and GRASP (G) heuristics for gap7 to gap12 instances. (Plot: time in seconds on the horizontal axis, average percentage deviation on the vertical axis.)
Figure 4: Average percentage deviation from optimal versus time of the ASH+LS+TS (A) and GRAH+LS+TS (G) heuristics for gap7 to gap12 instances. (Plot: time in seconds on the horizontal axis, average percentage deviation on the vertical axis.)
Figure 5: Initial solution value obtained by GRAH and ASH for the instance gap9-2, using LS+TS as the second step. (Plot: iteration number on the horizontal axis, initial solution value of the penalty function on the vertical axis; one series for the Ant heuristic and one for the GRASP heuristic.)
Table 5: Average percentage deviation from optimal of more recent heuristics.

prob.    m*n    TS6     TS1     GAa     GAb     ASH+LS+TS  GRAH+LS+TS
gap7     8*40   0.02%   0.00%   0.08%   0.00%   0.00%      0.00%
gap8     8*48   0.14%   0.09%   0.33%   0.05%   0.04%      0.11%
gap9     10*30  0.06%   0.06%   0.17%   0.00%   0.00%      0.01%
gap10    10*40  0.15%   0.08%   0.27%   0.40%   0.01%      0.04%
gap11    10*50  0.02%   0.02%   0.20%   0.00%   0.00%      0.00%
gap12    10*60  0.07%   0.04%   0.17%   0.01%   0.00%      0.00%
Average         0.08%   0.05%   0.20%   0.08%   0.01%      0.03%

TS6: long-term tabu search, FA selection, Osman (1995)
TS1: long-term tabu search, BA selection, Osman (1995)
GAa: genetic algorithm without heuristic operator, Chu and Beasley (1997)
GAb: genetic algorithm with heuristic operator, Chu and Beasley (1997)