This is the second part in my series on the “travelling salesman problem” (TSP). Part one covered defining the TSP and utility code that will be used for the various optimisation algorithms I shall discuss.
A common way to visualise searching for solutions in an optimisation problem, such as the TSP, is to think of the solutions existing within a “landscape”. Better solutions exist higher up and we can take a step from one solution to another in search of better solutions. How we make steps will depend on the “move operators” we have available, which will therefore also affect how the landscape “looks” – it changes which solutions are “adjacent” to each other. For a simple optimisation problem we can directly visualise the solution landscape:
The red dot represents our current solution. It should be pretty clear that if we simply carry on going “uphill” we’ll get to the highest point in this solution landscape.
If we are using evolutionary optimisation methods a solution landscape will often be referred to as a fitness landscape.
Hill-climbing, pretty much the simplest of the stochastic optimisation methods, works like this:
- pick a place to start
- take any step that goes “uphill”
- if there are no more uphill steps, stop; otherwise carry on taking uphill steps
Metaphorically the algorithm climbs up a hill one step at a time. It is a “greedy” algorithm and only ever takes steps that take it uphill (though it can be adapted to behave differently). This means that it is pretty quick to get to the top of a hill, but depending on where it starts it may not get to the top of the biggest hill:
As you can see our current solution (the red dot) can only go downhill from its current position – yet it is not at the highest point in the solution landscape.
The “biggest” hill in the solution landscape is known as the global maximum. The top of any other hill is known as a local maximum (it’s the highest point in the local area). Standard hill-climbing will tend to get stuck at the top of a local maximum, so we can modify our algorithm to restart the hill-climb if need be. This will help hill-climbing find better hills to climb – though it’s still a random search of the initial starting points.
objective and initialisation functions
To get started with the hill-climbing code we need two functions:
- an initialisation function – that will return a random solution
- an objective function – that will tell us how “good” a solution is
For the TSP the initialisation function will just return a tour of the correct length that has the cities arranged in a random order.
The objective function will return the negated length of a tour/solution. We do this because we want to maximise the objective function, whilst at the same time minimising the tour length.
As the hill-climbing code won’t know anything specific about the TSP, we need to ensure that the initialisation function takes no arguments and returns a tour of the correct length, and that the objective function takes one argument (the solution) and returns the negated length.
So assuming we have our city co-ordinates in a variable coords and our distance matrix in matrix, we can define the initialisation and objective functions as follows:
init_function=lambda: init_random_tour(len(coords))
objective_function=lambda tour: -tour_length(matrix,tour) #note negation
This relies on closures to associate len(coords) with the init_random_tour function and matrix with the tour_length function. The end result is two functions, init_function and objective_function, that are suitable for use in the hill-climbing function.
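For reference, init_random_tour itself can be as simple as shuffling the list of city indices – a minimal sketch, assuming (as described above) that a tour is just the cities arranged in a random order:

import random

def init_random_tour(tour_length):
    # a tour is just the city indices 0..tour_length-1 in a random order
    tour = list(range(tour_length))
    random.shuffle(tour)
    return tour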
the basic hill-climb
The basic hill-climb looks like this in Python:
def hillclimb(init_function, move_operator, objective_function, max_evaluations):
    '''hillclimb until either max_evaluations
    is reached or we are at a local optima'''
    best = init_function()
    best_score = objective_function(best)
    num_evaluations = 1
    while num_evaluations < max_evaluations:
        # examine moves around our current position
        move_made = False
        for next in move_operator(best):
            if num_evaluations >= max_evaluations:
                break
            # see if this move is better than the current
            next_score = objective_function(next)
            num_evaluations += 1
            if next_score > best_score:
                best = next
                best_score = next_score
                move_made = True
                break # depth first search
        if not move_made:
            break # we couldn't find a better move
                  # (must be at a local maximum)
    return (num_evaluations, best_score, best)
(I’ve removed some logging statements for clarity)
The parameters are as follows:
- init_function – the function used to create our initial solution
- move_operator – the function we use to iterate over all possible “moves” for a given solution (for the TSP this will be swapped_cities or reversed_sections)
- objective_function – used to assign a numerical score to a solution – how “good” the solution is (as defined above for the TSP)
- max_evaluations – used to limit how much search we will perform (how many times we’ll call the objective_function)
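Putting it together, a single hill-climb run might look something like this (a sketch; reversed_sections is one of the move operators from part one, and init_function/objective_function are as defined above):

num_evaluations, best_score, best = hillclimb(
    init_function, reversed_sections, objective_function, 50000)
print('evaluations used:', num_evaluations)
print('best tour length:', -best_score) # undo the negation in the objective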
hill-climb with random restart
With a random restart we get something like:
def hillclimb_and_restart(init_function, move_operator, objective_function, max_evaluations):
    '''repeatedly hillclimb until max_evaluations is reached'''
    best = None
    best_score = 0
    num_evaluations = 0
    while num_evaluations < max_evaluations:
        remaining_evaluations = max_evaluations - num_evaluations
        evaluated, score, found = hillclimb(
            init_function, move_operator, objective_function, remaining_evaluations)
        num_evaluations += evaluated
        if score > best_score or best is None:
            best_score = score
            best = found
    return (num_evaluations, best_score, best)
The parameters match those of the hillclimb function. This function simply calls hillclimb repeatedly until we have hit the limit specified by max_evaluations – hillclimb on its own will not necessarily use all of the evaluations assigned to it.
Running the two different move operators (reversed_sections and swapped_cities – see part one for their definitions) on a 100 city tour produced some interesting differences.
Ten runs of 50000 evaluations (calls to the objective function) yielded:
(these are the scores from the objective function and represent negative tour length)
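A loop along these lines can be used to run that comparison (a sketch; it assumes the functions defined above and the two move operators from part one):

for move_operator in (swapped_cities, reversed_sections):
    print(move_operator.__name__)
    for run in range(10):
        evaluated, best_score, best = hillclimb_and_restart(
            init_function, move_operator, objective_function, 50000)
        print(best_score) # negated tour length, so closer to zero is better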
In this case reversed_sections clearly performed much better. The best solution for reversed_sections looked like:

Whereas the best for swapped_cities is clearly much worse:
It’s pretty clear then that reversed_sections is the better move operator. This is most likely due to it being less “destructive” than swapped_cities, as it preserves entire sections of a route, yet still affects the ordering of multiple cities in one go.
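To make that concrete, here is a hypothetical pair of moves on a small eight-city tour (illustrative only – the real operators from part one generate whole sequences of such neighbouring tours):

tour = [0, 1, 2, 3, 4, 5, 6, 7]

# a swapped_cities style move: exchange the cities at positions 1 and 5
swapped = tour[:]
swapped[1], swapped[5] = swapped[5], swapped[1]
print(swapped) # [0, 5, 2, 3, 4, 1, 6, 7]

# a reversed_sections style move: reverse the section covering positions 2 to 5
section_reversed = tour[:]
section_reversed[2:6] = section_reversed[2:6][::-1]
print(section_reversed) # [0, 1, 5, 4, 3, 2, 6, 7]

For a symmetric TSP the reversal only alters the two edges at the boundaries of the reversed section, whereas the swap can disturb up to four edges at once – one way of seeing why it is the more destructive move.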
As can be seen, hill-climbing is a very simple algorithm that can produce good results – provided one uses the right move operator. However it is not without its drawbacks, and it is prone to getting stuck at the top of “local maxima”.
Most of the other algorithms I will discuss later attempt to counter this weakness in hill-climbing. The next algorithm I will discuss (simulated annealing) is actually a pretty simple modification of hill-climbing, but gives us a much better chance at finding the global maximum for a given solution landscape.
Full source-code is available here as a .tar.gz file. Again some unit tests are included, which can be run using nosetests.
(The source-code contains more code than shown here, to handle parsing parameters passed to the program etc. I’ve not discussed this extra code here simply to save space.)