Category Archives: TSP

Tackling the travelling salesman problem: simulated annealing

This is the third part in my series on the “travelling salesman problem” (TSP). Part one covered defining the TSP and utility code that will be used for the various optimisation algorithms I shall discuss. Part two covered “hill-climbing” (the simplest stochastic optimisation method).

getting stuck, because you’re greedy

As I discussed in the article about hill-climbing it is possible for an algorithm to find a solution that is “locally optimal”, but not necessarily “globally optimal”. That is to say we may find ourselves with a solution that is at the top of a local maximum – it’s the best thing nearby, but it might not be the best thing. This happens with hill-climbing, because when we are offered the choice between two solutions we always take the best solution. The algorithm is “greedy”. It’s also short sighted.

So instead we could try occasionally choosing something that’s worse. By doing that the algorithm can go “downhill” sometimes and hopefully reach new areas of the solution landscape.

simulated annealing

Taking its name from a metallurgical process, simulated annealing is essentially hill-climbing, but with the ability to go downhill (sometimes).

It introduces a “temperature” variable. When the “temperature” is high a worse solution will have a higher chance of being chosen. It works like this:

  1. pick an initial solution
  2. set an initial temperature
  3. choose the next neighbour of the current solution:
    • if the neighbour is “better” make that neighbour the current solution
    • if the neighbour is “worse”, probabilistically make this neighbour the current solution, based on the current temperature and how much “worse” the neighbour is
  4. decrease the temperature slightly
  5. go to 3.

By slowly cooling the temperature we become less likely to choose worse solutions over time. Initially we are able to make some pretty big jumps around the solution landscape. By the end of a run we’ll be jumping around less and less. In fact if we lower the temperature enough we end up with plain old hill-climbing.

probabilistically choosing a neighbour

Below is the Python code to decide what probability we will assign to moving from a solution with a score of prev_score to a solution with a score of next_score at the current temperature.

def P(prev_score,next_score,temperature):
    if next_score > prev_score:
        return 1.0
    else:
        return math.exp( -abs(next_score-prev_score)/temperature )

To keep later logic simpler I’m returning 1.0 if next_score is better – so we’ll always choose better solutions.

When next_score is worse we create a probability based on the difference between prev_score and next_score, scaled by the current temperature. If we chart the probabilities versus the difference in scores (with a temperature of 1.0) we get:

As can be seen, for small differences (relative to the current temperature) we will have a high probability. This then tails off very quickly, so solutions that are much worse are increasingly less likely to be chosen.

The net effect is that solutions that are only a little bit worse are still fairly likely to be chosen. Much worse solutions may still be chosen, but it’s much less likely.
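
To get a feel for these numbers, here’s a small sketch evaluating P (restated from above, so it stands alone) for a few score differences at a temperature of 1.0 – the differences themselves are arbitrary:

```python
import math

# the acceptance probability function from above
def P(prev_score, next_score, temperature):
    if next_score > prev_score:
        return 1.0
    return math.exp(-abs(next_score - prev_score) / temperature)

# at temperature 1.0: slightly worse moves are accepted often,
# much worse moves almost never
diffs = (0.1, 0.5, 1.0, 2.0, 5.0)
probs = [P(0.0, -diff, 1.0) for diff in diffs]
for diff, p in zip(diffs, probs):
    print("diff=%.1f p=%.3f" % (diff, p))
```

A difference of 0.1 is accepted with probability exp(-0.1) ≈ 0.905, while a difference of 5 drops to exp(-5) ≈ 0.007.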

the cooling schedule

Temperature is a key part of simulated annealing. How we lower the temperature over time is therefore very important. There are a couple of possible approaches, but I’ll show the one outlined by Kirkpatrick et al:

def kirkpatrick_cooling(start_temp,alpha):
    T=start_temp
    while True:
        yield T
        T=alpha*T

This is a generator function that takes an initial start temperature (start_temp) and yields a series of temperatures, each alpha times the previous one, where alpha is less than one. So we end up with a temperature that drops off quite quickly and then slowly decays towards practically nothing.
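
To see the schedule in action, here’s a quick sketch (restating the generator in full) that takes the first few temperatures – the start temperature and alpha are arbitrary, with alpha exaggeratedly small to make the decay obvious:

```python
import itertools

# the cooling schedule from above: each temperature is alpha times the last
def kirkpatrick_cooling(start_temp, alpha):
    T = start_temp
    while True:
        yield T
        T = alpha * T

# grab the first five temperatures from the (infinite) generator
temps = list(itertools.islice(kirkpatrick_cooling(10.0, 0.5), 5))
```

temps comes out as [10.0, 5.0, 2.5, 1.25, 0.625] – a geometric decay towards zero.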

remembering the best solution

One other minor, but key, implementation detail is saving the best solution we find during the annealing process.

During hill-climbing the current solution was always the best solution found, but simulated annealing will deliberately accept worse solutions at times. So we need to make sure we don’t just throw away the best solution we see.

To avoid complicating the algorithm itself with extra score checks, I am going to use a class to wrap the objective function. I’ll override the __call__ method of the class, so that I can use an instance of the class like a function – in place of the normal objective function:

class ObjectiveFunction:
    '''class to wrap an objective function and 
    keep track of the best solution evaluated'''
    def __init__(self,objective_function):
        self.objective_function=objective_function
        self.best=None
        self.best_score=None
    
    def __call__(self,solution):
        score=self.objective_function(solution)
        if self.best is None or score > self.best_score:
            self.best_score=score
            self.best=solution
        return score

We can then access the best and best_score fields when we have finished our annealing.

simulated annealing itself

The code below represents the simulated annealing algorithm. In many respects it is pretty similar to hill-climbing, but we are also concerned with a current temperature and we have introduced a probabilistic element to choosing the next solution.

def anneal(init_function,move_operator,objective_function,max_evaluations,start_temp,alpha):
    # wrap the objective function (so we record the best)
    objective_function=ObjectiveFunction(objective_function)
    
    current=init_function()
    current_score=objective_function(current)
    num_evaluations=1
    
    cooling_schedule=kirkpatrick_cooling(start_temp,alpha)
    
    for temperature in cooling_schedule:
        done = False
        # examine moves around our current position
        for next in move_operator(current):
            if num_evaluations >= max_evaluations:
                done=True
                break
            next_score=objective_function(next)
            num_evaluations+=1
            # probabilistically accept this solution
            # always accepting better solutions
            p=P(current_score,next_score,temperature)
            if random.random() < p:
                current=next
                current_score=next_score
                break
        # see if completely finished
        if done: break
    
    best_score=objective_function.best_score
    best=objective_function.best
    return (num_evaluations,best_score,best)

The parameters are much the same as for hill-climbing, but there are two extra that are specific to simulated annealing:

  • init_function - the function used to create our initial solution
  • move_operator - the function we use to iterate over all possible "moves" for a given solution
  • objective_function - used to assign a numerical score to a solution - how "good" the solution is
  • max_evaluations - used to limit how much search we will perform (how many times we'll call the objective_function)
  • start_temp - the initial starting temperature for annealing
  • alpha - should be less than one. controls how quickly the temperature reduces

I am also only reducing the temperature after either accepting a new solution or evaluating all of the neighbours without accepting any of them. As accepting a move is less frequent than simply evaluating one, cooling happens at a slower pace than if we reduced the temperature on every evaluation. If we are accepting lots of moves the temperature will drop fairly quickly; if we are not accepting many moves the temperature will stay steadier – maintaining the likelihood of accepting other "worse" moves. That latter point is useful: if we are starting to get stuck on a local maximum the temperature won't decrease – hopefully helping us get unstuck.
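
Putting all of the pieces together, here’s a self-contained sketch of the full annealing loop running on a toy one-dimensional landscape – the single-peak objective and ±1 move operator here are made up purely for illustration (for the TSP the real init function, move operators and objective function are the ones described in parts one and two):

```python
import math, random

# restating the pieces from above so this sketch stands alone
def P(prev_score, next_score, temperature):
    if next_score > prev_score:
        return 1.0
    return math.exp(-abs(next_score - prev_score) / temperature)

def kirkpatrick_cooling(start_temp, alpha):
    T = start_temp
    while True:
        yield T
        T = alpha * T

class ObjectiveFunction:
    def __init__(self, objective_function):
        self.objective_function = objective_function
        self.best = None
        self.best_score = None
    def __call__(self, solution):
        score = self.objective_function(solution)
        if self.best is None or score > self.best_score:
            self.best_score, self.best = score, solution
        return score

def anneal(init_function, move_operator, objective_function,
           max_evaluations, start_temp, alpha):
    objective_function = ObjectiveFunction(objective_function)
    current = init_function()
    current_score = objective_function(current)
    num_evaluations = 1
    for temperature in kirkpatrick_cooling(start_temp, alpha):
        done = False
        for next in move_operator(current):
            if num_evaluations >= max_evaluations:
                done = True
                break
            next_score = objective_function(next)
            num_evaluations += 1
            if random.random() < P(current_score, next_score, temperature):
                current, current_score = next, next_score
                break
        if done:
            break
    return (num_evaluations, objective_function.best_score, objective_function.best)

# a toy one-dimensional landscape with a single peak at x=42
random.seed(0)  # purely for repeatability
objective = lambda x: -abs(x - 42)
moves = lambda x: iter([x - 1, x + 1])  # neighbours of x
num_evaluations, best_score, best = anneal(lambda: 0, moves, objective,
                                           2000, 10.0, 0.999)
```

Note how the wrapper records the best solution seen even though the current solution is free to wander downhill.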


It made sense to compare simulated annealing with hill-climbing, to see whether simulated annealing really helps us to stop getting stuck on local maximums.

I performed 100 runs of each algorithm on my randomly generated 100 city tour, once with 50000 and once with 100000 evaluations. Both algorithms used the reversed_sections move operator. For simulated annealing I chose an initial temperature and alpha that seemed to perform well.

evaluations  algorithm                                 average   s.d.    worst     best
50000        hill-climbing                             -4228.50  126.45  -4627.07  -3942.03
50000        annealing (start_temp=10, alpha=0.9999)   -4145.69   96.56  -4422.04  -3924.34
100000       hill-climbing                             -4154.25   90.60  -4513.11  -3946.65
100000       annealing (start_temp=10, alpha=0.99995)  -4077.40   71.72  -4294.97  -3907.19

These results seem to show that simulated annealing performed better than hill-climbing. In fact it can be seen that with just 50000 evaluations, simulated annealing was able to do a slightly better job than hill-climbing with 100000 evaluations! This makes sense, as when running hill-climbing with logging turned on I could see that after about 50000 evaluations hill-climbing was getting stuck and restarting. With more evaluations available it was possible to push simulated annealing further still.

However in both cases I had to perform several test runs to find reasonable starting temperatures and alpha values to get these kind of results. It was quite easy to set these parameters wrong and get much worse results than with hill-climbing.


Simulated annealing is a pretty reasonable improvement over hill-climbing. For a modest amount of extra code (in this case tens of lines) we are able to address hill-climbing's fundamental weakness (getting stuck) and yield much better results.

However by introducing two extra parameters we have shifted some of the burden in finding good solutions to ourselves. We have to tune these parameters carefully. Values that are good for one problem may not work so well for another. We end up with more "switches to flick" in the hope of making something work.

Next time around I will be discussing evolutionary algorithms, which pursue other ways to avoid getting stuck on local maximums and are also able to combine several solutions to explore new areas of the solution landscape.


Full source-code is available here as a .tar.gz file.

(The source-code contains more code than shown here, to handle parsing parameters passed to the program etc. I’ve not discussed this extra code here simply to save space.)

Tackling the travelling salesman problem: a little profiling

So after implementing hill-climbing, I thought it would be a worthwhile exercise to use Python’s profile module and see what the slow parts of the code were. To do this I set up a basic hill-climb on the 100 city data set and then called the run method of profile:

import tsp
import profile

# load the 100 city data set (the filename here is just an example)
coords=tsp.read_coords(file('city100.txt'))
matrix=tsp.cartesian_matrix(coords)

init_function=lambda: tsp.init_random_tour(len(coords))
objective_function=lambda tour: -tsp.tour_length(matrix,tour)

profile.run('tsp.hillclimb_and_restart(init_function,tsp.reversed_sections,objective_function,10000)')

This yielded the following (edited for clarity/brevity) output:

      122433 function calls in 4.710 CPU seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1    0.010    0.010    0.060    0.060
     1    0.190    0.190    4.640    4.640
     1    0.000    0.000    4.640    4.640
   607    0.730    0.001    1.220    0.002
     1    0.000    0.000    0.000    0.000
     1    0.000    0.000    4.700    4.700
 10106    0.110    0.000    1.350    0.000
 10000    0.440    0.000    1.790    0.000
 10000    2.180    0.000    2.470    0.000

Now this is pretty close to what I would expect – the tour_length function (our objective function) is taking up most of the time (2.470 seconds out of 4.710). For optimisation problems it is very common for the objective function to be the most time consuming part – which is why, when comparing different optimisation algorithms, a lot of attention is paid to how many times the objective function must be called.

However there was one unexpectedly expensive call in there: reversed_sections is using 1.790 seconds! Examining the output further shows that most of the time in reversed_sections is spent in all_pairs and most of the time in that function is spent in random.shuffle. So looking at all_pairs:

def all_pairs(size,shuffle=random.shuffle):
    '''generates all i,j pairs for i,j from 0-size'''
    r1=range(size)
    r2=range(size)
    if shuffle:
        shuffle(r1)
        shuffle(r2)
    for i in r1:
        for j in r2:
            yield (i,j)

We see that we call random.shuffle twice at the start of the generator function. However quite often, when hill-climbing, we will not run this generator fully. If we find a better solution we will accept it and move on – creating a new generator. So clearly shuffling two arrays of 100 elements each is overkill, if we don’t always use every element.

Instead we can modify all_pairs to only generate the random elements as it needs them:

def rand_seq(size):
    '''generates values in random order
    equivalent to using shuffle in random,
    without generating all values at once'''
    values=range(size)
    for i in xrange(size):
        # pick a random index into remaining values
        j=i+int(random.random()*(size-i))
        # swap the values
        values[j],values[i]=values[i],values[j]
        # return the swapped value
        yield values[i] 

def all_pairs(size):
    '''generates all i,j pairs for i,j from 0-size'''
    for i in rand_seq(size):
        for j in rand_seq(size):
            yield (i,j)
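
To convince ourselves the lazy version behaves the same, here’s a quick sketch (restating rand_seq and all_pairs in self-contained form) checking that every pair is still generated exactly once:

```python
import random

def rand_seq(size):
    '''generate 0..size-1 in a random order, swapping values
    into place lazily rather than shuffling up front'''
    values = list(range(size))
    for i in range(size):
        # pick a random index into the remaining values
        j = i + int(random.random() * (size - i))
        # swap the chosen value into position i, then yield it
        values[j], values[i] = values[i], values[j]
        yield values[i]

def all_pairs(size):
    for i in rand_seq(size):
        for j in rand_seq(size):
            yield (i, j)

pairs = list(all_pairs(5))
# all 25 (i,j) combinations appear, each exactly once
assert len(pairs) == 25 and len(set(pairs)) == 25
```

The order of the pairs differs from run to run, but the set of pairs generated is always the full 5×5 grid.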

Profiling this version gives us:

      82205 function calls in 3.970 CPU seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
     1    0.010    0.010    0.050    0.050
     1    0.230    0.230    3.910    3.910
     1    0.000    0.000    3.910    3.910
     1    0.010    0.010    0.010    0.010
     1    0.000    0.000    0.010    0.010
     1    0.000    0.000    3.960    3.960
 10100    0.150    0.000    0.420    0.000
 10000    0.520    0.000    0.950    0.000
 10000    2.300    0.000    2.560    0.000
 10507    0.190    0.000    0.270    0.000

As can be seen all_pairs (and reversed_sections) takes less of the total running time than before – leaving tour_length as the major bottleneck.


The profile module is a pretty handy way for spotting slow points in code that you may not have realised existed. In this case we’ve been able to speed up code that would not seem like it should be costly – giving us a modest speed boost (~15%) for not much extra effort.

However care is advised, as we didn’t address the slowest part of the code. Changes to that would probably yield much larger speed improvements. This is particularly true as our problem size increases. With 500 cities tour_length represents an even higher proportion of the total running time (~90%). As ever:

“premature optimisation is the root of all evil”

Tackling the travelling salesman problem: hill-climbing

This is the second part in my series on the “travelling salesman problem” (TSP). Part one covered defining the TSP and utility code that will be used for the various optimisation algorithms I shall discuss.

solution landscapes

A common way to visualise searching for solutions in an optimisation problem, such as the TSP, is to think of the solutions existing within a “landscape”. Better solutions exist higher up and we can take a step from one solution to another in search of better solutions. How we make steps will depend on the “move operators” we have available and will therefore also affect how the landscape “looks”. It will change which solutions are “adjacent” to each other. For a simple optimisation problem we can directly visualise the solution landscape:


The red dot represents our current solution. It should be pretty clear that if we simply carry on going “uphill” we’ll get to the highest point in this solution landscape.

If we are using evolutionary optimisation methods a solution landscape will often be referred to as a fitness landscape.


Hill-climbing, pretty much the simplest of the stochastic optimisation methods, works like this:

  1. pick a place to start
  2. take any step that goes “uphill”
  3. if there are no more uphill steps, stop;
     otherwise carry on taking uphill steps

Metaphorically the algorithm climbs up a hill one step at a time. It is a “greedy” algorithm and only ever takes steps that take it uphill (though it can be adapted to behave differently). This means that it is pretty quick to get to the top of a hill, but depending on where it starts it may not get to the top of the biggest hill:


As you can see our current solution (the red dot) can only go downhill from its current position – yet it is not at the highest point in the solution landscape.

The “biggest” hill in the solution landscape is known as the global maximum. The top of any other hill is known as a local maximum (it’s the highest point in the local area). Standard hill-climbing will tend to get stuck at the top of a local maximum, so we can modify our algorithm to restart the hill-climb if need be. This will help hill-climbing find better hills to climb – though it’s still a random search of the initial starting points.

objective and initialisation functions

To get started with the hill-climbing code we need two functions:

  • an initialisation function – that will return a random solution
  • an objective function – that will tell us how “good” a solution is

For the TSP the initialisation function will just return a tour of the correct length that has the cities arranged in a random order.

The objective function will return the negated length of a tour/solution. We do this because we want to maximise the objective function, whilst at the same time minimising the tour length.

As the hill-climbing code won’t know specifically about the TSP we need to ensure that the initialisation function takes no arguments and returns a tour of the correct length and the objective function takes one argument (the solution) and returns the negated length.

So assuming we have our city co-ordinates in a variable coords and our distance matrix in matrix we can define the objective function and initialisation functions as follows:

def init_random_tour(tour_length):
    tour=range(tour_length)
    random.shuffle(tour)
    return tour

init_function=lambda: init_random_tour(len(coords))
objective_function=lambda tour: -tour_length(matrix,tour) #note negation

This relies on closures to associate len(coords) with the init_random_tour function and matrix with the tour_length function. The end result is two functions, init_function and objective_function, that are suitable for use in the hill-climbing function.
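
To make that concrete, here’s a small sketch wiring the two functions up for a toy four-city problem – it restates cartesian_matrix, tour_length and init_random_tour from part one so it stands alone, and the square co-ordinates are just an example:

```python
import math, random

# restating the part-one helpers so this sketch stands alone
def cartesian_matrix(coords):
    matrix = {}
    for i, (x1, y1) in enumerate(coords):
        for j, (x2, y2) in enumerate(coords):
            matrix[i, j] = math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
    return matrix

def tour_length(matrix, tour):
    total = 0
    for i in range(len(tour)):
        total += matrix[tour[i], tour[(i + 1) % len(tour)]]
    return total

def init_random_tour(tour_length):
    tour = list(range(tour_length))
    random.shuffle(tour)
    return tour

coords = [(0, 0), (1, 0), (1, 1), (0, 1)]  # four cities on a unit square
matrix = cartesian_matrix(coords)

init_function = lambda: init_random_tour(len(coords))
objective_function = lambda tour: -tour_length(matrix, tour)  # note negation

tour = init_function()
assert sorted(tour) == [0, 1, 2, 3]  # always a permutation of the cities
# the best possible tour walks the square's perimeter, scoring -4.0;
# any crossing tour is longer, so scores lower
assert objective_function(tour) <= -4.0
```

Whatever random permutation we start from, the score is at most -4.0, with -4.0 itself only achieved by a perimeter tour.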

the basic hill-climb

The basic hill-climb looks like this in Python:

def hillclimb(init_function,move_operator,objective_function,max_evaluations):
    '''hillclimb until either max_evaluations
    is reached or we are at a local optima'''
    best=init_function()
    best_score=objective_function(best)
    num_evaluations=1
    
    while num_evaluations < max_evaluations:
        # examine moves around our current position
        move_made=False
        for next in move_operator(best):
            if num_evaluations >= max_evaluations:
                break
            # see if this move is better than the current
            next_score=objective_function(next)
            num_evaluations+=1
            if next_score > best_score:
                best=next
                best_score=next_score
                move_made=True
                break # depth first search
        if not move_made:
            break # we couldn't find a better move 
                  # (must be at a local maximum)
    return (num_evaluations,best_score,best)

(I’ve removed some logging statements for clarity)

The parameters are as follows:

  • init_function – the function used to create our initial solution
  • move_operator – the function we use to iterate over all possible “moves” for a given solution (for the TSP this will be reversed_sections or swapped_cities)
  • objective_function – used to assign a numerical score to a solution – how “good” the solution is (as defined above for the TSP)
  • max_evaluations – used to limit how much search we will perform (how many times we’ll call the objective_function)
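
To see the hill-climb in action on something smaller than the TSP, here’s a sketch (restating hillclimb so it stands alone) climbing a toy one-dimensional landscape – the single peak at 42 and the ±1 move operator are made up purely for illustration:

```python
def hillclimb(init_function, move_operator, objective_function, max_evaluations):
    '''hillclimb until either max_evaluations is reached
    or we are at a local optimum'''
    best = init_function()
    best_score = objective_function(best)
    num_evaluations = 1
    while num_evaluations < max_evaluations:
        move_made = False
        for next in move_operator(best):
            if num_evaluations >= max_evaluations:
                break
            next_score = objective_function(next)
            num_evaluations += 1
            if next_score > best_score:
                best, best_score, move_made = next, next_score, True
                break  # depth first search
        if not move_made:
            break  # no better move: we're at a local maximum
    return (num_evaluations, best_score, best)

# toy landscape: integers, with a single peak at 42
objective = lambda x: -abs(x - 42)
moves = lambda x: iter([x - 1, x + 1])  # neighbours of x
evals, score, best = hillclimb(lambda: 0, moves, objective, 1000)
# starting at 0 the climb walks up the single hill and stops at the peak
assert best == 42 and score == 0
```

With only one hill in the landscape the greedy climb always finds the peak; it’s the multi-hill case where it gets stuck.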

hill-climb with random restart

With a random restart we get something like:

def hillclimb_and_restart(init_function,move_operator,objective_function,max_evaluations):
    '''repeatedly hillclimb until max_evaluations is reached'''
    best,best_score=None,0
    num_evaluations=0
    while num_evaluations < max_evaluations:
        remaining=max_evaluations-num_evaluations
        evaluated,score,found=hillclimb(init_function,move_operator,objective_function,remaining)
        num_evaluations+=evaluated
        if best is None or score > best_score:
            best,best_score=found,score
    return (num_evaluations,best_score,best)

The parameters match those of the hillclimb function.

This function simply calls hillclimb repeatedly until we have hit the limit specified by max_evaluations, whereas hillclimb on its own will not necessarily use all of the evaluations assigned to it.


Running the two different move operators (reversed_sections and swapped_cities – see part one for their definitions) on a 100 city tour produced some interesting differences.

Ten runs of 50000 evaluations (calls to the objective function) yielded:

reversed_sections  swapped_cities
-4039.86           -6451.12
-4075.04           -6710.48
-4170.41           -6818.04
-4178.59           -6830.57
-4199.94           -6831.48
-4209.69           -7216.22
-4217.59           -7357.70
-4222.46           -7603.30
-4243.78           -7657.69
-4294.93           -7750.79

(these are the scores from the objective function and represent negative tour length)

In this case reversed_sections clearly performed much better. The best solution for reversed_sections looked like:


Whereas the best for swapped_cities is clearly much worse:


It’s pretty clear then that reversed_sections is the better move operator. This is most likely due to it being less “destructive” than swapped_cities, as it preserves entire sections of a route, yet still affects the ordering of multiple cities in one go.


As can be seen hill-climbing is a very simple algorithm that can produce good results – provided one uses the right move operator.

However it is not without its drawbacks and is prone to getting stuck at the top of “local maximums”.

Most of the other algorithms I will discuss later attempt to counter this weakness in hill-climbing. The next algorithm I will discuss (simulated annealing) is actually a pretty simple modification of hill-climbing, but gives us a much better chance at finding the global maximum for a given solution landscape.


Full source-code is available here as a .tar.gz file. Again some unit tests are included, which can be run using nosetests.

(The source-code contains more code than shown here, to handle parsing parameters passed to the program etc. I’ve not discussed this extra code here simply to save space.)

Tackling the travelling salesman problem: introduction

This is the first part in my series on the “travelling salesman problem” (TSP). An outline of what I plan to cover can be seen in the prologue.

To kick things off here’s a quick quote:

The traveling salesman problem (TSP) asks for the shortest route to visit a collection of cities and return to the starting point. Despite an intensive study by mathematicians, computer scientists, operations researchers, and others, over the past 50 years, it remains an open question whether or not an efficient general solution method exists. [1]

The TSP is an NP-hard problem. That does not necessarily mean any one instance of the problem will be hard to solve; it just means that we do not currently have an algorithm that can give us the guaranteed best solution for all problems in “polynomial time”. We can’t make predictions about how long it might take to find the best solution to the TSP just by looking at the data. If one problem took 2 minutes to solve, we have no way of knowing how long a problem twice as large will take.

Although we might not be able to make predictions about finding the best solution, we often only want a good solution to the TSP. We aren’t always so worried if we find a route amongst 1000 cities that is only a few miles longer than the best solution – particularly if it would take an inordinate amount of computing time to get from the good solution we already have to the best solution.

stochastic optimisation

As we often just want a good solution fairly quickly we can turn to stochastic optimisation methods. We take randomly generated routes through the cities and incrementally improve/change them in some fashion to search for a better route. How these changes are made depends on the algorithm being used, but there are a couple of simple approaches we can take, that I will outline here:

  • swapping the position of two cities on a route
  • reversing the order of an entire section of a route

These simple operators – so called as they “operate” on a route to create a new one – will form the initial basis of my code. There are more complex operators, but for simplicity I shall not talk about them here.

finally on with some code….

I will be using standard Python lists to represent a route (or tour as I refer to it in my code – a name borrowed from graph theory) through a collection of cities. Each city will simply be assigned a number from 0 to N-1 (where N is the number of cities) and therefore our list of cities will be a list of unique numbers between 0 and N-1.

We also need to specify a “distance matrix” that we can use to find out the distance between two cities. To generate a distance matrix for a set of x,y co-ordinates the following will do nicely:

def cartesian_matrix(coords):
    '''create a distance matrix for the city coords
      that uses straight line distance'''
    matrix={}
    for i,(x1,y1) in enumerate(coords):
        for j,(x2,y2) in enumerate(coords):
            dx,dy=x1-x2,y1-y2
            dist=sqrt(dx*dx + dy*dy)
            matrix[i,j]=dist
    return matrix

cartesian_matrix() takes a Python list of (x,y) tuples and outputs a Python dictionary that contains the distance between any two cities:

>>> matrix=cartesian_matrix([(0,0),(1,0),(1,1)])
>>> print matrix
{(0, 1): 1.0, (1, 2): 1.0, (0, 0): 0.0, (2, 1): 1.0, 
(1, 1): 0.0, (2, 0): 1.4142135623730951, (2, 2): 0.0, 
(1, 0): 1.0, (0, 2): 1.4142135623730951}
>>> print matrix[1,2]
1.0

Where matrix[1,2] gives the distance between city number 1 and city number 2. In our case this is the same as matrix[2,1], but for some TSPs it may not be (for example if there is a one-way street between locations/cities that means we have to take a long way round when going in one direction).

In addition to generating the distance matrix we will probably also want to read the city co-ordinates from a text file (one x,y per line):

def read_coords(coord_file):
    '''read the city co-ordinates (one "x,y" per line)'''
    coords=[]
    for line in coord_file:
        x,y=line.strip().split(',')
        coords.append((float(x),float(y)))
    return coords

That should be sufficient for generating distance matrices for now. On real world problems generating a distance matrix may be more involved – you might need to take map data and calculate what the actual distance by road between any two cities is. This process can be done offline, before we start optimising our routes and is a subject for another time.

Ok, so now we can read in a list of cities from a file and generate our distance matrix. What next? Well it would be good if we knew how long a route was:

def tour_length(matrix,tour):
    total=0
    num_cities=len(tour)
    for i in range(num_cities):
        total+=matrix[tour[i],tour[(i+1)%num_cities]]
    return total

Where matrix is a distance matrix and tour is a list of cities (as integers).
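
Note that the tour is treated as a cycle – the length includes the leg from the last city back to the first. A quick sketch checking this against the three-city matrix from above (restating cartesian_matrix and tour_length so it stands alone):

```python
from math import sqrt

def cartesian_matrix(coords):
    '''distance matrix keyed on (i,j) city index pairs'''
    matrix = {}
    for i, (x1, y1) in enumerate(coords):
        for j, (x2, y2) in enumerate(coords):
            matrix[i, j] = sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2)
    return matrix

def tour_length(matrix, tour):
    total = 0
    num_cities = len(tour)
    for i in range(num_cities):
        # (i+1) % num_cities wraps the final leg back to the start
        total += matrix[tour[i], tour[(i + 1) % num_cities]]
    return total

matrix = cartesian_matrix([(0, 0), (1, 0), (1, 1)])
# 0 -> 1 -> 2 -> back to 0: 1 + 1 + sqrt(2)
assert abs(tour_length(matrix, [0, 1, 2]) - (2 + sqrt(2))) < 1e-9
```
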

implementing the operators

I am going to implement the two operators as generator functions, that will return (in a random order) all of the possible versions of a route that can be made in one step of the operator. By using a generator function we can assess each different possibility and perhaps decide to not generate any more variations – which saves us the overhead of generating all of the combinations in one go. Both operators rely on the following generator function:

def all_pairs(size,shuffle=random.shuffle):
    r1=range(size)
    r2=range(size)
    if shuffle:
        shuffle(r1)
        shuffle(r2)
    for i in r1:
        for j in r2:
            yield (i,j)

Which will generate all pairings of the numbers from 0 to size-1 as (i,j) tuples in a random order (it needs the random module imported to work).

So each operator then looks like:

def swapped_cities(tour):
    '''generator to create all possible variations
      where two cities have been swapped'''
    for i,j in all_pairs(len(tour)):
        if i < j:
            copy=tour[:]
            copy[i],copy[j]=copy[j],copy[i]
            yield copy


def reversed_sections(tour):
    '''generator to return all possible variations 
      where the section between two cities is reversed'''
    for i,j in all_pairs(len(tour)):
        if i != j:
            copy=tour[:]
            if i < j:
                copy[i:j+1]=reversed(tour[i:j+1])
            else:
                # section wraps around the end of the tour
                copy[i+1:]=reversed(tour[:j])
                copy[:j]=reversed(tour[i+1:])
            if copy != tour: # no point returning the same tour
                yield copy

Note I'm using copy=tour[:] to duplicate the route, rather than copy=list(tour) (or similar). Although I am currently using a Python list to represent a tour I may in the future want to use something else that is merely “list-like”, in which case I could just override __getitem__ to return an object of the appropriate type.

Using both operators looks like:

>>> for tour in swapped_cities([1,2,3]):
...     print tour
[3, 2, 1]
[2, 1, 3]
[1, 3, 2]
>>> for tour in reversed_sections([1,2,3]):
...     print tour
[3, 2, 1]
[2, 1, 3]
[2, 3, 1]
[3, 1, 2]
[1, 3, 2]

As you can see reversed_sections gives us a lot more possibilities - even with only 3 cities. I'll compare how these two operators stack up in the next part of the series.
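
We can verify those counts with a quick self-contained sketch of both operators (restated from above) – the generation order is random, but the set of variations produced is fixed:

```python
import random

# restating the operators so this check stands alone
def all_pairs(size, shuffle=random.shuffle):
    r1 = list(range(size))
    r2 = list(range(size))
    if shuffle:
        shuffle(r1)
        shuffle(r2)
    for i in r1:
        for j in r2:
            yield (i, j)

def swapped_cities(tour):
    for i, j in all_pairs(len(tour)):
        if i < j:
            copy = tour[:]
            copy[i], copy[j] = copy[j], copy[i]
            yield copy

def reversed_sections(tour):
    for i, j in all_pairs(len(tour)):
        if i != j:
            copy = tour[:]
            if i < j:
                copy[i:j+1] = reversed(tour[i:j+1])
            else:
                # section wraps around the end of the tour
                copy[i+1:] = reversed(tour[:j])
                copy[:j] = reversed(tour[i+1:])
            if copy != tour:
                yield copy

swaps = set(tuple(t) for t in swapped_cities([1, 2, 3]))
revs = set(tuple(t) for t in reversed_sections([1, 2, 3]))
# 3 variations from swapping, 5 from reversing sections
assert len(swaps) == 3 and len(revs) == 5
```
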

additional code

The last piece of utility code for today generates a .png of a route overlaid on top of the city co-ordinates (using PIL):

import Image, ImageDraw # from PIL

def write_tour_to_img(coords,tour,img_file):
    padding=20
    # shift all coords in a bit
    coords=[(x+padding,y+padding) for (x,y) in coords]
    maxx=max(x for x,y in coords)+padding
    maxy=max(y for x,y in coords)+padding
    img=Image.new("RGB",(int(maxx),int(maxy)),color=(255,255,255))
    d=ImageDraw.Draw(img)
    num_cities=len(tour)
    # draw the route as lines between successive cities (wrapping round)
    for i in range(num_cities):
        j=(i+1)%num_cities
        d.line(coords[tour[i]]+coords[tour[j]],fill=(0,0,0))
    # mark each city with a circle
    for x,y in coords:
        d.ellipse((x-5,y-5,x+5,y+5),outline=(0,0,0),fill=(196,196,196))
    del d
    img.save(img_file,"PNG")

Which produces output along these lines:



Well that's it for part one. At this point you can create a distance matrix, see how long a route is, create a few different variations of a route and create an image of a route. The missing part is being able to generate a good route. I'll save that for the next part, where I'll discuss the simplest stochastic optimisation method: Hill-climbing.


Full source-code is available here as a .tar.gz file. I've included some unit tests which can be run using nosetests.



Tackling the travelling salesman problem: prologue

The Travelling Salesman Problem (TSP) is a classic combinatorial optimisation problem. It also happens to be a problem I have spent various parts of my life looking at.

At my first job out of uni we worked on solving real-world problems that usually involved the TSP in some fashion. Often the problems we tackled were, in effect, the TSP with extra constraints or variants along those lines. A straightforward example being having multiple “salesman” and trying to assign each one a route (that didn’t overlap). The approach in that first job was mainly to use evolutionary algorithms.

Then during my MSc I took the course Nature Inspired Optimisation. This was by far my favourite course during my MSc and saw us applying different optimisation algorithms to several different problems. Kennon and I were assigned the TSP to work on and had a lot of fun implementing various different algorithms to tackle it.

Anyway, I thought it would be interesting to give a brief guided tour of the different algorithms we used during the MSc to tackle the travelling salesman problem. I’m no expert, but I do have a fair bit of experience with this problem and there are some fairly straightforward algorithms out there, which are good to have in your arsenal.

So my intention for this series is to cover (over the next weeks/months/however long):

  • hill-climbing
  • simulated annealing
  • evolutionary algorithms

That’s the bare minimum for now. I may well cover a few more algorithms (e.g. ant algorithms), but those three algorithms provide quite a good starting point.

I will be writing everything from scratch, as I go, in Python. In general I’ll be aiming for clarity of code – rather than pure speed and Python should work well for that purpose.