# Local Search Algorithms in NetLogo

Last modified: October 9, 2017


In the basic search algorithms covered in previous posts (BFS and A*), when a goal is found in the search space associated with the problem, the solution is given as a path connecting the initial state with the goal state. In these cases we are interested not only in the structure of the final state (which we may even know in advance) but in how to reach it through a sequence of operations from a specific (commonly random) initial state.

But in many cases we don't know the structure of the final solution, only some properties it must satisfy, and the path does not matter: if we use a path at all, it is only as a means to obtain the solution. This fact can be illustrated with the familiar 8 Queens problem:

Given a standard chess board ($$8\times 8$$ cells) and 8 queens, place them on the board in such a way that no two queens attack each other (that is, no queen shares a row, column, or diagonal with another).

We can generalize this problem to play with $$N$$ queens on an $$N\times N$$ board. From a theoretical point of view it's a simple problem, but we will see that it easily illustrates the importance of having good algorithms. On a small board like $$4\times 4$$, a very simple way to solve it is by brute force: test every placement of the queens. In this small case, the state space (the different positions the queens can occupy, one per row and column) is really small, only $$4!=24$$ positions. When playing on the standard board we obtain $$8!=40,320$$ positions, a bigger number that we can still manage by brute force in a short time. However, this search space rapidly gets much larger. If we extend the problem to $$12$$ queens we get a space with $$12!=479,001,600$$ states, and if we move to $$15$$ or $$20$$ queens we obtain $$1,307,674,368,000$$ or $$2,432,902,008,176,640,000$$ states, respectively... which changes the search from seconds to years.
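To make this growth concrete, the brute-force search is only a few lines; here is an illustrative sketch in Python (the NetLogo models come later) that tests every permutation of column positions — the permutation representation already guarantees one queen per row and per column, so only the diagonals need checking:

```python
from itertools import permutations

def n_queens_brute_force(n):
    """Count solutions by testing all n! placements, one queen per
    row and column (a permutation of column indices)."""
    solutions = 0
    for cols in permutations(range(n)):
        # Two queens attack diagonally when |r1 - r2| == |c1 - c2|
        if all(abs(r1 - r2) != abs(cols[r1] - cols[r2])
               for r1 in range(n) for r2 in range(r1 + 1, n)):
            solutions += 1
    return solutions

print(n_queens_brute_force(8))  # 92 solutions among 8! = 40,320 candidates
```

Already at $$n=12$$ this loop has to examine almost half a billion permutations, which is exactly the explosion described above.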

Of course, in this particular problem there are ways to reduce the search space by using symmetries and similar tricks, but the problem remains the same: the search space grows at a factorial rate, while those reductions only shave off a constant factor. We need other tools to get better algorithms for browsing the space in search of the solution.

In the previous search algorithms we used a set of operations that usually have a natural meaning in the real world (where the problem comes from), and the solution path is in fact a sequence of legal moves transforming the initial state into the goal state. Since we now aim to obtain a solution state regardless of the way to reach it, we can use operations that may have no meaning in the real world; they are only ways to transform states into (hopefully) better states with respect to the goal.

In this post we will see implementations of a couple of methods that, starting from initial states, search the state space by local moves that take us closer to the solution. The bigger and more complex the space, the more difficult it is to find the solution; hence we will not be interested in the "best" path to reach the goal, and maybe we won't reach the goal in optimal time, but frequently it is good enough to reach it, or to find a state very similar to a solution (that is, sometimes we trade the best solution for a good enough one).

## Hill Climbing Methods

The basic Hill Climbing algorithm is an iterative algorithm that starts with a random state in the search space and then attempts to find a better solution by changing some part of the structure of the state. If the change produces a better option, we accept it and repeat the procedure until no further improvement can be found.

In order to measure the quality of the states, we need some kind of function that provides this information:

• If we measure how far we are from the best solution, the function is called error or distance, and our problem becomes one of minimizing it.
• If we measure how good the states are, it is called fitness, and our problem becomes one of maximizing it.
• In either case, the solution now corresponds to an optimal value of the function, and we have transformed the search problem into an optimization problem.

We can picture the problem as an agent that browses the search space moving from state to state, improving the function like a climber trying to get higher on a mountain. Of course, this method has limitations and is not guaranteed to succeed, as the climber can reach a local extreme that is not the global optimum (that is, the climber improves the state until it reaches one from which no neighbouring state is better).
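The loop behind this idea is very short. As an illustrative sketch (in Python, with a toy one-dimensional landscape that is purely hypothetical), a greedy climber looks like this:

```python
def hill_climb(height, neighbors, start):
    """Repeatedly move to the best neighbour until none improves."""
    current = start
    while True:
        best = max(neighbors(current), key=height, default=current)
        if height(best) <= height(current):
            return current  # local maximum: no better neighbour
        current = best

# Toy landscape on the integers 0..99 with a single peak at x = 70
height = lambda x: -(x - 70) ** 2
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 99]

print(hill_climb(height, neighbors, 5))  # climbs to the peak: 70
```

On a landscape with several peaks, the same loop stops at whichever local maximum is uphill from the starting point, which is exactly the limitation just described.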

Under some assumptions (if the function to optimize is continuous and differentiable on the search space... think of functions between real numbers if that is easier), the hill climbing algorithm is closely related to the Gradient Descent method for optimization.

With this method as a basis, we can create slight variants that improve the results in several ways. The one we will implement in NetLogo works with a number of simultaneous climbers that, in any state, look at all the states reachable from it by simple operations and take the best option with respect to the optimization function (in any state, they can evaluate this function to compare neighbouring states).

In order to show how to implement this type of searcher in NetLogo, we will work with a very specific and graphical situation: every patch of the world holds a numerical value, and we aim to find the patch with the highest value. To simplify the code, we will use pcolor as the value to maximize on patches.

Let's start by commenting on this Hill Climbing implementation on patches.

We define a breed of agents that will browse the space while searching. As a climber can reach a local maximum, we will store in the agent whether it is on a patch that has no better neighbours.

breed [climbers climber]

climbers-own [
  on-hill? ; shows if the climber has reached a hill
]

globals [
  num-states ; stores the number of states visited by the climbers during the search process
]


In the setup procedure we create a random landscape (by using the diffuse primitive) that stores values in the pcolor property of the patches. After that, we create some climbers at random initial positions (they will have the pen down to draw the path they follow during the execution).

to setup
  ca
  ; Create a landscape with hills and valleys. The height is stored in the patch color
  ask n-of 100 patches [ set pcolor 120 ]
  repeat 20 [ diffuse pcolor 1 ]
  ; Create some climbers on random patches
  ; (num-climbers is assumed to be a slider in the interface)
  ask n-of num-climbers patches [
    sprout-climbers 1 [
      set on-hill? false
      set color red
      set shape "circle"
      pen-down
    ]
  ]
  set num-states 0
  reset-ticks
end


In the go procedure we code one step of the iterative algorithm; later we will call it from a forever button. This procedure stops when all the climbers have reached a local maximum... hopefully, some of them have found the global maximum and, consequently, we have a correct solution to the problem. Otherwise, we still obtain states better than the original ones, and some approximation to the optimum.

For this case NetLogo provides a built-in turtle procedure, uphill, that moves the agent to the neighbouring patch with the best value of a desired patch property. During this procedure we know that the climber has reached a local maximum if the best neighbour is the patch the agent is already on.

to go
  ; Stop when all the climbers are on a hill
  ifelse all? climbers [on-hill?]
  [
    ; Highlight the maximum height and the optimum reached by the climbers
    ask climbers [ set label precision pcolor 2 ]
    ask one-of (climbers with-max [pcolor]) [
      set size 3
      set color blue
    ]
    ask patches with-max [pcolor] [ set plabel precision pcolor 2 ]
    stop
  ]
  [
    ; Move the climbers uphill
    ask climbers with [not on-hill?] [
      let current-patch patch-here
      ; The uphill primitive moves the climber to the highest neighbouring patch
      uphill pcolor
      ifelse current-patch = patch-here
      [ set on-hill? true ]
      [ set num-states num-states + 1 ]
    ]
  ]
  tick
end


Of course, the efficiency of the algorithm depends on the roughness of the function (which reflects the complexity of the state space) and on the number of climbers. In the next plot we can see the ratio of success in finding the optimal value while varying the number of climbers from $$0$$ to $$100$$. In this experiment, for every number of climbers we have performed $$100$$ repetitions. Take into account that the size of the space is the number of patches in the world ($$100\times 100= 10,000$$) and that every climber visits an average of $$8$$ states before reaching a hill, so the proportion of states visited during the search is really low.

Now we will present a slight variant that optimizes a height function on a network. The idea is the same: we have climbers on the nodes of the network, and they move to neighbouring nodes (connected by links of the network) when those have higher values of the function. In the following code we highlight only the differences; you can find the complete code in the model.

Now we have two breeds of agents: one for the nodes of the network and another for the climbers. Note that we need a property on the climbers to store the localization (the node) where each one is.

breed [nodes node]       ; nodes of the network
breed [climbers climber] ; agents that will make the search

climbers-own [
  on-hill?     ; shows if the climber has reached a hill
  localization ; the node where the climber is
]

nodes-own [
  height ; value of the nodes to optimize
]


We will not show the rest of the code here (download the model file to see it), but only explain its particularities. To create a coherent network valid for the algorithm, we build a height landscape on the patches and then project these values onto a geometric network built over them. After that, since there is no uphill-like procedure for networks, we must explicitly calculate the best neighbour and move the climber there. Apart from this, the rest is very similar to the previous patches case.

## Simulated Annealing

If you have played a little with the previous models, you will have noticed that there are many cases where the hill climbing algorithms get stuck in mediocre local optima. For these cases there exist algorithms that can often give better solutions by sometimes allowing moves that go against improvement. We will see in this section the most common of them: Simulated Annealing.

This algorithm is inspired by a physical phenomenon observed in metal annealing and crystallization processes. Following this inspiration, an energy is associated with every state in our search space. This energy function reaches its minimum (usually $$0$$, but not necessarily) in the solution we look for. Hence, the search has again been transformed into an optimization problem, where a global minimum is sought. A searcher in this space will try to descend locally by comparing energies in order to reach a minimum... exactly as in hill climbing. The difference now is that we can provide extra energy to the searcher (in the form of temperature) that allows it to move to worse states (with higher energy, so theoretically further from the minimum). With this little trick we can prevent the searcher from getting stuck in a local minimum, by jumping over small hills to see if, behind them, there is a better optimum.

If we allowed this behaviour indefinitely, the searcher would never settle on an optimum and would keep jumping forever from one state to another. The solution is to promote exploration of the space at the beginning by allowing high temperatures (extra energy) and, as the search goes on, to reduce this temperature to discourage jumps and promote exploitation of a good enough area of states (in a hill-climbing-like fashion).
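As an illustrative sketch (in Python), the whole scheme is just an acceptance rule plus a cooling schedule. Here we use the classical Metropolis rule, where a worsening move is accepted with probability $$e^{-\Delta E/T}$$; note that the NetLogo models below use slight variants of this rule:

```python
import math
import random

def accept(old_energy, new_energy, temperature):
    """Always accept improvements; accept worsenings with
    probability exp(-(new - old) / T), which vanishes as T cools."""
    if new_energy <= old_energy:
        return True
    return random.random() < math.exp((old_energy - new_energy) / temperature)

def anneal(energy, neighbor, state, t0=1.0, cooling=0.999, steps=10000):
    """Generic simulated annealing loop with geometric cooling."""
    temperature = t0
    for _ in range(steps):
        candidate = neighbor(state)
        if accept(energy(state), energy(candidate), temperature):
            state = candidate
        temperature *= cooling  # geometric cooling schedule
    return state

# Toy problem: minimize (x - 3)^2 over the integers by random +/-1 moves
f = lambda x: (x - 3) ** 2
step = lambda x: x + random.choice((-1, 1))
```

At high temperature almost every move is accepted (exploration); as the temperature decays, the rule degenerates into pure hill climbing (exploitation).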

As this method allows longer jumps than usual hill climbing, it can be applied to search spaces that are bigger, rougher, and more complex in structure than the common ones... but it can still get stuck in local optima.

You can find a nice example implementation of this algorithm in the NetLogo Models Library. Here we will present two more samples, trying to make clearer how to use this algorithm on real problems.

We will not introduce here what a Sudoku is... a simple query in any search engine will provide you with more information, examples, and interactive games than you can consume in the next few years.

We will use Simulated Annealing to build a model that generates valid Sudoku boards (a $$9\times 9$$ board verifying all the restrictions). In this problem, a state will be any distribution of numbers on the grid that uses 9 copies of every number between 1 and 9, not necessarily verifying the restrictions. Just to give an idea of the size of this state space: the number of valid Sudoku grids for the standard $$9\times 9$$ grid was calculated by Bertram Felgenhauer and Frazer Jarvis in 2005 to be 6,670,903,752,021,072,936,960. The result was derived through logic and brute-force computation. Russell and Jarvis also showed that, when symmetries are taken into account, there are 5,472,730,538 essentially different solutions. We leave to the reader the computation of the number of states to browse in our search for one of the valid grids.

If you want to get more mathematical information about Sudoku, you can visit this wonderful page on Wikipedia.

This problem is very appropriate for local search methods, as we can easily transform states into similar states (by swapping 2 cells, for example), and measure the energy of a state by measuring the failed restrictions in the grid (in fact, we will compute in the constraints variable a measure of the failed restrictions). Specifically, for every cell we compute three different values (that will be summed up):

• We count the number of cells with the same value in the same row.
• We count the number of cells with the same value in the same column.
• We count the number of cells with the same value in the same block.
• Finally, we sum up these three numbers, $$s$$, and return $$e^s$$. Later, we will use these constraints to decide probabilistically which cells will be swapped. By using the exponential, cells with higher error become exponentially more likely to be chosen.
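Before looking at the NetLogo code, the weighting idea can be sketched in Python (the conflict counts below are made up for the example); random.choices plays the role that the rnd extension will play in the model:

```python
import math
import random

def cell_energy(same_row, same_col, same_block):
    """e^s, where s is the total number of conflicts of the cell."""
    return math.exp(same_row + same_col + same_block)

# Hypothetical conflict counts for four cells
cells = ["a", "b", "c", "d"]
weights = [cell_energy(0, 0, 0),  # no conflicts -> weight e^0 = 1
           cell_energy(1, 0, 0),  # weight e^1
           cell_energy(1, 2, 0),  # weight e^3
           cell_energy(2, 2, 2)]  # weight e^6

# Cells with more conflicts are exponentially more likely to be drawn
picked = random.choices(cells, weights=weights, k=1)[0]
```

This exponential weighting focuses the swaps on the most conflicted cells without ever completely ruling out the rest.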

Taking into account that we will store the Sudoku grid on the patches (in a world of size $$9\times 9$$), and that we add a property to patches storing the block it belongs to, the code in NetLogo that implements this calculation is:

patches-own [
  value       ; Value in the cell
  block       ; 3x3 block the cell belongs to
  constraints ; Measure of the constraints the cell fails in the current state:
              ;   number of cells in the same row with the same value
              ;   + number of cells in the same column with the same value
              ;   + number of cells in the same block with the same value.
              ;   Finally, we take the exponential of this sum.
]

to compute-constraints
  ask patches [
    set constraints e ^ (same-in-row + same-in-column + same-in-block)
  ]
end

; Auxiliary reports to compute the different constraints.
; The -1 discounts the cell itself.
to-report same-in-row
  let x pxcor
  report -1 + length filter [? = value] ([value] of patches with [pxcor = x])
end

to-report same-in-column
  let y pycor
  report -1 + length filter [? = value] ([value] of patches with [pycor = y])
end

to-report same-in-block
  let b block
  report -1 + length filter [? = value] ([value] of patches with [block = b])
end


We must consider that, when a cell has no failed constraints (it verifies all the restrictions), the energy it contributes to the system is $$1$$ ($$e^0$$); consequently, the minimal energy we must reach to be sure that we have a valid Sudoku state is $$81$$ (there are $$81$$ cells).

Temperature and global-energy are global variables that store what their names suggest. The setup procedure simply creates an initial state (where several restrictions may fail) and sets the parameters to launch the search:

to setup
  clear-all
  ; Fill the initial data on patches
  ask patches [
    ; Block the cell belongs to (9 blocks of 3x3)
    set block (3 * int (pycor / 3)) + 1 + int (pxcor / 3)
    ; Each block is filled with a slightly different color
    set pcolor scale-color black block -1 12
    ; Initial value in the cell: 1..9, ordered along each row
    set value 1 + pxcor
    set plabel value
  ]
  ; Make some random swaps of the cells
  repeat 100 [
    swap-values (one-of patches) (one-of patches)
  ]
  ; Compute the constraints of every cell
  compute-constraints
  ; The global energy of the system is the sum of the local energies (constraints) of the cells
  set global-energy (sum [constraints] of patches)
  ; Update the view
  update-view
  ; Initial temperature
  set temperature 1
  ; Reset the tick counter
  reset-ticks
end

; Auxiliary swap procedure between values of cells
to swap-values [p1 p2]
  let temp [value] of p1
  ask p1 [ set value [value] of p2 ]
  ask p2 [ set value temp ]
end

; Update View procedure. It gives a scale-red color to the label of the cell
; according to the relative energy of the cell (dark: correct, bright: higher energy).
; It also plots the energy.
to update-view
  let M max [constraints] of patches
  ask patches [
    set plabel-color scale-color red constraints 0 M
    set plabel value
  ]
  plot global-energy
end


The go procedure, and the auxiliary reports that implement one step of the iterative process of the Simulated Annealing algorithm, are coded as follows (we have made use of the rnd extension, which allows selecting an agent from an agentset with probability proportional to some internal property, in this case the constraints):

to go
  ; In every tick, with the same temperature, we make some swaps
  repeat 5 [
    ; The swap is made between 2 random cells, chosen with probability proportional
    ; to their constraints, so the worst cells are changed with higher probability
    let p1 rnd:weighted-one-of patches [constraints]
    let p2 rnd:weighted-one-of patches [constraints]
    try-swap p1 p2
  ]
  ; Cool the system (cooling-rate is a slider in the interface)
  set temperature temperature * (1 - cooling-rate / 100)
  ; Compute the energy after the swaps
  set global-energy (sum [constraints] of patches)
  update-view
  ; The minimum energy of the system is 81
  if global-energy = 81 [stop]
  tick
end

; Swap attempt between cells p1 and p2
to try-swap [p1 p2]
  ; If they have the same value, there is no need to swap them
  if ([value] of p1 = [value] of p2) [ stop ]
  ; Try the swap
  swap-values p1 p2
  ; Compute the new energy after the swap
  compute-constraints
  let new-energy (sum [constraints] of patches)
  ifelse (accept-swap? global-energy new-energy)
  [
    ; If the swap is accepted, set the new energy as global
    set global-energy new-energy
  ]
  [
    ; If not, swap them back and recompute the constraints
    swap-values p1 p2
    compute-constraints
  ]
end

; A swap is accepted if it improves the energy, or randomly depending on the
; temperature of the system (this is the central idea of the algorithm)
to-report accept-swap? [previous-energy new-energy]
  report (new-energy <= previous-energy) or ((random-float 1.0) < temperature)
end


With small changes in the code we can build from the previous model a general Sudoku Solver. You can find a simple proposal for this task in this link.

(From Wikipedia) Nonograms, also known as Hanjie, Picross or Griddlers, are picture logic puzzles in which cells in a grid must be colored or left blank according to numbers at the side of the grid to reveal a hidden picture. In this puzzle type, the numbers are a form of discrete tomography that measures how many unbroken lines of filled-in squares there are in any given row or column. For example, a clue of [4 8 3] would mean there are sets of four, eight, and three filled squares, in that order, with at least one blank square between successive groups.

In this section we will give a model that will solve (in some cases) nonograms with a Simulated Annealing Algorithm. For that, we will use the patches as the cells of the grid where the nonogram must be solved, and some global variables to store information about it:

globals [
  rows-dist     ; Distribution of cells in rows, for example: [3] or [2 5 2]
  cols-dist     ; Distribution of cells in columns
  Global-Energy ; Global energy of the system
  Temperature   ; Temperature of the system
]

patches-own [
  state          ; false/true = deactivated/active
  previous-state ; Previous state when we try changes
]


One sample nonogram can be the following (observe that we define the figure as a procedure that stores the row and column clues in the corresponding global variables):

to Figure1
  ; rows-dist: top to bottom
  set rows-dist [ [2] [1] [4] [5] [3 2] [2 1] [3] ]
  ; cols-dist: left to right
  set cols-dist [ [1] [3] [1 4] [4 1] [4] [3] [2] ]
end


The setup procedure prepares the model to run the algorithm. It makes use of the auxiliary procedures show-states and update-plot, which do what their names suggest. Later we will analyze in detail the Compute-Global-Energy procedure that calculates the function to be minimized.

to setup
  ca
  ; Load the selected figure (it simply fills the rows-dist and cols-dist variables).
  ; Figure is a chooser widget in the interface to select the figure to load
  run Figure
  ; Resize the world to fit the figure
  resize-world 0 (length cols-dist - 1) 0 (length rows-dist - 1)
  set-patch-size (25 * 16) / (max (list world-width world-height))
  ; Activate cells randomly (in the interface we choose the % of initially activated cells)
  ask patches [
    set state (random 100 < %-initial)
    set previous-state true
  ]
  ; Show the current states of the cells
  show-states
  ; Compute the current energy of the system
  Compute-Global-Energy
  ; Update plot
  Update-Plot
  ; Reset the temperature
  set Temperature 1
  reset-ticks
end


To compute the global energy of a state we sum the energies of all the rows and columns. The error of a row/column is obtained by comparing the clue of this line in the current state with the clue of the same line in the goal state. For that, the first step is to convert the line of states into a clue by grouping the consecutive active cells (the group and auxiliary-group reports are in charge of this process).

If we get $$v_1$$ as the grouping of a line, and $$v_2$$ is the goal grouping, the error of the line is computed as the squared Euclidean distance between them. If they are not of the same length, the shortest is extended by padding with $$0$$'s, and a penalty proportional to the difference of lengths is also added.
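As an illustrative sketch of this computation (in Python; weight_dif stands in for the Weight-Dif parameter of the model):

```python
def group(states):
    """Run-length clue of a line: [T T T F F T T F] -> [3, 2]."""
    runs, count = [], 0
    for active in states:
        if active:
            count += 1
        elif count > 0:
            runs.append(count)
            count = 0
    if count > 0:
        runs.append(count)
    return runs or [0]

def line_error(pattern, goal, weight_dif=10):
    """Squared distance between two clues, padding the shortest with
    zeros and penalizing the difference of lengths."""
    dif = abs(len(pattern) - len(goal))
    pattern = pattern + [0] * max(0, len(goal) - len(pattern))
    goal = goal + [0] * max(0, len(pattern) - len(goal))
    return sum((a - b) ** 2 for a, b in zip(pattern, goal)) + dif * weight_dif

print(group([True, True, True, False, False, True, True, False]))  # [3, 2]
print(line_error([3, 2], [2, 5, 2]))  # (3-2)^2 + (2-5)^2 + (0-2)^2 + 10 = 24
```

The error is $$0$$ exactly when the line reproduces its clue, so a state with all line errors at $$0$$ is a solved nonogram.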

to Compute-Global-Energy
  let rows-error sum map [Energy-Row    ?] (n-values world-height [?])
  let cols-error sum map [Energy-Column ?] (n-values world-width  [?])
  set Global-Energy (rows-error + cols-error)
end

; The energy of a row is computed from the error between the pattern of the goal row
; and the current row
to-report Energy-Row [row]
  if row < 0 or row > max-pycor [report 0]
  let Row-States map [[state] of ?] (sort patches with [pycor = row])
  let Row-Pattern group Row-States
  let Row-Error Compute-Error Row-Pattern (item row rows-dist)
  report Row-Error
end

; The energy of a column is computed from the error between the pattern of the goal column
; and the current column
to-report Energy-Column [col]
  if col < 0 or col > max-pxcor [report 0]
  let Col-States map [[state] of ?] (sort patches with [pxcor = col])
  let Col-Pattern group Col-States
  set Col-Pattern reverse Col-Pattern
  let Col-Error Compute-Error Col-Pattern (item col cols-dist)
  report Col-Error
end

; The error between 2 patterns
to-report Compute-Error [v1 v2]
  ; Compute the difference in lengths
  let dif abs (length v1 - length v2)
  ; Equalize the patterns by adding 0s to the shortest
  if (length v1) < (length v2)
    [ set v1 sentence v1 (n-values dif [0]) ]
  if (length v2) < (length v1)
    [ set v2 sentence v2 (n-values dif [0]) ]
  ; Compute the squared Euclidean distance between the patterns
  let er sum (map [(?1 - ?2) ^ 2] v1 v2)
  ; Add a penalty for the difference of lengths
  set er er + (dif * Weight-Dif)
  report er
end

; Report to get the groups of a row/column of states:
;   [true true true false false true true false] -> [3 2]
; It works with reduce, keeping the false's as separators and counting consecutive
; true's with the help of an auxiliary report. Afterwards, we must remove the false's
to-report group [states]
  let res filter is-number? (reduce auxiliary-group (fput [false] states))
  ifelse res = []
  [ report [0] ]
  [ report res ]
end

to-report auxiliary-group [L x]
  ifelse x = false
  [ report lput x L ]
  [
    ; If x is a true, we must see whether the previous one was also a true.
    ; We recognize this because L ends in a number, which we must increment
    let a last L
    ifelse is-number? a
    [ report lput (a + 1) (bl L) ]
    [
      ; Otherwise, x is the first true of a series... and we count it as 1
      report lput 1 L
    ]
  ]
end


As in the previous samples, the go procedure implements one step of the iterative process of the algorithm. As you can see in the code, we implement two different strategies to change a cell:

• In the first one, we simply toggle the state of a cell (on / off).
• In the second one, according to the weights set in the interface, we apply one of the following operators:
  • Set the cell off.
  • Set the cell on.
  • Swap its value with one of its neighbours.

You can try different strategies and add them to the code easily.

Observe that, in every step, we randomly choose the cell to be changed. We leave to the reader the option of writing a procedure where the cell is chosen in a different way (for example, with higher probability for cells with higher errors), and checking whether the efficiency of the algorithm improves.

to go
  ; Make a change on one random cell
  ask one-of patches [
    ; Compute the part of the energy depending on this cell
    let old-energy Compute-Context-Energy pxcor pycor
    ; and store the previous states
    Store-Previous-States
    ; Apply the change following one of the strategies.
    ; It will affect only the cell and its neighbors
    if Strategy = "Change-me" [strategy1]
    if Strategy = "Probabilistic" [strategy2]
    ; Compute the new energy after the change
    let current-energy Compute-Context-Energy pxcor pycor
    ; If the change is not accepted, we reverse the changes made
    if not Accept-Change? current-energy old-energy [ Reverse-changes ]
  ]
  ; Update the view of the cells
  show-states
  ; Compute and plot the energy of the system
  Compute-Global-Energy
  Update-Plot
  ; Stop if the process has reached the goal
  if Global-Energy = 0 [stop]
  ; Cooling process
  set Temperature Temperature * .999
  tick
end

; Procedures to store (and restore) the states before any change
to Store-Previous-States
  ask neighbors4 [ set previous-state state ]
  set previous-state state
end

to Reverse-changes
  ask neighbors4 [ set state previous-state ]
  set state previous-state
end

; Report to accept the changes or not. It is the usual acceptance method
to-report Accept-Change? [new old]
  ifelse new < old
  [ report true ]
  [
    let prob exp ((old - new) / Temperature)
    report random-float 1.0 < prob
  ]
end

; Report to evaluate the energy in the context (rows and columns) of the cell at (x,y)
to-report Compute-Context-Energy [x y]
  report sum map [Energy-Row ?] (Context y) + sum map [Energy-Column ?] (Context x)
end

; Auxiliary report to take the context of a location
to-report Context [x]
  report (list (x - 1) x (x + 1))
end