# A General A* Solver in NetLogo

Last modified: October 16, 2018 · 4832 views


In a previous post we explored the idea of solving problems by projecting them onto state spaces and then using a search algorithm to find the solution. Usually, we work with problems whose solutions are sequences of actions that transform initial states (which are not solutions of the problem) into final states (which are).

In that post we presented a very simple algorithm called Breadth-First Search (BFS), which provides an ordered way of browsing the space of states blindly. For this reason, although the algorithm returns an optimal sequence of actions reaching the solution (in the sense that it has the minimum number of actions), in the process of building this sequence the algorithm can perform a huge number of steps (and, consequently, take a very long time to reach the solution).

If all the information we have about the state space is local, that is, if we only know how to reach new states from previous ones by direct application of transitions, the BFS algorithm (or a similar one) is the best we can do... we have no knowledge about the space, and we make a blind search. But if we had some global information about the structure of the space, maybe we could decide on the correct direction to go faster from the initial state to the solution. In this case we say that we make an informed search.

When we are looking for a "short" path connecting the starting point and the solution, we count how many transitions have been applied in the path (the length of the path), but in other, more general cases we need some way to measure the cost of the different options in order to choose the best one. The ideal situation is to have complete information about the best way to reach the solution, but in almost every interesting case all we can get is an intuition/approximation of how much it costs to reach the goal. In these cases, we say we are working with a heuristic: a practical method not guaranteed to be optimal or perfect, but sufficient for the immediate goals.

The idea behind a heuristic search is that we explore the node that is most likely to be nearest to a goal state. To do this we use a heuristic function that tells us how close we are to a goal state. There is no guarantee this heuristic function will return a state that is closer to a goal state than any other one; if we could do that, we would not need to search at all: we would simply move through the states computed by the heuristic until reaching the goal state from the starting one. It is important to keep in mind that the heuristic function tries to estimate how far we are from the goal state, not the cost of the solution under construction.

To implement a heuristic function we need some knowledge about the domain, since it needs information about the problem in order to judge how close the current state is to the goal.

Among the different search algorithms that make use of partial information through heuristics, the most famous is the A* algorithm, and in this post we will present several implementations of it in NetLogo, from a basic application of A* as a pathfinding algorithm on 2D grids to a more general version providing some reports that can be used as a General Problem Solver.

In this other post you can find (in Spanish) a more theoretical explanation of this algorithm (and there are a lot of different resources out there in other languages). Here we will only focus on several implementations in NetLogo. In the section titles you can find links to download the NetLogo models (the latest versions can be downloaded from this GitHub repository). Right now we only need to remember that the A* algorithm combines the idea of a greedy algorithm (in every step we choose the best local option) with heuristic information, in such a way that for every state $s$ we compute:

$f(s) = g(s) + h(s)$

where $g(s)$ provides the cost of the path from the start state to the current state, $s$, and $h(s)$ estimates the cost of the cheapest path from the current state to the goal. And best of all, this strategy can be proved to be optimal and complete under simple restrictions on $h$: it must be an admissible (optimistic) heuristic, that is, it must never overestimate the real cost to reach the goal.
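
For example, on a 2D grid where each move between neighbouring patches costs 1, a patch reached after 5 moves whose straight-line distance to the goal is 3 gets

$f(s) = g(s) + h(s) = 5 + 3 = 8$

Since the straight-line distance can never exceed the length of any actual path, this $h$ never overestimates, and is therefore admissible.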

In this first implementation we will only make use of patches as agents that control the execution of the algorithm. The model where A* will be applied consists of a 2D grid of patches, where some of them represent walls that we can't cross.

We will use some global variables in our implementation:

• p-valids will contain the agentset of patches that are valid for moving,
• Start will be the initial patch where the search starts from, and
• Final-Cost will store the cost of the path returned by the algorithm.
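
These variables are declared in the model as usual (the declaration is not shown in the post; here is a minimal sketch):

globals [
  p-valids    ; agentset of patches valid for moving
  Start       ; initial patch of the search
  Final-Cost  ; cost of the path returned by A*
]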

During the execution of A* the patches need to store some extra information, and we define some properties for that:

• father, to store the previous patch in the partial path that passes through it,
• Cost-path, to store the cost of the path from the starting point to the current patch,
• visited?, a boolean that shows if the patch has been previously visited,
• active?, a boolean that says if the patch is active in the search process, that is, if it has been reached, but its children have not been explored yet.
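
In NetLogo these properties are declared with patches-own (the declaration is not shown in the post; here is a minimal sketch):

patches-own [
  father     ; previous patch in the partial path
  Cost-path  ; cost of the path from the starting point to this patch
  visited?   ; has the patch been previously visited?
  active?    ; is the patch active (reached, but not yet expanded)?
]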

The setup of the model is the following:

to setup
  ca
  ; Initial values of patches for A*
  ask patches
  [
    set father nobody
    set Cost-path 0
    set visited? false
    set active? false
  ]
  ; Generation of random obstacles (the proportion of wall patches is an
  ;   arbitrary choice here; the original model may use an interface control)
  ask n-of (round (0.1 * count patches)) patches
  [
    set pcolor brown
  ]
  ; Set the valid patches (not wall)
  set p-valids patches with [pcolor != brown]
  ; Choose a random starting patch
  set Start one-of p-valids
  ; Create a turtle to draw the path at the end (if any)
  crt 1
  [
    ht
    set size 1
    set pen-size 2
    set shape "square"
  ]
end


At any moment, every patch has a total expected cost to reach the goal that is calculated in two parts:

• the first one is the cost of the real path that goes from the starting point to this patch (if any),
• and the second one is the expected cost that the heuristic provides from this patch to the goal.

to-report Total-expected-cost [#Goal]
  report Cost-path + Heuristic #Goal
end


This cost will be the key to deciding how to build an efficient path. The first part is determined by the partial path used to reach the patch, and the second one reflects the global information we can infer from the world: it measures how much is left to reach the goal from here... the better this heuristic, the faster we will find the shortest path.

The heuristic report is something that must be adapted for every problem. In this case, as we are trying to find the shortest path, it makes sense to take the heuristic to be the euclidean distance to the goal (which the NetLogo language provides).

to-report Heuristic [#Goal]
  report distance #Goal
end


Finally, we present the A* algorithm, that receives as input:

• #Start : starting point of the search.
• #Goal : the goal to reach.
• #valid-map : set of agents (patches) valid to visit.

and returns:

• a path (a list) with the patches that go from #Start to #Goal (if any), or
• false, otherwise.

The explanation of the report is given by comments in the code:

to-report A* [#Start #Goal #valid-map]
  ; Clear all the information in the agents, and reset them
  ask #valid-map
  [
    set father nobody
    set Cost-path 0
    set visited? false
    set active? false
  ]
  ; Activate the starting point to begin the search loop
  ask #Start
  [
    set father self
    set visited? true
    set active? true
  ]
  ; exists? indicates if at some instant of the search there are no options to
  ;   continue. In this case, there is no path connecting #Start and #Goal
  let exists? true
  ; The search loop is executed while we don't reach the #Goal and we think a path exists
  while [not [visited?] of #Goal and exists?]
  [
    ; We only work on the valid patches that are active
    let options #valid-map with [active?]
    ; If any
    ifelse any? options
    [
      ; Take one of the active patches with minimal expected cost
      ask min-one-of options [Total-expected-cost #Goal]
      [
        ; Store its real cost (to reach it) to compute the real cost of its children
        let Cost-path-father Cost-path
        ; and deactivate it, because its children will be computed right now
        set active? false
        ; Compute its valid neighbors and look for an extension of the path
        let valid-neighbors neighbors with [member? self #valid-map]
        ask valid-neighbors
        [
          ; There are 2 types of valid neighbors:
          ;   - Those that have never been visited (therefore, the path we are
          ;       building is the best for them right now)
          ;   - Those that have been visited previously (therefore we must check
          ;       if the path we are building is better or not, by comparing its
          ;       expected length with the one stored in the patch)
          ; One trick to work with both types uniformly is to give, for the first
          ;   case, an upper bound big enough to be sure that the new path will
          ;   always be smaller.
          let t ifelse-value visited? [Total-expected-cost #Goal] [2 ^ 20]
          ; If this temporal cost is worse than the new one, we substitute the
          ;   information in the patch to store the new one (with the neighbors
          ;   of the first case, it will always be the case)
          if t > (Cost-path-father + distance myself + Heuristic #Goal)
          [
            ; The current patch becomes the father of its neighbor in the new path
            set father myself
            set visited? true
            set active? true
            ; and store the real cost in the neighbor from the real cost of its father
            set Cost-path Cost-path-father + distance father
            set Final-Cost precision Cost-path 3
          ] ] ] ]
    ; If there are no more options, there is no path between #Start and #Goal
    [
      set exists? false
    ] ]
  ; After the search loop, if there exists a path
  ifelse exists?
  [
    ; We extract the list of patches in the path, from #Start to #Goal, by
    ;   jumping back from #Goal to #Start using the fathers of every patch
    let current #Goal
    set Final-Cost (precision [Cost-path] of #Goal 3)
    let rep (list current)
    while [current != #Start]
    [
      set current [father] of current
      set rep fput current rep
    ]
    report rep
  ]
  [
    ; Otherwise, there is no path, and we return false
    report false
  ]
end


In order to test the algorithm we prepare a procedure that takes a sequence of random patches and computes the shortest paths connecting them (when possible). After every path is calculated, we use the drawer turtle (see above) to highlight the path.

to Look-for-Goal
  ; Take one random Goal
  let Goal one-of p-valids
  ; Compute the path between Start and Goal
  let path A* Start Goal p-valids
  ; If any...
  if path != false [
    ; Give a random (translucent) color to the drawer turtle
    ask turtle 0 [set color (lput 150 (n-values 3 [100 + random 155]))]
    ; Move the turtle along the path, stamping its shape on every patch
    foreach path [ p ->
      move-to p
      stamp ]
    ; The Goal becomes the new Start point
    set Start Goal
  ]
end

After having presented A* for pathfinding on 2D grids (in fact, it works the same in any dimension if you consider the neighbour patches of that higher-dimensional space), we will show how to use the same ideas to find shortest paths in any kind of network (not only grids). For example, think of a road network or the street map of a city.

In this case, we will suppose that we have a network (a set of nodes connected by links), and given a start and a final node, our goal is to find a shortest path in the network connecting them. Since we are looking for pathfinding on networks, the cost of moving from one node to one of its neighbors will be the length of the link between them.

Although an implementation of A* can be made by using only the nodes of the network, we will give a version where one additional breed of agents is used to perform the search on the network.

In this case, we will work with turtles, not patches, specifically with two types of turtles:

breed[nodes node]         ; to represent the nodes of the network
breed[searchers searcher] ; to represent the agents that will make the search


We don't need any extra property in the nodes. All the information will be stored in the searchers, and to know if a node has been explored it is enough to see if there is a searcher on it.

Nevertheless, searchers will have some additional properties for their functioning.

searchers-own [
  memory               ; Stores the path from the start node to here
  cost                 ; Stores the real cost from the start node
  total-expected-cost  ; Stores the total expected cost from Start to
                       ;   the Goal that is being computed
  localization         ; The node where the searcher is
  active?              ; Is the searcher active? That is, we have reached
                       ;   the node, but we must consider it because its
                       ;   neighbors have not been explored
]


The setup procedure simply creates a geometric network based on a number of randomly located nodes and a maximum radius to connect any two nodes of the network (two nodes are connected if the distance between them is less than or equal to the radius).

to setup
  ca
  create-nodes Num-nodes [
    setxy random-xcor random-ycor
    set shape "circle"
    set size .5
    set color blue]
  ; Connect every pair of nodes closer than the maximum radius
  ;   (radius is assumed to be a slider of the interface, as Num-nodes is)
  ask nodes [
    create-links-with other nodes in-radius radius]
end


As in the previous case, we need to define the heuristic function that reports an estimation of the cost from the current node to the final one. Again, because we are looking for a shortest path, a good option for the heuristic is the distance between the location of the searcher and the goal. You can try different heuristics to study how this estimation affects the behaviour of the algorithm.

to-report heuristic [#Goal]
  report [distance [localization] of myself] of #Goal
end


The A* algorithm for networks is very similar to the previous one (on patches). Since it is supposed that the network is accessible by the algorithm, we don't need to pass it as input. Therefore, it will only receive the initial and final nodes.

to-report A* [#Start #Goal]
  ; Create a searcher for the Start node
  ask #Start
  [
    hatch-searchers 1
    [
      set shape "circle"
      set color red
      set localization myself
      set memory (list localization) ; the partial path will have only this node at the beginning
      set cost 0
      set total-expected-cost cost + heuristic #Goal ; Compute the expected cost
      set active? true ; It is active, because we didn't calculate its neighbors yet
    ] ]
  ; The main loop will run while the Goal has not been reached and we have
  ;   active searchers to inspect. That means that a path connecting start
  ;   and goal is still possible. In a later, more general version of this
  ;   algorithm we will change the main loop and remove this early stop; but
  ;   in the "geometrical" case, where the heuristic is exactly the same
  ;   measure as the link length, we can stop as soon as we reach the goal
  ;   for the first time.
  while [not any? searchers with [localization = #Goal] and any? searchers with [active?]]
  [
    ; From the active searchers we take one with the minimal expected total cost to the goal
    ask min-one-of (searchers with [active?]) [total-expected-cost]
    [
      ; We will explore its neighbors in this block, so we deactivate it
      set active? false
      ; Store this searcher and its localization in temporal variables to facilitate their use
      let this-searcher self
      let Lorig localization
      ; For every neighbor node of this location...
      ask [link-neighbors] of Lorig
      [
        ; Take the link that connects it to the location of the searcher
        let connection link-with Lorig
        ; Compute the cost to reach the neighbor in this path as the previous
        ;   cost plus the length of the link
        let c ([cost] of this-searcher) + [link-length] of connection
        ; Maybe in this node there are other searchers (coming from other nodes).
        ;   If this new path is better than theirs, then we put a new searcher
        ;   and remove the old ones. Searchers-in-loc is an auxiliary report
        ;   that you can find below.
        if not any? searchers-in-loc with [cost < c]
        [
          ask searchers-in-loc [die] ; Remove the old, worse searchers in this node
          hatch-searchers 1
          [
            set shape "circle"
            set color red
            set localization myself ; The location of the new searcher is this neighbor node
            set memory lput localization ([memory] of this-searcher) ; The path is built from the original searcher
            set cost c ; Real cost to reach this node
            set total-expected-cost cost + heuristic #Goal ; Expected cost to reach the goal by using this path
            set active? true ; It is active to be explored
          ] ] ] ] ]
  ; When the loop has finished, we have two options:
  ;   - no path has been built,
  ;   - or a searcher has reached the goal
  ; By default the return will be false (no path has been built)
  let res false
  ; But if it is the second option...
  if any? searchers with [localization = #Goal]
  [
    ; we will return the path stored in the memory of the searcher that reached the goal
    let lucky-searcher one-of searchers with [localization = #Goal]
    set res [memory] of lucky-searcher
  ]
  ; Remove the searchers (we don't want the A* report to leave any trash)
  ask searchers [die]
  ; And report the result
  report res
end


The searchers-in-loc report is an auxiliary node report returning the searchers located on that node (it is like a version of turtles-here, but for the network).

to-report searchers-in-loc
  report searchers with [localization = myself]
end


Together with the main procedure, we can write some more auxiliary reports and procedures to show how to use it. Some of them are in charge of highlighting the path when it is found (take into account that the path is stored as a list of nodes, but we could also build it by storing the links in between, or both). Their functioning is very similar to the one we used for the BFS.

to highlight-path [path]
  ; reduce visits every pair of consecutive nodes of the path
  let a reduce [ [x y] -> highlight x y ] path
end

to-report highlight [x y]
  ; Highlight the link connecting both nodes and report the second one,
  ;   so that reduce can chain the calls
  ask [link-with y] of x
  [
    set color yellow
    set thickness .4
  ]
  report y
end


And finally we give an auxiliary procedure to test the A* algorithm between two random nodes of the network. It simply takes two random nodes in the network and calculates the shortest path between them (if it exists):

to test
  ask nodes [set color blue set size .5]
  let start one-of nodes
  ask start [set color green set size 1]
  ; Take a goal far away from the start (it is assumed that such a node exists)
  let goal one-of nodes with [distance start > max-pxcor]
  ask goal [set color green set size 1]
  ; We compute the path with A*
  let path (A* start goal)
  ; if any, we highlight it
  if path != false [highlight-path path]
end

In the previous sections we have used A* in the most usual way, as a pathfinding algorithm for physical environments (in fact, for a long time it has been the most used method for pathfinding in videogames). Now, we will use it in a more general form, as a General Problem Solver as we did with BFS.

For that, remember that we need to project our problem as a search in a suitable state space, where finding a solution is transformed into searching for a path (a "good" one, preferably) that says how to go from an initial state to a final one.

As we mentioned above, we can use A*-like algorithms when we can get some global information about the search space and, consequently, redirect the search based on some heuristic that measures the fitness of each option. In the previous cases, where a "geometrically shortest" path was intended, this heuristic was related to the geometrical distance between the state and the goal (we have used the euclidean distance, but other distances or variants can be equally suitable). In a more general search, where maybe there is no vector space behind the problem, such a direct heuristic may be unknown, and we must try different options until we get good results (in a heuristic way, quite literally). Later, we will explore how good or bad heuristics can dramatically affect the performance of this algorithm.

In order to adjust our algorithm to a general purpose, the first "make up" adaptation will be to change the name of the underlying breed to reflect their nature:

breed[states state]       ; to represent the states of the problem
breed[searchers searcher] ; to represent the agents that will make the search


And we need one property in the states to store their content (the information related to the problem to be solved).

states-own [
content
]


Although it is possible to give an A* implementation using only one breed (searchers and states in a common breed), we have preferred to keep the state space and the searcher agents in different layers, in order to give a clearer distinction of their functionalities.

If we had all the state space projected as a network (states as nodes, and transitions as links) we could use (almost) directly the previous version of A*, but usually the state space is too big to prebuild the associated network. For this reason, the essential change we introduce will be the dynamic construction of the state network as the algorithm needs it. That is why we will use the same reports for transitions, children-states, final-state?, ... that were introduced in the BFS post.

We will only comment on the main changes of the code. The whole commented code can be seen in the model.

The links of the network will store the rule applied between states and the cost of that application. Usually, our goal is to minimize the total cost of the sequence of transformations that reaches the solution from the starting point.

links-own [
  rule       ; the transition rule that produced this connection
  cost-link  ; the cost of applying the rule (used later by A*)
]


To focus on the main ideas of the algorithm we will use the same numerical problem as in the BFS post (reach a number from an initial one by using only some permitted operations). In this way, the auxiliary functions to manage the states and transitions will be (almost) the same.

Rules are represented by using lists of the form [ "representation" cost f], where:

• f transforms states (it is the transition function; in this case we will make use of anonymous procedures),
• cost is the cost of applying the transition on a state (in this case, we will give a cost of 1 to every action, but it can be any value),
• and "representation" is a string to identify the rule.

to-report applicable-transitions
  report (list
    (list "*3" 1 [x -> x * 3])
    (list "+7" 1 [x -> x + 7])
    (list "-2" 1 [x -> x - 2])
  )
end


valid? is a boolean report to say which states are valid:

to-report valid? [x]
report (x > 0)
end


Children-states is a state report that returns the children of the current state. It returns a list of pairs [ns tran], where ns is the content of the child state (new state), and tran is the transition applied to get it. It maps the applicable transitions on the current content, and then filters the states that are valid.

to-report children-states
  report filter [ns -> valid? (first ns)]
         (map [r -> (list (run-result (last r) content) r)]
              applicable-transitions)
end


Final-state? is a state report that identifies the final states of the problem. In this case, it simply compares the content with the goal (which is passed as a parameter).

to-report final-state? [params]
report ( content = params)
end


As we will work with searchers that move on states, we will define an auxiliary report to decide (via its current-state) if a searcher has reached the goal:

to-report final-searcher? [#Goal]
report [final-state? #Goal] of current-state
end
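
The declaration of the searchers is not repeated in the post; it mirrors the network version, with localization replaced by current-state (take the following as a sketch of what the model needs):

searchers-own [
  memory               ; list of links: the path from the start state to here
  cost                 ; real cost from the start state
  total-expected-cost  ; cost + heuristic estimation to the Goal
  current-state        ; the state where the searcher is
  active?              ; reached, but children not yet explored?
]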


As a first approximation of how far a state is from the goal, we define a very simple heuristic as the difference between the content of the current state and the goal:

to-report heuristic [#Goal]
let d abs (([content] of current-state) - #Goal)
report d
end


All the previous procedures must be changed to adapt the algorithm to other problems and state spaces, but the main A* algorithm is not supposed to be touched.

Remember that the main difference with the previous versions is that we must create the state network dynamically, on the fly, as soon as we need to explore it.

One more main difference will be in the main loop. In the previous cases, because we were working with an euclidean (or equivalent) distance to get the "shortest" path, we could stop the search as soon as the first searcher found it. Now, because the space is general, there is no relation between the connected states and the "distance" between them. That is, it is possible that a path exists that is longer in applied steps (with more nodes) but cheaper in cost than the direct link. For example, state 2 can be reached from state 0 directly, with a cost of 10, or by crossing state 1, with a total cost of 2.

Because of this, reaching the goal in fewer steps is no guarantee of a lower-cost path, and we will have to continue searching for other paths after reaching the goal. Only after checking that all the possible paths starting from the initial state are worse than the one we have reached can we stop the algorithm.

Sometimes (for example, in games that need real-time response) we don't need the best solution, but only a good one obtained in a faster way. In these cases, we can stop at a "good enough" path, rather than wait for the best one.

to-report A* [#Start #Goal]
  ; Create a state with the #Start content, and a searcher on it. Create-start
  ;   is a procedure that creates the initial state and puts a searcher on it.
  create-start #Start #Goal
  ; The main loop will run while we have active searchers to inspect. That
  ;   means that a path connecting start and goal is still possible
  while [any? searchers with [active?]]
  [
    ; From the active searchers we take one of minimal expected cost to the goal
    ask min-one-of (searchers with [active?]) [total-expected-cost]
    [
      ; We will explore its neighbors, so we deactivate it
      set active? false
      ; Store this searcher and its current state in temporal variables to facilitate their use
      let this-searcher self
      let previous-cost cost
      let C-S current-state
      ; Next, we create the neighbors of the state. Create-neighbor-states is
      ;   the state procedure that expands the network by adding the
      ;   children-states of the current-state
      create-neighbor-states C-S
      ; For every neighbor state of this state...
      ask [link-neighbors] of C-S
      [
        ; Take the link that connects it to the current-state of the searcher
        let connection link-with C-S
        ; The cost to reach the neighbor in this path is the previous cost
        ;   plus the cost of the link
        let c previous-cost + [cost-link] of connection
        ; Maybe in this state there are other searchers (coming from other
        ;   states). If this new path is better than theirs, then we put a
        ;   new searcher and remove the old ones
        if not any? searchers-in-state with [cost < c]
        [
          ask searchers-in-state [die] ; Remove the old, worse searchers in this state
          hatch-searchers 1
          [
            ht
            set shape "circle"
            set color red
            set current-state myself ; The current-state of the new searcher is this neighbor state
            set memory lput connection ([memory] of this-searcher) ; The path is built from the original searcher's path
            set cost c ; Real cost to reach this state
            set total-expected-cost cost + heuristic #Goal ; Expected cost to reach the goal with this path
            set active? true ; It is active to be explored
          ] ] ] ]
    ; If some of the searchers have reached the goal, we have an upper bound
    ;   for the cost, and we deactivate all the searchers with cost over this
    ;   bound. But we must continue with the search because maybe there is
    ;   another path with lower cost.
    ; If you want a quickly calculated path, maybe not the one with minimal
    ;   cost, you can remove this part and stop the main loop as soon as the
    ;   first searcher has reached the goal:
    ;    change
    ;       while [any? searchers with [active?]]
    ;    by
    ;       while [not any? searchers with [final-searcher? #Goal] and
    ;              any? searchers with [active?]]
    if any? searchers with [final-searcher? #Goal] [
      let c min [cost] of (searchers with [final-searcher? #Goal])
      ask searchers with [active? and cost > c] [set active? false]
    ]
  ]
  ; When the loop has finished, we have two options:
  ;   no path, or a searcher has reached the goal
  ; By default the return will be false (no path)
  let res false
  ; But if it is the second option...
  if any? searchers with [final-searcher? #Goal]
  [
    ; Return the path in the memory of one of the searchers that reached
    ;   the goal with minimal cost
    let minimal-searcher min-one-of (searchers with [final-searcher? #Goal]) [cost]
    show [cost] of minimal-searcher
    set res [memory] of minimal-searcher
  ]
  ; Remove the searchers
  ask searchers [die]
  ; and report the result
  report res
end


The auxiliary procedures we have used in A* are the following:

to-report searchers-in-state
  report searchers with [current-state = myself]
end

to create-start [#Start #Goal]
  create-states 1 [
    set content #Start
    set color blue
    set shape "circle"
    set label content
    hatch-searchers 1
    [
      set current-state myself
      set memory []
      set cost 0
      set total-expected-cost cost + heuristic #Goal
      set active? true
    ] ]
end


The procedure to dynamically create the neighbors of a state s takes the children-states of s and creates a new node of the network for each one of them (if a state has been previously created, it only connects them). In the link connecting the nodes it stores the rule that transforms one into the other.

to create-neighbor-states [s]
  ; (Some lines of this procedure were lost in the post; the ask/create-link
  ;   structure below is a reconstruction of what the model needs)
  ask s [
    foreach children-states [ cs ->
      let ns first cs
      let r last cs
      ifelse not any? states with [content = ns]
      [
        ; The state is new: create it and connect it to s
        hatch-states 1 [
          set content ns
          set label content
          create-link-with s [
            set rule r
            set cost-link item 1 r
            set label first r
          ] ] ]
      [
        ; The state already exists: simply connect it to s (if needed)
        ask one-of states with [content = ns] [
          if link-with s = nobody [
            create-link-with s [
              set rule r
              set cost-link item 1 r
              set label first r
            ] ] ] ] ] ]
end


As we have stored the paths through the links, the auxiliary procedure to highlight the path when it is found is slightly changed:

to highlight-path [path]
  foreach path [ p ->
    ask p [ set color yellow set thickness .4 ]
  ]
end


Finally, we provide a procedure to test the A* algorithm between two numbers (they correspond to slider-controls in the interface of the model).

to test
  ca
  ; We compute the path with A*
  let path (A* Initial Final)
  ; if any, we highlight it
  if path != false [
    highlight-path path
    show map [edge -> first [rule] of edge] path ]
  ; Number of searchers used during the search
  show max [who] of turtles - count states
end


This case is a good example of the complexity of defining a good heuristic function. If you compare the results of the execution of A* (with this heuristic, which we will call h1) with those from BFS, we can see something strange. For example, trying to go from 2 to 93 we get (there can be different solutions, but their length must be the same):

|                    | BFS | A* (h1) | A* (h2) |
|--------------------|-----|---------|---------|
| Length of solution | 6   | 6       | 6       |
| N. searchers       | 230 | 1152    | 811     |
| N. states          | 230 | 1149    | 477     |

It turns out that A* works worse than BFS... How is this possible, if it is supposed to work with more information about the search space? Well, one thing is to have information, and another is to use it in the correct way. Let's see how A* behaves when we change the heuristic.

Note that sometimes the searchers are duplicated, because we create a new one on an already visited node; that's the reason we also show the number of states that the algorithms produce.

In this problem, looking only at how far we are from the goal is not very realistic. Maybe when we are far away it gives some idea, but over short distances it gives wrong results. For example, 9 is reachable from 2 by using one single operation (+7), and the distance between 2 and 9 is 7; nevertheless, the distance from 2 to 3 is only 1, but you need more operations to reach it.

Let's define one more function (not a good one, but maybe a better one) as the heuristic in this case. Since the fastest way to "walk" large distances in this problem is multiplying by 3, over long distances repeatedly using *3 is probably a good option. For this reason we will work with a variant, h2, that measures how many *3 operations we need to go from one point to the other:

to-report heuristic [#Goal]
let d abs (([content] of current-state) - #Goal)
report ifelse-value (d > 1) [log d 3][d]
end


The previous comparison shows that h2 is better than h1 (at least for this example), but it is still far away from BFS... can you think of a different heuristic that guides the search in a more efficient way?

Whether you manage it or not, this example shows that A* is good only if you have a clever way to use the global information of the space, and this "cleverness" comes from a very good understanding of the problem and its context.

Let's now consider a problem where we can easily define a heuristic that works better than blind search. In this case we will try to find the minimal number of transpositions needed to sort a numerical list. Let's start with some definitions about the problem.

Let $L=[a_0\ a_1 \dots a_n]$ be a list of numbers. For every $i \in \{0,\dots, n-1\}$, the transposition $T_i$ is the function that swaps $a_i$ and $a_{i+1}$, that is: $T_i(L)=[a_0\ a_1 \dots a_{i+1}\ a_i \dots a_n]$

If we have a list, the problem is to sort the list in an efficient way by using only transpositions.
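
For example, the list $[2\ 3\ 1]$ can be sorted with two transpositions:

$T_0(T_1([2\ 3\ 1])) = T_0([2\ 1\ 3]) = [1\ 2\ 3]$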

We will use our A* algorithm to solve the problem. For that, we only need to define the procedures and reports that manage the representation of the problem in the state space. We start by defining the report that applies the transposition $T_i$ to a list $L$:

to-report transp [i L]
  let a item i L
  let b item (i + 1) L
  let L1 replace-item i L b
  let L2 replace-item (i + 1) L1 a
  report L2
end
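
For instance, from the Command Center:

show transp 0 [3 1 2]
;; prints [1 3 2]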


Now, the children-states report can be easily defined by applying all possible transpositions to the content of the state (remember that we need to return a list of pairs [ns rule], and the rule is a list [representation cost f]; but in this case f will not be used, so we can put anything there):

to-report children-states
  ; One transposition per index (for lists of length n there are indexes
  ;   0, ..., n - 2; in this model the lists have length 4)
  report (map [i -> (list (transp i content) (list (word "T-" i) 1 i))]
              (n-values (length content - 1) [i -> i]))
end


Final-state? will be true if the list is sorted, something that can be verified by checking that for every index we have $a_i \leq a_{i+1}$:

to-report final-state? [params]
  let indexes n-values (length content - 1) [i -> i]
  ; Note: "and" cannot be passed directly to reduce, so we wrap it
  ;   in an anonymous reporter
  report reduce [[a b] -> a and b]
         (map [i -> (item i content) <= (item (i + 1) content)] indexes)
end


And finally we have to provide a good heuristic to adequately select the next state to be explored. We will try the number of errors in the list (an error is a non-sorted pair $(a_i, a_{i+1})$):

to-report heuristic [#Goal]
  let c [content] of current-state
  let indexes n-values (length c - 1) [i -> i]
  report length filter [x -> x] (map [i -> (item i c) > (item (i + 1) c)] indexes)
end


We can create a procedure to test how the algorithm works with this heuristic. Take into account that, as the final-state? report doesn't depend on a specific goal, we can pass anything as the second parameter (goal) to A*, because it will not be used:

to test
  ca
  let I shuffle [1 2 3 4]
  print (word "Initial State: " I)
  ; We compute the path with A*
  let path (A* I true)
  ; if any, we highlight it
  if path != false [
    highlight-path path
    print (word "Actions to sort it: " (map [edge -> first [rule] of edge] path))
  ]
  print (word (max [who] of turtles - count states) " searchers used")
  print (word (count states) " states created")
end