My Secret Life as a Spaghetti Coder
For background and history on partial order planners, see What is Partial Order Planning?, Selected History of POP Part 1, and POP History Part 2. Or, you can read the entire thing in PDF format.

Our goal is to give commands to the partial order planner, telling it what the goal is, the initial state (if it exists), and actions it can perform. The actions contain the name of the action, any preconditions that must be fulfilled before that action can be performed, and a set of effects the action has on the world state. After giving this information to the planner, it should output a plan if one exists.

For simplicity's sake, I've used a STRIPS-like notation, without the complexity of existentially or universally quantified variables, among other things. Further, only one plan is returned, rather than attempting to find all plans. The plan returned is not guaranteed to be optimal, though (admittedly inadequate) tests have shown that it is correct. I plan to address these limitations in the future by moving to a less restrictive ADL-style syntax and adding support for returning multiple plans.

In the meantime, the first task is to allow a user to enter commands in a syntax that looks like this (Code Sample 1):

PlanName("Put on Shoes")
Goal(:RightShoeOn ^ :LeftShoeOn)

Action(:RightShoe, EFFECT => :RightShoeOn, PRECOND => :RightSockOn)
Action(:RightSock, EFFECT => :RightSockOn)
Action(:LeftShoe, EFFECT => :LeftShoeOn, PRECOND => :LeftSockOn)
Action(:LeftSock, EFFECT => :LeftSockOn)

It should also allow function-like notation, as in Code Sample 2:

PlanName("Change Tire")
Init(At(:Flat,:Axle) ^ At(:Spare,:Trunk))
Goal(At(:Spare,:Axle))

Action(Remove(:Spare,:Trunk),
       PRECOND => At(:Spare,:Trunk),
       EFFECT => NotAt(:Spare,:Trunk) ^ At(:Spare,:Ground))
Action(Remove(:Flat,:Axle),
       PRECOND => At(:Flat,:Axle),
       EFFECT => NotAt(:Flat,:Axle) ^ At(:Flat,:Ground))
Action(PutOn(:Spare,:Axle),
       PRECOND => At(:Spare,:Ground) ^ NotAt(:Flat,:Axle),
       EFFECT => NotAt(:Spare,:Ground) ^ At(:Spare,:Axle))
Action(:LeaveOvernight,
       EFFECT => NotAt(:Spare,:Ground) ^ NotAt(:Spare,:Axle) ^
       NotAt(:Spare,:Trunk) ^ NotAt(:Flat,:Ground) ^
       NotAt(:Flat,:Axle))

The domain described in Code Sample 1 should produce a plan such as LeftSock → LeftShoe → RightSock → RightShoe or RightSock → LeftSock → LeftShoe → RightShoe. As one can surmise from the domain as it is written, any plan in which each sock goes on before its shoe is sufficient.

Finally, the domain given in Code Sample 2 should render plans like Remove(Flat,Axle) → Remove(Spare,Trunk) → PutOn(Spare,Axle), switching the first two actions depending on which it decides to do first (since either one would work).

Implementing (or allowing) such syntax in Ruby turns out to be simple. To get the conjunction operator ^, we simply define a module with ^ as a method, and include that module in Ruby's String, Symbol, and Array classes, since we'll be using each of these as symbols in our "new" language (See Code Sample 3).

module Logic
   def ^(condition)
     [self, condition]
   end
end

#modify the symbol class to include the ^ operation
class Symbol
   include Logic
end

#modify the array class to include the ^ operation
class Array
   include Logic
end

#modify the string class to include the ^ operation
class String
   include Logic
end
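As a quick sanity check, here is the operator in action (a minimal sketch that just repeats the module from Code Sample 3; the results shown in the comments follow directly from the definition):

```ruby
# The Logic module from Code Sample 3, mixed into the classes the DSL
# uses as logical symbols.
module Logic
  def ^(condition)
    [self, condition]
  end
end

class Symbol; include Logic; end
class Array;  include Logic; end
class String; include Logic; end

# A single conjunction pairs its operands into a two-element array:
p :RightShoeOn ^ :LeftShoeOn          # [:RightShoeOn, :LeftShoeOn]

# Chaining works because Array also mixes in Logic -- the result nests
# left-to-right, so flatten recovers the full list of conjuncts:
p((:a ^ :b ^ :c).flatten)             # [:a, :b, :c]
```

Representing a conjunction as a plain (possibly nested) array is also what keeps this implementation simple, at the cost of the multiple-precondition weaknesses discussed in the comments below.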

Next, we need to allow the use of function-style symbols, such as Remove(:Spare,:Trunk). As with most things in Ruby, this is also simple. We just use the method_missing method in our module:

# when the user enters a function, turn it into a string naming the action
def method_missing(method_id, *args)
   symbol_name = "#{method_id}("
   args.each { |arg| symbol_name += arg.to_s + "," }
   symbol_name[0, symbol_name.length - 1] + ")"
end
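For example (a small sketch; note that overriding method_missing at the top level like this intercepts every unknown call in the file, which is fine for a short DSL script but too aggressive for a library):

```ruby
# when the user enters a function, turn it into a string naming the action
def method_missing(method_id, *args)
  symbol_name = "#{method_id}("
  args.each { |arg| symbol_name += arg.to_s + "," }
  symbol_name[0, symbol_name.length - 1] + ")"  # drop the trailing comma
end

p Remove(:Spare, :Trunk)   # "Remove(Spare,Trunk)"
p At(:Flat, :Axle)         # "At(Flat,Axle)"
```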

We now have the ability to use the syntax laid out in the first two code samples to define the problems that need planning. All that remains is to implement the functions in our "language" that define the problem domain, and an algorithm that solves for plans.

First, we initialize the start state with an Init() function that simply stores the conditions it is passed. Similarly, the goal state and initial open preconditions are stored into member variables as they are passed via the Goal() method. Finally, actions are constructed from a name and a hash with keys PRECOND and EFFECT.

#constants used as hash keys for precondition and effect
#(only for purposes of keeping the DSL looking close to the original)
PRECOND = :precondition
EFFECT = :effect

#store the start-state conditions
def Init(conditions)
   @start_state = conditions
end
alias init Init

#store the goal defined by the user
def Goal(conditions)
   @goal = conditions
   @open_preconditions = @goal
end
alias goal Goal

# store actions defined by the user
def Action(name, precondition_effect)
   action = ["name" => name,
             "precondition" => precondition_effect[PRECOND],
             "effect" => precondition_effect[EFFECT]]
   @actions = [] if !@actions
   @actions = @actions + action
end
alias action Action
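A quick check of what Action() stores may help here (a sketch repeating the method under the same PRECOND/EFFECT constants; the `["k" => v]` literal is an array containing a single hash, so the concatenation leaves @actions as a flat list of action hashes):

```ruby
PRECOND = :precondition
EFFECT = :effect

# store actions defined by the user
def Action(name, precondition_effect)
  # ["k" => v, ...] is an array holding one hash, so @actions + action
  # appends that hash to the flat list of actions
  action = ["name" => name,
            "precondition" => precondition_effect[PRECOND],
            "effect" => precondition_effect[EFFECT]]
  @actions = [] if !@actions
  @actions = @actions + action
end

Action(:RightSock, EFFECT => :RightSockOn)
Action(:RightShoe, EFFECT => :RightShoeOn, PRECOND => :RightSockOn)

p @actions.size                    # 2
p @actions.last["precondition"]    # :RightSockOn
```

The trailing `EFFECT => ..., PRECOND => ...` pairs are gathered by Ruby into a single hash argument, which is what lets the DSL calls read like keyword arguments.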

Finally, we come to the meat of the problem, the partial-order planning algorithm. The algorithm itself follows a fairly simple path:
  1. From the list of open preconditions, choose one.
  2. Find an action whose effect is the same as the precondition we chose and add it to the plan.
  3. Add to the list of preconditions any requirements for that action.
  4. Remove from the list of preconditions any that match the effects for the chosen action.
  5. Repeat steps 1-4 until the set of open preconditions is empty, or no action that satisfies a precondition can be found.
  6. Remove any preconditions from the open list that match the starting state.
  7. If the set of open preconditions is empty, return the plan. Otherwise, fail.
The algorithm in Ruby follows:

def make_plan
   action_plan = []
   fail = false
   while (@open_preconditions.size > 0 && !fail)
     #randomize the open_preconditions to show order doesn't matter
     @open_preconditions = @open_preconditions.sort_by { rand }

     #find an action that solves the first open precondition
     attempted_precondition = @open_preconditions.shift
     action_to_take = find_action_for attempted_precondition
     if (action_to_take != nil)
       add_preconditions_for action_to_take
       remove_preconditions_fulfilled_by action_to_take
       #add the action to the plan
       action_plan.push action_to_take
     else
       #put the precondition back on the open_preconditions, since
       #it wasn't fulfilled by an action
       fail = true if @open_preconditions.size == 0
       @open_preconditions.push attempted_precondition
       #remove any open preconditions that the starting state fulfills
       @open_preconditions = @open_preconditions - [@start_state].flatten
       fail = false if @open_preconditions.size == 0
     end
   end

   if @open_preconditions.size > 0 || fail
     puts "There appears to be no plan that satisfies the problem."
     puts "Open preconditions: "
     puts @open_preconditions
     action_plan = []
   end
   sanitize_plan action_plan
end
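To see the shape of the algorithm end to end, here is a self-contained sketch of the same loop for the "Put on Shoes" domain. The helper methods (find_action_for and friends) are not shown in the post, so the versions here are guesses at one plausible implementation, with actions as plain hashes, no randomization, and no failure recovery beyond returning nil:

```ruby
# Hypothetical action table for the "Put on Shoes" domain, in the
# hash-per-action shape the Action() method builds.
ACTIONS = [
  { "name" => :RightShoe, "precondition" => [:RightSockOn], "effect" => [:RightShoeOn] },
  { "name" => :RightSock, "precondition" => [],             "effect" => [:RightSockOn] },
  { "name" => :LeftShoe,  "precondition" => [:LeftSockOn],  "effect" => [:LeftShoeOn]  },
  { "name" => :LeftSock,  "precondition" => [],             "effect" => [:LeftSockOn]  },
]

def make_shoe_plan(goal, actions)
  open_preconditions = goal.dup
  plan = []
  until open_preconditions.empty?
    # step 1: choose an open precondition
    precondition = open_preconditions.shift
    # step 2: find an action whose effect satisfies it
    action = actions.find { |a| a["effect"].include?(precondition) }
    return nil if action.nil?               # no such action: fail
    # the new action must run before whatever needed its effect
    plan.unshift action["name"]
    # steps 3 and 4: open its preconditions, close what it fulfills
    open_preconditions += action["precondition"]
    open_preconditions -= action["effect"]
  end
  plan
end

p make_shoe_plan([:RightShoeOn, :LeftShoeOn], ACTIONS)
# [:LeftSock, :RightSock, :LeftShoe, :RightShoe]
```

Under these assumptions each sock always lands before its shoe; the real code additionally shuffles the open preconditions each pass and cleans up duplicates with sanitize_plan.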

Most of the code is aptly named where there are functions (see the complete source), but two issues in this algorithm immediately jump to the forefront. The first: why aren't we also randomizing the list of actions? Clearly, if two actions satisfy the same precondition, the first one encountered will always win. This was done because randomizing the list of actions has the potential to cause a loop of preconditions/effects, and thus cause incorrect plans to be generated. Since no attempt was made at finding the optimal plan, I didn't want to clutter the code and make it harder to follow. Correct plans are still generated, and a future version meant for more demanding environments would indeed allow a random action to be chosen.

The second issue is less obvious, and raises the question: "just what is that sanitize_plan method doing there?" Some actions may add duplicate preconditions to the set of open preconditions. The algorithm as it stands allows this for readability's sake, and simply cleans up the plan afterward with the sanitize_plan function.

Finally, it is also clear that a more "elegant" solution may have been to take actions as functions, which receive preconditions as their parameters, and whose output are effects. The thought of such an implementation is interesting and worthy of exploration, though time constraints prevented me from doing so in this case.

As mentioned above, a complete version of the code and three manual tests can be found in the file.


Nice introductory discussion of planning on linked pages. I'm not fluent in Ruby, but in this implementation, it's not clear to me how the partial ordering of actions is maintained. Will it handle interleaved achievement of goals, as is needed for something like the Sussman anomaly?

Posted by RSA on May 10, 2007 at 02:53 PM UTC - 6 hrs

You would have thought that would be the first test I ran, but I completely missed it. I knew there were holes in my implementation too: for instance, multiple preconditions and effects for actions are spotty because I chose to just use an array, so it's not a true POP, but it makes a good beginning for one, as it is a matter of rewriting a couple of functions to take advantage of the new implementation details.

With all that said, I did just now test it on Sussman's anomaly, and it does not appear to work. Even trying to convert to single preconditions and effects rather than multiple, I run into problems (though, it could be that I haven't defined the problem well in that case). And, there is another bug I've found in it (thank you for helping me find it!).

In essence, this one has a lot of the problems found in Sacerdoti's first work, as I understood them from reading Tate (referenced in the links RSA mentioned).

Thanks for the interest. I'll have to improve this when I get some spare time, to make sure it /at least/ works on the problem that sort of spawned the whole field =).

Posted by Sam on May 10, 2007 at 05:33 PM UTC - 6 hrs


I've just read the threaded articles on partial order planning. Interesting stuff. I've had a quick look through the code and looked at the comments about the Sussman problem. I'm wondering, given you are prepared to accept one good solution from this program, why you didn't use Tsort for the partial ordering work?

I was able to solve the "shoes and socks" problem using this, but I don't have the code any more. It didn't cope with resulting (subsequently generated) outcomes anyway.

It would be good to extend this to more than one plan. Actually, with tsort() one can do that by permuting the array one is sorting. The sort makes sure the bits that need to be are in the right order.

A curious application of it would be narrative generation...

Posted by Hugh Sasse on Jun 14, 2007 at 12:40 PM UTC - 6 hrs

Sorry, that link should have been:

Posted by Hugh Sasse on Jun 14, 2007 at 12:43 PM UTC - 6 hrs

@Hugh: Thanks for the comments.

Using TSort is a great idea. I haven't used it simply because I didn't think about it. When I revisit this problem, I'll give it a try, as I have a feeling if I were to create a graph out of the problem space, using this would easily provide an answer, and my guess is that it would solve Sussman's problem too. Good catch!

Regarding extending this to do multiple plans, I agree that it would be good. In fact I do plan to extend it in the future.

I don't do any work where it would be needed (right now, this is just a hobby), but at some point I will indeed come back to this and make it work correctly.

If there were enough interest, I wouldn't mind making this a legitimate project some place, but it's such a small niche, who knows? =)

Posted by Sam on Jun 14, 2007 at 01:13 PM UTC - 6 hrs

Instead of implementing actual POP, I suggest just changing the title to "Sequential Planning." It's way less work. Although, even then I still think you need some way to backtrack, either with recursion or an open list, to handle cases where an action is added that has preconditions which cannot be achieved. It would hopefully then use another action with the same effects, but achievable preconditions.

check out the Game Dev. Conference presentation by jeff orkin

or if you really gotta do the sussman thing, UCPOP (easily googleable) is what most classical POPs are based on, and without variable bindings, might not be tooooooo bad. It also has a lisp implementation, but who wants to read that crap?!

Your internet friend that is doing anything but working on his planning thesis.

Posted by bcg on Jun 29, 2010 at 06:13 PM UTC - 6 hrs

@bcg, thanks for the link and comment. I have this on my list of things to finish at some point, and it's interesting enough that I'll wait until I've got the time and inclination to do the full POP implementation.

Enjoy your thesis!

Posted by Sammy Larbi on Jun 30, 2010 at 08:10 AM UTC - 6 hrs
