Parallel Strategies for Best-First Generalized Planning

Parallel programming
AI
Automated planning

We applied parallel search techniques to Best-First Generalized Planning (BFGP), a Generalized Planning algorithm based on heuristic search.

Blocks-world planning problem example, with one gripper and two towers of blocks.

This was my Bachelor’s Thesis project, developed at the School of Engineering of Universitat Pompeu Fabra. The project was supervised by Javier Segovia-Aguas, from the Artificial Intelligence and Machine Learning Research Group. If you are interested in the project, you can also check the full thesis document. Below, you can find a copy of the paper presented in the Highlights of Reasoning about Actions, Planning and Reactive Synthesis workshop.

Abstract

In recent years, there has been renewed interest in closing the performance gap between state-of-the-art planning solvers and generalized planning (GP), a research area of AI that studies the automated synthesis of algorithmic-like solutions capable of solving multiple classical planning instances. One of the current advancements has been the introduction of Best-First Generalized Planning (BFGP), a GP algorithm based on a novel solution space that can be explored with heuristic search, one of the foundations of modern planners. This paper evaluates the application of parallel search techniques to BFGP, another critical component in closing the performance gap. We first discuss why BFGP is well suited for parallelization and some of its differentiating characteristics from classical planners. Then, we propose two simple shared-memory parallel strategies with good scaling with the number of cores.

Introduction

Generalized planning (GP) has been a longstanding area of research in Artificial Intelligence (AI) (Jiménez et al., 2019; Srivastava et al., 2011). The foundation of GP is automated planning, which studies how to construct sequences of actions (commonly known as plans) to go from a specific initial state to a goal (Ghallab et al., 2016). Since planning is a hard problem (PSPACE-complete) (Baz et al., 2021), solving multiple problem instances from the same domain is computationally expensive. Given this realization, GP studies the synthesis of general plans that can solve multiple problem instances from the same domain, reducing the computational complexity to a one-time up-front cost (Hu & De Giacomo, 2011; Srivastava, 2011).

State-of-the-art planning solvers are often heuristic-based planners (Taitler et al., 2024). These planners use heuristics to guide the combinatorial search from the initial state to a goal state, usually based on the cost of solving a relaxed version of the problem as an estimate of the actual cost of reaching the solution (Bonet & Geffner, 2001). Given the success of heuristic search in planning, Segovia-Aguas et al. (Segovia-Aguas et al., 2021, 2024) propose a heuristic-based approach to generalized planning, which they call Best-First Generalized Planning (BFGP). BFGP leverages a set of novel GP-native heuristics and a new solution space, independent of the number of input instances, to compute general algorithmic solutions.

Parallel programming is tightly coupled with AI’s recent success, as CPU manufacturers have transitioned to multi-core processors due to single-core performance stagnation (Kishimoto et al., 2009). As a consequence, there has been a notable effort to parallelize fundamental algorithms like Best-First Search (BFS), and these techniques have already been successfully applied to planning domains (Baz et al., 2021; Kishimoto et al., 2009; Kuroiwa & Fukunaga, 2021). One of the most impactful algorithms has been Hash Distributed A* (HDA*) (Kishimoto et al., 2009), which efficiently handles the two most challenging communication overheads of Parallel BFS: distributing search nodes and keeping a consistent closed list of reached nodes. To achieve this, they use a hash function to assign each state to a unique process (which allows local duplicate detection) and asynchronous communication.
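The hashing rule at the heart of HDA* can be sketched in a few lines (a toy illustration of the ownership idea only, not of HDA* itself; the dictionary state encoding and the choice of `crc32` are assumptions of this sketch):

```python
from zlib import crc32

def owner(state, num_workers):
    """Map a state to the single worker responsible for it, as in HDA*.

    Equal states always hash to the same worker, so that worker can
    discard duplicates against its local closed list alone, with no
    globally shared data structure."""
    # Deterministic, order-independent encoding of the state dict
    # (an assumption of this sketch; any stable serialization works).
    key = repr(sorted(state.items())).encode()
    return crc32(key) % num_workers
```

Two workers that independently generate the same state both send it to `owner(state, n)`, where the duplicate is detected locally.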

While using parallel techniques for classical planners is an active research topic, applications to generalized planning are much less explored. In this paper, we discuss how the BFGP algorithm can be easily parallelized by design. Furthermore, we present two shared-memory parallelization strategies that can scale linearly with the number of cores.

Suitability of Best-First Generalized Planning for parallelization

A classical planning problem (Haslum et al., 2019) is defined as P = ⟨D, I⟩, where D = ⟨F, A⟩ is the domain that comprises the set of lifted predicates F and actions A, and I is the instance that specifies the set of constant objects, the initial state s0, and the goal condition G. A solution to P is a sequence of actions or plan π = ⟨a1, …, am⟩ that maps the initial state to a goal state where the goal condition holds.

GP is formalized as the problem of finding an algorithm-like solution (also known as generalized plan) to a set of planning problems, that is P = {P1, …, PT}, where all planning problems share the domain D but may differ in the instances (different objects, initial state and/or goal condition). We focus on a special kind of generalized plans named planning programs. Formally, a planning program (Segovia Aguas et al., 2016) is a sequence of instructions Π = ⟨w0, …, wn⟩, where each instruction wi is associated with a program line i and can be of one of the following three types:

  • A planning action wi ∈ A, where A is the set of deterministic functions of the planning domain.

  • A goto instruction go(i', !y), where i' is a program line such that 0 ≤ i' ≤ n and i' ≠ i + 1, and y is a proposition. Proposition y can be the result of an arbitrary expression on state variables.

  • A termination instruction end. The last instruction of a planning program is always a termination instruction, i.e., wn = end.

A planning program Π is a solution to P iff the execution of Π on every problem Pt ∈ P generates a classical plan πt that is a solution to the original planning problem Pt.
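To make the semantics concrete, here is a minimal interpreter for such programs (a hedged sketch: the tuple encoding of instructions and the convention that a goto jumps when its proposition holds, falling through otherwise, are assumptions of this example, not the exact BFGP formalism):

```python
def run_program(program, state, max_steps=1000):
    """Execute a planning program on one instance and return the classical
    plan it induces. Instructions, one per program line:
      ('act', name, f)   apply action f to the state, record name, fall through
      ('goto', i, cond)  jump to line i if cond(state) holds, else fall through
      ('end',)           terminate"""
    state, plan, pc = dict(state), [], 0
    for _ in range(max_steps):
        instr = program[pc]
        if instr[0] == 'end':
            return plan, state
        if instr[0] == 'act':
            _, name, f = instr
            f(state)
            plan.append(name)
            pc += 1
        else:  # goto
            _, target, cond = instr
            pc = target if cond(state) else pc + 1
    return None, state  # ran out of budget: treated as a failed execution
```

For instance, a three-line program that decrements a counter while it is positive solves every instance of that toy domain, producing a different classical plan for each initial value.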

In this work, we use the generalized planner BFGP (Segovia-Aguas et al., 2024), which has shown good performance for computing planning programs via heuristic search. BFGP is a frontier-search algorithm (Korf et al., 2005), which means that it does not repeat states during the search (i.e., all expanded planning programs are different). This unique characteristic eliminates the need to maintain a closed list of states to prevent search loops. Notably, this simplifies the algorithm’s parallelization, as duplicate detection is a significant source of synchronization and communication overheads in parallel BFS.

Another relevant detail about BFGP is that it is a Greedy Best-First Search (GBFS). Unlike A*, a GBFS is guided by an evaluation function that only takes into account the estimated cost h(n) of reaching the goal from the current node n (instead of also considering g(n), the cost of the solution path up to the current node) (Kuroiwa & Fukunaga, 2021). In a GBFS, we are only interested in finding a solution to the problem, not necessarily the optimal one. Likewise, this trait eases the task of parallelizing BFGP since, once we find a solution, it is unnecessary to check for optimality.
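The distinction is easy to state in code. Below is a generic GBFS skeleton over an abstract search space (a sketch, not BFGP's actual loop; in particular it keeps a `seen` set, which BFGP's frontier search makes unnecessary):

```python
import heapq

def gbfs(start, h, successors, is_goal):
    """Greedy best-first search: the open list is ordered by h(n) alone,
    and the first goal popped is returned, with no optimality guarantee."""
    open_list = [(h(start), start)]
    seen = {start}
    while open_list:
        _, node = heapq.heappop(open_list)
        if is_goal(node):
            return node
        for succ in successors(node):
            if succ not in seen:
                seen.add(succ)
                heapq.heappush(open_list, (h(succ), succ))
    return None
```

An A* would push (g(n) + h(n), node) instead; dropping g(n) is exactly what makes the search greedy.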

Parallel Best-First Generalized Planning

The following section presents our two strategies for parallelizing BFGP¹ and evaluates their performance in nine classical planning domains; three of them are propositional (corridor, gripper, and visitall) and the other six are numeric domains (fibonacci, find, reverse, select, sorting, and triangular sum).

Parallel strategy #1. It sequentially expands nodes until there are at least k nodes per thread². Then, it starts a parallel search in which each thread is independent and does not share nodes with other threads. To ensure a balanced workload distribution, k should be larger for planning domains that require more program lines to reach a solution. Table 1 evaluates the scaling of this strategy (the single-threaded execution corresponds to the original BFGP implementation).

Execution time (seconds) of the first parallel strategy for different numbers of threads (T). T = 1 is the original BFGP. Best results in bold.

| Domain | T = 1 | T = 2 | T = 4 | T = 8 | T = 16 |
| --- | --- | --- | --- | --- | --- |
| Corridor | 1212.37 | 619.82 | 96.2 | 15.24 | **12.37** |
| Fibonacci | 0.46 | 0.24 | 0.13 | 0.08 | **0.03** |
| Find | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Gripper | 12.92 | 6.72 | 3.16 | 1.28 | **0.59** |
| Reverse | 0.08 | 0.04 | 0.02 | **0.01** | **0.01** |
| Select | 0.02 | 0.02 | 0.02 | **0.01** | **0.01** |
| Sorting | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 |
| Triangular Sum | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Visitall | 338.14 | 227.85 | 157.91 | 114.91 | **79.48** |
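In outline, the first strategy behaves as sketched below (a toy Python model over an abstract search space: the parameter name k, the round-robin split of the frontier, and the use of Python threads, which only model the coordination logic rather than true parallel speedup, are all assumptions of this sketch):

```python
import heapq
import threading

def parallel_search_v1(start, h, successors, is_goal, num_threads=4, k=2):
    """Strategy #1: expand sequentially until the open list holds at least
    k nodes per thread, then split it and search each share independently."""
    open_list = [(h(start), start)]
    while open_list and len(open_list) < k * num_threads:
        _, node = heapq.heappop(open_list)
        if is_goal(node):  # solved during the sequential phase
            return node
        for s in successors(node):
            heapq.heappush(open_list, (h(s), s))

    # Split the frontier round-robin; each thread owns its share.
    shares = [open_list[i::num_threads] for i in range(num_threads)]
    solution, found = [None], threading.Event()

    def worker(frontier):
        heapq.heapify(frontier)
        # Threads never exchange nodes; they only stop when someone wins.
        while frontier and not found.is_set():
            _, node = heapq.heappop(frontier)
            if is_goal(node):
                solution[0] = node
                found.set()
                return
            for s in successors(node):
                heapq.heappush(frontier, (h(s), s))

    threads = [threading.Thread(target=worker, args=(sh,)) for sh in shares]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return solution[0]
```

Because the threads share nothing after the split, there is no communication overhead at all; the price is that the workload balance depends entirely on the initial split, which is why k matters.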

Parallel strategy #2. In this strategy, threads distribute promising nodes during the parallel search phase, so there is a tradeoff between searching the most promising states and minimizing the communication overhead. In our solution, we compute the cost-to-go value of each generated node, and if it is equal to or better than the last expanded node, we send the new node to another thread. In contrast to HDA*, where a hash function determines which process will receive the node, we simply cycle between all threads. This approach is viable because BFGP does not need to perform duplicate detection. Table 2 evaluates the scaling of this strategy.³

Execution time (seconds) of the second parallel strategy for different numbers of threads (T). Best results in bold.

| Domain | T = 4 | T = 8 | T = 16 |
| --- | --- | --- | --- |
| Corridor | 768.89 | 442.1 | **40.81** |
| Fibonacci | 0.13 | 0.1 | **0.05** |
| Find | 0.0 | 0.0 | 0.0 |
| Gripper | 5.7 | **2.88** | 3.96 |
| Reverse | 0.04 | **0.02** | **0.02** |
| Select | 0.02 | 0.02 | **0.01** |
| Sorting | 0.01 | 0.01 | 0.01 |
| Triangular Sum | 0.0 | 0.0 | 0.0 |
| Visitall | 118.82 | 113.16 | **20.84** |
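The sharing scheme of the second strategy can be modeled as follows (again a toy Python sketch of ours: the promising-node test, the busy-wait receive loop, and stopping every thread as soon as one finds a solution are simplifications, and a real implementation would add proper termination detection and asynchronous communication):

```python
import heapq
import itertools
import queue
import threading

def parallel_search_v2(initial_frontiers, h, successors, is_goal):
    """Strategy #2: one open list per thread; a freshly generated node whose
    h-value is no worse than the node just expanded is handed to another
    thread, cycling over peers instead of hashing as HDA* does."""
    num_threads = len(initial_frontiers)
    inboxes = [queue.Queue() for _ in range(num_threads)]
    solution, found = [None], threading.Event()

    def worker(tid, open_list):
        heapq.heapify(open_list)
        peers = itertools.cycle([i for i in range(num_threads) if i != tid])
        while not found.is_set():
            while not inboxes[tid].empty():   # receive nodes sent by peers
                heapq.heappush(open_list, inboxes[tid].get())
            if not open_list:
                continue                      # idle until a node arrives
            h_last, node = heapq.heappop(open_list)
            if is_goal(node):
                solution[0] = node
                found.set()
                return
            for s in successors(node):
                entry = (h(s), s)
                if num_threads > 1 and entry[0] <= h_last:
                    inboxes[next(peers)].put(entry)   # share promising node
                else:
                    heapq.heappush(open_list, entry)

    threads = [threading.Thread(target=worker, args=(i, f))
               for i, f in enumerate(initial_frontiers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return solution[0]
```

Cycling over peers is only viable because BFGP needs no duplicate detection; with a closed list, an HDA*-style hash assignment would be required to keep detection local.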

Discussion

The first strategy performs very well, with speedups ranging from ~4x to ~98x in the most complex domains. Furthermore, increasing the number of threads always results in better performance. On the other hand, no parallel strategy strictly dominates. In some domains (like Visitall), the second strategy gets better scaling and performance than the first strategy, but in others, it gets slower execution times. We believe that a better prioritization of promising nodes and the use of asynchronous communication would help the second strategy perform better than the first one. To conclude, our results show that BFGP is well-suited for parallelization, and further developments could make BFGP capable of handling more complex problems from IPC planning domains (Taitler et al., 2024).

References

Baz, D. E., Fakih, B., Nigenda, R. S., & Boyer, V. (2021). Parallel best-first search algorithms for planning problems on multi-core processors. The Journal of Supercomputing, 78(3), 3122–3151.
Bonet, B., & Geffner, H. (2001). Planning as heuristic search. Artificial Intelligence, 129(1), 5–33.
Ghallab, M., Nau, D., & Traverso, P. (2016). Automated Planning and Acting. Cambridge University Press.
Haslum, P., Lipovetzky, N., Magazzeni, D., & Muise, C. (2019). An introduction to the planning domain definition language (Vol. 13). Springer.
Hu, Y., & De Giacomo, G. (2011). Generalized planning: Synthesizing plans that work for multiple environments. IJCAI, 22(1), 918.
Jiménez, S., Segovia-Aguas, J., & Jonsson, A. (2019). A review of generalized planning. The Knowledge Engineering Review, 34, e5.
Kishimoto, A., Fukunaga, A., & Botea, A. (2009). Scalable, Parallel Best-First Search for Optimal Sequential Planning. ICAPS, 19(1), 201–208.
Korf, R. E., Zhang, W., Thayer, I., & Hohwald, H. (2005). Frontier search. Journal of the ACM (JACM), 52(5), 715–748.
Kuroiwa, R., & Fukunaga, A. (2021). On the Pathological Search Behavior of Distributed Greedy Best-First Search. ICAPS, 29, 255–263.
Segovia Aguas, J., Jimenez Celorrio, S., & Jonsson, A. (2016). Generalized Planning with Procedural Domain Control Knowledge. ICAPS, 26(1), 285–293.
Segovia-Aguas, J., E-Martín, Y., & Jiménez, S. (2022). Representation and Synthesis of C++ Programs for Generalized Planning. In GenPlan.
Segovia-Aguas, J., Jiménez, S., & Jonsson, A. (2021). Generalized Planning as Heuristic Search. ICAPS, 31(1), 569–577.
Segovia-Aguas, J., Jiménez, S., & Jonsson, A. (2024). Generalized Planning as Heuristic Search: A new planning search-space that leverages pointers over objects. Artificial Intelligence, 104097.
Srivastava, S. (2011). Foundations and applications of generalized planning. AI Communications, 24(4), 349–351.
Srivastava, S., Immerman, N., & Zilberstein, S. (2011). A new representation and associated algorithms for generalized planning. Artificial Intelligence, 175(2), 615–647.
Taitler, A., Alford, R., Espasa, J., Behnke, G., Fišer, D., Gimelfarb, M., Pommerening, F., Sanner, S., Scala, E., Schreiber, D., & others. (2024). The 2023 International Planning Competition. AI Magazine, 453–460.

Footnotes

  1. All experiments used the BFGP settings from (Segovia-Aguas et al., 2022) and were run on a 12-core Intel® Core™ i7-12700 processor (20 threads) with 32 GB of RAM.

  2. k is a parameter of the algorithm.

  3. The single-threaded results of Table 1 also apply to Table 2.