The code and full questions for this project are attached below.
In this project, you will design agents for the classic version of Pacman, including ghosts. Along the way, you will implement both minimax and expectimax search and try your hand at evaluation function design.
The code base has not changed much from the previous project, but please start with a fresh installation, rather than intermingling files from project 1.
As in project 1, this project includes an autograder for you to grade your answers on your machine. This can be run on all questions with the command:
<code>python autograder.py </code>
It can be run for one particular question, such as q2, by:
<code>python autograder.py -q q2</code>
It can be run for one particular test by commands of the form:
<code>python autograder.py -t test_cases/q2/0-small-tree</code>
By default, the autograder displays graphics with the -t option, but doesn’t with the -q option. You can force graphics by using the --graphics flag, or force no graphics by using the --no-graphics flag.
See the autograder tutorial in Project 0 for more information about using the autograder.
Question 1 Reflex Agent
Improve the ReflexAgent in multiAgents.py to play respectably. The provided reflex agent code provides some helpful examples of methods that query the GameState for information. A capable reflex agent will have to consider both food locations and ghost locations to perform well. Your agent should easily and reliably clear the testClassic layout.
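As a sketch of the kind of features such an agent can combine, the helper below rewards proximity to the nearest food and heavily penalizes standing next to a ghost. The arguments are hypothetical stand-ins for values a real agent would extract from the GameState; the weights are illustrative, not tuned:

```python
def manhattan(a, b):
    """Manhattan distance between two (x, y) grid positions."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def reflex_score(base_score, pacman_pos, food_list, ghost_positions):
    """Score a successor state from plain data (a sketch; the real agent
    would read these features off the GameState instead)."""
    value = float(base_score)
    if food_list:
        # Reciprocal distance: closer food -> higher score.
        value += 1.0 / (1 + min(manhattan(pacman_pos, f) for f in food_list))
    for g in ghost_positions:
        # Heavily penalize standing adjacent to a ghost.
        if manhattan(pacman_pos, g) <= 1:
            value -= 500.0
    return value
```

Reciprocal distances tend to work better than raw distances here: they keep the food term bounded and make nearby food dominate the score.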
Question 2 Minimax
Now you will write an adversarial search agent in the provided
MinimaxAgent class stub in
multiAgents.py. Your minimax agent should work with any number of ghosts, so you’ll have to write an algorithm that is slightly more general than what you’ve previously seen in lecture. In particular, your minimax tree will have multiple min layers (one for each ghost) for every max layer.
Your code should also expand the game tree to an arbitrary depth. Score the leaves of your minimax tree with the supplied self.evaluationFunction, which defaults to scoreEvaluationFunction. MinimaxAgent extends MultiAgentSearchAgent, which gives access to self.depth and self.evaluationFunction. Make sure your minimax code makes reference to these two variables where appropriate, as these variables are populated in response to command line options.
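The multi-ghost recursion can be sketched on an abstract game tree of nested lists with numeric leaves. This is a simplification of the real GameState interface: the real agent would also cut off at self.depth and score cut-off states with self.evaluationFunction, which is omitted here.

```python
def minimax_value(node, agent, num_agents):
    """Minimax over a generic tree: agent 0 (Pacman) maximizes, every
    other agent (one per ghost) minimizes.

    node is either a numeric leaf value or a list of child nodes.
    """
    if not isinstance(node, list):
        return node  # leaf: already a utility value
    next_agent = (agent + 1) % num_agents  # cycle Pacman, ghost 1, ghost 2, ...
    values = [minimax_value(child, next_agent, num_agents) for child in node]
    return max(values) if agent == 0 else min(values)
```

With two ghosts (num_agents = 3), every max layer is followed by two min layers, one per ghost, exactly as the project requires.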
Question 3 Alpha-Beta Pruning
Make a new agent that uses alpha-beta pruning to more efficiently explore the minimax tree, in
AlphaBetaAgent. Again, your algorithm will be slightly more general than the pseudocode from lecture, so part of the challenge is to extend the alpha-beta pruning logic appropriately to multiple minimizer agents.
You should see a speed-up (perhaps depth 3 alpha-beta will run as fast as depth 2 minimax). Ideally, depth 3 on
smallClassic should run in just a few seconds per move or faster.
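On the same kind of abstract tree (a sketch, not the required implementation), the pruning extends to multiple minimizers by threading alpha and beta through every ghost layer. Note the strict inequalities in the pruning tests, which avoid pruning on equality:

```python
def alphabeta(node, agent, num_agents, alpha=float('-inf'), beta=float('inf')):
    """Alpha-beta value of a generic tree node (numeric leaf or list of children)."""
    if not isinstance(node, list):
        return node
    next_agent = (agent + 1) % num_agents
    if agent == 0:  # maximizer (Pacman)
        v = float('-inf')
        for child in node:
            v = max(v, alphabeta(child, next_agent, num_agents, alpha, beta))
            if v > beta:  # strictly greater: do not prune on equality
                return v
            alpha = max(alpha, v)
        return v
    else:  # minimizer (one of the ghosts)
        v = float('inf')
        for child in node:
            v = min(v, alphabeta(child, next_agent, num_agents, alpha, beta))
            if v < alpha:
                return v
            beta = min(beta, v)
        return v
```

Pruning never changes the value at the root: on any tree, alphabeta returns the same result as plain minimax, only faster.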
Question 4 Expectimax
Minimax and alpha-beta are great, but they both assume that you are playing against an adversary who makes optimal decisions. As anyone who has ever won tic-tac-toe can tell you, this is not always the case. In this question, you will implement the
ExpectimaxAgent, which is useful for modeling probabilistic behavior of agents who may make suboptimal choices.
As with the search and constraint satisfaction problems covered so far in this class, the beauty of these algorithms is their general applicability. To expedite your own development, we’ve supplied some test cases based on generic trees. You can debug your implementation on small game trees using the command:
<code>python autograder.py -q q4</code>
Question 5 Evaluation Function
Write a better evaluation function for Pacman in the provided function
betterEvaluationFunction. The evaluation function should evaluate states, rather than actions like your reflex agent evaluation function did. With a depth 2 search, your evaluation function should clear the
smallClassic layout with one random ghost more than half the time and still run at a reasonable rate (to get full credit, Pacman should be averaging around 1000 points when he’s winning).
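One common shape for such a function is a weighted linear combination of hand-picked features. The helper below is a hypothetical sketch over plain data: the real function would compute these features from the GameState, and the weights here are illustrative, not tuned:

```python
def better_evaluation(game_score, food_dists, capsules_left, ghost_dists, scared_times):
    """Hypothetical state evaluation: a weighted sum of features.

    All arguments are stand-ins for features extracted from a GameState:
    the current game score, distances to remaining food, the number of
    capsules left, distances to each ghost, and each ghost's scared timer.
    """
    value = float(game_score)
    if food_dists:
        value += 1.0 / (1 + min(food_dists))  # pull toward the nearest food
    value -= 20.0 * capsules_left             # encourage eating capsules
    for d, scared in zip(ghost_dists, scared_times):
        if scared > d:
            value += 50.0 / (1 + d)           # chase ghosts we can safely eat
        elif d <= 1:
            value -= 500.0                    # avoid standing next to an active ghost
    return value
```

Because this evaluates states rather than actions, it can be plugged directly into the depth-limited search agents above as the cut-off scoring function.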