I didn't realize it but there have been entire books written on developing Artificial Intelligence for the game of Risk. Here is an excerpt of one conclusion reached.
http://www.google.com/url?sa=t&rct=j&q= ... kvQ4QCpzhQ

Risk is a very complex game, with both infinite game-state and game-tree complexities. Even calculating a good approximation of the average complexity of the game proves to be complex.
My work has shown that it is crucial in Risk to combine several actions to achieve a high-level goal. The addition of just a single pre-defined high-level goal, paired with the means to combine several actions to achieve it, has increased the probability of winning a game by more than 60 times (Section 7.3.2), while the addition of just a high-level goal has increased the probability by almost 20 times.

I believe that the key to building very strong Risk players is dynamic plans. Instead of having three pre-defined high-level goals, the player would have to define reasonable goals on its own using the evaluation function. Then a plan would have to be created that connects the player's actions in a way that provides the highest probability of achieving that goal. Of course, after each decision the situation would have to be re-evaluated, though that might prove to be infeasible.
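To make the "dynamic plans" idea concrete, here is my own rough sketch of what the excerpt describes (none of these names or numbers come from the book): an evaluation function scores candidate high-level goals, the planner picks the best one, then greedily chains the actions that most improve that goal's score, re-evaluating the situation after every step.

```python
# Hypothetical sketch of dynamic planning as described in the excerpt.
# evaluate(), the goal dictionaries, and the toy actions are all my own
# illustrative assumptions, not the thesis's actual implementation.

def evaluate(state, goal):
    """Score how promising `state` is for achieving `goal` (assumed linear form)."""
    return goal["weight"] * state.get(goal["metric"], 0)

def plan(state, goals, actions, horizon=3):
    """Pick the goal with the best evaluation, then greedily chain the
    actions that most improve that goal's score, re-evaluating the
    resulting situation after each decision."""
    goal = max(goals, key=lambda g: evaluate(state, g))
    sequence = []
    for _ in range(horizon):
        best = max(actions, key=lambda a: evaluate(a(state), goal))
        state = best(state)          # re-evaluate from the new situation
        sequence.append(best.__name__)
    return goal["name"], sequence

# Toy actions standing in for Risk moves:
def reinforce(state):
    s = dict(state); s["armies"] = s.get("armies", 0) + 1; return s

def attack(state):
    s = dict(state); s["territories"] = s.get("territories", 0) + 1; return s

goals = [
    {"name": "conquer continent", "metric": "territories", "weight": 2},
    {"name": "build armies", "metric": "armies", "weight": 1},
]
```

A real Risk planner would of course need a proper state representation and probabilistic combat outcomes; this only illustrates the goal-selection-plus-re-evaluation loop.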
If the evaluation function determined which high-level goal a plan would be made for, then the evaluation function, and consequently TD learning as well, would have a major effect on the playing strength.
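For anyone unfamiliar with the TD learning the excerpt mentions, here is a minimal sketch of a TD(0) update on a linear evaluation function. The feature vectors, learning rate, and discount are illustrative assumptions of mine, not values from the book.

```python
# Minimal TD(0) sketch: V(s) = w . phi(s), updated toward the one-step
# target r + gamma * V(s'). All parameter values here are assumptions.

def td_update(w, features_s, features_s2, reward, alpha=0.1, gamma=0.95):
    """One TD(0) step on a linear evaluation function.

    w           -- current weight vector
    features_s  -- feature vector phi(s) of the current state
    features_s2 -- feature vector phi(s') of the successor state
    reward      -- immediate reward observed on the transition
    """
    v_s = sum(wi * fi for wi, fi in zip(w, features_s))
    v_s2 = sum(wi * fi for wi, fi in zip(w, features_s2))
    delta = reward + gamma * v_s2 - v_s          # TD error
    # Move each weight in proportion to its feature's contribution.
    return [wi + alpha * delta * fi for wi, fi in zip(w, features_s)]
```

The point of the excerpt's last sentence is that if this learned evaluation also chooses the goal, every weight update changes not just move scoring but the whole plan.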