
Competition II

February 11, 2008

I couldn’t let this one alone. While reading Case’s book on Competition, I decided to simulate the theoretical game Iterated Prisoner’s Dilemma. The explanation on Wikipedia is great. I use the canonical PD payoff matrix {(3,3), (0,5), (5,0), (1,1)}, where the scores are the points awarded to (agent1, agent2) for the combinations of play {(mum, mum), (mum, snitch), (snitch, mum), (snitch, snitch)}, respectively.
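
For concreteness, here is a minimal sketch of how that payoff matrix might be encoded in Python (the names are illustrative, not necessarily the ones used in the downloadable code):

    # 'C' = stay mum (cooperate), 'D' = snitch (defect).
    PAYOFF = {
        ('C', 'C'): (3, 3),
        ('C', 'D'): (0, 5),
        ('D', 'C'): (5, 0),
        ('D', 'D'): (1, 1),
    }

    def score(move1, move2):
        # Points awarded to (agent1, agent2) for one play.
        return PAYOFF[(move1, move2)]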

The simulation runs 200-game matches for 40 rounds of play. I have coded both everybody-plays-once-per-round and everybody-plays-everybody-per-round tournaments, although the results here are only from everybody-plays-once-per-round tournaments. Tournaments start with 25 agents of each strategy type.
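
A single match in this scheme is just 200 plays with running histories. Treating a strategy as a function of the two move histories, the match loop looks roughly like this (a sketch, not the exact downloadable code):

    import random

    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def play_match(strategy1, strategy2, games=200):
        # Play one 200-game match; return each agent's total score.
        history1, history2 = [], []
        total1 = total2 = 0
        for _ in range(games):
            move1 = strategy1(history1, history2)
            move2 = strategy2(history2, history1)
            points1, points2 = PAYOFF[(move1, move2)]
            total1 += points1
            total2 += points2
            history1.append(move1)
            history2.append(move2)
        return total1, total2

    # Example: Defect vs. Random over one match.
    defect = lambda mine, theirs: 'D'
    coin_toss = lambda mine, theirs: random.choice(['C', 'D'])
    print(play_match(defect, coin_toss))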

I programmed 7 agent strategies (two of them are sketched in code just after this list):

  • Defect (always snitch)
  • Random (fair coin toss)
  • Tit-for-tat (do what your opponent did last play; T4T is best overall according to the literature)
  • T4T with occasional snitches
  • T4T with occasional forgiveness
  • Prey on cooperators
  • Average T4T (T4T based on the average of all plays to date)
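
Roughly, the two tit-for-tat variants that matter most below work like this, in the same strategy-as-function style as the match loop above (simplified sketches; the agents in the download carry a little more bookkeeping):

    def tit_for_tat(mine, theirs):
        # Cooperate first, then echo the opponent's last move.
        return theirs[-1] if theirs else 'C'

    def average_tit_for_tat(mine, theirs):
        # Cooperate while the opponent has cooperated at least
        # half of the time so far; otherwise defect.
        if not theirs:
            return 'C'
        if theirs.count('C') >= len(theirs) / 2.0:
            return 'C'
        return 'D'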

Below are the results of 40 rounds of play. After each round, the total points earned by each strategy are tallied, and the population is then redistributed in proportion to each strategy’s share of the points. Predatory strategies tend to go extinct after 15 to 20 rounds.
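
The redistribution step is essentially replicator dynamics: a strategy’s share of the next round’s 175 agents (7 strategies × 25 agents) is proportional to its share of this round’s points. A sketch of that step, assuming a dict of per-strategy point totals:

    def redistribute(points, population=175):
        # Map strategy name -> agent count for the next round,
        # in proportion to points earned this round.  (Rounding
        # can shift the total population by an agent or two.)
        total = float(sum(points.values()))
        return dict((name, int(round(population * pts / total)))
                    for name, pts in points.items())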


[Figure: IPD population plot]

Average T4T seems to be the population winner in my simulations, settling in at about 40% of the population. I haven’t seen any documentation of this effect. It may be an artifact of my particular mix of agents at the start of the tournament. Or I may have discovered a new, effective agent strategy, though that seems unlikely for such an obvious strategy. Has anyone seen this effect before? Please drop me a line.

The game win-loss-tie record between agent strategies was a little surprising to me. Defection wins most of the matches, but it is clear that winning games doesn’t correspond closely to earning the highest scores per round. Coming in a close second against a wide range of strategies is what explains a particular population’s survival and growth.

Match Win-Loss-Tie (fractions of matches won, lost, and tied by the first-named strategy)


AvgT4T-Random: 0.492, 0.508, 0.000
T4TDefect-Random: 1.000, 0.000, 0.000
T4TDefect-T4TForgive: 1.000, 0.000, 0.000
AvgT4T-T4TForgive: 0.000, 0.000, 1.000
Defect-T4TDefect: 1.000, 0.000, 0.000
Defect-Pred1: 1.000, 0.000, 0.000
T4T-AvgT4T: 0.000, 0.000, 1.000
Pred1-T4TForgive: 1.000, 0.000, 0.000
T4TDefect-T4T: 0.974, 0.000, 0.026
Pred1-AvgT4T: 1.000, 0.000, 0.000
T4T-T4TForgive: 0.000, 0.000, 1.000
Defect-AvgT4T: 1.000, 0.000, 0.000
T4TDefect-AvgT4T: 1.000, 0.000, 0.000
Random-T4TForgive: 1.000, 0.000, 0.000
Defect-Random: 1.000, 0.000, 0.000
Pred1-Random: 0.250, 0.750, 0.000
Defect-T4T: 1.000, 0.000, 0.000
Defect-T4TForgive: 1.000, 0.000, 0.000
Pred1-T4T: 1.000, 0.000, 0.000
Random-T4T: 0.750, 0.000, 0.250
T4TDefect-Pred1: 1.000, 0.000, 0.000

If you compare the percentage of points earned in the 7!/(2! 5!) = 21 possible pairings, it is clear that the point leaders split the points fairly evenly with all opponents, while the agent strategies that rapidly go extinct lose to some opponents by wider margins.
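
For reference, those pairings are just the unordered pairs of the 7 strategy names used in the tables:

    strategies = ['Defect', 'Random', 'T4T', 'T4TDefect',
                  'T4TForgive', 'Pred1', 'AvgT4T']
    pairings = [(a, b) for i, a in enumerate(strategies)
                for b in strategies[i + 1:]]
    print(len(pairings))  # 21 = 7!/(2! 5!)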

Match Scores (each strategy’s fraction of the total points in the pairing)

T4T-T4TForgive: 0.500, 0.500
T4TDefect-T4TForgive: 0.510, 0.490
Random-T4TDefect: 0.492, 0.508
Defect-AvgT4T: 0.506, 0.494
T4TForgive-Random: 0.491, 0.509
Pred1-AvgT4T: 0.522, 0.478
Defect-T4T: 0.506, 0.494
T4TForgive-AvgT4T: 0.500, 0.500
T4TDefect-AvgT4T: 0.518, 0.482
Pred1-Random: 0.397, 0.603
Defect-Random: 0.858, 0.142
T4T-T4TDefect: 0.496, 0.504
AvgT4T-T4T: 0.500, 0.500
T4TDefect-Pred1: 0.512, 0.488
Pred1-T4T: 0.503, 0.497
Pred1-T4TForgive: 0.510, 0.490
Random-T4T: 0.502, 0.498
Random-AvgT4T: 0.503, 0.497
Defect-T4TDefect: 0.506, 0.494
Defect-T4TForgive: 0.537, 0.463
Defect-Pred1: 0.506, 0.494

You can download the code (zip, gzip) here to play with ideas for strategies or to see how the dynamics of the system change with the mix of agents. There is a Readme file explaining how to create new agents with your favorite strategies. Adding new agent types is fairly straightforward and usually requires only a few lines of new code. Python 2.5 is required to run the simulations, and the analysis programs require matplotlib to create the plots. This software is available under a non-commercial Creative Commons license. Have fun!
