Decision Theory Paradoxes

I’ve been researching decision theory paradoxes because, in my Bounty’s task allocation system, I am considering the benefits of giving the bondsman the choice of allocating particular tasks through an auction (i.e., exclusive) or through the normal bounty (non-exclusive) method.  This brings with it a potentially multi-criteria decision, which has me interested in the theoretical ramifications of such a choice.
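To make that a bit more concrete, here is a minimal sketch of how that choice might be framed as a weighted multi-criteria decision. The criteria, weights, scores, and the weighted_score helper are all hypothetical placeholders I made up for illustration; none of this is Bounty’s actual allocation logic.

```python
# Hypothetical sketch: scoring the auction vs. bounty choice as a
# weighted multi-criteria decision. Criteria and weights are invented
# for illustration only.

CRITERIA_WEIGHTS = {
    "expected_quality": 0.4,    # how good the delivered work is likely to be
    "time_to_completion": 0.3,  # how quickly the task is likely to finish
    "cost_to_bondsman": 0.3,    # how favourable the payout is for the bondsman
}

# Illustrative per-mechanism scores on each criterion, in [0, 1].
MECHANISM_SCORES = {
    "auction (exclusive)": {
        "expected_quality": 0.8,
        "time_to_completion": 0.5,
        "cost_to_bondsman": 0.6,
    },
    "bounty (non-exclusive)": {
        "expected_quality": 0.6,
        "time_to_completion": 0.7,
        "cost_to_bondsman": 0.8,
    },
}

def weighted_score(scores: dict) -> float:
    """Simple weighted sum over the criteria."""
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

if __name__ == "__main__":
    for mechanism, scores in MECHANISM_SCORES.items():
        print(f"{mechanism}: {weighted_score(scores):.2f}")
    best = max(MECHANISM_SCORES, key=lambda m: weighted_score(MECHANISM_SCORES[m]))
    print(f"Chosen mechanism: {best}")
```

Even something this simple runs into the theory: weighted-sum aggregation is known to suffer from issues like rank reversal when criteria are added or rescaled, which is exactly the sort of thing I want to understand before committing to a mechanism.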

In my search I have encountered quite a few paradoxes in decision theory, utility theory, and social choice (i.e., voting).  However, other than the Wikipedia listing, I haven’t found an authoritative reference (a book) on the subject!  I find this quite strange.  I know there are probably very few people who like this sort of thing, but I believe that knowing these paradoxes exist, and what they are, might be of use to multiagent systems researchers who use such theories as the basis for their work.  So, if I can’t find a book or a survey paper on the subject, I might write a paper on them myself.  That would be fun.  I would relate them all to problems in MAS, AI, or computer science more broadly, so it would be the paradoxes that are relevant to an AI researcher.
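As a small taste of the kind of paradox I have in mind, here is a self-contained sketch of the Condorcet (voting) paradox: three agents with perfectly transitive individual rankings whose pairwise majority preference is nonetheless cyclic. The agent names and rankings are just an illustrative toy example.

```python
from itertools import combinations

# Condorcet paradox sketch: three agents, three options, individually
# transitive rankings, but the pairwise majority preference forms a cycle.
rankings = {
    "agent1": ["A", "B", "C"],  # A > B > C
    "agent2": ["B", "C", "A"],  # B > C > A
    "agent3": ["C", "A", "B"],  # C > A > B
}

def prefers(agent: str, x: str, y: str) -> bool:
    """True if the agent ranks x above y."""
    order = rankings[agent]
    return order.index(x) < order.index(y)

def majority_prefers(x: str, y: str) -> bool:
    """True if a majority of agents rank x above y."""
    votes = sum(prefers(a, x, y) for a in rankings)
    return votes > len(rankings) / 2

if __name__ == "__main__":
    for x, y in combinations(["A", "B", "C"], 2):
        winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
        print(f"majority prefers {winner} over {loser}")
    # Prints that A beats B, B beats C, yet C beats A: the "group
    # preference" is a cycle even though every individual ranking is not.
```

Naively aggregating agent preferences in a multiagent system can hit exactly this kind of intransitivity, which is why I think these results deserve more attention from MAS researchers.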