Co-Opetition!

I finally found the name for the behavior I wanted to know if agents could learn back in 2010! My original question was:

I was wondering: if multiple cooperative agent teams are competing, could they learn when it would be in their best interest to cooperate with a competitor?

Now I know I was really asking whether learning agents could learn coopetition!

 

I always love to find the right terminology.  It makes life so much easier when doing research.

Thoughts on an old research question

Back in undergrad (July 2010!), my professor, Dr. Babcock, posed the following question to help expand my narrow focus:

The really interesting question is how does making the individual rules more complex affect the aggregate behavior patterns (or does the society no longer have patterns).

As I am studying multiagent learning (MAL), swarms, and emergence, I am thoroughly intrigued by this idea.  I am hoping that a better mathematical framework for understanding emergence and swarms will let me formulate an answer.

In nature, patterns generally emerge from either coordination or cooperation.  So, a better question might be: can coordination and cooperation be abstracted so far away by the complexity of the rules that aggregate behavior patterns no longer emerge?  At that point we might still observe emergent behavior, but only due to some more complex social notion.  So, what is that notion, and how can we observe it? :)

 

*On a side note: when I found that email, I noticed that I sent it on July 2, 2010, and he replied on July 6, 2010.  I really had a cool professor, taking the time to reply even around July 4.

Some swarm thoughts

UPenn researchers have collected examples of swarm and group behaviors found in nature.  More than half of the papers describing the collective behavior of swarms exhibited coordination.  The cool thing is that they found coordination behavior everywhere from killer whales to cancer cell populations!  It definitely seems that coordination and cooperation are the keys to characterizing swarms.

So far, I had been assuming that swarms rely on cooperation, which is why I thought the Shapley value would be very useful for characterizing them (the formula is sketched after the examples below).  However, it can only describe half the cases, because in nature coordination happens without cooperation.  Some good examples I found in the PowerPoint here are:

“A group of people are sitting in a park. As a result of a sudden downpour, all of them run to a tree in the middle of the park because it is the only source of shelter.”

or

“Individual drivers in traffic following traffic rules”

The difference between having cooperation and not is responsibility.  As a counterexample, they give a convoy of drivers: convoy drivers are cooperating, because each has taken on a responsibility to the others.
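For reference, here is the classical Shapley value I had in mind.  This is the standard game-theory definition; treating a swarm as a coalitional game $(N, v)$, where $v(S)$ is the value a sub-swarm $S$ can achieve on its own, is my own assumption:

\[
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!} \, \bigl( v(S \cup \{i\}) - v(S) \bigr)
\]

The point is that $\phi_i$ only makes sense when there is a joint payoff to divide fairly among responsible members, which is exactly what pure coordination (the downpour, the traffic rules) lacks.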

So, I believe that there are at least three types of swarms:

1. Joint coordination and cooperation
2. Coordination only
3. Cooperation only

The good news is that joint intentions are useful in describing coordination.  So, I need to read up on that.

 

This paper seems very interesting because it takes a wholly physics-based approach to describing swarm phenomena.  Also, this paper is interesting because it determines the most influential k nodes in a social graph using the Shapley value.

Characterization of the Culture of Swarms

One of my goals over the winter break was to come up with a good paper idea that had a lot of math behind it and would move the field of large multiagent learning systems forward.
Our contributions would be:

1. An alternative Shapley value formula for swarms (a toy baseline is sketched below)
2. Integration of the swarm value with k-order additive fuzzy measure theory
3. Solving the inverse problem to determine whether a large system of agents is a swarm
4. Using that information to create autonomous hybrid swarm/multiagent learning environments
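As a baseline for contribution 1, here is a minimal brute-force computation of the classical Shapley value in Python.  The three-agent characteristic function `v` is a hypothetical stand-in, and a swarm-specific formula would need to replace the exponential enumeration with something that exploits the agents' interchangeability:

```python
from itertools import combinations
from math import factorial

def shapley_values(agents, v):
    """Brute-force classical Shapley value: phi[i] is agent i's weighted
    average marginal contribution v(S + {i}) - v(S) over all coalitions S."""
    n = len(agents)
    phi = {}
    for i in agents:
        others = [a for a in agents if a != i]
        total = 0.0
        for size in range(n):
            # Weight = fraction of join orders in which S precedes agent i.
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                total += weight * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi

# Hypothetical toy game: coalition value grows superadditively with size,
# a crude stand-in for a swarm of interchangeable agents.
def v(coalition):
    return len(coalition) ** 2

print(shapley_values(["a", "b", "c"], v))
# Interchangeable agents split v(N) = 9 equally: {'a': 3.0, 'b': 3.0, 'c': 3.0}
```

The equal split in the toy output is the key observation: with fully interchangeable agents the classical formula collapses to $v(N)/|N|$, so a swarm-specific alternative (contribution 1), and a k-order additive measure that caps interaction terms at order k (contribution 2), are what would make the computation tractable and informative at swarm scale.  That reading of the contributions is my own sketch, not settled work.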

 

However, we might find that this is too much work for one paper.

Integrating Agent Models

In my earlier post I considered combining the game-theoretic and measure-theoretic methodologies of MAS and swarm systems.  While researching this topic, I found this paper: https://jacobstechnology.com/acs/pdf/ESOA06Hybrid.pdf.  Although the people who published it seem respectable, the paper has never been cited, which is strange since it is an interesting topic.  They arrived at the same questions I have about combining agent/swarm methodologies:

To what extent can we construct a unified development methodology that supports both extremes (and thus intermediate agent types as well)? It is clumsy to have to use one methodology [38] for BDI agents, and a completely separate one [26] for swarming agents. Hybrid systems will only become commonplace when both kinds of agents can be developed within an integrated framework.

 

They believe that integrating different agent models is an important problem.

 

Also, before that paper was published, GMU had published the paper Cooperative Multiagent Learning: The State of the Art.  In their conclusion they stated the need for team heterogeneity, meaning a team composed of agents with differing abilities.  They also make the case that when a team grows too large, the agents' abilities can't all be different, which is the same conclusion I came to.  For example, consider swarms interacting with more complex multiagent systems: how do they cooperate and learn from each other?  What is the framework for such interaction?  How can the agents' mathematical underpinnings act as a catalyst for the communication needed for teaching and learning?  Can a universal framework be developed?

Swarms

Innovation and learning seem to develop more quickly in larger populations; compare Europe to the pre-contact Americas. So I think it would be wise to see whether innovation emerges in large groups of agents that are organized into sub-groups doing different tasks.

For example, have a huge “soup” of agents, with different goals for different populations of agents. Will those with similar goals learn to cooperate?

If we limit resources, will innovation emerge through different populations cooperating to create a hierarchy that accomplishes their respective goals?

Essentially: given a large population with differing goals but limited by the same resources, will the agents learn to innovate in order to accomplish their goals, or will they fail? A toy version of this setup is sketched below.
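Here is a minimal sketch of the kind of experiment I have in mind, in Python.  Everything in it is a hypothetical toy: the two goal populations, the shared resource pool with scarce regrowth, and the naive imitate-your-betters learning rule.

```python
import random

POPULATIONS = {"gatherers": 30, "builders": 30}  # hypothetical goal groups
RESOURCES = 500  # shared starting pool
REGEN = 40       # regrowth per round: under one unit per agent, so scarcity persists

class Agent:
    def __init__(self, goal):
        self.goal = goal
        self.coop_rate = 0.5  # probability of pooling a draw with same-goal agents
        self.score = 0.0

def step(agents, resources):
    """One round: each agent tries to draw one unit from the shared pool.
    Cooperators split their draw (with an assumed synergy bonus) across
    their whole goal population; defectors keep it for themselves."""
    random.shuffle(agents)
    for agent in agents:
        if resources <= 0:
            break
        resources -= 1
        if random.random() < agent.coop_rate:
            team = [a for a in agents if a.goal == agent.goal]
            for a in team:
                a.score += 1.5 / len(team)  # assumed synergy from pooling
        else:
            agent.score += 1.0
    return resources

def adapt(agents):
    """Naive learning rule: nudge coop_rate toward the strategy of a
    better-scoring peer with the same goal."""
    for agent in agents:
        peer = random.choice(agents)
        if peer.goal == agent.goal and peer.score > agent.score:
            agent.coop_rate += 0.1 * (peer.coop_rate - agent.coop_rate)

agents = [Agent(g) for g, n in POPULATIONS.items() for _ in range(n)]
resources = RESOURCES
for _ in range(100):
    resources = step(agents, resources) + REGEN
    adapt(agents)

for goal in POPULATIONS:
    rates = [a.coop_rate for a in agents if a.goal == goal]
    print(goal, round(sum(rates) / len(rates), 3))
```

Whether coop_rate actually rises under scarcity is the experiment; the variants I would try next are cross-population sharing (coopetition again) and letting the goals themselves change.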

At the macro level, the complex micro-level actions appear to be simple effects. So, in essence, we have a swarm.
To start a list of large multiagent systems that might also be swarms:

Traffic networks, including the internet, cars, planes, and any other type of transportation routing

Learning in multiagent systems when the other agents don't learn?