Agent-Robot Interaction and app idea

I think the AI would be able to jump around between systems, and that it would have access to various AI-like functions — vision, planning, scheduling, communication — that reside on the system it is interacting with. However, its behavior, decision making, goals, history, and higher-level learning ability would be part of the agent itself, and it would interface with the rest of the system through that boundary. This would simplify agent-robot interaction, let the system move seamlessly between real and virtual environments, and allow code reuse and modularity.
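As a rough sketch of this split (all class and capability names below are invented for illustration), the agent might carry only its goals and history while borrowing capabilities from whatever host it is currently attached to:

```python
class HostSystem:
    """A real or virtual environment exposing capability services by name."""
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities  # e.g. {"vision": fn, "planning": fn}

    def call(self, capability, *args):
        return self.capabilities[capability](*args)

class Agent:
    """Carries its own goals and history; borrows capabilities from a host."""
    def __init__(self, goals):
        self.goals = list(goals)
        self.history = []
        self.host = None

    def attach(self, host):
        # "jumping" to another system: the agent's state travels with it
        self.host = host

    def act(self):
        scene = self.host.call("vision")
        plan = self.host.call("planning", scene, self.goals[0])
        self.history.append((self.host.name, plan))
        return plan

real = HostSystem("warehouse-robot", {
    "vision": lambda: "pallet ahead",
    "planning": lambda scene, goal: f"avoid {scene}, pursue {goal}",
})
sim = HostSystem("simulator", {
    "vision": lambda: "virtual pallet",
    "planning": lambda scene, goal: f"sim-plan for {goal}",
})

a = Agent(goals=["deliver package"])
a.attach(real)
print(a.act())
a.attach(sim)   # same agent, new environment: code reuse across real/virtual
print(a.act())
```

The point of the sketch is that nothing in `Agent` changes when it moves between the real and virtual host.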

The interface may be agent based (meaning the interface itself has goals, priorities, behaviors and an environment).

It also lends itself to distributed AI when multiple agents try to interact with the same system: how does the system determine who gets to use what when resources are limited? Then there is the multiagent-system aspect, when the agents are trying to do things that require cooperation: determining who does what with which systems, especially when the agents are heterogeneous and some have more experience doing certain tasks than others. Forming hierarchies would also be important.

I think this might have applications in areas like the smart grid, where there are specialized (heterogeneous) repair robots and a hierarchy within the grid that can send out …  more on this later.

You should be able to text your order to fast-food and sub shops, or use a universal ordering app. The restaurant would just subscribe to the website as a service, and the site would provide a standard way to order things.
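A minimal sketch of what a “standard way to order” might look like as a message format the service forwards to a subscribed vendor (all field names here are invented for illustration):

```python
import json

# A hypothetical universal order message: the site would define one
# schema, and every subscribed restaurant would consume the same shape.
order = {
    "vendor": "example-sub-shop",     # the subscribed restaurant
    "channel": "sms",                 # order arrived by text message
    "items": [
        {"name": "turkey sub", "size": "12in",
         "options": ["no onions", "extra pickles"]},
        {"name": "iced tea", "size": "large", "options": []},
    ],
    "pickup": True,
}

encoded = json.dumps(order)           # what the site forwards to the vendor
decoded = json.loads(encoded)
print(decoded["items"][0]["name"])
```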

Antiviruses and Cancer

Well, I diagnosed myself with a URI (upper respiratory infection; I finished the initial 3-day first phase and am in the second phase). Meaning I have a virus, most likely a rhinovirus. So, I was thinking about how to remove it…  I first thought of changing the cells so that they have some sort of “firewall,” probably implemented as a whitelist, so that a cell wouldn’t accept anything into itself other than the things on the list. Then I was thinking about a honeypot cell that would attract the virus and then destroy it. All of this, of course, I am taking from computer security, so I will probably find that doctors have already tried these things.

Well, my honeypot idea was implemented and published in 2011 at NIST!  http://www.sciencedaily.com/releases/2011/03/110302121842.htm.  The paper: http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0016874.  There is also more from 2012, but you need ScienceDirect access: http://www.sciencedirect.com/science/article/pii/S0167779912000327.

However, my whitelist idea doesn’t seem feasible, because the cell would have to be wrapped in this “firewall,” which essentially acts as the decoy, and this “atrium,” per se, could discard the virus once it determines that it is a virus. So, can we stick natural cells inside artificial cells and have everything work okay? I doubt it.

So, it seems like all we are doing is tricking the virus. The 2011 paper theorizes that even though the virus can evolve to stop going after the particular decoy, doing so also reduces its ability to invade natural cells. This is an interesting theory; it would be neat to get funding to test whether it is true. I wonder how you get viruses to evolve? (http://en.wikipedia.org/wiki/Viral_evolution) It seems like it’s not too hard. They do a good job of it already.

This made me think: could the cell accept the virus and just produce duds? Essentially, during the replication process, the cell would incorrectly replicate the virus so that it was useless. I think this would have to be a reaction mechanism of the cell to being attacked: essentially the virus damages the cell such that it can no longer replicate anything correctly, sort of an all-or-nothing thing. If cells were like this, we could attack cancer this way! We would just send in a bunch of viruses and destroy the cancerous cells.

Now this I need to think about…  (I have noticed I become very single-minded and focused on random things for intermittent periods when I am sick…)  Well, I looked it up, and this also seems to have been an area of research in 2010 (http://www.ncbi.nlm.nih.gov/pubmed/20433575). It seems they looked at cell death as a way to stop viruses, a “genetically controlled cell death programme,” as they call it. So, that is cool. They even mentioned my idea about it stopping cancer. Cool.

Another thing: it would seem to me that since the cells are the ones doing all the work of replicating the virus, they could learn, augment themselves, and communicate that information to the other cells. I mean, they are the ones that have all of the information about the virus, right? The cell that creates the virus is privy to detailed information about the mutation and change of the virus. So, why not adapt accordingly? Meaning, why doesn’t the cell augment its DNA to protect itself from the viruses it has replicated, then create a new cell that carries that information, leaving the original cell to die in the process?

Some viruses can prevent programmed cell death (apoptosis) (http://pathmicro.med.sc.edu/mhunt/replicat.htm).  Now that is interesting.

So, now I need a virus that can infect other viruses.  I googled that and found the virophage!

Hopefully my business takes off and I can have a bioengineering research department.

Neural net idea

Well this is a merging of neural networks from computer science and actual neuroscience.  So, the idea probably doesn’t entirely meld.

I was reading a while back that Google was studying neural nets and found that researchers could pick any random set of neurons to use as the output, rather than the standard output nodes, and obtain similar results from training. Then this morning I was reading an article about how neuroscientists have found that it was not important that they stuck the electrode in a particular neighborhood; rather, sticking it close to the white matter, the neural highways, was what made the difference.

So, I am thinking along these lines. Just as the Google scientists read output from inner nodes, what about providing input not just to the input nodes, but to interior nodes as well? Usually we have the structure input -> hidden layers -> output. What if interior nodes had external connections coming in, or nodes took multiple inputs? Would this still work? And what types of applications would it be useful for? It wouldn’t just be a complicated neural net; I would look at feed-forward ANNs and essentially try to mimic the white matter.
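A toy version of this wiring, where the hidden layer receives an extra side-channel input alongside the usual input-layer activations (weights and sizes below are arbitrary; this only shows the forward pass, not training):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x, side_input, W1, W_side, W2):
    """Feed-forward pass where hidden units also get a direct external
    input via W_side -- a crude analogue of a 'white matter shortcut'."""
    hidden = []
    for j in range(len(W1)):
        # usual input-layer term plus the side-channel term
        z = sum(W1[j][i] * x[i] for i in range(len(x)))
        z += sum(W_side[j][k] * side_input[k] for k in range(len(side_input)))
        hidden.append(sigmoid(z))
    out = [sigmoid(sum(W2[o][j] * hidden[j] for j in range(len(hidden))))
           for o in range(len(W2))]
    return out

# 2 inputs, 1 side-channel input, 3 hidden units, 1 output
W1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]
W_side = [[0.7], [-0.5], [0.2]]   # direct connections into the hidden layer
W2 = [[0.6, -0.1, 0.3]]

print(forward([1.0, 0.5], [0.9], W1, W_side, W2))
```

Changing the side-channel value shifts the output even when the ordinary inputs are fixed, which is exactly the extra degree of freedom the idea is after.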

Ethics of Self-driving cars

Well, I just read something pretty disturbing: for 10 years GM basically hid the fact that its ignition system was faulty and could cause the car to stall, leaving the passengers with no airbags. (http://finance.yahoo.com/news/engineers-switch-hell-began-gm-recall-woes-071538074–finance.html)

Ethically, how are self-driving/autonomous cars and the software in them going to do better? It is already known that bugs are essentially inherent to software. Will we rely on transparency of the software? Who will be responsible? There will probably be many people who worked on the bad piece of code. These are questions I think I should take the time to answer.

Large MAS app: Parking lots

So the idea is that local Walmarts and similar stores are not going to be able to afford remaking their parking lots for autonomous cars, where we get dropped off at the door, the robot parks somewhere, and it picks us up at the entrance later. Structurally there is not enough room for a line of cars waiting to drop off customers, and since at most Walmarts the entrance and exit are the same doors, the line would become like the one when picking up kids from school. So, it is probably logistically best if already-built Walmarts have customers walk from the car’s parking spot. The problem then becomes one of optimization: I don’t want to walk far going into the store or coming out. The emergent behavior I want out of the bounties is a cycle: initially I want to park as close to the entrance as possible; then, as time goes on, the cars must move to spaces further from the entrance so that those just arriving can park, while those leaving move closer to the exit to pick up departing customers. We want the robots to learn patterns of behavior and adapt them, rather than follow specific scripted routines.

The goal is to create behavior patterns, learn the patterns of the particular environment, and adapt by choosing which pattern of behavior should be used. The dynamics change as you move through the parking lot, since everything depends on the number of cars in the lot, the layout of the lot, the heterogeneity of the vehicles (trucks, tractor trailers, RVs, cars, motorcycles, etc.), and the uncertainty about how long each car expects its passengers to be in the store. How long you will be in the store is not something you want to share with the other cars; that is private information, and we only want our own car to know it. Even then the number is only an estimate, since the car doesn’t know whether you will stop and talk with a friend you happen to spot in the store. To improve accuracy, there would need to be an interface via your smartphone to report your progress. This seems like a nice area for wearable sensors, which could predict your progress through the store and make adjustments when you see something you like that wasn’t on your list.

So the idea is that with the bounties the cars would have a distributed mechanism solving both the constraint-satisfaction problem (everyone needs a parking spot) and the optimization problem (specific spots are wanted).

So, I think coalitions of agents will emerge due to common exit times of the customers. I also think we want to minimize the number of movements the cars make. Essentially, if you know your passenger is going to be in the store for a while, make room closer to the entrance; when you need a closer spot, you place a bounty out for one.
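As a toy sketch of the bounty mechanism (the numbers and the simple closest-spot-to-highest-bounty rule are my own illustration, not a worked-out market design):

```python
def allocate_spots(cars, spots):
    """cars: list of (car_id, bounty); spots: list of (spot_id, distance).
    Returns {car_id: spot_id}, with the closest spots going to the
    highest outstanding bounties."""
    by_bounty = sorted(cars, key=lambda c: c[1], reverse=True)
    by_distance = sorted(spots, key=lambda s: s[1])
    return {car: spot for (car, _), (spot, _) in zip(by_bounty, by_distance)}

# B's passenger is about to leave, so B bids high for a close spot;
# C's passenger will shop for a while, so C is happy to park far away.
cars = [("A", 5), ("B", 12), ("C", 1)]
spots = [("near", 10), ("mid", 40), ("far", 90)]  # distance to entrance, meters

print(allocate_spots(cars, spots))  # B wins "near", A gets "mid", C gets "far"
```

A real version would run this as a repeated, distributed auction as bounties and free spots change over time, rather than a one-shot centralized sort.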

To pay the bounties, perhaps your client would have to collect tokens by parking far away, which it could then spend in order to park closer.

A case where a lot of vehicles try to park at the same time is at distribution centers such as Utz. Many drivers leave the center and come back to park at around the same time. This might be interesting as well…  I don’t know…

This would also be useful in the case where we have a system where there are both autonomous and human driven cars.

Another problem is vehicle size: finding parking for vehicles that take up multiple spots.

That is why I think it would be awesome if the cars were sort of like a bike-share program; then there would always be a car waiting at the front. The part that would be yours could be stored in a locker…

Secure Task Allocation Scheme

In the future, robots will most likely be common enough that anyone could purchase one and, given enough knowledge, infiltrate a system of robots. Currently the level of security in task-allocation methods for robots is like that of localization-data communication between large ships: nonexistent. Methods of attack:

1. Steal a robot already in use, either physically or by remote hijacking.

2. Use a Wi-Fi interference device (take out communication).

3. Plant an identical robot in the group. This requires the robot to be able to quickly integrate and become part of the group. Once involved, reconnaissance and …

4. Mirroring (man-in-the-middle): essentially take control of the entire swarm and provide only limited, looped, or custom access/control to the legitimate owners.

I’m sure there are many more.  These are the first four that came to mind.  I know that this is not something that needs to happen right now, but it is something that needs to be considered as more and more people and institutions start using robots.  Just as people hack smart phones and regular computers there will be even more incentive to hack and infiltrate robots.

So, how would you go about securing robots? This seems like a very tough problem without constant surveillance to notify you of such things. But then, who watches the watchers, as they like to say…
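One basic countermeasure to the planted-robot and mirroring attacks above would be to authenticate every task-allocation message with a keyed MAC, so a robot without the shared key cannot forge or replay bids. This is only a sketch; a real system would also need key distribution, per-robot keys, and stronger replay protection than a simple counter:

```python
import hashlib
import hmac
import json

SHARED_KEY = b"swarm-secret-key"     # placeholder key for illustration

def sign_task(task, counter):
    """Serialize a task assignment and attach an HMAC-SHA256 tag."""
    body = json.dumps({"task": task, "counter": counter}, sort_keys=True)
    tag = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def verify_task(msg, last_counter):
    """Accept only messages with a valid tag and a fresh counter."""
    expected = hmac.new(SHARED_KEY, msg["body"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["tag"]):
        return False                  # forged or tampered message
    counter = json.loads(msg["body"])["counter"]
    return counter > last_counter     # reject replays of old messages

msg = sign_task({"robot": "r1", "job": "inspect-pipe"}, counter=7)
print(verify_task(msg, last_counter=6))   # authentic and fresh
msg["body"] = msg["body"].replace("r1", "r2")
print(verify_task(msg, last_counter=6))   # tampered en route: rejected
```

This doesn’t stop physical theft of a robot that holds the key, which is why per-robot keys and revocation would matter in practice.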

Fuzzy Teams (3/3/14)

I wrote this on a forum for a class I had.

I was thinking about framing team membership with fuzzy sets.

1. Need a good way to keep up-to-date estimates of the degree to which each of the other agents is part of your team.

2. How to tell how others perceive your membership in particular teams.

A fuzzy set is a framework for describing the degree to which an element belongs to a particular set (i.e., a team).

This could then inform decision making, communication, coordination, etc.

Some links a friend in class suggested.

http://link.springer.com/chapter/10.1007%2F978-3-540-85863-8_11

http://tavana.us/publications/SOCCER-TEAM.pdf

My response:

So, the reason I was thinking this might be useful is when an agent (robot) is part of multiple teams. Using various weighted metrics, such as number of interactions, proximity to other members, communication, and availability (the metrics might overlap…), the “fuzziness” of being part of a team could be established.
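A hedged sketch of that weighted-metric idea: normalize each metric to [0, 1] and take a weighted average as the fuzzy membership degree (the metric names, values, and weights below are all invented for illustration):

```python
def team_membership(metrics, weights):
    """metrics: dict of metric name -> value in [0, 1];
    weights: dict of metric name -> relative importance.
    Returns the weighted average, i.e. a fuzzy membership degree."""
    total = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in weights) / total

robot_vs_team_a = {
    "interactions": 0.8,   # fraction of recent interactions with team A
    "proximity": 0.6,      # normalized closeness to team A members
    "communication": 0.9,  # share of messages exchanged with team A
    "availability": 0.5,   # how free the robot is for team A tasks
}
weights = {"interactions": 3, "proximity": 1,
           "communication": 2, "availability": 1}

mu = team_membership(robot_vs_team_a, weights)
print(round(mu, 3))   # a degree in [0, 1], not a probability
```

Computing this degree against each team the robot touches gives exactly the multi-team membership vector the post describes.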

Now I just need to figure out a specific case with dynamic teams where having this info helps.

http://dl.acm.org/citation.cfm?id=756932

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.14.9562&rep=rep1&type=pdf

assuming a team is a coalition.

Current (5/10/14): Now I am thinking that this will go well with my “Friendship Network” idea.  Especially when using the bounties.

Google glass idea

I was sitting at a counter watching the cook make the orders. For each item he had to look back at the slip of paper to see what to make next. This seems like the perfect situation for Google Glass: the orders would appear on the Glass in an optimized order, so the cook wouldn’t have to keep glancing at the slips of paper. Another cool feature would be letting customers get a first-person view from the Glass camera, watching the cook make their meal. This could also serve as advertising for other meals.

I wonder if that is how the cooks at Chili’s take orders: from slips of paper.

Dynamic lane reversal and Braess’ Paradox

http://www.cs.utexas.edu/~pstone/Papers/bib2html-links/ITSC11-hausknecht.pdf   Dynamic Lane Reversal in Traffic Management Peter Stone et al.

This paper is very interesting. Based on what I recently learned about the Price of Anarchy and Braess’ paradox, I wonder why their method works. Certainly it is an immediate solution; however, I would imagine that as people learn the system, performance would degrade and Braess’ paradox would manifest. There is probably some critical mass in the number of cars that would have to be reached before this happens. Have to think about this…  I’m hoping to sit in on an urban transport design class; maybe I can ask the teacher his opinion…

https://www.oasys-software.com/blog/2012/05/braess%E2%80%99-paradox-or-why-improving-something-can-make-it-worse/ is a nice post about Braess’ paradox.
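The standard textbook instance makes the paradox concrete: with 4000 drivers and the usual link costs (one link per route costing n/100 minutes for n cars, the other a fixed 45 minutes), adding a free shortcut raises everyone’s equilibrium travel time:

```python
DRIVERS = 4000

def time_without_shortcut(n):
    # Two symmetric routes S->A->T and S->B->T; at equilibrium the
    # drivers split evenly, so each route carries n/2 cars.
    per_route = n / 2
    return per_route / 100 + 45        # variable link + fixed 45-min link

def time_with_shortcut(n):
    # With a zero-cost A->B shortcut, S->A->B->T dominates both old
    # routes for every driver, so all n cars load both variable links.
    return n / 100 + n / 100

print(time_without_shortcut(DRIVERS))  # 65.0 minutes
print(time_with_shortcut(DRIVERS))     # 80.0 minutes: worse for everyone
```

Detecting when a proposed new link pushes a network into this regime is exactly the prediction problem described below.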

So, it would be interesting to be able to predict when Braess’ paradox will affect a system and, as the above link suggests, to automatically find instances of the paradox. I think this would be useful in route planning for commuters and possibly in designing road systems. This might be framed as a multiagent-systems and distributed-optimization problem: multiagent in the sense of modeling the uncertainty of the other drivers and coordinating with them via the app.