Agent-Robot Interaction and App Ideas

I think the AI agent would be able to jump around between systems, and it would have access to various AI-like functions (vision, planning, scheduling, communication) that reside on the system it is interacting with.  However, its behavior, decision making, goals, history, and higher-level learning ability would be part of the agent itself, and that is the boundary at which it would interface with the rest of the system.  This would simplify agent-robot interaction, let the system move seamlessly between real and virtual environments, and allow code reuse and modularity in the system.
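Here is a minimal sketch of that split, in Python with hypothetical names: the host system exposes the AI-like functions, while the agent object only carries its own goals, history, and decision making, and can be handed whichever host it is currently inhabiting.

    from dataclasses import dataclass, field
    from typing import Protocol


    class HostSystem(Protocol):
        """Capabilities that live on the robot or environment, not in the agent."""
        def see(self) -> dict: ...                              # vision
        def plan(self, goal: str) -> list[str]: ...             # planning
        def schedule(self, tasks: list[str]) -> list[str]: ...  # scheduling
        def send(self, message: str) -> None: ...               # communication


    @dataclass
    class Agent:
        """State that travels with the agent itself."""
        goals: list[str]
        history: list[str] = field(default_factory=list)

        def step(self, host: HostSystem) -> None:
            # Decision making stays in the agent; perception and planning
            # are borrowed from whatever host it is currently using.
            observation = host.see()
            actions = host.plan(self.goals[0]) if self.goals else []
            for action in host.schedule(actions):
                host.send(action)
                self.history.append(action)

The same Agent could be handed a real robot or a simulated environment as its HostSystem, which is where the real/virtual seamlessness and code reuse would come from.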

The interface may itself be agent-based (meaning the interface has its own goals, priorities, behaviors, and environment).

It also lends itself to distributed AI when multiple agents are trying to interact with the same system: how does the system determine who gets to use what when resources are limited?  Then there is the multiagent-system aspect, when the agents are trying to do things that require cooperation: deciding who does what with which subsystems, especially when the agents are heterogeneous and some have more experience doing certain tasks than others.  Forming hierarchies would also be important.
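As a toy illustration of the "who gets to use what" question, here is a greedy assignment sketch (names and scoring are made up) that gives each limited task to the free agent with the most experience at it; a real system might use auctions, negotiation, or the hierarchy mentioned above instead.

    def assign_tasks(tasks, agents, experience):
        """experience[(agent, task)] -> score; each agent takes at most one task."""
        assignment = {}
        free_agents = set(agents)
        # Hand out the tasks with the strongest specialists first.
        for task in sorted(tasks, key=lambda t: -max(experience.get((a, t), 0) for a in agents)):
            if not free_agents:
                break
            best = max(free_agents, key=lambda a: experience.get((a, task), 0))
            assignment[task] = best
            free_agents.remove(best)
        return assignment


    if __name__ == "__main__":
        print(assign_tasks(
            tasks=["inspect line", "replace fuse"],
            agents=["robot_a", "robot_b"],
            experience={("robot_a", "replace fuse"): 5, ("robot_b", "inspect line"): 3},
        ))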

I think this might have applications in areas like the smart grid, where the robots are repairmen.  Essentially what happens is this: we have some specialized (heterogeneous) robots, and there is a hierarchy within the grid that can send out …  more on this later.

You should be able to text your order to fast food and sub shops, or use a universal ordering app.  The place would just subscribe to the website as a service, and the site would have a standard way to order things.
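A rough guess at what that "standard way to order" could look like: one plain data format that every subscribed shop accepts.  All the field names below are invented for illustration.

    import json
    from dataclasses import dataclass, asdict


    @dataclass
    class OrderItem:
        name: str            # e.g. "turkey sub"
        size: str            # e.g. "12 inch"
        options: list[str]   # toppings, "no onions", etc.


    @dataclass
    class Order:
        shop_id: str         # which subscribed shop this goes to
        customer_phone: str  # so the shop can text back a confirmation
        items: list[OrderItem]
        pickup: bool = True

        def to_json(self) -> str:
            return json.dumps(asdict(self), indent=2)


    if __name__ == "__main__":
        order = Order(
            shop_id="subs-on-main",
            customer_phone="555-0100",
            items=[OrderItem("turkey sub", "12 inch", ["no onions", "extra pickles"])],
        )
        print(order.to_json())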

Crowd-Sourced Bird ID App

I was taking a walk tonight, listening to the birds and wishing I had an app that could ID the birds by their call/song/chirp.  I got back home, did a little digging, and couldn’t find an app that did this.  So either it is too hard, too expensive, or both.  Either way, I think it would be really cool.

So, it would work like this if I were to make it…  I would have the user’s location, I would know the date, and the user would start recording.  The program would try to pick out possible spots in the recording where it thought there were bird sounds and ask the user which ones, if any, were actually birds.  There would be a neat little slider for them to pick out the time frame where the bird actually chirped.  That sound bite would then be classified against our database using machine learning algorithms, and the user would get back a description of the bird, a picture of it, and other sounds it makes.  If it wasn’t able to ID the bird, it would ask whether you happened to know it, and you could add that bird to our database.  The more people contributed, the better the database would get and the more accurate the results would be.  Each entry could also have a picture and a link to the Wikipedia article or another article on the bird.
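Here is a rough sketch of that pipeline in Python, using only numpy and made-up thresholds: find loud spots in the recording as candidate chirps, turn the user-confirmed clip into a crude spectral fingerprint, and match it against the crowd-sourced database with nearest neighbor.  A real version would use proper audio features (MFCCs or spectrogram embeddings) and a trained classifier.

    import numpy as np


    def candidate_segments(audio, rate, window_s=0.5, threshold=0.1):
        """Return (start, end) times of windows whose RMS energy looks like a chirp."""
        window = int(window_s * rate)
        segments = []
        for start in range(0, len(audio) - window, window):
            chunk = audio[start:start + window]
            if np.sqrt(np.mean(chunk ** 2)) > threshold:
                segments.append((start / rate, (start + window) / rate))
        return segments


    def fingerprint(clip):
        """Crude spectral fingerprint: coarsely binned, normalized magnitude spectrum."""
        spectrum = np.abs(np.fft.rfft(clip))
        bins = np.array_split(spectrum, 32)
        feature = np.array([b.mean() for b in bins])
        return feature / (np.linalg.norm(feature) + 1e-9)


    def identify(clip, database):
        """database maps species name -> fingerprint; return the closest match."""
        return min(database, key=lambda name: np.linalg.norm(fingerprint(clip) - database[name]))

If the closest match is still far away, the app would fall back to asking the user, and their labeled clip would become a new database entry, which is the crowd-sourcing part.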

Then, as more and more people used the app, we could have a game that would be sort of like a scavenger hunt for birds.  It would also be cool to build a map of all the birds around the world that you could explore, built up from the GPS coordinates of the people using the app.