I think the AI would be able to jump around between systems, and that it would have access to various other AI-like functions (vision, planning, scheduling, communication) that reside on the system it is interacting with. However, its behavior, decision making, goals, history, and higher-level learning ability would be part of itself, and that is how it would interface with the rest of the system. This would simplify the agent-robot interaction, let the system move seamlessly between real and virtual environments, and allow code reuse and modularity.
The interface may be agent-based (meaning the interface itself has goals, priorities, behaviors, and an environment).
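To make the split concrete, here is a minimal Python sketch under my assumptions: the agent carries its own goals, history, and experience, while vision, planning, scheduling, and communication are looked up on whatever system it is currently attached to, and the interface object has its own priorities in the agent-based sense above. All the class and method names (HostSystem, Capability, AgentInterface) are made up for illustration, not an existing API.

```python
# Minimal sketch of the agent/system split described above.
# All names here are hypothetical, not an existing framework.

class Capability:
    """A function that lives on the host system (vision, planning, ...)."""
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn

    def __call__(self, *args, **kwargs):
        return self.fn(*args, **kwargs)


class HostSystem:
    """A robot or virtual environment that exposes its local capabilities."""
    def __init__(self, capabilities):
        self._caps = {c.name: c for c in capabilities}

    def lookup(self, name):
        return self._caps.get(name)


class AgentInterface:
    """Agent-based interface: it has its own priorities, not just a wrapper."""
    def __init__(self, priorities):
        self.priorities = priorities  # e.g. ["vision", "planning"]

    def bind(self, host, wanted):
        # Bind the capabilities the agent wants, in priority order.
        ordered = sorted(wanted, key=lambda n: self.priorities.index(n)
                         if n in self.priorities else len(self.priorities))
        return {name: host.lookup(name) for name in ordered if host.lookup(name)}


class Agent:
    """The part that travels with the agent: goals, history, learning."""
    def __init__(self, goals):
        self.goals = goals
        self.history = []          # experience persists across hosts
        self.interface = AgentInterface(priorities=["vision", "planning"])

    def attach(self, host):
        # Re-bind to whatever the current (real or virtual) system offers.
        self.capabilities = self.interface.bind(
            host, ["vision", "planning", "scheduling", "comms"])
        self.history.append(("attached", list(self.capabilities)))


# Usage: the same agent code runs against a real robot or a simulator.
robot = HostSystem([Capability("vision", lambda img: "objects"),
                    Capability("planning", lambda goal: ["step1", "step2"])])
agent = Agent(goals=["inspect area"])
agent.attach(robot)
print(agent.capabilities["planning"]("inspect area"))
```

The point of the sketch is that attaching to a different host only changes what bind() finds; the agent's goals and history come along unchanged.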
It also lends itself to distributed AI when there are multiple agents trying to interact with the same system: how does the system determine who gets to use what when resources are limited? Then there is the multiagent-system aspect, where the agents are trying to do things that require cooperation and have to work out who does what with which subsystems, especially when the agents are all possibly different and some have more experience doing certain tasks than others. Forming hierarchies would also be important.
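One toy way to picture the arbitration and the experience question, with made-up names (Arbiter, RepairAgent) and a deliberately simple allocation rule; a real system might use priorities, auctions, or a hierarchy instead:

```python
# Toy sketch of resource arbitration and experience-based task assignment.
# Class names and the allocation rule are assumptions for illustration.

class RepairAgent:
    def __init__(self, name, experience):
        self.name = name
        self.experience = experience   # task -> how many times done before

    def skill(self, task):
        return self.experience.get(task, 0)


class Arbiter:
    """The system side: decides who gets scarce resources and who does what."""
    def __init__(self, resources):
        self.resources = resources     # e.g. {"crane": 1, "welder": 2}

    def grant(self, agent, resource):
        # First-come arbitration on a limited pool.
        if self.resources.get(resource, 0) > 0:
            self.resources[resource] -= 1
            return True
        return False

    def assign(self, task, agents):
        # Give the cooperative task to the most experienced agent.
        return max(agents, key=lambda a: a.skill(task))


agents = [RepairAgent("A", {"weld": 5}), RepairAgent("B", {"weld": 1, "splice": 7})]
arbiter = Arbiter({"welder": 1})
lead = arbiter.assign("weld", agents)
print(lead.name, arbiter.grant(lead, "welder"))   # A gets the welder
print(arbiter.grant(agents[1], "welder"))         # B is refused: pool exhausted
```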
I think this might have applications in areas like the smart grid, where there are robots that are repairmen. Essentially what happens is this: we have some robots, they are specialized (heterogeneous), and there is a hierarchy within the grid that can send out … more on this later.
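Just to sketch the shape of that idea while the details are still open: a hypothetical grid node in the hierarchy receives a fault and sends out whichever specialized robot matches it, escalating upward if nothing local fits. Everything here, including the fault kinds, is invented for illustration.

```python
# Hypothetical sketch of a grid hierarchy dispatching specialized repair robots.

class RepairRobot:
    def __init__(self, name, specialty):
        self.name = name
        self.specialty = specialty     # e.g. "transformer", "line"

    def dispatch_to(self, fault):
        return f"{self.name} heading to {fault['location']} for {fault['kind']} repair"


class GridNode:
    """One level of the hierarchy; it owns a pool of heterogeneous robots."""
    def __init__(self, robots, parent=None):
        self.robots = robots
        self.parent = parent           # escalate if no local robot fits

    def handle(self, fault):
        for robot in self.robots:
            if robot.specialty == fault["kind"]:
                return robot.dispatch_to(fault)
        if self.parent:
            return self.parent.handle(fault)   # push the job up the hierarchy
        return "no suitable robot available"


regional = GridNode([RepairRobot("R1", "transformer")])
local = GridNode([RepairRobot("L1", "line")], parent=regional)
print(local.handle({"kind": "transformer", "location": "substation 4"}))
```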
You should be able to text your order to fast food and sub shops, or use a universal ordering app. The place would just subscribe to the website as a service, and the site would have a standard way to order things.
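The "standard way to order" part could look something like this sketch, where a shop subscribes to a hypothetical ordering service and every order arrives in one common format. The OrderingService class, the Order fields, and the shop name are all assumptions, not an existing API.

```python
# Sketch of a universal ordering service with one standard order format.
# OrderingService and the order fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Order:
    shop: str                      # which subscribed shop this goes to
    items: list                    # e.g. [("turkey sub", 1), ("chips", 2)]
    customer_phone: str            # so the shop can text back a confirmation
    notes: str = ""


class OrderingService:
    """The website shops subscribe to; it routes standard orders to them."""
    def __init__(self):
        self.subscribers = {}      # shop name -> callback that accepts an Order

    def subscribe(self, shop_name, on_order):
        self.subscribers[shop_name] = on_order

    def place(self, order):
        handler = self.subscribers.get(order.shop)
        if handler is None:
            return "shop not subscribed"
        return handler(order)


service = OrderingService()
service.subscribe("Joe's Subs",
                  lambda o: f"Joe's Subs got {o.items} from {o.customer_phone}")
print(service.place(Order(shop="Joe's Subs",
                          items=[("turkey sub", 1)],
                          customer_phone="555-0100")))
```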