Emotions and AI

Love and multiagent systems and AI in general…  Love can motivate and inspire us.  Affects could be incorporated into an agent’s utility function, changing the perceived costs versus the actual costs.  Emotion becomes a lens through which agents view the world.  Is this emotion necessary for AI to function appropriately?  What emotions are necessary?
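One way to picture the "perceived versus actual cost" idea is a minimal sketch where an affect term scales the cost an agent believes a task has.  Everything here (the `fear`/`excitement` parameters, the multiplicative form) is a made-up illustration, not an established model:

```python
# Hypothetical sketch: affect terms scaling an agent's perceived cost.
# The names and the multiplicative form are illustrative assumptions.

def perceived_cost(actual_cost, fear=0.0, excitement=0.0):
    """Perceived cost is inflated by fear and discounted by excitement."""
    return actual_cost * (1.0 + fear) * (1.0 - excitement)

def utility(reward, actual_cost, fear=0.0, excitement=0.0):
    """Utility computed against the *perceived* cost, not the actual one."""
    return reward - perceived_cost(actual_cost, fear, excitement)

# A fearful agent values the same task lower than a calm one:
calm = utility(10.0, 4.0)              # 10 - 4*(1.0) = 6.0
afraid = utility(10.0, 4.0, fear=0.5)  # 10 - 4*(1.5) = 4.0
```

The point of the sketch is only that the same actual cost can yield different utilities, so two otherwise identical agents with different affect states would choose differently.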

I think this could be an allegory.  And maybe it could be related somehow to autism and to people who have a hard time with emotions.  It might already be a movie or book.
I found an article on what it’s like to have never felt an emotion:

Spelling

A friend of mine shared a link to an article about why children have a hard time learning to spell.  They argue that we are teaching spelling incorrectly.  We should be teaching how words are formed and their etymology, not how to sound a word out to spell it.  They claim that about “12% of words in English are spelt the way they sound” [1].  No wonder I’m bad at spelling in English, because that is usually how I spell things.  At least in Italian, words are spelt the way they sound!  Also, I was taught to spell mainly as they described, with spelling tests and memorization.  I did also learn definitions for each word, and I was really good at remembering the definitions; I guess I just didn’t think to associate the definitions with the spelling!  Although I do sometimes do that.

So, maybe I should try learning to spell again using this method as I still struggle to spell and rely heavily on spell check.

[1] https://theconversation.com/why-some-kids-cant-spell-and-why-spelling-tests-wont-help-20497

Amazing last week or so

So, I thought I’d write about the past few days as they were quite exciting and eventful, and since I’m forgetful I don’t want to forget.  I’ll begin of course at the beginning.  Which probably isn’t really the beginning, but oh well.  The excitement started 12/12/15.  I went home to Hanover that Friday (12/11) and had supper at the Landing with Logan, Rebekah, Josh, Terri, and Mom and Dad.  Food was great; I got a BBQ burger and it was massive and delicious.  But, more importantly, it was the last meal I ate with Logan before he became a married man!!!  The next day (12/12), on a beautiful sunny day at probably around 60 degrees (practically spring in December!), at around 10am Logan and Rebekah were married!!!!!!!!!!!  The rest of the day was a bit uneventful.  But I wore a suit so that was sort of different (planning on wearing it again Jan 3…).  But I had a fun evening/night taking apart Mom’s new laptop and replacing the hard drive with an SSD.  Before replacing it I did a quick boot speed test, and it took about 1.5 minutes to do a cold boot with the hard drive.  With the SSD it took about 12 seconds.  The really cool part was that when I installed all of the stuff (anti-virus, Dropbox, etc.) it still only took about 15 seconds to boot (I wait until all the icons in the task tray are loaded).  Word and Excel start pretty much immediately as soon as I click on the icon.  Which is amazing :).  So, I was satisfied with the $80 250 GB SSD over the 1 TB HDD.

But wait that’s not all :).  Wed. 12/16 I finished my last class for my PhD!!  I only have to do comps, propose, and defend!

And then to celebrate (not really, but I like to think) the next day (12/17) I went with some friends to the DC zoo lights :)!  Basically my Bible study group went to Kings Dominion, but Wesley couldn’t come and we wanted to do something with him.  So, Stephen Kuhl told me about the Zoo lights and I got everyone together and figured out the schedule, etc.  This was really fun.  It was the biggest group of people I’ve ever organized into doing something together (9 people).  The group consisted of David, Kelsey, Cameron, Addie, Stephen Kuhl, Ashley, Stephen Emerick, Wesley, and me!  We metroed in, which was quite slow as we all tried to get there together.  I should try to get better at driving in DC.  But, altogether the lights were pretty and we got to see bison, monkeys, and gorillas.  We then ate at Chipotle afterwards.

Next came Saturday (12/19)!  I really packed all the fun into a short period of time, didn’t I, haha.  Saturday was New Hope’s 25th anniversary dinner, hosted by the lovely Katy and Jonathan of The Pixie and the Scout (which, now that I’ve met them, I think is named after her and him).  I got the opportunity to work in the kitchen with them, plating the food, cutting sausage, and learning all kinds of tricks.  I got to direct a group of people to set up the 18 tables so that they looked nice and like the model table (I’ve actually gotten good at giving orders since having to direct undergrads with robots).  The food of course was sublime!  Never had anything taste so good.  I really need to get the recipes for the dishes.  I ate in the kitchen, however, because I couldn’t find a seat.  It was ok because Katy and Jonathan were mostly in the kitchen too, so I got to talk to them a little.  They actually cater for Redeemer!  Tim Keller knows them by name!!!  Totally awesome :).  Through this experience I’ve developed a challenge problem for multirobot task allocation: serving tables.  It is a multi-robot task problem (coalitions may need to be formed), we need multi-task robots (robots that can do more than one task), tasks are time-extended, and some of the tasks have dependencies!  So, it is a very difficult problem, and having a real-life problem to try to solve is always better than solving it in the abstract and then looking for a problem to use your solution on.  So, I’m ecstatic!  I’m hoping to send them an email or Facebook message to say thanks!

What do you know, that’s not all yet!  Sunday 12/20 I saw Star Wars VII: The Force Awakens!!!!!!!!!!  It was totally awesome; I actually might go and see it again tomorrow (12/22).  I got some people from New Hope to come with me to the IMAX at the Air and Space Museum in DC to see it.  We drove in, waited in line for about an hour, and got pretty bad seats (right in the front).  But the movie was excellent.  Went with Corrie, Cameron, Addie, Stephen Emerick, Stephen Kuhl, and Ashley.  The cool part was that Steven and Pastor Scott came with some people from HCC.

So, was Star Wars the pinnacle?  I doubt it.  But it will be hard to beat.

Autonomous Vehicles

So, I’ve been thinking rather small lately.  Especially with that autonomous mixer idea, I mean, pathetic, am I right :p.  I really want to go back to the reason I wanted to get into AI and multiagent systems, which is making autonomous vehicles!  I’m sure everyone knows that car manufacturers, and even Google and Baidu, are attempting to make cars that are autonomous from the ground up.  This is great!  I’m arguing that before full consumer acceptance happens, and to make it affordable and economical, we need to make it possible for consumers to modify their existing car to make it autonomous!  This seemed to be the direction the DARPA Grand Challenge was heading in.  I found that some graduates from MIT had this same idea a year or so ago and have already made a company with a product (their website, Wired article, machine learning job at their company).  Obviously I’m excited about their product because of its simplicity and the fact that they are doing this now!  It seems like it is currently meant for highways, though.  So, it still needs a lot of work.

Automatic Dependency Injection

We need a source code analyzer that looks at the includes, imports, requires, etc. in the source code, along with the functions that are actually used, and is able to extract the correct dependencies needed to compile and run the program.  This would save a ton of time in open source development.  I think this is possible, too.  It could also be used to help developers move to new versions of their dependencies.

Then, in the future, for dependency injection, users would not even need to specify the specific libraries or versions.  These could be inferred from how the methods are used in the code.  I think this is essential for more complicated programs.
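The first step of such an analyzer is easy to sketch for Python, since the standard library `ast` module can pull out every import.  The hard part, mapping import names to installable packages and versions, is deliberately left out here:

```python
# Minimal sketch of the "look at the imports" half of the analyzer,
# using only Python's stdlib ast module. Resolving module names to
# actual package dependencies/versions is the unsolved hard part.
import ast

def imported_modules(source):
    """Return the top-level module names imported by a Python source string."""
    tree = ast.parse(source)
    mods = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            mods.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            mods.add(node.module.split(".")[0])
    return sorted(mods)

print(imported_modules("import numpy as np\nfrom os import path\n"))
# → ['numpy', 'os']
```

A fuller version would also walk attribute accesses and call sites, which is what would let it suggest the right dependency *versions* from how the methods are used.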

Modern Jukebox

The modern jukebox should be an app where people can pick the song to listen to from their personal device.  It could be voting-based or purchase-based.  Then I could listen to the music I want when I’m at the school cafeteria, at a restaurant, or at the mall.  You could get “tokens” for playing music while in the store or restaurant.  In the future your device could even make the song requests for you.

Natural Language LfD & RL

So, I’m working with Ermo on applying reinforcement learning to text-based games.  I was wondering, if our method eventually works, whether we could do text-based learning from demonstration with reinforcement learning.  Basically, instead of pressing buttons, the user would describe what they wanted the system to do using English sentences.  The user could then say yes or no to what the system is doing.  Using natural language to train a multiagent system seems like it would be better.  Especially since once it works for text, it could naturally be extended to speech!  Telling the robots what to do and what to pay attention to would be even better.
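The yes/no part of the idea can be sketched as reward shaping: the user’s answer about the agent’s last action becomes an extra reward folded into an ordinary tabular update.  All names here are illustrative assumptions, not our actual method:

```python
# Rough sketch: a user's yes/no critique of the agent's action is
# turned into a shaping reward and folded into a tabular Q-update.
# The state/action strings and function names are made up for the example.

def feedback_reward(user_answer, base_reward=0.0):
    """Map a yes/no demonstration critique onto the RL reward signal."""
    shaping = {"yes": 1.0, "no": -1.0}.get(user_answer.lower(), 0.0)
    return base_reward + shaping

def q_update(q, state, action, user_answer, alpha=0.1):
    """One-step tabular update toward the user's feedback reward."""
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (feedback_reward(user_answer) - old)
    return q

q = q_update({}, "at door", "open door", "yes")
print(q[("at door", "open door")])  # → 0.1
```

The natural-language *command* side would sit in front of this, turning an English sentence into the state/action pair being critiqued, which is the genuinely hard part.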

Bounty Hunting and Cloud Robotics

Cloud robotics needs very stringent QoS guarantees and in certain cases is highly reliant on location to satisfy some of the requirements.
So, I was thinking a while back that maybe a bounty hunting based cloud robotics system could work like:
The robot registers with the bounty hunting service, the bondsman (this could be highly distributed: there might be multiple bondsmen, or the robot itself could be the bondsman; this could be explored).  The service then posts bounties describing the tasks requested by the robot, its location/IP, and QoS requirements.  Then, as the bounty rises, the different cloud services will tell the bounty hunting service that they will go after a particular bounty.  The cloud service will then contact the robot for the information required to complete the task (there could be a few bounty hunters, and the bondsman could limit them, etc.).  If the robot replies with the needed info, the bounty hunter will proceed to complete the task.  If they are able to complete the task before the other bounty hunters, they get the reward.  If they do not, then they learn not to go after that kind of task (exactly how the current bounty hunters learn).  These tasks are repeated, and there are particular task classes due to the attributes of the types of tasks the robot needs processed (from control to high-level planning).
The other neat thing is that many of the tasks are repeating.  So, a task could be to provide a plan to get to a particular location, along with a standard performance metric.  Quality of the solution should also matter.  That is something that bounty hunting did not consider at first; however, it could be integrated.  What if the solution the bounty hunter provides included a metric that is standard across bounty hunters and quickly verifiable?  The winner would then be the one able to produce a solution within the time requirements and with the highest quality.
So, the robot sort of acts as an arbiter.  If the robot put a bounty out on a control-level task (like “give me low-level actuator commands for doing this particular thing for the next 5 seconds”), then there are two options:
1. Whoever starts giving the commands first is the winner.
2. There are multiple winners, as each is able to produce parts of the task.  Basically this is the case where it is good that you are getting commands from multiple sources: if the current winner for some reason loses its connection, you have another bounty hunter who is providing an equivalent solution and is still connected.
The bounty model seems like a good fit due to the variety of price structures across different cloud services.  The different cloud services can decide whether it is worth their time to go after a particular bounty or not.  The bondsman would also be able to learn how to adjust the base bounty and the rate of bounty increase based on the type of problem and its interaction with the different cloud providers.  Another reason the bounty model is good is that the different cloud providers will most likely complete the tasks using temporary resources whose prices are highly elastic.  So, a rising bounty meshes well with the pricing structures on the bounty hunters’ end.
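The lifecycle described above (a bounty that rises until some hunter’s expected payoff exceeds its cost, with the first to finish inside the QoS deadline collecting the reward) can be sketched in a few lines.  Every name and number here is an invented illustration, not the actual bounty hunting system:

```python
# Illustrative sketch of the bounty lifecycle: the bondsman's bounty
# grows linearly over time; a hunter (cloud service) commits once the
# bounty exceeds its cost, and the fastest committed hunter that can
# finish before the QoS deadline wins the reward.

def bounty_at(base, rate, t):
    """Bounty grows linearly until someone claims it."""
    return base + rate * t

class Hunter:
    def __init__(self, name, cost, solve_time):
        self.name, self.cost, self.solve_time = name, cost, solve_time

    def will_commit(self, bounty):
        # A hunter goes after the bounty only when it exceeds its cost.
        return bounty > self.cost

def run_auction(base, rate, deadline, hunters):
    for t in range(deadline + 1):
        b = bounty_at(base, rate, t)
        committed = [h for h in hunters if h.will_commit(b)]
        # Only hunters that can still finish before the deadline can win.
        finishers = [h for h in committed if t + h.solve_time <= deadline]
        if finishers:
            winner = min(finishers, key=lambda h: h.solve_time)
            return winner.name, b
    return None, None  # QoS deadline missed: no hunter took the task

hunters = [Hunter("cheap-slow", 2.0, 8), Hunter("pricey-fast", 6.0, 1)]
print(run_auction(base=1.0, rate=1.0, deadline=8, hunters=hunters))
# → ('pricey-fast', 7.0)
```

Note how the cheap-but-slow provider commits early yet can never finish in time, so the bounty keeps rising until the expensive fast provider finds it worthwhile, which is exactly the elastic-pricing behavior the bounty model is supposed to exploit.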
I don’t know who to compare against.  Just show that the QoS guarantees were met/exceeded on the tasks even in a dynamic environment, the cost was kept within an acceptable range, the cloud providers and bondsmen could adapt to scale to large numbers of robots etc..
This could be used for autonomous vehicles (millions of cars), for example, by putting out a bounty for the fastest/scenic/etc. route, a parking spot, a charging location (for EVs), down to the most low-level control of the car itself.  And of course for other robots.  It would be interesting if co-located robots…  This seems very interesting and exciting.
Some other ideas related to cloud robotics: one is the ability for modular robotics to really stand out.  You have the case here where the robot could modify its physical structure and abilities and instantly adjust its behavior, due to all the modules being in the “cloud”.

AI and Creativity

So I just read an article stating that AI is nowhere near supplanting artists due to computers’ inability to “decide what is relevant”.  I think that might be giving us AI researchers too much credit, or going too soft on us.  We have yet to develop non-noisy inputs that could simulate the emotional and non-functional aspects of the brain.  The closest we could get is to teach a computer based off of an fMRI of the brain while it experiences art/music, etc.  Somewhat simpler is being able to recognize emotions and correlate what is happening with each emotion.  Even more difficult is the point at which the machine can put itself in “another’s shoes,” as it were.  That is at an entirely different level than where we are now.  So, I don’t disagree with the author; I just think she is scratching the surface of what AI is currently unable to do, especially in a general, non-lab setting.  However, I believe that given better inputs (and of course better algorithms), machines may develop human-like emotions and the ability to simulate others’ situations, and thus develop a connection and be inspired to create art.  But I’m pretty sure that won’t happen in my time :(.

http://www.technologyreview.com/view/542281/artificial-creativity/

Politics and MAS

Politics seems like a good real-world example of the multi-agent inverse problem: trying to get agents to coordinate at a massive (country) scale.  Basically, the multi-agent inverse problem is determining rules and behaviors at the low level that achieve a higher-level objective.  This problem is made more difficult because the low-level behaviors of agents interact with each other, possibly causing unexpected emergent behavior.

Another thing that politics has is hierarchies…

Mainly this was prompted by the article on laws that pertain to the constitution and how they are interpreted.