AI and Creativity

So I just read an article stating that AI is nowhere near supplanting artists because of computers’ inability to “decide what is relevant”.  I think that might be giving us AI researchers too much credit, or at least going too soft on us.  We have yet to develop non-noisy inputs that could simulate the emotional and non-functional aspects of the brain.  The closest we could get is teaching a computer based on fMRI scans of a brain while it experiences art, music, etc.  Somewhat simpler would be recognizing emotions and correlating what is happening with each emotion.  Even more difficult is getting to the point where the machine can put itself in “another’s shoes,” as it were; that is at an entirely different level than where we are now.  So, I don’t disagree with the author, I just think she is only scratching the surface of what AI is currently unable to do, especially in a general, non-lab setting.  However, I believe that given better inputs (and of course better algorithms), machines may develop human-like emotions and the ability to simulate others’ situations, and thus develop a connection and be inspired to create art.  But I’m pretty sure that won’t happen in my time :(.

http://www.technologyreview.com/view/542281/artificial-creativity/

Touchscreen for touchpads

Would be neat to have a color touchscreen that acted as the touchpad for a laptop.  It could make gestures easier, let you see your clipboard, or show your password on the touchpad instead of on your monitor (harder for someone to observe that way).  I’m sure there would be a lot of superficial and gimmicky things you could do, too.

Braille with ultrasound

So, two ideas. 1. Use that ultrasound-based display to act as a braille display, so blind people could use it as a way to read and interact with the device.

Then the other idea is to have a smartphone app that converts pictures of braille into English words…  I think computer vision would be able to handle that.
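
To make the non-vision half of that concrete: once a detector has found the raised dots in the photo and grouped them into 2x3 cells (e.g. with blob detection, which is the genuinely hard part), mapping dot patterns to English letters is a simple lookup.  A minimal Python sketch, where the cell representation and function names are my own assumptions:

# Back-of-the-envelope sketch of the "braille photo -> English" idea.
# The computer-vision front end (finding the raised dots in a photo and
# grouping them into 2x3 cells) is assumed to have already happened;
# here we only decode the detected cells.
#
# Standard 6-dot braille numbering:
#   1 4
#   2 5
#   3 6
BRAILLE_TO_LETTER = {
    frozenset({1}): "a",             frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",          frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",          frozenset({1, 2, 4}): "f",
    frozenset({1, 2, 4, 5}): "g",    frozenset({1, 2, 5}): "h",
    frozenset({2, 4}): "i",          frozenset({2, 4, 5}): "j",
    frozenset({1, 3}): "k",          frozenset({1, 2, 3}): "l",
    frozenset({1, 3, 4}): "m",       frozenset({1, 3, 4, 5}): "n",
    frozenset({1, 3, 5}): "o",       frozenset({1, 2, 3, 4}): "p",
    frozenset({1, 2, 3, 4, 5}): "q", frozenset({1, 2, 3, 5}): "r",
    frozenset({2, 3, 4}): "s",       frozenset({2, 3, 4, 5}): "t",
    frozenset({1, 3, 6}): "u",       frozenset({1, 2, 3, 6}): "v",
    frozenset({2, 4, 5, 6}): "w",    frozenset({1, 3, 4, 6}): "x",
    frozenset({1, 3, 4, 5, 6}): "y", frozenset({1, 3, 5, 6}): "z",
}

def decode_cells(cells):
    """Map a list of detected dot sets (one per cell) to English letters."""
    return "".join(BRAILLE_TO_LETTER.get(frozenset(c), "?") for c in cells)

# Example: the cells for "hello", written as their dot numbers.
if __name__ == "__main__":
    cells = [{1, 2, 5}, {1, 5}, {1, 2, 3}, {1, 2, 3}, {1, 3, 5}]
    print(decode_cells(cells))  # -> "hello"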

Generating human-friendly plans?

How can plans that a computer is constantly changing be communicated effectively and without disrupting the workflow? In a dynamic environment an online planning algorithm could be continuously improving its plans.  However, we humans are not that flexible. So, one way is to set a threshold on how much the new plan must improve on the current one before it is presented to the humans.  However, this threshold would need to adapt to the situation.
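
As a toy illustration of that gating idea (the numbers, the cost model, and the function name here are all hypothetical), the rule could be: only surface a replanned schedule when the improvement clears a threshold, and let the threshold relax the longer the humans have been working off the current plan:

# Tiny sketch of the replan-gating idea.  The point is just "require a
# bigger improvement to interrupt the human, and relax that requirement
# as the current plan gets stale."  Costs are lower-is-better.
def should_present(current_cost: float, new_cost: float,
                   minutes_since_last_change: float) -> bool:
    if new_cost >= current_cost:
        return False                      # not an improvement at all
    improvement = (current_cost - new_cost) / current_cost
    # Demand a 20% gain right after a disruption, decaying to 5%
    # once the humans have had 30+ minutes with the current plan.
    threshold = max(0.05, 0.20 - 0.005 * minutes_since_last_change)
    return improvement >= threshold

# A 12% better plan is held back 10 minutes after the last change
# (threshold still 15%), but presented after 40 minutes (threshold 5%).
print(should_present(100.0, 88.0, 10.0))   # False
print(should_present(100.0, 88.0, 40.0))   # True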

Another question is whether we present only the changes to the old plan or the entire new plan. Do we give the humans a choice?
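
For the “show only the changes” option, even a plain textual diff of the two step lists gets the idea across.  This sketch assumes plans are just lists of step strings, which is an illustration rather than anything from the note:

# Diff the old and new plans so the human only has to scan what changed.
import difflib

old_plan = ["pick up part A", "inspect part A", "assemble A+B", "package"]
new_plan = ["pick up part A", "assemble A+B", "inspect assembly", "package"]

for line in difflib.unified_diff(old_plan, new_plan,
                                 fromfile="current plan",
                                 tofile="proposed plan",
                                 lineterm=""):
    print(line)
# The "-" and "+" lines flag exactly which steps were dropped or added,
# instead of making the human re-read the whole plan from scratch.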

If there are multiple alternative plans, do we show them all to the user and let them choose, or should the algorithm just pick one for them? Or do we leave that as an option for the user?