Autonomous Mixer

Ingredients 🙂

  1. 3D printer
  2. Motors
  3. Touchscreen with Raspberry Pi
  4. Leap Motion (for gesture recognition)
  5. Pressure Sensor (to know when the bowl is on the turntable)
  6. Kinect (for facial-response feedback and mixing monitoring)

So, this would be the coolest mixer ever!  It would 3D-print its own housing and mixing blades, self-clean (maybe), and automatically mix the ingredients at the right speed with the right blades, without a mess.  It could use learning from demonstration to be taught how to mix different ingredients, then eventually be optimized to minimize things like the number of turns needed, depending on the ingredients and the end product.  It would automatically stop when finished mixing, of course.  The Leap Motion means you never need to touch any buttons.  It would be connected to the internet to share learning data and settings, and it could even learn to print new mixing blades.
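
To make the basic behavior concrete, here is a minimal Python sketch of the control loop.  The PressureSensor, Motor, and mixedness-estimation pieces are all hypothetical stand-ins for the real hardware and the Kinect processing, not an actual API.

```python
import random
import time


class PressureSensor:
    """Hypothetical stand-in for the turntable pressure sensor."""

    def bowl_present(self) -> bool:
        return True  # a real version would read the sensor hardware


class Motor:
    """Hypothetical stand-in for the mixing motor."""

    def set_speed(self, rpm: int) -> None:
        print(f"motor at {rpm} rpm")

    def stop(self) -> None:
        print("motor stopped")


def estimate_mixedness() -> float:
    """Stand-in for Kinect-based mixing monitoring; returns 0.0 to 1.0."""
    return random.random()  # a real version would analyze camera frames


def mix(profile: dict, sensor: PressureSensor, motor: Motor) -> None:
    """One mixing job: wait for the bowl, mix until 'done', stop on its own."""
    while not sensor.bowl_present():
        time.sleep(0.5)  # never spin without a bowl on the turntable
    motor.set_speed(profile["rpm"])
    while estimate_mixedness() < profile["done_threshold"]:
        time.sleep(0.1)  # poll the mixing monitor until it looks done
    motor.stop()  # automatically stop when finished mixing


mix({"rpm": 120, "done_threshold": 0.95}, PressureSensor(), Motor())
```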

I have a ton more ideas, but I’ll come back to them.  This seems like it should be on Kickstarter.  Most mixers are pretty lame: you have to touch them to adjust settings and so on, which could easily be automated while keeping the machine clean.  The “nice” ones are also very expensive (around $400), so this could sell for a lot less and have a ton more features.

Automatic Dependency Injection

We need a source code analyzer that looks at the includes, imports, requires, etc. in the source code and, from the functions that are actually used, extracts the correct dependencies needed to compile and run the program.  This would save a ton of time in open source development, and I think it is possible.  It could also help developers move to new versions of their dependencies.
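
Here is a minimal Python sketch of the import-scanning half, using the standard ast module.  The MODULE_TO_PACKAGE table (mapping module names to installable package names, which is the genuinely hard part) is a hypothetical hand-made stand-in.

```python
import ast

# Assumed mapping from imported module names to installable packages;
# a real tool would need a much larger, maintained index.
MODULE_TO_PACKAGE = {"numpy": "numpy", "cv2": "opencv-python"}


def extract_dependencies(source: str) -> set[str]:
    """Walk the AST and collect the packages behind top-level imports."""
    modules = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return {MODULE_TO_PACKAGE.get(m, m) for m in modules}


print(extract_dependencies("import numpy as np\nfrom cv2 import imread"))
# {'numpy', 'opencv-python'}
```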

Then, in the future, users would not even need to specify particular libraries or versions for dependency injection; both could be inferred from how the methods are used in the code.  I think this becomes essential for more complicated programs.
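
One hedged sketch of how the version-inference part might look: record which attributes each release of a library exposes, then keep only the versions that cover everything the program actually uses.  The API_BY_VERSION table here is hand-written for illustration; a real tool would build it by introspecting each release.

```python
# Assumed per-version API tables (hypothetical version strings).
API_BY_VERSION = {
    "requests==1.0": {"get", "post"},
    "requests==2.0": {"get", "post", "Session"},
}


def compatible_versions(used_attrs: set[str]) -> list[str]:
    """Return the versions whose recorded API covers every used attribute."""
    return [v for v, api in API_BY_VERSION.items() if used_attrs <= api]


print(compatible_versions({"get", "Session"}))  # ['requests==2.0']
```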

Modern Jukebox

The modern jukebox should be an app that lets people pick the songs to listen to from their personal devices.  It could be voting-based or purchase-based.  Then I could listen to the music I want at the school cafeteria, at a restaurant, or at the mall.  You could earn “tokens” for playing music while in the store or restaurant, and in the future your device could even make the song requests for you.
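
As a toy illustration of the voting variant, here is a minimal Python sketch of the song queue; JukeboxQueue and its behavior are my own assumptions, not a spec.

```python
from collections import Counter


class JukeboxQueue:
    """Tally votes from patrons' devices; the most-voted song plays next."""

    def __init__(self):
        self.votes = Counter()

    def vote(self, song: str) -> None:
        self.votes[song] += 1

    def next_song(self) -> str | None:
        """Pop and return the most-voted song, or None if nothing is queued."""
        if not self.votes:
            return None
        song, _ = self.votes.most_common(1)[0]
        del self.votes[song]
        return song


q = JukeboxQueue()
for s in ["Song A", "Song B", "Song A"]:
    q.vote(s)
print(q.next_song())  # Song A
```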

Natural Language LfD & RL

So, I’m working with Ermo on applying reinforcement learning to text-based games, and I was wondering: if our method works, could we eventually do text-based learning from demonstration combined with reinforcement learning?  Basically, instead of pressing buttons, the user would describe what they want the system to do in English sentences, and could then say yes or no to what it is doing.  Using natural language to train a multiagent system seems like it would work better, especially since once it works for text, it could naturally be extended to speech!  Being able to tell the robots what to do and what to pay attention to would be even better.
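
To sketch what the yes/no feedback could mean mechanically, here is a toy Python loop where the game’s reward is mixed with the user’s one-word reaction.  The action set, the feedback function, and the 0.5 weighting are all stand-ins for illustration, not our actual method.

```python
import random

ACTIONS = ["go north", "take key", "open door"]


def user_feedback(action: str) -> str:
    """Stand-in for the human saying yes/no; a real system would ask."""
    return "yes" if action == "open door" else "no"


def shaped_reward(env_reward: float, feedback: str) -> float:
    """Mix the game's own reward with the demonstrator's approval signal."""
    bonus = {"yes": 1.0, "no": -1.0}[feedback]
    return env_reward + 0.5 * bonus


# Tiny bandit-style loop: learn to prefer actions with higher shaped reward.
values = {a: 0.0 for a in ACTIONS}
for _ in range(100):
    if random.random() < 0.2:
        action = random.choice(ACTIONS)  # occasional exploration
    else:
        action = max(values, key=values.get)
    r = shaped_reward(env_reward=0.0, feedback=user_feedback(action))
    values[action] += 0.1 * (r - values[action])  # incremental value update

print(max(values, key=values.get))  # converges to 'open door'
```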