Autonomous Mixer

Ingredients 🙂

  1. 3D printer
  2. Motors
  3. Touchscreen with Raspberry Pi
  4. Leap Motion (for gesture recognition)
  5. Pressure Sensor (to know when the bowl is on the turntable)
  6. Kinect (for facial response feedback, mixing monitoring)

So, this would be the coolest mixer everrr (lol)!  I would 3D print the housing and the mixing blades.  It would self-clean (maybe) and automatically mix the ingredients at the right speed with the right blades, without making a mess.  You could use learning from demonstration to teach it how to mix different ingredients, and then eventually optimize it to minimize things like the number of turns needed, depending on the ingredients and the end product.  It would automatically stop when finished mixing, of course.  The Leap Motion means you never need to touch any buttons.  It would connect to the internet to share learning data and settings, and could even learn to print new mixing blades.

I have a ton more ideas, but I’ll come back to them.  This seems like it should be on Kickstarter.  Most mixers are very lame and require you to touch them to adjust settings and so on; that could easily be automated, which would also keep the machine clean.  The “nice” ones are also very, very expensive (around $400), so this could sell for a lot less and still have a ton more features.

Autonomous Smart Faucets

So, I want to make a device that will turn the bathroom sink on and off for me…

Basically, I’m too lazy to turn off the sink while I’m brushing my teeth. So I want to connect a Raspberry Pi to a Kinect/Leap Motion and a couple of motors that can move the knobs.

Then it would be cool to train it: to know I want cold water when I’m brushing my teeth, hot water when I’m washing my hands, cold water when I’m filling my water bottle, when to turn off the faucet, and so on.  I might be able to integrate HiTAB somehow so that the end user can teach it new behaviors like hot and cold, and essentially come up with applications I haven’t thought of yet.  It could also estimate water usage and sync with your phone, for example to let you know that your children have brushed their teeth.

This would save soo much water too, so it would be cost effective…  I really could sell this; if you have a somewhat large family, you would see the savings.

TurboJPEG to get YUYV images

This uses the TurboJPEG library to decode a compressed JPEG to YUYV.  TurboJPEG already decodes the JPEG image to YUV 4:2:2 (YUYV), which is totally awesome since that is exactly what we want!!

You can get libjpeg-turbo (which includes the TurboJPEG library) here and then do:

sudo dpkg -i libjpeg-turbo-official_1.4.1_i386.deb

To compile your stuff, check out the makefile. Mainly it comes down to:

gcc drew.c /opt/libjpeg-turbo/lib32/libturbojpeg.a -I /opt/libjpeg-turbo/include/

So, it looks like it should be easy to add this in to get the speedup for our camera.
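
Something like this is what I have in mind (just a sketch, not our actual drew.c; the buffer and function names are made up).  One thing to note: tjDecompressToYUV2 gives you planar YUV with the JPEG’s subsampling, so if the rest of the pipeline wants packed YUYV there is one extra interleaving pass.

// Sketch only: decode one JPEG frame to YUV using the TurboJPEG API
// from libjpeg-turbo 1.4.x.  jpegBuf/jpegSize would come from the
// camera's compressed frame; the caller frees the returned buffer.
#include <turbojpeg.h>
#include <stdlib.h>

unsigned char *decode_to_yuv(const unsigned char *jpegBuf, unsigned long jpegSize,
                             int *width, int *height)
{
    tjhandle tj = tjInitDecompress();
    if (!tj) return NULL;

    int subsamp = 0, colorspace = 0;
    if (tjDecompressHeader3(tj, jpegBuf, jpegSize, width, height,
                            &subsamp, &colorspace) != 0) {
        tjDestroy(tj);
        return NULL;
    }

    // For a 4:2:2 JPEG this is a planar YUV422 buffer (Y plane, then U, then V);
    // interleaving it into packed YUYV would be one extra pass if needed.
    unsigned long yuvSize = tjBufSizeYUV2(*width, 1 /*pad*/, *height, subsamp);
    unsigned char *yuvBuf = (unsigned char *)malloc(yuvSize);

    if (tjDecompressToYUV2(tj, jpegBuf, jpegSize, yuvBuf,
                           *width, 1 /*pad*/, *height, TJFLAG_FASTDCT) != 0) {
        free(yuvBuf);
        yuvBuf = NULL;
    }
    tjDestroy(tj);
    return yuvBuf;
}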

It was easy to use; however, I only got 15 frames/second, and we need around 30 fps. 🙁

UPDATE: I got 30 fps!  We were doing auto exposure, which was slowing it down!!
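
For reference, here is roughly how you could force manual exposure from code instead of relying on driver defaults.  The device path and exposure value are just examples, and some UVC drivers want the extended-controls ioctl for these camera-class controls, so treat this as a sketch.

// Sketch only: turn auto exposure off through V4L2 so the driver isn't
// re-metering every frame, then pin a fixed exposure time.
#include <fcntl.h>
#include <unistd.h>
#include <cstring>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int set_manual_exposure(const char *dev, int exposure /* units of 100 us */)
{
    int fd = open(dev, O_RDWR);
    if (fd < 0) return -1;

    v4l2_control ctrl;
    std::memset(&ctrl, 0, sizeof(ctrl));

    ctrl.id = V4L2_CID_EXPOSURE_AUTO;
    ctrl.value = V4L2_EXPOSURE_MANUAL;      // disable auto exposure
    ioctl(fd, VIDIOC_S_CTRL, &ctrl);

    ctrl.id = V4L2_CID_EXPOSURE_ABSOLUTE;   // then set a fixed exposure time
    ctrl.value = exposure;
    ioctl(fd, VIDIOC_S_CTRL, &ctrl);

    close(fd);
    return 0;
}
// e.g. set_manual_exposure("/dev/video0", 100);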

My Attempt

The main issue is white on white, which I can’t seem to get past other than with the extreme color changes described in an earlier post.  If that approach turns out to work, it would probably only be applied when I can’t see any posts, and only if I believe I would probably be looking toward the goal posts anyway.

First, do the most obvious thing: a HoughLinesP, and maybe try contours, to get the goal posts and so on.  Also, use the fact that there is a horizon and that I only need to see the goal, the ball, and the lines; I don’t really need to see the field.  If I don’t see lines, the goal, or the ball, then it really doesn’t matter whether I see green for the field.  So HoughLinesP with reasonable settings should get the field lines, goal posts, and ball, as long as the horizon is used correctly.
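
Roughly what I am picturing (just a sketch; the horizon row is assumed to come from the robot’s kinematics, and the Canny/Hough thresholds are guesses that would need tuning):

// Sketch: HoughLinesP restricted to the part of the image below the horizon.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Vec4i> linesBelowHorizon(const cv::Mat &gray, int horizonY)
{
    // Everything we care about (lines, ball, bottoms of the posts) is below the horizon.
    cv::Mat roi = gray(cv::Range(horizonY, gray.rows), cv::Range::all());

    cv::Mat edges;
    cv::Canny(roi, edges, 50, 150);

    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 50 /*votes*/,
                    30 /*min length*/, 10 /*max gap*/);

    // Shift the segments back into full-image coordinates.
    for (size_t i = 0; i < lines.size(); ++i) {
        lines[i][1] += horizonY;
        lines[i][3] += horizonY;
    }
    return lines;
}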

A big problem with the ball and the field lines is that they are both white, on top of other robots obstructing the ball from view.  This probably won’t be a problem in the competition, but it is for the goalie.  So I think we will need to add some logic, like assuming a white blob is the ball if it is not “line like”, meaning it doesn’t extend.
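
One way that logic could look (the 3:1 elongation cutoff is a guess that would need tuning):

// Sketch of the "line like" test: a white blob counts as a line if its
// minimum-area bounding box is long and thin; otherwise it stays a ball candidate.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

bool looksLikeLine(const std::vector<cv::Point> &blob)
{
    cv::RotatedRect box = cv::minAreaRect(blob);
    float longSide  = std::max(box.size.width, box.size.height);
    float shortSide = std::min(box.size.width, box.size.height);
    if (shortSide < 1.0f) return true;       // degenerate sliver, definitely not the ball
    return (longSide / shortSide) > 3.0f;    // elongated => treat as a field line
}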

Structural Analysis

The next step would be to look into the structural analysis functions in OpenCV and convexity defects… Never mind: you do that after you already have the points, and the points are exactly what I need.  It might be useful for identifying hands or fingers, but I don’t need it for goal posts.
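
For reference, the call order would look something like this (contour first, then hull, then defects), which is exactly why it doesn’t help me here:

// Convexity defects need a contour (the points) before they can do anything.
#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Vec4i> defectsFor(const std::vector<cv::Point> &contour)
{
    std::vector<int> hullIdx;
    cv::convexHull(contour, hullIdx, false /*clockwise*/, false /*returnPoints*/);

    std::vector<cv::Vec4i> defects;          // each entry: start, end, deepest point, depth
    if (hullIdx.size() > 3)
        cv::convexityDefects(contour, hullIdx, defects);
    return defects;
}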

Competitors’ Goal Code

The NUbots’ goal detection code uses something called RANSAC.  I wonder if they have tried their algorithm on a white background yet…  It wouldn’t seem so, since they are still using a LUT, which I don’t think will work with white on white.  Of course, this might be their old code from last year.
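
This is not their code, but the basic RANSAC idea for a line is simple enough to sketch: keep picking two candidate edge points, count how many other points fall near the line through them, and keep the best-supported line.  Iteration count and inlier distance below are guesses.

// Bare-bones RANSAC line fit over candidate edge/white points.
#include <opencv2/opencv.hpp>
#include <vector>
#include <random>
#include <cmath>

cv::Vec4f ransacLine(const std::vector<cv::Point> &pts,
                     int iters = 200, float inlierDist = 2.0f)
{
    cv::Vec4f best(0, 0, 0, 0);
    if (pts.size() < 2) return best;

    std::mt19937 rng(42);
    std::uniform_int_distribution<size_t> pick(0, pts.size() - 1);
    int bestInliers = -1;

    for (int i = 0; i < iters; ++i) {
        cv::Point2f a = pts[pick(rng)], b = pts[pick(rng)];
        cv::Point2f d = b - a;
        float len = std::sqrt(d.x * d.x + d.y * d.y);
        if (len < 1.0f) continue;
        d *= 1.0f / len;                                 // unit direction of the candidate line

        int inliers = 0;
        for (size_t j = 0; j < pts.size(); ++j) {
            cv::Point2f v = cv::Point2f(pts[j]) - a;
            if (std::fabs(v.x * d.y - v.y * d.x) < inlierDist)   // perpendicular distance
                ++inliers;
        }
        if (inliers > bestInliers) {                     // keep the best-supported line
            bestInliers = inliers;
            best = cv::Vec4f(a.x, a.y, d.x, d.y);        // point on line + direction
        }
    }
    return best;
}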

Breaking Countershading

So, I just had to play around with the colors of the picture that Canny couldn’t handle (and still can’t), and now there is a distinct green color for the white “goal post”!  That could then be identified by the blob detection we already have.

[Image: wall3ExtremeWithColorData]

However, how to do this programmatically is beyond me!  If I could produce this automatically, in any lighting and in real time, I could detect the goal posts easily, since this is the most extreme case.

Essentially, this is trying to solve the countershading problem, where the object is camouflaged.  Only through extreme manual manipulation of the image color settings was I able to produce this.  How do we create computer vision that can pick out countershaded objects?  This paper could maybe be used to automate it, if I understand it correctly.
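
Pure speculation, but one off-the-shelf thing to try for automating the “extreme manipulation” would be stretching local contrast on the chroma channels (CLAHE on the a/b channels of Lab) so tiny color differences between post and wall get exaggerated before blob detection.  The clip limit and tile size are guesses.

// Speculative sketch: exaggerate small chroma differences, then hand the
// result to the existing blob detection.
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat exaggerateChroma(const cv::Mat &bgr)
{
    cv::Mat lab;
    cv::cvtColor(bgr, lab, cv::COLOR_BGR2Lab);

    std::vector<cv::Mat> ch;
    cv::split(lab, ch);

    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(8.0 /*clip limit*/, cv::Size(8, 8));
    clahe->apply(ch[1], ch[1]);               // a channel (green <-> red)
    clahe->apply(ch[2], ch[2]);               // b channel (blue <-> yellow)

    cv::merge(ch, lab);
    cv::Mat out;
    cv::cvtColor(lab, out, cv::COLOR_Lab2BGR);
    return out;
}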

Also, there are some Wikipedia articles worth looking at, like tone mapping and the related pieces.  Being able to adjust the color and contrast seems to be important.  However, this is only one image and thus not really saying much.  For example, with similar settings on the cluttered-wall version I get a totally different color (pink) for the post:

[Image: wallcluterExtreme]

So, to conclude, I think the way to go is what Sean recommended from the beginning, which is what Peter Stone does (I think): learn standard background images for the camouflaged objects, like in this.  For one thing, I think dynamically adjusting the contrast and other color settings would be pretty time-intensive, since it is a pixel-by-pixel change, so it seems more of a hack.  Learning the background and subtracting it from the current image to extract the goal posts seems like a viable solution?  Still not sure how that will work if the background is white and the posts are white.  So far, no solution…
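
A very rough sketch of the learn-the-background idea: average grayscale frames of the (empty) goal area, then threshold the difference between the live frame and that average.  This assumes the camera pose is roughly repeatable, which is the part I’m unsure about, and the threshold value is a guess.

// Sketch of a learned background model plus subtraction.
#include <opencv2/opencv.hpp>

class BackgroundModel {
public:
    void learn(const cv::Mat &grayFrame)          // feed frames of the empty scene
    {
        cv::Mat f;
        grayFrame.convertTo(f, CV_32F);
        if (acc.empty()) { acc = f; n = 1; return; }
        cv::accumulate(f, acc);
        ++n;
    }

    cv::Mat foreground(const cv::Mat &grayFrame, double thresh = 20.0) const
    {
        cv::Mat mean, diff, mask;
        acc.convertTo(mean, CV_8U, 1.0 / n);      // average background image
        cv::absdiff(grayFrame, mean, diff);       // how far the live frame deviates
        cv::threshold(diff, mask, thresh, 255, cv::THRESH_BINARY);
        return mask;                              // non-background pixels (post candidates)
    }

private:
    cv::Mat acc;                                  // running sum of CV_32F frames
    int n = 0;
};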

Goal net

So, I grabbed some images we took of the goal with the netting at RoboCup 2014.  One has no people behind it and the other has people behind it.  I ran both Canny and Hough on them to see if the netting would stand out.

Canny with people behind the net:

[Image: cannyPeople]

Canny with no people behind the net:

[Image: CannyNoPeople]

Then, just to show that this pattern doesn’t show up everywhere, I took a random sample of the image with some people:

[Image: randomWithPeople]

And one without people, which included the net:

[Image: noPeopleRandomWithNet]

So, it doesn’t look like the net will make the difference.  What kind of algorithm will be able to see something that even I can’t see?

White Goal Post Experiment

[Image: wall3]

As you probably can’t see, I have placed a white PVC pipe in an orange vise grip in front of a white background.  I tried running the image through OpenCV’s HoughLinesP (image below) and Canny, and neither can extract the pipe.  However, I’m not surprised: when I zoom into the image, the pixels are pretty much identical in color.

[Image: wall hough]

So, the only option in my opinion is to use the background netting and use that to find the posts.  Otherwise, there is really no hope.  Of course, these are just my rudimentary findings.