Distributed 3D modeling using smartphones

You and your friends want to make a 3D model of something big!  Not just a toy or a person, but your neighborhood or your house.  Something big.  So the idea is to stitch together distributed 2D video to create a 3D model.  You would use your smartphones, which have GPS, Bluetooth, an accelerometer, altitude data, and a compass.  You'd send that data back to a server (with some sort of heuristic to decide what's worth sending), and the server processes it and creates the model.
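To make that concrete, here's a minimal sketch of what a phone might upload alongside each video frame. Everything here is an assumption for illustration: the field names, units, and the `make_frame_payload` helper are invented, not any real phone API.

```python
import time

def make_frame_payload(frame_id, gps, altitude_m, heading_deg, accel):
    """Bundle the sensor readings captured with one video frame.

    gps is (latitude, longitude) in degrees, altitude in meters,
    heading in compass degrees (0 = north), accel in m/s^2.
    All names are hypothetical, just showing what the server would need.
    """
    return {
        "frame_id": frame_id,
        "timestamp": time.time(),
        "gps": {"lat": gps[0], "lon": gps[1]},
        "altitude_m": altitude_m,
        "heading_deg": heading_deg,
        "accel": {"x": accel[0], "y": accel[1], "z": accel[2]},
    }

# One frame's worth of data, ready to serialize and POST to the server.
payload = make_frame_payload(42, (40.7128, -74.0060), 10.0, 87.5, (0.0, 0.0, 9.81))
print(payload["frame_id"])  # → 42
```

The "heuristic" part would then be a filter on top of this: for example, only upload a payload when the GPS fix has moved more than a few meters since the last upload, so the server isn't flooded with near-duplicate frames.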

While researching this idea I found an article about how the Navy developed Android malware that builds a 3D model of the victim's house.  So creating a 3D model from cellphone data is doable.  I just want to distribute it and make it work for larger environments in real time.

You could become part of an initiative or start one.  Essentially, someone defines the boundaries of the model via GPS coordinates.  Then users who want to contribute data can join the network.  Bluetooth could be used to enhance the quality of the mesh by providing another source of location data.
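A sketch of the boundary check, assuming the simplest possible case where the initiative declares its model area as a lat/lon bounding box (a real one might use an arbitrary polygon; the coordinates here are made up):

```python
def in_boundary(lat, lon, south, west, north, east):
    """Return True if a GPS fix falls inside the declared model boundary."""
    return south <= lat <= north and west <= lon <= east

# Hypothetical boundary roughly the size of a neighborhood.
NEIGHBORHOOD = (40.700, -74.020, 40.720, -74.000)

print(in_boundary(40.7128, -74.0060, *NEIGHBORHOOD))  # → True  (inside)
print(in_boundary(41.0000, -74.0060, *NEIGHBORHOOD))  # → False (outside)
```

The server would run this on every incoming payload and simply discard contributions from phones outside the area being modeled.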

The mesh could then be imported into games like Second Life, or maybe into Google Maps.  You could use the 3D model as the basis for an augmented reality platform.  So if you tag something while you are viewing the 3D model on your computer, and then you go to that place, you could view the tag through your phone.  The uses for a 3D model are endless.
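One way the phone side of that tagging could work, sketched under invented assumptions: tags are stored with lat/lon positions, and the AR view shows whichever ones are within some radius of the phone's current GPS fix. The tag format and the 50-meter radius are illustrative, not a spec.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_tags(lat, lon, tags, radius_m=50.0):
    """Tags close enough to the phone's position to display in the AR view."""
    return [t for t in tags if haversine_m(lat, lon, t["lat"], t["lon"]) <= radius_m]

tags = [{"name": "my house", "lat": 40.7128, "lon": -74.0060}]
print(len(nearby_tags(40.7129, -74.0061, tags)))  # → 1 (about 14 m away)
```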

Since we are distributing the data and the mesh will keep updating, we could develop a mesh that is highly robust to change, essentially free of dynamic objects.  So if one frame contains, say, a bike and another doesn't, the system could pick the frame without the bike.

This idea is really starting to sound like a huge project.  For my advanced AI class I think I need to focus on one thing, like the distributed 2D-to-3D modeling.  This post became a lot longer than my initial idea.

This blog looks interesting.

Future: year 2213

Combine augmented reality, nanorobot swarms, and mind-device communication.  The idea is that you first imagine a structure.  This imagined structure is displayed (on glasses, a contact lens, etc.), augmenting reality.  Using the image, you can then improve and change the design.  You could use your hands to manipulate it, or just your thoughts, whichever is easier.  The nanorobots can then form the imagined structure.  You could store useful structures for later use, and the system could learn to predict and suggest new structures based on your preferences and behavior.

You all knew I'm a bit crazy ;).