You and your friends want to make a 3D model of something big!! Not just a toy or a person, but your whole neighborhood or house. The idea is to stitch together distributed 2D video to create a 3D model. You would use your smartphones, which have GPS, Bluetooth, an accelerometer, altitude data, and a compass. That data (filtered by some sort of heuristic, of course) would be sent back to a server, which processes it and builds the model.
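To make the upload concrete, here is a minimal sketch of the per-frame sensor payload a phone might send alongside its video. All field names are illustrative assumptions, not from any real phone API:

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical per-frame metadata a contributing phone could attach to each
# video frame before sending it to the server. Field names are made up for
# illustration.
@dataclass
class FrameMetadata:
    timestamp: float    # seconds since epoch
    lat: float          # GPS latitude, degrees
    lon: float          # GPS longitude, degrees
    altitude_m: float   # altitude, meters
    heading_deg: float  # compass heading, degrees clockwise from north
    accel: tuple        # (x, y, z) accelerometer reading, m/s^2

    def to_json(self) -> str:
        # JSON is an easy wire format for the phone-to-server upload
        return json.dumps(asdict(self))

sample = FrameMetadata(time.time(), 40.7128, -74.0060, 10.0, 92.5, (0.0, 0.1, 9.8))
payload = sample.to_json()
```

The server side would parse these payloads and use the pose hints (position plus heading) to seed the 2D-to-3D reconstruction.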
While researching this idea I found an article about how the Navy developed malware for Android that creates a 3D model of the victim's house. So creating a 3D model from cellphone data is doable. I just want to distribute it and make it applicable to larger environments in real time.
You could join an initiative or start one. Essentially, someone defines the boundaries of the model via GPS coordinates, and users who want to contribute data can join the network. Bluetooth could be used to improve the quality of the mesh by providing another source of location data.
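The boundary idea could be as simple as a GPS bounding box: a contributor's reading is accepted only if it falls inside the defined area. A minimal sketch, with illustrative names:

```python
def in_bounds(lat, lon, bounds):
    """Return True if (lat, lon) lies inside a GPS bounding box.

    bounds = (min_lat, min_lon, max_lat, max_lon), all in degrees.
    A real system would likely use polygons, but a box shows the idea.
    """
    min_lat, min_lon, max_lat, max_lon = bounds
    return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

# Hypothetical neighborhood footprint defined by whoever starts the model
neighborhood = (40.70, -74.02, 40.72, -74.00)

in_bounds(40.71, -74.01, neighborhood)  # True: inside the defined area
in_bounds(41.00, -74.01, neighborhood)  # False: reject this contribution
```

The server would run a check like this on every incoming frame's metadata before feeding it to the reconstruction pipeline.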
The mesh could then be imported into games like Second Life, or maybe into Google Maps. The 3D model could also serve as the basis for an augmented reality platform: if you tag something while viewing the model on your computer and then go to that place, you could view the tag through your phone. The uses for a 3D model are endless.
Since we are distributing the data and the mesh will be continuously updating, we could build a mesh that is highly robust to change, essentially free of dynamic objects. If one frame contains, say, a bike and another doesn't, the system could pick the frame without the bike.
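One common way to get the "frame without the bike" automatically is a per-pixel median across several frames of the same view: transient objects that appear in only a few frames get voted out. A pure-Python sketch on tiny grayscale "frames" represented as nested lists (a real pipeline would use NumPy/OpenCV and registered images):

```python
from statistics import median

def temporal_median(frames):
    """Combine aligned grayscale frames by taking the median at each pixel.

    Pixels occluded by a moving object in a minority of frames recover
    their background value, since the median ignores outliers.
    """
    h, w = len(frames[0]), len(frames[0][0])
    return [[median(f[y][x] for f in frames) for x in range(w)]
            for y in range(h)]

# Three 2x2 frames: pixel (0, 0) is briefly occluded by a dark object
# (value 10) in the second frame; the median restores the background 100.
frames = [
    [[100, 100], [100, 100]],
    [[10,  100], [100, 100]],
    [[100, 100], [100, 100]],
]
background = temporal_median(frames)  # [[100, 100], [100, 100]]
```

The same voting idea extends to the mesh itself: geometry that only appears in a minority of contributions can be treated as a dynamic object and dropped.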
This idea is really starting to sound like a huge project. For my advanced AI class I think I need to focus on one piece of it, like the distributed 2D-to-3D modeling. This post became a lot longer than my initial idea.
This blog looks interesting.