ScraperWiki

I found this cool service called ScraperWiki. It lets you write a scraper in Python, Ruby, or PHP, as well as a view. You can make your data public so others can see it, or keep it private. You can also build a view for your data! So I think this is awesome!!! For my Adv. AI class we could either use someone else's data or write a scraper and mine our own.

I also found Gravatar. It's a cool service that lets users upload an avatar so that whenever they post to a site that uses Gravatar, their avatar shows up! Really cool. The cool part is that you can fetch the avatar image based on a hash of the user's email address! So maybe I could make an email address scraper and use that data to collect users' Gravatar images and profiles! It's really crazy how much info users can put into their profiles. Or I could maybe scrape for Gravatar email hashes…  Then directly get the user's profile…
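The hash scheme is simple: Gravatar identifies a user by the MD5 hash of their trimmed, lowercased email address, and the avatar URL is just that hash appended to the gravatar.com path. A minimal Python sketch (the example address is made up):

```python
import hashlib

def gravatar_url(email, size=80):
    # Gravatar's lookup key is the MD5 hex digest of the
    # trimmed, lowercased email address.
    email_hash = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return f"https://www.gravatar.com/avatar/{email_hash}?s={size}"

# Different casing/whitespace of the same address maps to the same avatar.
print(gravatar_url("  SomeOne@Example.com "))
```

This is also why scraping hashes directly would work: the hash alone is enough to fetch the avatar, no email needed.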

Not sure what I could do with that data though…  Maybe try to find faces in the Gravatar, since the avatar may be the user's face. This could be useful if I can't get access to their profile, since then I could try to find them on Facebook, Twitter, etc.

I'll have to get back to you on what it would be good for…  It sounds super creepy that this can be done, though.

Distributed 3d modeling using smartphones

You and your friends want to make a 3d model of something big!!  Not just a toy or a person, but your neighborhood or house. Something big. The idea is to stitch together distributed 2d video to create a 3d model. You would use your smartphones, which have GPS, Bluetooth, an accelerometer, altitude data, and a compass. Send that data (filtered by some sort of heuristic, of course) back to a server, which processes it and creates the model.

While researching this idea I found an article about how the Navy developed Android malware to create a 3d model of the victim's house. So creating a 3d model from cellphone data is doable. I just want to distribute it and make it applicable to larger environments in real time.

You could become part of an initiative or start one. Essentially, someone defines the boundaries of the model via GPS coordinates. Then users who want to contribute data can join the network. Bluetooth could be used to improve the quality of the mesh by providing another source of location data.

The mesh could then be imported into games like Second Life, or maybe into Google Maps. The 3d model could also be the basis for an augmented reality platform: if you tag something while viewing the model on your computer and then go to that place, you could view the tag through your phone. The uses for a 3d model are endless.

Since we are distributing the data and the mesh will keep updating, we could build a mesh that is highly robust to change, essentially free of dynamic objects. So if one frame contains, say, a bike and another doesn't, the system could pick the frame without the bike.
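One classic way to get the "frame without the bike" effect, without explicitly picking a winning frame, is a per-pixel median across aligned observations: a transient object that appears in only a minority of frames gets voted out, while the stable background survives. A toy sketch on 2D grids of pixel values (a real pipeline would first align the frames and would vote per mesh vertex rather than per pixel):

```python
from statistics import median

def static_background(frames):
    """Given several aligned frames (2D grids of pixel values),
    take the per-pixel median. A transient object (a bike that
    shows up in only one frame) is outvoted by the majority of
    frames that saw the background."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[median(f[r][c] for f in frames) for c in range(cols)]
            for r in range(rows)]

frames = [
    [[0, 0], [0, 0]],
    [[0, 9], [0, 0]],  # the "bike": a transient value in one frame
    [[0, 0], [0, 0]],
]
print(static_background(frames))  # [[0, 0], [0, 0]]
```

The nice property for a distributed system is that more contributed frames only make the vote more reliable.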

This idea is really starting to sound like a huge project. For my advanced AI class I think I need to focus on one thing, like the distributed 2d-to-3d modeling. This post became a lot longer than my initial idea.

This blog looks interesting.

Xp-Dev web app

I made a PHP app to automate the task of creating group projects on Xp-Dev. As a GTA for CS321, I had to create 70 accounts and around 16 projects, and then add the students to their projects! That is a lot of clicking if you do it manually. So I made a PHP web app that uses Xp-Dev's web API to do the work for me :). Their API is very easy to use. However, in order to use the code you must generate a developer API key and then request, through Xp-Dev's ticket system, permission to add sub-accounts. Then all you need to do is put the names of the projects in one file, and the email addresses of the group members, with a blank line between project groups, in another. Run the app and voilà, all done :).
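The actual Xp-Dev API calls need a developer key, but the input format described above (project names one per line in one file; member emails grouped by blank lines in another) is easy to parse. A sketch in Python of just that parsing step (the original app was PHP, and the sample names and addresses below are made up):

```python
def parse_projects(text):
    # One project name per non-empty line.
    return [line.strip() for line in text.splitlines() if line.strip()]

def parse_groups(text):
    # Email addresses, with a blank line separating each project's group.
    groups, current = [], []
    for line in text.splitlines():
        if line.strip():
            current.append(line.strip())
        elif current:
            groups.append(current)
            current = []
    if current:
        groups.append(current)
    return groups

projects = parse_projects("project-a\nproject-b\n")
groups = parse_groups("a@school.edu\nb@school.edu\n\nc@school.edu\n")
# Pair each project with its member list before making the API calls:
assignments = dict(zip(projects, groups))
```

From there, each (project, members) pair becomes one create-project call plus one add-member call per email.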

New Theme

I'm really liking this new theme. The text is easier to read and I like being able to choose the background. The background image used to be my computer wallpaper; it seemed to go with the tech feel.

Adaptive websites

In AI terms, the user agent's environment when browsing the internet is the browser. So if the browser could construct a model of the user's behavior, then when the user enters a website, that model could be sent to the site to construct a personalized layout, etc. Websites could possibly manipulate the user based on the provided model. I think Google pretty much does this with its search already, but why stop there?

Usually when I go to a restaurant's website I am looking for the menu. The browser could learn that that is what I click on when I visit those types of sites.

I could create a website that acts as a proxy to other websites and dynamically changes them for the user! So people would go to my website to view other websites. I would create a plugin for Chrome/Firefox that would build the profile as they browse regular websites when they aren't…

No, forget the website, just make the plugin! When the user looks at a website with the plugin running, the content will be rearranged, and possibly a different home page will be displayed! Like in my personal example above: when I go to a restaurant website, it could show the contact info and the menu!

This means the user's profile could be stored in the cloud or on their computer. If it's in the cloud, I could compare the graphs of users' visits, and when a user goes to a website they have never been to, the profiles of similar users could help reformat it! Sort of like Amazon or Netflix.
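That "similar users" step could work like Amazon/Netflix-style collaborative filtering: represent each user's profile as a vector of per-site visit counts and compare profiles with cosine similarity. A minimal sketch (the site names and counts are made up):

```python
from math import sqrt

def cosine(u, v):
    # u, v: dicts mapping site -> visit count. Compare over the
    # union of sites; missing sites count as zero visits.
    sites = set(u) | set(v)
    dot = sum(u.get(s, 0) * v.get(s, 0) for s in sites)
    norm = (sqrt(sum(x * x for x in u.values()))
            * sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

alice = {"menus.example": 5, "maps.example": 2}
bob   = {"menus.example": 4, "news.example": 1}
print(cosine(alice, bob))  # ≈ 0.90: mostly similar browsing habits
```

When Alice hits a site she has never visited, the reformatting learned for her nearest neighbors (highest-cosine users who have visited it) could be applied as a starting point.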

Of course, the user could turn off the reformatting of websites while leaving the learning enabled.

We already have plugins to remove ads, so why not do this? Is it possible to do it quickly enough that the user doesn't experience too much lag?