Twitter and Facebook seem like great places to find a potential user base for a new app. Both group people into nicely defined niche markets.
How can plans that a computer is constantly changing be communicated effectively and without disrupting the workflow? In a dynamic environment an online planning algorithm could be continuously improving its plans. However, we humans are not that flexible. One approach is to set a threshold on how much a new plan must improve the current one before it is presented to the humans. However, this threshold would need to adapt to the situation.
Another question is whether we present only the changes to the old plan or the entire new plan. Do we give the humans a choice? And if there are multiple alternative plans, do we show them all to the user and let them choose, should the algorithm just pick one for them, or do we leave that as an option for the user?
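To make the threshold idea concrete, here is a minimal sketch in Python. All names and the 10% default are illustrative assumptions, not anything from the notes above: a new plan is only surfaced to the human when its relative improvement over the current plan clears the threshold.

```python
# Sketch of threshold-gated plan updates. The function name, the
# cost-based framing, and the 10% default are all assumptions.

def should_present(current_cost: float, new_cost: float,
                   threshold: float = 0.10) -> bool:
    """Surface the new plan only if its relative improvement over the
    current plan exceeds `threshold` (e.g. 0.10 = 10% better)."""
    if current_cost <= 0:
        return False
    improvement = (current_cost - new_cost) / current_cost
    return improvement > threshold

# A 20% improvement clears a 10% threshold; a 5% improvement does not.
print(should_present(100.0, 80.0))  # True
print(should_present(100.0, 95.0))  # False
```

An adaptive version might raise the threshold right after a plan change (so the human is not interrupted twice in quick succession) and decay it back down over time.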
Also, I think Godel might have some things like this…
Seems like an interesting idea for Courier World. Description of a Double Dutch auction from "Introducing interaction-based auctions into a model agent-based e-commerce system—preliminary considerations":
This auction is relatively counterintuitive and in its basic version works like this (based on ): a buyer price clock starts ticking at a very high price and continues downward. At some point the buyer stops the clock and bids on the unit at a favorable price. At this point a seller clock starts upward from a very low price and continues to ascend until stopped by a seller (who offers product at that price). Then the buyer clock resumes in a downward direction, followed by the seller clock moving upward. Trading is over when the two prices cross (purchase is made at the crossover point).
So a reverse version would be the same mechanism run in reverse, with the buyer and seller roles swapped.
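The alternating-clock mechanics above can be sketched as a toy simulation in Python. The stop-price lists and the midpoint trade price are my assumptions; the description only says the purchase happens "at the crossover point".

```python
def double_dutch(buyer_stops, seller_stops):
    """Toy double Dutch auction. `buyer_stops` are the descending
    prices at which the buyer halts its clock, `seller_stops` the
    ascending prices at which the seller halts its clock. The clocks
    alternate; trading ends when the two prices cross. Settling at the
    midpoint is an assumption about what 'the crossover point' means."""
    for buyer_price, seller_price in zip(buyer_stops, seller_stops):
        if seller_price >= buyer_price:  # the two prices have crossed
            return (buyer_price + seller_price) / 2
    return None  # clocks exhausted without a crossover

# Buyer clock stops at 90, 70, 50; seller clock stops at 30, 45, 55.
# The prices cross on the third pair (55 >= 50).
print(double_dutch([90, 70, 50], [30, 45, 55]))  # 52.5
```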
Today I went to a security talk where the speaker demonstrated how, when you tweet a link, various "robots" will follow the link, index it, and do other things. He gave a couple of examples of what could be done with this. One was tweeting links to login forms with a valid user ID but an invalid password. Since most login forms will lock the user out after some number of tries, this will annoy users, especially since his research showed the links keep getting revisited long into the future, so the user may have to reset their password multiple times. With a full list of usernames, one could lock out every user without anyone knowing who did it.
He was also trying to figure out whether it was actually a human that attempted to load a link. He discovered that each browser has a cut-off on the number of redirects it will follow before quitting, whereas many bots will just continue to follow the redirects.
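A rough sketch of that redirect-limit trick in Python. The browser limit of 20, the chain length, and all names are assumptions (real browser limits vary, and the talk did not give exact numbers): the server serves a redirect chain longer than any browser would follow, so a client that walks past the browser cut-off is very likely a bot.

```python
BROWSER_REDIRECT_LIMIT = 20  # assumed typical browser cut-off; varies
TRAP_CHAIN_LENGTH = 50       # trap chain longer than any browser follows

def follow_chain(client_limit: int, chain_length: int) -> int:
    """Simulate a client walking a redirect chain: it stops either at
    the end of the chain or at its own redirect cut-off."""
    return min(client_limit, chain_length)

def classify(hops_followed: int) -> str:
    """A client that followed more redirects than a browser ever would
    is very likely a bot."""
    return "bot" if hops_followed > BROWSER_REDIRECT_LIMIT else "maybe human"

# A browser gives up at its limit; a naive bot walks the whole trap.
print(classify(follow_chain(BROWSER_REDIRECT_LIMIT, TRAP_CHAIN_LENGTH)))  # maybe human
print(classify(follow_chain(10**9, TRAP_CHAIN_LENGTH)))                   # bot
```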
He also showed some other cool hacks. The spring lunch group went out with a bang :)!