So the idea is that parking lots at local Walmarts are not going to be able to afford remaking their lots for autonomous cars, where we get dropped off at the door, the robot parks somewhere, and then picks us up at the entrance. Structurally there is not enough room for a line of cars to wait while customers are dropped off. Also, since at most Walmarts the entrance and exit are the same door, the line would become like the one when picking up kids from school. So it is probably logistically best if, at the Walmarts that are already built, customers walk to and from wherever the car parks. The problem then becomes one of optimization: I don't want to walk far going into the store or coming out. The emergent behavior I want out of the bounties is a cycle where a car initially parks as close to the entrance as possible, then moves to spaces further out as time goes on so that cars just arriving can park close, while cars whose customers are leaving move closer to the exit to pick them up. We want the robots to learn patterns of behavior and adapt them rather than follow specific, pre-programmed rules.
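Just to make that cycle concrete, here is a rough sketch in Python of the kind of rule each car could evaluate on its own: drift outward while the passenger is expected to be inside for a while, and come back toward the entrance as pickup time approaches. The thresholds and names are entirely my own assumptions, not anything settled above.

```python
# Sketch of the park-close / drift-out / return-close cycle for a single car.
# All names and thresholds here are illustrative assumptions, not a spec.

def desired_zone(minutes_until_pickup: float, lot_depth: int) -> int:
    """Return a target row index: 0 is the row nearest the entrance,
    lot_depth - 1 is the farthest row."""
    if minutes_until_pickup <= 5:
        return 0                      # passenger is nearly done: be at the front
    if minutes_until_pickup >= 45:
        return lot_depth - 1          # long visit: free up the close rows
    # linearly drift outward with expected wait
    frac = (minutes_until_pickup - 5) / 40
    return round(frac * (lot_depth - 1))

def should_move(current_row: int, target_row: int, rows_tolerance: int = 2) -> bool:
    """Only relocate when the target differs enough to justify the move,
    which keeps the total number of movements down."""
    return abs(current_row - target_row) > rows_tolerance

# Example: a car parked 10 rows deep whose passenger is expected back in 8 minutes.
if __name__ == "__main__":
    target = desired_zone(minutes_until_pickup=8, lot_depth=12)
    print(target, should_move(current_row=9, target_row=target))
```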
We want to create behavior patterns and learn the patterns of the particular environment in order to adapt and choose which pattern of behavior should be used. The dynamics change as you move through the parking lot, since everything depends on the number of cars in the lot, the layout of the lot, the heterogeneity of the vehicles (trucks, tractor trailers, RVs, cars, motorcycles, etc.), and the uncertainty about how long each car expects its passengers to be in the store. How long you are going to be in the store is not something you want to share with the other cars; that is private information, and only your own car should know it. Even then, the number is only an estimate, since the car doesn't know whether you are going to stop and talk with a friend you happen to spot while in the store. To improve the accuracy, there would need to be an interface via your smartphone so you could report your progress. This also seems like a nice area for wearable sensors to predict your progress through the store and make adjustments when you see something you like that wasn't on your list.
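As a rough illustration of keeping that estimate private to the car and revising it as progress reports come in from a phone app or wearable, here is a small sketch; the signal format and the blending rule are just assumptions.

```python
# Sketch of a dwell-time estimate that stays local to the car and is updated
# as progress signals arrive (e.g., fraction of the shopping list completed).

from dataclasses import dataclass

@dataclass
class DwellEstimate:
    expected_total_min: float   # prior guess for the whole trip, e.g. from history
    elapsed_min: float = 0.0
    progress: float = 0.0       # fraction of the trip reported done (0..1)

    def update(self, elapsed_min: float, progress: float) -> None:
        self.elapsed_min = elapsed_min
        self.progress = max(self.progress, min(progress, 1.0))

    def remaining_min(self) -> float:
        """Blend the prior with the observed pace once some progress is reported."""
        if self.progress <= 0.0:
            return max(self.expected_total_min - self.elapsed_min, 0.0)
        pace_based_total = self.elapsed_min / self.progress
        blended_total = 0.5 * self.expected_total_min + 0.5 * pace_based_total
        return max(blended_total - self.elapsed_min, 0.0)

# Example: a planned 30-minute trip; after 20 minutes only half the list is done,
# perhaps because the shopper stopped to chat or grabbed something unplanned.
if __name__ == "__main__":
    est = DwellEstimate(expected_total_min=30)
    est.update(elapsed_min=20, progress=0.5)
    print(round(est.remaining_min(), 1))   # the car revises its pickup time upward
```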
So the idea with the bounties is that the cars would have a distributed mechanism solving both the constraint-satisfaction problem that everyone needs a parking spot and the optimization problem of which specific spots are wanted.
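As a toy illustration of those two pieces together, the sketch below resolves bounties greedily in one place. A real version would be negotiated between the cars, so the central resolver here is only a stand-in for the distributed mechanism, and all the names and numbers are made up.

```python
# Minimal sketch of the allocation problem: every car must get some spot
# (the constraint) while bounties express who values the close spots most
# (the optimization). Greedy, centralized, and purely illustrative.

def allocate(spots, cars):
    """spots: {spot_id: distance_to_entrance_in_meters}
    cars:  {car_id: bounty_offered_for_closeness}
    Returns {car_id: spot_id}; assumes len(spots) >= len(cars)."""
    closest_first = sorted(spots, key=lambda s: spots[s])
    highest_bounty_first = sorted(cars, key=lambda c: cars[c], reverse=True)
    return dict(zip(highest_bounty_first, closest_first))

if __name__ == "__main__":
    spots = {"A1": 10, "A2": 15, "B7": 80, "C3": 120}
    cars = {"car_leaving_soon": 5.0, "car_long_visit": 0.5, "car_mid": 2.0}
    print(allocate(spots, cars))
    # car_leaving_soon -> A1, car_mid -> A2, car_long_visit -> B7
```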
So I think coalitions of agents will emerge due to common exit times of the customers. I also think we want to minimize the number of movements the cars make. Essentially, if you know your passenger is going to be in the store for a while, make room closer to the entrance; when you want a spot closer to the entrance, you place a bounty out for one.
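One way coalitions like that might form, purely as a sketch: group cars whose predicted pickup times fall within some window, so the whole group can be repositioned in one coordinated move instead of many separate ones. The 10-minute window below is an arbitrary assumption.

```python
# Sketch of coalition formation around similar pickup times, so a group of
# cars can trade positions together and keep the total number of moves down.

def form_coalitions(pickup_times_min, window_min=10.0):
    """pickup_times_min: {car_id: predicted minutes until pickup}
    Returns a list of coalitions (lists of car ids) whose pickup times
    fall within `window_min` of the earliest member."""
    coalitions, current, anchor = [], [], None
    for car, t in sorted(pickup_times_min.items(), key=lambda kv: kv[1]):
        if anchor is None or t - anchor <= window_min:
            current.append(car)
            anchor = t if anchor is None else anchor
        else:
            coalitions.append(current)
            current, anchor = [car], t
    if current:
        coalitions.append(current)
    return coalitions

if __name__ == "__main__":
    times = {"a": 4, "b": 7, "c": 25, "d": 31, "e": 70}
    print(form_coalitions(times))   # [['a', 'b'], ['c', 'd'], ['e']]
```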
To pay the bounties, would your car's client have to collect tokens for parking far away, which you could then use in order to park closer?
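A minimal sketch of that token idea, assuming tokens accrue with distance and time spent parked far out and are spent to back a bounty; the earning rate and the error handling are made up for illustration.

```python
# Sketch of the token economy: earn by parking far away, spend to move closer.

class TokenWallet:
    def __init__(self, balance: float = 0.0):
        self.balance = balance

    def earn_for_far_parking(self, distance_m: float, minutes_parked: float,
                             rate_per_100m_per_hour: float = 1.0) -> None:
        """Accrue tokens in proportion to how far out and how long the car sat."""
        self.balance += (distance_m / 100.0) * (minutes_parked / 60.0) * rate_per_100m_per_hour

    def post_bounty(self, amount: float) -> float:
        """Spend tokens to back a bounty for a closer spot."""
        if amount > self.balance:
            raise ValueError("not enough tokens to back this bounty")
        self.balance -= amount
        return amount

if __name__ == "__main__":
    wallet = TokenWallet()
    wallet.earn_for_far_parking(distance_m=250, minutes_parked=45)  # parked far while shopping
    bounty = wallet.post_bounty(1.5)                                # bid to move up front at pickup time
    print(bounty, round(wallet.balance, 3))
```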
A case where a lot of vehicles are trying to park at the same time is at distribution centers such as Utz. Many drivers leave the center and then come back to park at around the same time. This might be interesting as well… I don't know…
This would also be useful in a mixed system with both autonomous and human-driven cars.
Another problem is the size of the vehicles: some need parking that takes up multiple spots.
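For those oversized vehicles, the car (or the lot) would need to look for a run of adjacent free spots rather than a single one. A small sketch, assuming the row is just a flat list of free/occupied flags:

```python
# Sketch of finding a run of adjacent free spots for a truck or RV.
# The flat-row layout is an assumption for illustration.

from typing import Optional

def find_contiguous_free(row_is_free: list[bool], spots_needed: int) -> Optional[int]:
    """Return the starting index of the first run of `spots_needed` free spots,
    or None if the row cannot fit the vehicle."""
    run_start, run_len = 0, 0
    for i, free in enumerate(row_is_free):
        if free:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == spots_needed:
                return run_start
        else:
            run_len = 0
    return None

if __name__ == "__main__":
    row = [True, False, True, True, True, False, True]
    print(find_contiguous_free(row, 3))   # 2: an RV spanning spots 2-4
    print(find_contiguous_free(row, 4))   # None: nothing long enough in this row
```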
That is why I think it would be awesome if the cars were shared, sort of like a bike-share program; then there would always be a car waiting at the front. The part that would be yours could be stored in a locker…