Data with built-in functions

I think it might be helpful in the future for JSON data to not just be the dataset itself but to also contain pickled functions that the end user can use to easily access the data in a way that works for their application.  Dill can be used to serialize a Python function or class (though no security is assumed).  Then stick that serialized function into the JSON and use it to read the dataset.  It would be much nicer to just say that everyone needs to provide easily obtained accessor functions with their dataset.  Since these are arbitrary functions this is very, very dangerous, so I’d only recommend using it for data that you wrote yourself…  So, this sort of defeats the purpose…
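A minimal sketch of the idea, assuming the third-party `dill` package is installed (`pip install dill`); the dataset shape and the `mean_reading` accessor are made up for illustration:

```python
import base64
import json

import dill  # third-party; serializes function code, unlike plain pickle


def mean_reading(dataset):
    """Hypothetical accessor the dataset author would ship with the data."""
    return sum(dataset["readings"]) / len(dataset["readings"])


# Producer side: embed the serialized accessor next to the data.
payload = {
    "readings": [1.0, 2.0, 3.0],
    "accessor": base64.b64encode(dill.dumps(mean_reading)).decode("ascii"),
}
blob = json.dumps(payload)

# Consumer side: ONLY safe for data you wrote yourself,
# since unpickling executes arbitrary code.
loaded = json.loads(blob)
accessor = dill.loads(base64.b64decode(loaded["accessor"]))
print(accessor(loaded))  # 2.0
```

The base64 step is just to make the pickle bytes JSON-safe; the security caveat from the text applies in full.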

Bounty Hunting as Highest Response Ratio Next

My original bounty hunting paper could actually be considered a market-based implementation of Highest Response Ratio Next (HRRN) scheduling.

\text{Priority}=\frac{\text{Waiting Time} + \text{Estimated Run Time}}{\text{Estimated Run Time}}

The bounty assigned to a task starts at some base bounty B_0 and grows at a bounty rate r, which in the first bounty hunting paper were set to 100 and 1 respectively.  So, as each task was left undone, the bounty on it would rise.  Tasks belong to particular “task classes,” which basically means that their locations are drawn from the same Gaussian distribution.  In the paper we used 20 task classes and four agents.  The four agents were located at the four corners of a 40×60 rectangular grid.  The agents decide which task to go after based on which task has the highest bounty per time step, which works out to be:

B(t) = P_i\frac{B_0 + rt}{\bar{T}}

This is for the case when agents commit to tasks and are not allowed to abandon them; essentially it is non-preemptive.  When the agents are allowed to abandon tasks we instead have:

B(t) = P_i\frac{B_0 + rt + r\bar{T}}{\bar{T}}

Both of these equations state that the agents go after tasks in HRRN order.  Now, the key part that bounty hunting added was making this work in a multiagent setting: each agent learned a probability of success P_i of going after a particular task class i.  The paper also experimentally demonstrated some other nice properties of bounty hunting based task allocation in a dynamic setting.
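The two selection rules above can be sketched directly; this is my own illustration (the `Task` fields and function names are invented, not from the paper), using the paper's B_0 = 100 and r = 1:

```python
from dataclasses import dataclass

B0 = 100.0  # base bounty, as in the paper
R = 1.0     # bounty growth rate per time step


@dataclass
class Task:
    task_class: int
    waiting_time: float   # t: time steps the task has gone unserviced
    expected_time: float  # T-bar: estimated time for this agent to complete it


def bounty_rate(task, p_success, preemptive=False):
    """Bounty earned per time step for pursuing `task`.

    Committed agents:  B(t) = P_i * (B0 + r*t) / T-bar
    With abandonment:  B(t) = P_i * (B0 + r*t + r*T-bar) / T-bar
    """
    numerator = B0 + R * task.waiting_time
    if preemptive:
        numerator += R * task.expected_time
    return p_success * numerator / task.expected_time


def choose_task(tasks, p_success_by_class, preemptive=False):
    """HRRN-style selection: take the task with the highest bounty per step."""
    return max(
        tasks,
        key=lambda t: bounty_rate(t, p_success_by_class[t.task_class], preemptive),
    )
```

Note the HRRN correspondence: with B_0 = 0 the preemptive numerator reduces to r(t + T̄), so the bounty per step is just r times the classic (waiting + run time)/run time ratio, scaled by the learned P_i.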

Presently I’m taking this approach and moving it to a dynamic vehicle routing setting, where I use it to minimize the average waiting time of tasks and the agent doesn’t get teleported back to a home base after each task completion: namely, the dynamic multiagent traveling repairman problem.  This is another setting where Shortest Job Next (Nearest Neighbor in Euclidean space) is a decent heuristic, and because the agents are not reset, a non-zero bounty rate causes interesting behavior.
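The Nearest Neighbor baseline mentioned above is simple enough to sketch; this is a generic illustration of the heuristic (the function names are mine), treating travel distance as the “job length”:

```python
import math


def nearest_neighbor(agent_pos, tasks):
    """Greedy Shortest Job Next in Euclidean space: the closest task
    is the one with the shortest 'job' (travel + service time)."""
    return min(tasks, key=lambda t: math.dist(agent_pos, t))


def greedy_route(start, tasks):
    """Order tasks by repeatedly visiting the nearest remaining one,
    without returning to a home base between tasks."""
    pos, remaining, route = start, list(tasks), []
    while remaining:
        nxt = nearest_neighbor(pos, remaining)
        remaining.remove(nxt)
        route.append(nxt)
        pos = nxt  # the agent stays where it finished, no teleporting home
    return route
```

Because the agent starts each decision from wherever it last finished, the resulting routes differ qualitatively from the reset-to-base setting the earlier paper used.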

Spatial Queuing Theory with Holes

So, consider a dynamic wireless sensor network.  We wish to minimize the average wait time for each of the nodes in the network to be serviced by new information.  We do not, however, want to increase the…  By using the bounty hunting algorithm we can do this.  I might want to look into routing algorithms.

Consider Poisson point processes with holes.  When we have a single neighborhood we have an ordinary Poisson point process; when we have multiple neighborhoods we will have non-overlapping regions where no tasks are generated.  This is where the “holes” are.  Stochastic geometry is the area of mathematics interested in this.
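One simple way to simulate such a process is to sample a homogeneous Poisson point process on the whole region and then reject any point that lands inside a hole.  A minimal sketch, assuming circular holes (the function names and the Knuth-style Poisson sampler are my own choices, not from any of the papers mentioned):

```python
import math
import random


def poisson_sample(lam, rng=random):
    """Knuth's multiplication algorithm for a Poisson-distributed count.
    Fine for moderate lam; exp(-lam) underflows for very large lam."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1


def ppp_with_holes(rate, width, height, holes, rng=random):
    """Homogeneous Poisson point process on [0,width]x[0,height],
    thinned by circular holes where no tasks are generated.

    holes: list of (cx, cy, radius) circles.
    """
    n = poisson_sample(rate * width * height, rng)
    points = []
    for _ in range(n):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        if not any((x - cx) ** 2 + (y - cy) ** 2 < r ** 2
                   for cx, cy, r in holes):
            points.append((x, y))
    return points
```

Rejection like this leaves the process Poisson on the complement of the holes, which is exactly the setting where inter-task travel distances start to depend on hole size and placement.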

But, I’ve not really been focused on wireless sensor networks, and it is a bit of a stretch to fit bounty hunting to them (at least as far as I can tell; my first papers might suggest otherwise).  With my current direction I’m more interested in spatial queues: I have queues rather than wireless sensor networks.  So, there is spatial queuing theory, but there is not a spatial queuing theory with holes!  The paper “Risk and Reward in Spatial Queuing Theory” deals with spatial queuing theory for the dynamic traveling repairman problem.  All of these systems assume a region without holes, i.e. without space where no tasks will be generated.  This is important in the real world, as there are generally spaces where there won’t actually be tasks.  Therefore, I think I need to incorporate the concept of Poisson point processes with holes, and then build from that what to expect based on the size and location of the holes.  The holes matter because the distance the servers must travel to the next task depends on the size of these holes!

So, I think this is important.  Actually, holes might not even be general enough; it would be better if I could generalize to any space.

Algorithmic trading

Wow so it has been a long time.  I’ve recently been looking at stocks again and just two days ago I found a stock and I was like I should buy that.  Then I didn’t.  But I really really should have because it then proceeded to go up by 20% in 2 days.  So, this made me look again into algorithmic trading.  I found a couple really good resources:

https://www.quantopian.com

https://www.interactivebrokers.com/en/index.php?f=1325

Quantopian will let you design your own trading algorithms, backtest them on historical data, and also run them live through robinhood.io or Interactive Brokers.  I think I like IB better but should start with robinhood.io as there are no fees.  But this is awesome!

Build tools

So, I’ve been learning about some software engineering tools so that I can make my life easier while developing:

jitpack.io and https://travis-ci.org

jitpack makes it super easy to depend on a repo that can be built with Gradle or Maven.

travis-ci does continuous integration for your projects, which means that if you are testing your code you will be doing it all the time.  That said, I’m still not sure how to create test cases for MASON, as it doesn’t necessarily create the same output every time.  So…  But it is still cool.

Speed, Resiliency, and Sustainability: these are important to me

So, I have been reading about the effectiveness of the current methods for determining the efficiency of fire stations.  Most fire departments measure efficiency by how much they can cut their budgets.  The argument is that lowering input (the budget) is not a good measure of efficiency; you have to look at the value of the property saved:

“the mission of the fire service is to be resilient and fast, not necessarily efficient”

https://medium.com/@esaylors/fire-departments-are-response-models-not-production-models-f7943d5c623d#.uybf5afb8

Efficiency would mean minimizing costs, which essentially means reducing the budget.  In disasters we need speed, resiliency, and sustainability.  Currently, bounty hunting doesn’t minimize the budget.

A bounty hunting system gives the agents more autonomy to choose the tasks they want to do, rather than being governed by the results of an auction.

Would incorporating different types of bounty hunting strategies, rather than just maximizing the current bounty alone, be the most effective approach?

Since starting this post I have already created a measure for speed, and the jumpship bounty hunters are quite good at being speedy, even under adverse situations, in comparison to auction methods.

Ethics and Incompleteness

So, I was talking to my wife 🙂 about ethics and the types of ethics.  She told me about consequentialist ethics and deontological ethics, and I’m sure there are a bunch more.  But what I have discovered is that each of them has a set of axioms from which it bases all of its conclusions.  However, I believe that here we are encountering a problem similar to Gödel’s incompleteness theorems: basically, a complete and consistent set of axioms for all of ethics is impossible.

http://philosophy.stackexchange.com/questions/31726/can-g%C3%B6dels-incompleteness-theorems-be-applied-to-ethics is a link I just found; the answer there references Tarski’s undefinability theorem.