Data with built-in functions

I think it might be helpful in the future for JSON data to contain not just the dataset but also pickled functions that the end user can use to easily access the data in a way that works for their application.  Dill can serialize a Python function or class (though it provides no security guarantees).  You would stick that serialized function into the JSON and use it to read the dataset.  It would be much nicer to just say that everyone needs to ship easily obtained accessor functions with their dataset.  Since these are arbitrary functions, deserializing them is very, very dangerous, so I’d only recommend doing this with data that you wrote yourself…  which sort of defeats the purpose…
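Here is a minimal sketch of the idea (the accessor function and payload layout are made up for illustration, and note again that dill.loads will happily execute whatever code is embedded):

import base64
import json

import dill  # third party: pip install dill


def mean_temp(records):
    # An accessor the dataset author ships alongside the data.
    return sum(r["temp"] for r in records) / len(records)


# Producer side: embed the serialized accessor next to the data.
payload = {
    "data": [{"temp": 20.5}, {"temp": 22.1}],
    "accessor": base64.b64encode(dill.dumps(mean_temp)).decode("ascii"),
}
text = json.dumps(payload)

# Consumer side: rebuild and call the function, trusting the producer.
loaded = json.loads(text)
accessor = dill.loads(base64.b64decode(loaded["accessor"]))
print(accessor(loaded["data"]))  # 21.3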

Macro Scale Agent Based Modeling

I was reading about the book Factfulness: Ten Reasons We’re Wrong About the World—and Why Things Are Better Than You Think by Hans Rosling, which Bill Gates recommended reading. From it I found Gapminder (a spinoff of Hans’s work) and their tool:

https://www.gapminder.org/tools/#$chart-type=bubbles

which lets you explore a dizzying number of statistics in order to get a better idea of the world from a macro perspective.

Open Numbers is a cool organization that hosts a lot of data and is where Gapminder pulls the data for its tool.  Of particular interest is this dataset:

https://github.com/open-numbers/ddf--gapminder--systema_globalis

As I am into multiagent systems and agent based modeling, this seems like an amazing resource for backing simulations with real world data.  There are so many interesting things to try: model this data, then use those models to code up “what if” scenarios.  Say, what if we taxed all the millionaires and billionaires 1% every year and redistributed it somehow to the poorest 6 billion?  With this data we could see how nations might change and populations grow.  We might even find that the people we tax grow even richer due to the increase in the number of people buying things.  There is so much else we could study with this sort of simulation: what would happen if we had trade tariffs, or natural disasters, or famines…  We would see the effects not just locally but globally, and not just on a particular sector but across a variety of variables.  Clearly this would require a massive amount of research and more data than is currently available, but even just modeling the behavior of these datasets would be beneficial for understanding how the world works, and could aid decision making by revealing outcomes not previously thought of.
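As a tiny example of tapping this resource, here is a sketch of pulling one indicator straight from the repo above with pandas; DDF datapoint files follow the ddf--datapoints--<concept>--by--geo--time.csv naming convention, and the concept name below is an assumption, so check the repo for what is actually available:

import pandas as pd

# Sketch: read one DDF datapoints file straight from GitHub.
# The concept name (population_total) is an assumption; browse the
# repo to see which indicators actually exist.
url = ("https://raw.githubusercontent.com/open-numbers/"
       "ddf--gapminder--systema_globalis/master/"
       "ddf--datapoints--population_total--by--geo--time.csv")
population = pd.read_csv(url)
print(population.head())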

Nix

I have been thinking about being able to reproduce results easily and quickly.  As you can read in my previous post, Jupyter notebooks will, at least in Python, let you do so.  However, when you want to reproduce your software’s entire dependency tree so that you can easily install it on another machine, there is Nix:

https://nixos.org/nix/about.html

There are obviously other ways of managing packages, but Nix installs packages within isolated build environments, so that you can scope packages to particular projects.  Then you know exactly which packages are needed to reproduce your project’s build on another computer.  It is pretty neat, but as you may have guessed it works on Linux and macOS, not Windows.
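As a minimal sketch (the package names are illustrative), a per-project shell.nix might look like:

# shell.nix: declares a throwaway build environment for this project only
with import <nixpkgs> {};
mkShell {
  buildInputs = [
    python3
    python3Packages.numpy
  ];
}

Running nix-shell in the project directory then drops you into a shell with exactly those packages, and checking this file into the repo lets anyone recreate the same environment.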

Jupyter notebooks

Some interesting projects:

Google has their own modified Jupyter notebook that integrates with Google Drive:

https://colab.research.google.com/

And there is Binder (beta), which will create an executable Jupyter environment from a GitHub repo containing Jupyter notebooks.  Then anyone can easily run your code.

https://mybinder.org/
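A minimal sketch of making a repo Binder-ready, assuming a pure Python project: add a requirements.txt at the root of the repo listing the packages your notebooks import, e.g.

numpy
matplotlib

then paste the repo URL at mybinder.org and it builds an environment with those packages and serves the notebooks.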

Bounty Hunting as Highest Response Ratio Next

My original bounty hunting paper could actually be considered a market implementation of Highest Response Ratio Next (HRRN) scheduling.

\text{Priority}=\frac{\text{Waiting Time} + \text{Estimated Run Time}}{\text{Estimated Run Time}}
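For example, a task that has waited 30 time units with an estimated run time of 10 gets priority (30 + 10)/10 = 4, while a freshly arrived task always starts at priority 1, so shorter jobs that have been waiting rise to the front fastest.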

The bounty assigned to a task starts at some base bounty B_0 and grows at a bounty rate r, which in the first bounty hunting paper were set to 100 and 1 respectively.  So, as long as a task was left undone, the bounty on it would rise.  Tasks belong to particular “task classes”, which basically means that their locations are drawn from the same Gaussian distribution.  In the paper we used 20 task classes and four agents, with the agents located at the four corners of a 40×60 rectangular grid.  Each agent decides which task to go after based on which task has the highest bounty per time step, which works out to be:

B(t) = P_i\frac{B_0 + rt}{\bar{T}}

This is for the case when agents commit to tasks and are not allowed to abandon them, essentially the non-preemptive case.  When the agents are allowed to abandon tasks we instead have:

B(t) = P_i\frac{B_0 + rt + r\bar{T}}{\bar{T}}

Both of these equations say that the agents go after tasks in HRRN order.  Now, the key part that bounty hunting added was making this work in a multiagent setting: each agent learned a probability of success P_i for going after a particular task class i.  The paper also experimentally demonstrated some other nice properties of bounty hunting based task allocation in a dynamic setting.
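As a concrete illustration, here is a minimal Python sketch of the committed (non-preemptive) rule above; the task fields and numbers are made up, with expected_time standing in for \bar{T}:

# Sketch of the committed-case rule B(t) = P_i (B_0 + r t) / T_bar.
# Task fields and values are illustrative, not from the paper's code.
B0, r = 100.0, 1.0  # base bounty and bounty rate from the paper

def bounty_per_step(task, now):
    t = now - task["arrival"]  # how long the task has sat undone
    return task["p_success"] * (B0 + r * t) / task["expected_time"]

tasks = [
    {"arrival": 0.0, "expected_time": 12.0, "p_success": 0.9},
    {"arrival": 5.0, "expected_time": 4.0, "p_success": 0.7},
]

# The agent commits to the task with the highest bounty per time step.
best = max(tasks, key=lambda task: bounty_per_step(task, now=20.0))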

Presently I’m taking this approach and moving it to a dynamic vehicle routing setting, where I use it to minimize the average waiting time of tasks and the agent doesn’t get teleported back to a home base after each task completion: the dynamic multiagent traveling repairman problem.  This is another setting where Shortest Job Next (Nearest Neighbor in Euclidean space) is a decent heuristic, and because the agents are not reset, a non-zero bounty rate causes interesting behavior.