Neural net idea

Well, this is a merging of neural networks from computer science with actual neuroscience, so the idea probably doesn't entirely meld.

I was reading a while back that researchers at Google studying neural nets found they could pick a random set of neurons, use them as the output instead of the standard output nodes, and still get similar results from training.  Then this morning I was reading an article about how neuroscientists found that it wasn't important that they stuck the electrode in any particular neighborhood; what made the difference was sticking it close to the white matter, the neural highways.
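
As a rough illustration of that Google-style trick (my own sketch, not from either article, with all the names and layer sizes invented): after a normal forward pass you can read "outputs" from a randomly chosen subset of hidden units instead of the designated output layer, and compute the training loss against those instead.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny feed-forward net with arbitrary sizes: 4 inputs, 16 hidden, 3 outputs.
n_in, n_hidden, n_out = 4, 16, 3
W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))

x = rng.normal(size=(1, n_in))
hidden = np.tanh(x @ W1)        # interior activations
standard_out = hidden @ W2      # the usual output nodes

# The alternative: treat a few randomly picked hidden units as the output
# layer and train against those instead (training loop omitted here).
picked = rng.choice(n_hidden, size=n_out, replace=False)
alternative_out = hidden[:, picked]

print(standard_out.shape, alternative_out.shape)  # both (1, 3)
```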

So, I am thinking along these lines: just like the Google scientists read output from inner nodes, what about providing input not just to the input nodes, but to interior nodes as well?  Usually we have the structure input -> hidden layers -> output.  What if interior nodes also had input nodes as connections coming in, or even multiple inputs?  Would this still work?  And what types of applications would it be useful for?  It wouldn't need to be a complicated neural net; I would start with a feed-forward ANN and essentially try to mimic the white matter.  A rough sketch of what I mean is below.
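
This is only a minimal sketch, assuming NumPy; the layer sizes, weight names, and the single injected connection are all made up by me just to make the idea concrete.  The raw input feeds the first hidden layer as usual, but it is also injected straight into the second hidden layer, the way a white-matter highway would carry a signal past the local neighborhood.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Arbitrary sizes: 4 inputs, two hidden layers of 8, 2 outputs.
n_in, n_h1, n_h2, n_out = 4, 8, 8, 2

W1 = rng.normal(0.0, 0.1, (n_in, n_h1))       # input -> hidden 1
W2 = rng.normal(0.0, 0.1, (n_h1, n_h2))       # hidden 1 -> hidden 2
W_skip = rng.normal(0.0, 0.1, (n_in, n_h2))   # input injected into hidden 2
W3 = rng.normal(0.0, 0.1, (n_h2, n_out))      # hidden 2 -> output

def forward(x):
    h1 = relu(x @ W1)
    # Interior nodes get the usual signal from hidden 1 plus a direct copy
    # of the input, i.e. input arriving at interior nodes, not just the
    # input layer.
    h2 = relu(h1 @ W2 + x @ W_skip)
    return h2 @ W3

x = rng.normal(size=(1, n_in))
print(forward(x))   # shape (1, 2)
```

In practice this is close to what skip connections do in modern networks, so an ordinary feed-forward ANN with one extra input-to-hidden weight matrix would probably be enough to start experimenting with the idea.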