Thursday, May 26, 2011

Network Theory

I've finally started noticing the Blogger stats option, and in particular the traffic sources. Who do I know in Iran? Nigeria? Sri Lanka?
Lotka-Volterra Equations

Funny how the brain works. We magically assume certain associations, not realizing it's not really about us. It's about the network. It's not the nodes, it's the loops.

Hey, speaking of which, did I ever tell you how I started out in college going for an art degree and came out the other end with a math degree? And how I almost used it?

You see, back in 1977, I took a course in mathematical models. I had taken a calculus class, found it fun and easy, and so decided to try some other courses. Intro to Mathematical Models was loosely tied to another course in computer simulations, using FORTRAN. By coincidence, I was taking an Intro to FORTRAN class at the same time.

Just so you have a flavor of the times, the computer facility was still using punch cards. You'd sit at a gigantic console and type in commands on these chunky, indestructible keyboards, and meanwhile a bunch of little holes would be punched out of a thick, paycheck-sized cardboard card. And when you had typed out your program on the cards, you would hand those through a slot in a glassed-in booth to an operator wearing a white lab coat.

Yes, a fucking white lab coat.

And so, you'd hand your shoebox full of cards over like a supplicant to a high priest. (I can remember one time a kid tripped and his shoebox full of punch cards spilled all over the floor. He started crying. To my credit, I helped him scoop up the cards before they were trampled). And then about a half hour later, they would stuff a printout into a mail slot with your results, which were usually not program output, but a list of programming errors you had committed. And so, this bullshit went on for several hours until you finally got a result.

Then one day they installed big boxy terminals in the basement of the math and physics building, and you didn't need to deal with the snob dickheads at the computer center anymore. So I would get all stoked up on dope and visit the basement to code in front of an amber screen through the wee hours of the morning. The only other people there would be dorks playing Star Trek or primitive early role-playing games.

Anyway, one of the simulations I got sucked into was predator/prey modeling, whose simplest version is the set of Lotka-Volterra equations. The LV model is also called the Foxes and Rabbits Game. Basically what's going down here is that you have two populations, foxes and rabbits, whose numbers are obviously linked to each other, and the main parameters are the reproduction rates of predators and prey versus the destruction rate of prey by predators. It is a system of nonlinear equations, and those are notoriously difficult to find a general solution to. The trick (and in math and science circles "trick" means a clever hack, not a trick as in trying to cheat) is that if you can simplify some terms so that the nonlinear equations become linear, why, then, it's easy as pie to find a solution. Problem is, you typically take all the reality out of your simulation by doing so, and it ends up being a toy.
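For concreteness, here's the un-simplified LV pair sketched in modern Python rather than the FORTRAN I was using back then; the parameter values are made up for illustration, not anything I remember from 1977.

# Lotka-Volterra "foxes and rabbits," stepped with crude Euler integration.
#   dR/dt = alpha*R - beta*R*F      (rabbits)
#   dF/dt = delta*R*F - gamma*F     (foxes)
# All parameter values below are illustrative, not fitted to anything.
alpha = 1.0    # rabbit reproduction rate
beta = 0.1     # rate at which foxes destroy rabbits
delta = 0.075  # fox reproduction per rabbit eaten
gamma = 1.5    # fox death rate

rabbits, foxes = 10.0, 5.0
dt = 0.01

for step in range(5000):
    d_rabbits = (alpha * rabbits - beta * rabbits * foxes) * dt
    d_foxes = (delta * rabbits * foxes - gamma * foxes) * dt
    rabbits += d_rabbits
    foxes += d_foxes
    if step % 500 == 0:
        print(f"t={step * dt:6.2f}  rabbits={rabbits:8.2f}  foxes={foxes:8.2f}")

Run it and you get the classic boom-and-bust oscillation, with the fox peaks lagging the rabbit peaks.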

The LV model doesn't even come close to reflecting real-life numbers, but it is, after all, only the simplest modeling. One thing not included is the carrying capacity of the environment, in other words the recognition that lots of rabbits will be self-limiting once they eat all the grass. So having foxes prey upon the rabbits helps out the grass, and allows more rabbits. So you would expect a cycle, but not to the extreme seen in the LV equations. It was fairly easy to plug and play the equations into the computer via FORTRAN, but after a while I realized this was all just a circle jerk. Nowadays, we call this "Chaos Theory".
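About that carrying capacity: the usual way to fold it in (a guess at how I'd write it now, not what I actually coded back then) is to make the rabbit growth logistic, so it throttles itself as the population approaches some limit K:

# Same toy model, but rabbit growth is logistic: the growth term is
# multiplied by (1 - R/K), so it shuts off as rabbits approach the
# carrying capacity K. K = 100 is an arbitrary illustrative number.
alpha, beta, delta, gamma, K = 1.0, 0.1, 0.075, 1.5, 100.0
rabbits, foxes, dt = 10.0, 5.0, 0.01

for step in range(5000):
    d_rabbits = (alpha * rabbits * (1 - rabbits / K) - beta * rabbits * foxes) * dt
    d_foxes = (delta * rabbits * foxes - gamma * foxes) * dt
    rabbits += d_rabbits
    foxes += d_foxes

print(f"settles toward rabbits={rabbits:.1f}, foxes={foxes:.1f}")

With these made-up numbers the populations spiral in to a steady state instead of cycling forever, which is already a little less toy-like.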

I informed my professor of the situation, and his reply was quite snippy. "Well, figure out a better way," he sniffed.

So, I created networks with agents. I doubt I pioneered this. (Actually, my simulation preceded Andrew Watson's and James Lovelock's Daisyworld simulation by at least a decade, but I never published.)

But it just seemed an easier way to make the conditions messier without ending up with a hideously complex set of equations. Local rules (the links and loops) affecting each agent (the nodes representing foxes and rabbits), all tied together by local conditions and rules. Kind of like John Conway's Game of Life, but with three levels of boardplay: one board for the ground (grass, water, etc.), one for the rabbits, and the top level for foxes. And the one thing I noticed right away was that the more inefficient the networks were, the closer the numbers looked to empirical results. In other words, the worse the transmission of data was on the links between nodes, the better the simulation played out.
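I don't have the original FORTRAN anymore, but the flavor of it was something like this sketch in Python: three stacked grids, local rules, and a couple of probabilities standing in for how leaky the links are. Every number, rule, and name here is invented for illustration; none of it is the 1977 code.

import random

# A toy three-layer board: grass on the bottom, rabbits in the middle, foxes on
# top. Link "inefficiency" is the probability that an interaction actually
# happens: a fox catching an adjacent rabbit, or a rabbit getting to graze.
SIZE = 20
GRAZE_PROB = 0.5  # chance a rabbit-grass link "transmits" this step
CATCH_PROB = 0.3  # chance a fox-rabbit link "transmits" this step

random.seed(1)
grass = [[1] * SIZE for _ in range(SIZE)]
rabbits = [[random.random() < 0.2 for _ in range(SIZE)] for _ in range(SIZE)]
foxes = [[random.random() < 0.05 for _ in range(SIZE)] for _ in range(SIZE)]

def neighbors(x, y):
    # four-connected neighborhood, wrapping around the board edges
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

for step in range(50):
    new_rabbits = [row[:] for row in rabbits]
    new_foxes = [row[:] for row in foxes]
    for x in range(SIZE):
        for y in range(SIZE):
            if not grass[x][y] and random.random() < 0.3:
                grass[x][y] = 1                    # grass grows back
            if rabbits[x][y]:
                if grass[x][y] and random.random() < GRAZE_PROB:
                    grass[x][y] = 0                # rabbit eats the grass...
                    nx, ny = random.choice(neighbors(x, y))
                    new_rabbits[nx][ny] = True     # ...and spreads to a neighbor
                elif random.random() < 0.1:
                    new_rabbits[x][y] = False      # went hungry
            if foxes[x][y]:
                prey = [(nx, ny) for nx, ny in neighbors(x, y) if rabbits[nx][ny]]
                if prey and random.random() < CATCH_PROB:
                    nx, ny = random.choice(prey)
                    new_rabbits[nx][ny] = False    # fox catches a rabbit...
                    new_foxes[nx][ny] = True       # ...and spreads
                elif random.random() < 0.2:
                    new_foxes[x][y] = False        # fox starves
    rabbits, foxes = new_rabbits, new_foxes
    if step % 10 == 0:
        print(step, sum(map(sum, rabbits)), sum(map(sum, foxes)))

The inefficiency knobs are GRAZE_PROB and CATCH_PROB: set them to 1.0 and every link fires every time; dial them down and the network gets leaky.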

Inefficient networks played out better. Why? Well, foxes don't always catch rabbits. Rabbits don't always get to eat grass at their leisure. Thus was my justification. But as it turns out, it seems that inefficient networks may be more efficient than efficient ones. So say the ants, at any rate.

I ended up getting a C from snippy old Professor Poopbreath in the math models class. I didn't really care at that point. I had discovered von Neumann self-replicators by then, and had moved on to simulating robot farms.

But that's another story.
