Those of us who follow technology, with an interest in both the mechanisms and the rate of technological change, often fall prey to what I've seen called the "gee-whiz syndrome": we become so enamored with some new-seeming aspect of technology that we forget (or dismiss) the capability that preceded it. Perhaps more importantly, we fail to realistically account for the burden of historical expectations a technology carries forward with it (critical to how potential users will receive it), as well as the dismissal that arises from expectations left unmet by the potential the technology seemed to promise. If a technology is perceived as a previous disappointment (hydrogen as a fuel or power source is only one example), then any subsequent advance beyond the earlier capability must overcome an additional market bias against it, on top of the actual physical, science-based challenges. Nuclear fusion, I suggest, might be a good example of unmet potentialities.
Now (OK, last week) comes Alvis Brigis (via Phil Bowermaster) with yet another possible example for us to consider. Full disclosure: I have exchanged numerous e-mails with Alvis regarding our mutual posts on Future Blogger, and I've been a call-in guest on Phil's Blog Talk Radio program Fast Forward Radio. I am acquainted with both gentlemen and to a large degree share their interest in matters technologic. That said, I think they have both taken a heapin' helpin' of the kool-aid this time.
Having flung the obligatory monkey feces, let me now proceed to thoroughly mix the metaphors and spread a little oil on the roiled waters (before setting it merrily alight, I expect).
As children we all practice "simulation." Sticking to stereotype (if only for example's sake), little girls "simulate" human interaction with dress-up dolls, and little boys build elaborate "simulations" of transport scenarios with a patch of dirt and a model vehicle of some description. Replace the described technology with a different historical example and the human behavior remains true: humans learn (at least in part) by playing out imaginative constructs that permit variations of circumstance, gauging the potential for success of future actions. I would submit that this capability is one of the important factors permitting human evolution beyond that achieved by our nearest physiological brethren.
Alvis makes this point:
My personal take on the matter (original article [link redacted, WB]), in alignment with both Cascio and Kurzweil’s views, is that as organisms evolve and life’s complexity increases, new species with brains capable of greater quantification and abstraction (simulation!) emerge at a regular clip. Over time, these organisms discover ways to expand their knowledge by communicating (actively or passively) information to one another and letting the network manage their quantifications and decisions. Then, eventually, the higher-level organisms figure out how to extend their knowledge into the environment through technology that allows them to communicate and retrieve it more easily than before. This is accomplished directly through technologies like language, writing, or classical maps, and indirectly through the hard-technologies like spears, paint, and paper that critically support knowledge externalization.
In other words, I believe that simulation plays a critical role in not only the evolution and development of the human species, but also of all forms of life on this planet and probably in our known universe (as suggested by recent findings that physical matter millions of light years distant closely resembles our own).
Consequently, I find it likely that we will soon discover a proof, power law or other theorem for complex systems that correlates increased simulational ability with increased 1) control over environment and 2) survivability. It may look a little bit like the following diagram, with the added explanation that simulation drives the creation of more knowledge as our informational inputs are expanded by technology that steadily increases the data we mine from within our environment (inner space) and across the universe (outer space) …
[my bold, WB]
Leaving aside the question of possible examples of these putative alternate intelligences, one particular human developed just such a conceptual mechanism as Alvis pines for about 2,500 years ago, as the planet Earth turns. In strategy, all potential points of active contention (commonly individual people, or social entities like corporations, armies, or countries) are described as positions. The guiding intellect of each position (traditionally described as "the general") determines which of the available actions best advances the position relative to itself and the others. The means chosen to achieve these actions are known as tactics. The mechanism developed to measure the potential effectiveness of possible tactics is called planning, and most people overlook how this mechanism applies to their own capabilities as much as to others'. No matter how lovingly a plan might "come together," the actual implementation always differs from the plan in some (often drastic) respect, because we cannot accurately simulate the actual capabilities and intentions of whoever might oppose or assist a given tactical option.
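The planning mechanism described above can be sketched as code. This is a toy illustration only: the tactic names, payoff values, and noise level are all invented for the example. Planning scores each candidate tactic by simulating it repeatedly against an imperfect model; the gap between the predicted payoff and the actual one is exactly the plan-versus-implementation divergence the paragraph describes.

```python
import random

def plan(tactics, simulate, trials=1000):
    """Score each candidate tactic by repeatedly simulating its outcome,
    then pick the one with the best average simulated payoff."""
    best, best_score = None, float("-inf")
    for tactic in tactics:
        score = sum(simulate(tactic) for _ in range(trials)) / trials
        if score > best_score:
            best, best_score = tactic, score
    return best, best_score

# Hypothetical payoffs: what each tactic would *actually* yield in the field.
actual_payoff = {"feint": 0.4, "siege": 0.6, "parley": 0.5}

def noisy_simulation(tactic):
    # Our model of the opposition is imperfect: the simulated payoff is
    # the true payoff plus estimation error.
    return actual_payoff[tactic] + random.gauss(0, 0.3)

random.seed(1)
chosen, predicted = plan(actual_payoff.keys(), noisy_simulation)
print(chosen, round(predicted, 2), "actual:", actual_payoff[chosen])
```

With enough trials the noise averages out and the planner tends to find the genuinely strongest tactic, but its predicted payoff still differs from the actual one, which is the point: the plan is an estimate, not the outcome.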
Phil makes the point:
As a species, and as individuals, we began to create better and better conceptual maps of the world around us and to make better use of those maps. We got better at simulating. It should be obvious that better simulation amounts to better evolutionary success — just take out the word "evolutionary," and consider some examples …
A very succinct description of the planning process of strategy.
Here comes the “setting alight” bit.
That we may have better tools, or more accurately that we are in the process of making them, doesn't necessarily imply that we have some new or even especially different tactic available to us. What is true is that we will eventually be able to make better plans, because the simulation technology we are developing permits more accurate insight into others' capabilities and intentions than before. Add to this the ability to better express our own position, and we have increased potential to reduce misconceptions, making it less likely that an adverse tactical selection will be made to our detriment.
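The claim that better simulation makes adverse tactical selection less likely can be illustrated with a minimal sketch, again with every number invented for the purpose: two courses of action with fixed true strengths, estimated through simulations of differing fidelity. Cruder simulation (larger estimation noise) more often makes the weaker option look stronger.

```python
import random

def adverse_selection_rate(noise, trials=2000):
    """Fraction of decisions in which estimation noise makes the weaker
    course of action look stronger than the stronger one."""
    strong, weak = 0.6, 0.4  # hypothetical true strengths
    bad = 0
    for _ in range(trials):
        est_strong = strong + random.gauss(0, noise)
        est_weak = weak + random.gauss(0, noise)
        if est_weak > est_strong:
            bad += 1  # the simulation pointed us at the weaker option
    return bad / trials

random.seed(2)
crude = adverse_selection_rate(noise=0.5)  # low-fidelity simulation
sharp = adverse_selection_rate(noise=0.1)  # high-fidelity simulation
print(crude, sharp)
```

Note that even the high-fidelity simulation still errs some of the time: better simulation reduces, but never eliminates, the gap between the map and the territory.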
However, while important, none of the foregoing really rises to the level of "cosmic," I'm afraid. The inability to communicate consistently and clearly within our own species is only one example of the adverse environment within which we seek to live and succeed in our daily lives. Even stipulating that there is "someone out there" with whom to miscommunicate at all, we do ourselves a serious disservice in thinking we can begin to simulate such contact with any reliability. Closer to home, given the potential for non-human species (whether traditional or wholly artificial) to be "upgraded" to near-human intellect – or, pace Michael Anissimov, beyond – within the relatively near future, strategic theory provides a proven mechanism with which to employ our burgeoning simulation technology to our best mutual advantage.
Finally, because we can simulate possible interactions and outcomes more graphically (and thus more persuasively), we must consciously avoid the dangerous assumption that "the map is the reality"; that our simulation is the outcome, rather than a variably informed guess.
We have a word for that, too.