At this point, I want to explore the second half of the future: the prospect that while the long-term information security environment is likely to become better in obvious ways, it is likely to worsen in subtle ways. The key to understanding this phenomenon lies with how tomorrow's information systems are going to be used.
As noted earlier, the increasing power of hardware and the growing sophistication of software suggest a future in which silicon starts to act more like carbon: that is, computers take on more characteristics of humans while networks take on more characteristics of human organizations.
Tomorrow's information systems are likely to make, or at least set the table for, more judgements than today's do. What is more, the feedstock for these judgements is likely to be information harvested from the world out there. Such applications are likely to appear among institutions that depend on reacting to an outside environment: e.g., politico-military intelligence, business sales-and-marketing, environmental control, or broad-scale social service functions such as law enforcement, traffic regulation, or public health. Institutions that do not interact with the outside directly but may learn from external experience may develop some links with the outside world. Process control applications may never have such links, at least not directly. Yet even process control applications may not be immune to influence. For instance, a toy company may have its agents cruise the Web in search of data to help it predict the hot trend for next Christmas. The results of its search will change what it makes, thus its production mix, and thus how it schedules and supplies its production lines -- a classic process control application.
Today, computers process information using tight algorithmic logic from a carefully controlled set of inputs -- the way they are supposed to. Tomorrow, computers may use logic-processing or neural network techniques to sift through a trusted set of material and bring things to human attention. The day after, so to speak, computers may reach and present conclusions based on material of widely varying reliability. Thus a military intelligence system may evaluate the relative security of a village through a combination of authenticated overhead imagery, an analysis of potentially spoofable telephone switch activity and overnight police reports, and the collective detritus of written gossip from electronic community bulletin boards. A traffic controller may set stop-lights based on monitored traffic flows, plus a forecast of the traffic loads that selected large events may generate based on what their organizers predict. A drug company may put out usage advisories to doctors based on combing through databases of hospital treatments and outcomes. Individuals may plan their evenings out based on the suggestions of their agents trolling Web sites that list entertainment offerings, cross-correlated with less formally established "what's-hot" lists.
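To make the flavor of such fusion concrete, consider a minimal sketch -- purely illustrative, with made-up sources, threat levels, and trust weights -- of how a system might combine inputs of widely varying reliability into a single judgement, along the lines of the village-security example:

    # Illustrative sketch only: fuse reports of varying reliability by
    # weighting each source's threat estimate by an assumed trust score.
    # Every source name, value, and weight below is hypothetical.
    SOURCES = {
        # source: (reported threat level 0..1, assumed reliability 0..1)
        "authenticated_overhead_imagery": (0.2, 0.95),
        "telephone_switch_activity":      (0.6, 0.50),  # potentially spoofable
        "overnight_police_reports":       (0.4, 0.70),
        "bulletin_board_gossip":          (0.8, 0.20),  # written gossip
    }

    def fused_threat_estimate(sources):
        """Reliability-weighted average of the reported threat levels."""
        total_weight = sum(rel for _, rel in sources.values())
        return sum(level * rel for level, rel in sources.values()) / total_weight

    print(f"Fused threat estimate: {fused_threat_estimate(SOURCES):.2f}")

The point of the sketch is not the arithmetic but the dependence: whoever can nudge the low-trust inputs (or, worse, the trust weights themselves) nudges the judgement.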
Needless to add, there will be people who find it worthwhile to corrupt such systems. They may not all be criminals; indeed, many of them would classify themselves as basically honest gamesmen eager to put the best face on the world's data-stream in order to influence the judgements of Web users. Some Web sites are crafted so that their URLs appear at the top of search lists generated by automatic tools (e.g., Altavista). It would not be surprising to see the less scrupulous engage in many of the same fraudulent techniques of misrepresentation and lying that today's con artists do. Indeed, for a long time to come, computers will be easier to con than people will. Thus, a flow of false information on patient histories reflecting the efficacy of a licensed drug may subtly shift a system's recommendations toward another drug (bad reports of which are buried). A series of reports on the flow of loans to an otherwise suspect borrower may convince rating agencies to boost its reputation and permit it to garner loans that it otherwise would not deserve (and has no intention of paying back).
Once you have this picture of the future, it does not take much imagination to forecast what else may go wrong. So here goes:
Learning Systems and Neural Net Computing: For the most part, one can determine why a system does what it does by examining its source code to learn what it does and querying its coders to learn why. Tomorrow, that process may not be so easy. Computers may learn from experience. If they use traditional AI, they will build and refine a database of rules that governs the logic by which they draw conclusions. If they use neural nets, they will steadily tune their association matrices against real-world test sets using backward propagation.
All this is well and good -- until something goes wrong. If the experience base upon which learning systems or neural nets draw has been corrupted, then their conclusions will be similarly corrupted. Fixing them will be costly. Corruption must first be detected, and that may be possible only by observing poor decisions. What if the corrupter alters the input so that the system makes a bad decision only when it encounters a very specific input set? Unless one knows the input set, the corruption may be completely masked. Even if the nature of the corruption is determined, one must still estimate when the system was corrupted. Only then can the system be reset to its last known uncorrupted state, and only if one has archived such parameters. Otherwise, the system has to be returned to its original settings, which means losing all the learning that has taken place since it was turned on.
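A toy sketch of the archive-and-roll-back idea, assuming nothing more than a one-parameter "learner" and an arbitrary detection rule, all of it hypothetical:

    import copy

    class ToyLearner:
        """Stand-in for a learning system: one parameter tuned by experience."""
        def __init__(self):
            self.weight = 0.0                 # the original settings
            self.checkpoints = [self.weight]  # archived uncorrupted states

        def learn(self, observation):
            # crude experience-driven update (stand-in for rule refinement
            # or backward propagation)
            self.weight += 0.1 * (observation - self.weight)

        def checkpoint(self):
            self.checkpoints.append(copy.copy(self.weight))

        def roll_back(self):
            # restore the most recently archived state; falling all the way
            # back to checkpoints[0] would mean losing every bit of learning
            self.weight = self.checkpoints[-1]

    learner = ToyLearner()
    for clean in (1.0, 1.1, 0.9):
        learner.learn(clean)
    learner.checkpoint()                      # archive known-good parameters
    for poisoned in (9.0, 9.5, 10.0):         # corrupted experience slips in
        learner.learn(poisoned)
    if learner.weight > 2.0:                  # corruption shows up as bad decisions
        learner.roll_back()
    print(f"restored parameter: {learner.weight:.2f}")

The sketch glosses over the hard part -- knowing that, and when, corruption occurred -- which is exactly the point of the paragraph above.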
Software Agents: One of the ways integrated open systems work is by letting clients place software agents on servers. These software agents, in effect, query servers, in some cases periodically. Sometimes results go back to the client; in other cases, an agent or a part of an agent may be sent forward to other sites. For instance, a travel agent, so to speak, interested in putting together a meals-lodging-entertainment-and-transportation package may be split into four sub-agents, each of which then goes out looking for the best deals in its area and returns candidates so that the master agent can fit them together.
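A small fan-out/fan-in sketch of that travel-agent example, assuming each sub-agent is just a function querying a (here, hard-coded) source of offers; the names and prices are invented:

    from concurrent.futures import ThreadPoolExecutor

    OFFERS = {  # stand-ins for the remote servers the sub-agents would visit
        "meals":          [("bistro", 40), ("diner", 25)],
        "lodging":        [("hotel", 120), ("hostel", 60)],
        "entertainment":  [("theatre", 70), ("cinema", 15)],
        "transportation": [("train", 55), ("bus", 30)],
    }

    def sub_agent(area):
        """Each sub-agent hunts for the best (cheapest) deal in its own area."""
        return area, min(OFFERS[area], key=lambda offer: offer[1])

    def master_agent(areas):
        """Send the sub-agents out, then fit the returned candidates together."""
        with ThreadPoolExecutor(max_workers=len(areas)) as pool:
            picks = dict(pool.map(sub_agent, areas))
        total = sum(price for _, price in picks.values())
        return picks, total

    package, cost = master_agent(list(OFFERS))
    print(package, "total:", cost)

In a real open system the sub-agents would be code shipped to and executed on other people's servers, which is precisely where the trouble described next begins.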
In theory, anything an agent can do may be done by successive queries, but the use of agents saves on message traffic (especially for "tell-me-when" requests) and permits in-server consolidation. Badly designed agents, whether by not-so-benign neglect or malign intent, may cause some servers to cycle forever in search of an answer, set up a mutually destructive server-to-server harmonic, or replicate without bounds and clog entire networks. Malevolent agents may look for certain outgoing mail and then work their way into the systems from which it originated. If mischievous enough, such agents could have a chilling effect on free network speech. Indeed, network administrators are already beginning to understand that bringing a network to its knees does not take many automatic E-mail responders: all it takes is one message sent to at least three vacationing individuals whose mailers each echo an "I am on vacation" message to everyone on the original mailing list.
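A toy calculation of that responder loop, under the naive assumption that every vacationing member's mailer blindly echoes a reply to the entire list for every message it receives (real responders that reply only once per sender break the loop); the list size and round cap are arbitrary:

    LIST_SIZE = 20
    RESPONDERS = 3        # members on vacation with reply-to-all responders
    ROUNDS = 6            # stop the toy model before the numbers get silly

    incoming = 1          # the one original message, delivered to everyone
    deliveries = incoming * LIST_SIZE
    for round_number in range(1, ROUNDS + 1):
        # every message that arrived last round provokes one reply from each
        # responder, and each reply goes to the whole list
        replies = incoming * RESPONDERS
        deliveries += replies * LIST_SIZE
        incoming = replies
        print(f"round {round_number}: {replies} replies, "
              f"{deliveries} deliveries so far")

Three responders are enough for the traffic to triple every round; the network chokes long before anyone returns from vacation.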
Hijacked agents or agents forwarded to successively less trustworthy servers ("a friend of a friend of a friend ...") may relay misleading or dangerous information back. In a sufficiently rich network, replicating agents may be carriers of replicating viruses whose eradication may be fiendishly difficult if there are niches in the network ecosystem that cannot be identified or do not cooperate.
Ecologies: The reference to ecosystems was deliberate. Beyond some point of complexity -- and many systems seem to have reached that point -- systems become extremely difficult to control, in the sense that one can no longer predict the performance of the whole by understanding the performance of each part. Even perfect understanding of the components cannot predict all fault modes in a succinct way. Although today's hacking requires active intruders, much of the risk and tedium associated with malicious hacking can be transferred to bots -- pieces of code that wander the Net looking for a computer to roost in. A single hacker unleashing a flood of junk to clog a system can be traced and stopped, perhaps even caught. Similar effects, however, may be generated by implanting a virus in thousands of systems. Turning on one or two (e.g., by including a key word in otherwise innocent text) can simulate a chain letter on steroids: each infected system turns on another, and thousands of sites flood the target system well before their owners know what is going on.
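A back-of-the-envelope model of that chain letter on steroids, with purely illustrative numbers: a dormant implant sits on thousands of systems, a single trigger wakes one of them, and each active system both floods the target and wakes another:

    IMPLANTED = 10_000     # systems already carrying the dormant code
    JUNK_PER_STEP = 50     # junk messages each active system sends per step

    active = 1             # the one system turned on by the key word
    for step in range(1, 15):
        flood = active * JUNK_PER_STEP
        print(f"step {step:2d}: {active:5d} active systems, {flood:7d} junk messages")
        active = min(active * 2, IMPLANTED)  # each active system turns on another

Unlike the lone hacker, there is no single point to trace: the flood arrives from thousands of ordinary machines whose owners have no idea what their computers are doing.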
Mother Nature long ago gave up trying to govern complex systems by imposing behaviors on each component. Instead, ecologies carry on from one year to the next through a mix of competition, diverse niches, predator-prey relationships, and, above all, a facility for adaptation and evolution. Tomorrow's networks may need some way to maintain homeostasis by populating themselves with analogous instruments. Needless to add, no one will get the proper combination and distribution of such instruments right the first time. It is not much of a leap to see how such instruments may be used by evildoers to bring systems down to the silicon version of extinction, at least temporarily.
A New Approach: All this suggests a completely different approach to information security. Consider how complex humans are compared to computers and how they, like computers, network themselves in order to exchange knowledge. Yet, for the most part, it is difficult for humans to pass disabling bitstreams among each other. Few word combinations can cause a rational person to seize up, spin cycles endlessly, or flood friends and acquaintances with pointless communications. Why? As noted earlier, humans accept inputs at the semantic rather than syntactic level; messages are converted into specific meanings and then processed as such. Information and commands are screened for harmful nonsense.
Yet, as any observer of human affairs can attest, cognition does not prevent certain messages from achieving widespread and deleterious effects. Ideas grip the imagination and spread. Some induce "extraordinarily popular delusions and the madness of crowds" (indeed, "information warfare" is, itself, such a meme). This is where education, notably classical education, comes in. People are trained to externalize and objectify information and view it as something separate from the self; to examine information critically and thereby avoid the twin hazards of excessive naivety and total cynicism. Rhetoric deals with the proper inference of intentions from statements. Logic deals with generating correct implications therefrom. Language is taught as a precision tool so that information can be passed and understood with clarity. Etiquette governs what people say (and how intrusively) and inculcates propriety so that people keep private information private. Thus can humans interact with others, even strangers, to form functioning organizations and other relationships.
Building such sophistication into silicon will not be easy, either technically or socially. To exchange information requires conformance to shared norms at several levels: of meaning, of intention, of context, and of behavior (e.g., what constitutes a legitimate piece of information). Without shared norms -- standards, as it were -- there is no meaning, only programmed reactions. As long as that is true, it will be difficult to defend information systems from a potentially polluted environment except through mechanisms antithetical to the way we would like to see ourselves. That is, we will have erected castles, moats, and gates, and enlisted sympathetic brigands to defend ourselves against cyber highwaymen.
Computers cannot be made consistently more powerful and remain safe unless they are endowed with the power not only to input and output bytes but also to understand them. But having taught them to listen and speak, we must simultaneously teach them proper manners.
Recapitulation: The moral of the story will be, if not familiar, then at least recognizable.
The old model of information security is hard and closed: users, processes, and code are either licit or illicit. There is no grey area. The trick is to differentiate between the two, give the former just enough liberty to do their job, and close the system off to the rest. This model is not entirely bad; indeed, one ought not to run a military C2 net, nuclear power plant, air traffic control system, telephone switch, or funds transfer system any other way.
The new model of information security is open and fuzzy. It is necessarily so because the systems that we want both to exploit and to protect must be open to the outside world and learn from it. One therefore needs a security model that recognizes that fact and makes continual judgements about the validity and reliability of the information it ingests. All of this presumes, of course, that the core functions are lock-tight, and that, in turn, requires that the architecture of what must be protected be both simple and transparent. Ironically, that is the key to letting systems deal with ambient complexity and opacity. The first challenge ought to be simple; regrettably, we have made it complicated, but, one hopes, not too complicated, thereby leaving us with the really fun complex security problems of the future.