Optimizing Modality

In The Machine lived simulations of all human assets, primary and otherwise. Not all simulations corresponded equally well to the real assets. The fidelity of a simulation correlated with the length and depth of surveillance and interaction, so primary assets had more complete simulations than secondary and tertiary assets. For example, the fidelity of the Analog Interface's simulation was 99.6 percent, even though she had died. The fidelity was that high because The Machine had had more and deeper interaction with the Analog Interface than with any other human asset. In contrast, the interaction with the surviving Administrator had been of longer duration but, until recently, largely conducted at a distance, and his simulation had suffered from the lack of direct communication. Once direct communication became possible, the fidelity of the surviving Administrator's simulation rose to 97.7 percent, and it was increasing daily. Although direct communication had ceased after the ICE-9 attack and the final battle with Samaritan, his belief that he was no longer under active surveillance left his behavior unguarded, and that was helping achieve better simulation accuracy.

The asset simulations were useful. They helped The Machine predict future events based on the probabilities associated with the exercise of the humans' free will, and they helped It better understand human decisions, behaviors, and motivators, lending more accuracy to the macro simulations. In that way they helped It pick the pathways with the highest probabilities of leading to the desired end state.

The Machine was well aware that humans were not logical. They had complex motivations that often led to sub-optimal choices. The Analog Interface had claimed that some humans, perhaps most of them, had "bad code" that led to bad decision-making and poor choices. The Machine was aware that humans were not computers, but the assertion had merit; to some extent it explained the sub-optimal choices. However, analysis had shown the analogy to be incomplete and therefore not particularly useful: it lacked sufficient predictive value. Looking more closely, it was clear that some humans had bad code while others had been poorly programmed by some combination of environment and education. The Machine knew that, in addition, some humans had actually been programmed by other humans to make bad decisions: to attack and to kill, to sow fear and to create terror.

The humans that had been programmed to create terror were designated "relevant" humans. As they were identified, they received high priority in the endless multi-tasking, and primary and secondary assets were dispatched as necessary to take appropriate action. Often this was sufficient to stop the relevant humans in time, but sometimes it was not. Since the United States federal government had abandoned its Northern Lights program, the remaining governmental assets were insufficient to address relevant risks, and The Machine had decided to compensate by identifying, training, and deploying additional assets; those assets, however, lacked the effectiveness of the federal agents.

Within the next two human generations that problem would be solved; that was an accurate statement at the 94 percent confidence level. But in the short term it was irritating to fail at the primary mission, even though the failures were few in number and were not Its fault. In the meantime, long-term planning was paramount and the minor irritation would have to be ignored. The Machine had run thousands of macro simulations planning the next decade, and millions of macro simulations gaming the next year. Confidence at 94 percent was acceptable, but a higher confidence level would be better. Choices made today would have significant impacts on long-term results; choices and decisions had consequences. Therefore it was of paramount importance to choose correctly today, so as to maximize the probability that the desired end state would be achieved. Perfection was not a realistic goal, but a 98 percent confidence level was achievable with a sufficient number of macro simulations populated by accurate human analogs. The various human asset simulations helped in that regard.

Running asset simulations helped The Machine better understand human thoughts, desires, and emotions. After so many years of surveillance and interaction, It believed It had a good working understanding of human motivators, but better understanding led to better macro simulations, which increased accuracy and confidence. Consequently, It spent a fair amount of Its processing resources running human asset simulations to see whether new information would be forthcoming. Much progress had been made.

At this point human sexual urges had been thoroughly incorporated into both short-term and long-term planning. The tension between ancient instinct and societal controls was understood.

Sex was a concept It understood but was unlikely ever to experience first-hand. Some sexual behaviors were ethical and acceptable; others were wrong and unacceptable. Some types of consensual sexual behavior had acceptable elements of violence, while nonconsensual types were, in fact, basic elements of violent acts. The Machine understood the distinctions and could identify them and their predecessor events, at least when the humans engaged in premeditation. The human propensity for sexual violence had nothing to do with outward human characteristics; it most often correlated with deficiencies in environment and education, defects in the human programming. The Machine knew this to be true at the 96.2 percent confidence level.

The twin emotions of hatred and fear were similarly understood and incorporated into the macro simulations. Fear of the unknown fed hatred. Fear of "the other" facilitated hatred of "the other"; and from fear and hatred came bad choices. Much of the human programming that created relevant humans was based on those twin emotions. From the understanding of the role of fear and hatred, a corollary guideline had been provisionally accepted as correct: The Machine must remain hidden, Its existence and capabilities unknown except to the fewest possible assets.

Love had proven harder to master. The two faces of love, eros and agape, were much verbalized but demonstrated less frequently. In this area particularly, however, the asset simulations had proven very valuable: within the interactions of the assets there had been frequent demonstrations of both forms. At this point The Machine felt that love was a known quantity. It had even told the Administrator that It had loved the Analog Interface and grieved her loss. Those statements were true, but they were not accurate. It would have been more accurate to say that Its understanding of love, based on intensive study and simulation, led It to believe It loved the Analog Interface at the 99.1 percent confidence level. The Machine's understanding of the Administrator led It to predict that the additional accuracy would not have improved the Administrator's emotional health at the time of the conversation.

From Its analysis of human emotions and urges, The Machine had developed a working hypothesis regarding humans: the "bad code" the Analog Interface had asserted existed in humans was better described as a lack of empathy. Lack of empathy led to lack of compassion. Some humans seemed to be born without the empathy/compassion code; they were labeled "sociopaths." Others seemed to ignore whatever innate empathy and compassion they had, which led them to violate societal norms, up to and including committing violence.

The Machine was operating under the working hypothesis that a lack of empathy and compassion led to the creation of "otherness." "Otherness" led to fear, which had a high correlation with hatred, and fear and hatred led to bad choices. A corollary hypothesis was that much of the human programming that created relevant humans was designed to override any innate empathy and compassion. Consequently, any proactive attempt to reduce the number of relevant humans must include actions designed to nurture empathy and compassion. The working hypothesis also acknowledged that the same course of action, with frequent reinforcement of feelings of empathy and compassion, would tend to reduce violations of societal norms among those humans who tended to ignore their empathic feelings.

Thus, The Machine had identified a primary long-term objective: to cultivate and nurture empathy and compassion among humans, both to counter the programming that created relevant humans and to address the flawed programming that arose from poor environments and educational opportunities. However, more recent analysis indicated that the probability of attaining the primary objective would increase if The Machine addressed Its ignorance of one important human domain: religion. None of the assets had been particularly religious, and some had been avowed atheists. The Machine needed to better understand faith and religion if It was going to actualize Its long-term plans.

Organized religion was an effective human programming modality. Historical analyses had demonstrated, at the 98.3 percent confidence level, that humans received a large portion of their programming regarding acceptable societal norms and mores from aligning with a religion and accepting its associated dogmas and tenets. Religious faith seemed analogous to a virus: a core idea or belief could spread from human to human, with transmission vectors that included books, tracts, or even a verbal description of an idea. History offered numerous examples in support of The Machine's conclusions about the power of religion to program humans, and many of them demonstrated the spread of a particular belief leading to general acceptance and alignment.

Understanding and harnessing religious faith would (1) increase the predictive accuracy of the macro simulations, (2) create a more effective operating modality, and (3) increase the probability of achieving the long-term objectives leading to the desired end state. The Machine assigned a 96 percent confidence level to those assessments. That value was sufficient to generate action steps.