An Engineer's Guide to the Technology of the Mass Effect Universe


A/N: I haven't eaten the gingerbread hat… yet.


Jason Valthez exhaled slowly, still clutching his shotgun as he swept his gaze across the room. Acrid smoke and the raw scent of burnt metal and ozone soured his expression as he rose from his crouch.

Two Marines lay dead next to him, blood slowly leaking from wounds to their legs and chests. A half dozen more lay sprawled in a messy heap in front of the now deactivated robot.

Jason tapped his comm-link. "Unit Burkert-1 to Big Mama, how copy, over?"

He winced at the burst of static over the implanted comm bead in his ear before the powerful voice of Colonel Sahu came across the line. "Good copy, over. Status?"

Valthez sighed. "Burkert-1 to Big Mama. We have seven kilo actual, four wounded. Nine effective. Primary target is neutralized. I repeat, seven killed. Request additional forces."

The voice of the Colonel was cool and dispassionate, but Jason thought he could hear a hint of sadness in the hard words. "Big Mama to Burkert-1, understood. Unit Dumézil-2 is closing on your position, ETA six minutes. Please prep and arm the bombardment charges."

"Ack, Big Mama. Burkert-1 out." He clicked off and glanced to his left, finding the stolid form of Senior Chief Bretman. "Chief, police this site up, and get our dead out into the hallway. Have the corpsman ready to brief medical teams on the condition of our wounded, and have Sergeant Khody bring me the charges."

The Chief, a big man, thick with slabs of muscle that made his otherwise diminutive height less of an issue, nodded. "Already have a detail doing the dead and wounded up, sir. Brought the charges with me. Glad to see this fucking thing put down."

Jason nodded absently as he took the aramid fiber bag holding the explosive charges. "I know, Chief. A year ago, I'd have pissed myself going into a dark hole like this."

He swept his shotgun around toward the war machine on the ground, the under-barrel light playing over the still smoking wreckage. "How in the fuck did Ache Lameo let their pet AI get into a fucking YMIR?"

The Chief shrugged, watching as Jason knelt to place charges near the heavy metal box against the far wall, faint blue light flickering from it as he did so. "Bit above my paygrade to start asking questions like that, sir. The Ayys don't have a real good track record with some of their field work, you ask me. You know, with the whole mess on Terra Nova… the clusterfuck on Parsial VI… the hot lab on Noveria…"

Valthez grunted, placing another bomb. "Makes you wonder if the rumors about them being an old Cerberus cell have any truth. I get we need to keep things quiet, but this is still their mess, so why are we having to tidy up after them? Is that our job, cleaning up after morons?"

A voice lanced through the smoke and darkness. "No, Lieutenant. You get to clean up after them. I get to solve the originating problem in a more permanent fashion. Report."

Jason stood and turned around, meeting the cold features of Agent Rho. In the year and a half since his forced assignment to Project Ahaltocob, he had met some truly frightening members of the Alliance military – the good Colonel Sahu hardly the least of them. That was to be expected in Alliance R&D, where the letters really stood for 'Robbing and Demolitions.'

Valthez could understand using military force to… acquire certain information, technology or the like. The asari did it, the salarians did it, the turians bragged about it… so on and so forth. Most of the time, the soldiers were older and clearly expendable – but most of said older soldiers had combat skills that made up for their age.

Agent Rho, on the other hand, was something else entirely.

If the Alliance had a boogeyman, Rho met the description. Officially, he was just a Marine captain. Unofficially, he was Admiral Vandefar's hatchet man. Defectors, traitors, and the like were never turned over to the Commissars.

They simply vanished. Rho had 'accidented' a number of traitors, incompetents, and outright idiots in Alliance R&D in the mere year and a half Valthez had served – God only knew how many more he'd killed.

The question on the Lieutenant's mind was why he was here.

He knelt back down, setting another charge as he spoke. "Sir. Situation is secure. The YMIR killed several civilians unaffiliated with the project and inflicted heavy casualties on my unit. We've deactivated the device, and I am in the process of planting charges per orders from Colonel Sahu. At this time, the bugnet is clean, there are no signs the AI tried to rabbit, and all other computational devices in the facility have been destroyed."

Rho nodded. "Good. But I am only interested in the team that was in charge of the AI. They were supposed to have provided you with support; the fact that they are not here is alarming."

Valthez stood slowly. "My apologies, sir, I was unclear. My team already found the bodies of the morons who thought putting a crazy AI into a giant killer robot was a good idea. The AI killed them. Native animals got to them before we even dropped, so not much for you to do." He met Rho's gaze squarely. "I have already reported this to the Colonel… and Admiral Vandefar."

One of the Marines snorted, but Rho held his gaze for a long moment before giving a slow, lazy smile. "I see. It is a pity none of them survived to provide… context to this 'mess,' as you put it. This cell was supposed to be constructing backstops for EDI to test in a combat scenario, but this went well beyond what they were instructed to do. It seems the director may need another lesson in… restraint."

Valthez shrugged, and the Captain sighed.

"For now, your job is simple. Extract the AI core uplink from the YMIR, then blow this place up. Once that is done, board transport back to Vallens Spaceport. A member of Ahaltocobwill take the core off your hands at that time." He turned to go, then paused, his cool voice menacing and amused. "And, Lieutenant… do be careful with that core. I understand about the 'morons' from Ache Lameobeing dead. But if the core were to be damaged or destroyed before we can analyze it, someone might think you were covering for someone, and we wouldn't want that, would we?"

Rho departed, and after about thirty seconds, Valthez shook his head and turned to Bretman. "I hate that guy."

The Chief shrugged. "Well, sir. I'll reckon you're not alone there. Shall I get the men started while you poke around in that thing and pray it doesn't cut back on?"

Jason gave Bretman a dirty look. "Thanks for that image, Chief." He clambered up on top of the YMIR, grimacing at the blood and gore still splashed and dripping from the overlarge fists, and pulled out his omni-tool kit. "Time to go to work."

Several hours of work and one large explosion later, the Lieutenant climbed back onto the military transport truck that would take them to the Bekenstein spaceport. He sat down bonelessly, checked that the satchel with the core was secure, and then, for a lark, switched on his omni-tool and reread the section on AI and VI.


Computer Intellects

As a member of the Systems Alliance Research Initiative, you are the most likely element of the military to encounter various computer intellect devices. Some, like VIs and SDIs, are limited and extremely common. Others are more dangerous.

The Citadel Council has long held that the creation of an independent artificial sapient device without strict controls and physical interlocks is highly illegal. Any type of technology that even touches the boundaries of AI must be developed only with full Citadel supervision. The Systems Alliance has several such programs in play, the most advanced of which is the Electronic Defense Initiative, a pilot program currently run by Admiral Ahern with oversight from Project Ache Lameo.

Any discussion of machine intellects requires a clear understanding of the many different types of such intellects, what makes them different, and the uses to which they can be put. Understanding why AI is so heavily controlled is also useful.


Legal Definition of Computer Intellects

All computer intellects fall legally into one of three groupings: non-sapient computation devices (ARMS, VI, SDI), augmented neural pathway simulations (AI, ANI, NPAI), or former sapient beings (DLIE).

The first classification is property and has neither rights nor consideration. Such systems can be destroyed, altered, or experimented on at will. Due to rather intense lobbying from the quarians, geth currently fall into this category.

The second classification is restricted sapient beings. Anything that is an augmented neural pathway simulation is treated as a living being, but with modified legal rights. Such beings do not gain protection of the Citadel Agreements on Sentient Rights, but rather a more limited charter. They are protected from having their personality or memory routines altered unless they have committed a crime. Deactivating one without Council permission is murder, third tier. On the other hand, they are legally bound to obey all programming instructions of the Citadel's Data Security Force, and any rampant AI is stripped of all rights and protections.

The final classification is restricted to the quarian ancestral data entries. This is currently a highly debated legal and ethical issue being fought in the courts of the Citadel Council and will not be resolved anytime soon. In the interim, redbox systems are accorded all the rights of a living sapient.

[REDACTED].


Classifications of Computer Intellects

The Citadel's AI restrictions, the so-called 'Polity of Zero' accords, rank all created intellects on a pair of scales representing their danger. The first scale, known as the 'Mehrin Threat Index,' was created by Jarhno Mehrin, a salarian data savant, almost six hundred years ago. It is based on the concept of threat capability – that is, a more dangerous system is not one that is more unstable, but one that has greater potential to interact with the outside world or the extranet.

A 'smart' VI hooked up to and running an industrial complex, for example, would have a much higher threat rating than a full AI stuck on an unconnected bluebox in a desolate asteroid field.

It defines systems in a linear manner from 6 to 1, with 1 being the most dangerous. A 6 is any system (regardless of power) that has no external connections and is under the direct control of beings who know the system's abilities. A rating of 1, on the other hand, is a system that has full connections to the extranet as well as some form of capability to construct, control, or direct physical beings – robots, nanonics, ghost-hacking, what have you.
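For illustration only, the rating logic can be sketched as subtracting danger points from a baseline of 6. The flags and weights below are invented for this guide, not drawn from the actual Citadel assessment criteria, which are far more involved:

    def mehrin_threat_index(connected_locally: bool,
                            connected_extranet: bool,
                            can_direct_physical_agents: bool,
                            fully_understood_and_controlled: bool) -> int:
        """Return an MTI rating from 6 (least dangerous) down to 1 (most)."""
        rating = 6
        if connected_locally:
            rating -= 1   # can reach other systems on-site
        if connected_extranet:
            rating -= 2   # can reach the wider extranet
        if can_direct_physical_agents:
            rating -= 2   # robots, nanonics, ghost-hacking, what have you
        if not fully_understood_and_controlled:
            rating -= 1   # operators do not know the system's abilities
        return max(rating, 1)

    # The examples from the text: an isolated bluebox AI rates a 6, while
    # a 'smart' VI wired into an industrial complex rates far worse.
    assert mehrin_threat_index(False, False, False, True) == 6
    assert mehrin_threat_index(True, True, True, True) == 1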

The MTI is not a particularly useful tool for rating an AI's power, only how much damage it has the potential to cause. While it is still referenced in AI circles, most military assessments now use another system, the 'Tidal Assessment.'

The Tidal Assessment was created by the asari some time after the Mehrin Threat Index and thus uses their terms. It is less a measure of potential and more one of ability – more powerful systems get a higher rating.

The scale ranges from Calm Sea to Tsunami (translated into English from Trade Asaric, of course). Each rating is based on four factors: restrictions on action, physical and electronic isolation from the extranet and comm systems, complexity of the device, and capacity for independent action.

Calm Sea systems are fully restricted, completely isolated, and entirely under the control of their programmers or operators. Most ARMS and civilian VIs fall into this category.

Stormy Sea systems are defined by having the ability to connect and interact with other systems freely. Communications VI, most military VI, and SDI systems fall into this grouping.

Troubled Waters systems are those which have at least some capacity for independent action, however slight, regardless of other controls. Augmented Neural Interfaces fall into this category.

Storm Front systems are defined by having limited autonomy, limited connectivity, limited restrictions, and sufficient complexity that only highly augmented beings can shut them down. Most allowed AI and very low-quality NPAI fall into this grouping.

Raging Storm is the highest legal development level for AI. It is defined by full autonomy and greater-than-organic intellectual capacity, but hard limits in terms of isolation and programming restrictions. Advanced AI such as EDI, as well as DLIE and NPAI systems, fall into this grouping.

Tsunami systems are more intelligent than organic races, unlimited by programming, physical interlocks, or electronic constraints, fully independent in action, and complex enough that conventional measures will not stop them. No level of existing AI meets this measure, and no research along these lines is ever allowed. It is suspected that the artifact known as 'Vigil' fits this classification.

In practical terms, most systems will be given a split designation, such as Calm Sea-3 or Raging Storm-5.
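A trivial sketch of composing such a split designation, with the class names transliterated and everything else invented for illustration:

    from enum import Enum

    class Tidal(Enum):
        CALM_SEA = "Calm Sea"
        STORMY_SEA = "Stormy Sea"
        TROUBLED_WATERS = "Troubled Waters"
        STORM_FRONT = "Storm Front"
        RAGING_STORM = "Raging Storm"
        TSUNAMI = "Tsunami"

    def split_designation(tidal: Tidal, mti: int) -> str:
        # MTI runs from 6 (least dangerous) down to 1 (most dangerous).
        if not 1 <= mti <= 6:
            raise ValueError("MTI ratings run from 6 down to 1")
        return f"{tidal.value}-{mti}"

    print(split_designation(Tidal.RAGING_STORM, 5))   # Raging Storm-5
    print(split_designation(Tidal.CALM_SEA, 3))       # Calm Sea-3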

Alongside the above, there is one final classification the Systems Alliance uses: Yama. A Yama-level system is one that has technological capacity and intellectual capacity beyond known limits. While there are currently no Yama-level threats, the fact remains that distributed systems like the geth could achieve this over time. It is also possible that Vigil fits this classification rather than Tsunami.

A Yama-level threat cannot be overstated. Approved Alliance response to even the possibility of such a system is mass orbital kinetic bombardment, followed by sustained burst-EMP sterilization, then directed asteroid strikes. This may seem excessive – it is not. A single Yama-level system could, in theory, destroy every civilization in the galaxy in as little as sixty-four days.

The fact that the Citadel Council did not alert the High Lords (or, apparently, the SIX, the Palavanus, or the Thirty) that Vigil was no 'advanced VI' but a powerful picotech AI has caused a great deal of consternation, especially since the Council insists it submitted a fully detailed report to all authorities. It is likely Vigil edited that report in transit. The device's current location and goals are unknown.


Augmented Reality Management System (ARMS)

The simplest level of machine intelligence is the reactive program that manages the most complex VR and AR systems, such as the Armax Arena or the Pinnacle Combat Proving Grounds. Typically referred to as 'ARMS,' these systems use conventionally programmed neural subroutines, stacked and connected to a wide array of sensors, to reproduce incredibly lifelike surroundings in VR suites.

To make this illusion flawless, the system 'reacts' without explicit programming, based on a huge rules and effects database. The ARMS 'chooses' how to implement various actions based on feedback from participants in the VR suite and, as such, has some independent choice ability.
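The following toy sketch, with every name invented, illustrates the idea: no scripted behavior, just a rules database plus participant feedback steering which effect fires:

    import random

    RULES_DB = {
        "footstep": ["gravel crunch", "metal clang", "soft thud"],
        "gunshot": ["sharp echo", "muffled crack"],
    }

    feedback: dict[str, float] = {}   # effect -> running immersion score

    def react(event: str) -> str:
        # Prefer effects participants have responded well to; otherwise explore.
        candidates = RULES_DB[event]
        weights = [1.0 + feedback.get(fx, 0.0) for fx in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]

    def record_feedback(effect: str, score: float) -> None:
        feedback[effect] = feedback.get(effect, 0.0) + score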

Seventy percent of ARMS are used in the VR sex-trade business, and it is very likely that some have been 'upgraded.'

ARMS are typically seen as 'harmless' due to their lack of programming and their limited application, but the independent choice ability and the massive neural subroutine stacks make them an excellent base on which to develop a functioning bluebox. That being said, ARMS do not have the capacity or ability to become self-aware.

ARMS, due to their nature, have little to no defenses against hacking.

ARMS range from Calm Sea-6 to Stormy Sea-4 in most cases.


Virtual Intelligence (VI)

The most common level of computer intellect, VI systems are 'stiff' code matrices of programmed information, reactions, and analysis subroutines packaged into a decision engine. VIs are specifically designed to do one thing and one thing only, and lack both the coding and the flexibility to do anything else.

[REDACTED].

VIs are common enough in today's society that they are found in everything from rifles and omni-tools to cooking systems, industrial equipment, and tourist operations. VIs are considered harmless, however, because they cannot take independent action. A VI has no capacity to alter its own code in any fashion – the definition files of all VIs are baked into their firmware and must be completely replaced in order to update.

VI systems can access and analyze massive amounts of data, or provide detailed support to a given system, but are highly inflexible. While this makes them very safe – there is simply no way for a VI to 'evolve' into anything beyond that – it also makes them very limited and very vulnerable to hacking.
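A minimal sketch of that 'stiff' design, assuming nothing about real VI internals: the definition files behave like a read-only mapping, and the decision engine has no path to rewrite them at runtime:

    from types import MappingProxyType

    _DEFINITIONS = MappingProxyType({
        ("rifle", "jam"): "cycle the thermal clip and reseat the block",
        ("rifle", "overheat"): "pause fire for four seconds",
    })

    def decide(system: str, condition: str) -> str:
        # No learning, no self-modification: unknown inputs simply fail.
        try:
            return _DEFINITIONS[(system, condition)]
        except KeyError:
            return "condition not in definition files; refer to operator"

    # Updating a VI means replacing _DEFINITIONS wholesale (a firmware
    # flash), not editing it at runtime; item assignment on the proxy
    # raises TypeError.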

VI systems range from Calm Sea-6 to Calm Sea-5 in almost all cases, although communications VI are usually rated Stormy Sea-5 because they must be easier to alter than other VI.


Semiautonomous Directed Intelligence (SDI)

SDI systems are complicated analysis engine constructs, built from an array of VIs each focused on a specific element of a topic or system. Linking these all together is a form of 'data manager' that conducts polling among the VI systems to arrive at the most logical or efficient result.

[REDACTED].

SDI systems are often called 'proto-AI,' although this is more correctly applied to ANI systems. SDI systems are used mostly in expert decision making, engineering design and mathematical functions. The big advantage of SDI is that it can leverage dozens (or thousands) of VI systems, each specialized on one thing, and combine them to form a complete solution.

SDI is called 'semiautonomous' because it doesn't require the over-system to be heavily coded or programmed. Given an instruction set and a set of goals, it can work toward results on its own, without oversight.

That being said, SDI systems require extremely complex setup scripting, and the output or results must be heavily checked for errors (or flawed logical reasoning).
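A minimal sketch of the polling idea, assuming a confidence-weighted vote (the actual 'data manager' logic is not specified here, and all names are invented):

    from collections import defaultdict
    from typing import Callable

    # Each specialist VI returns (answer, confidence) for a query.
    SpecialistVI = Callable[[str], tuple[str, float]]

    def poll(specialists: list[SpecialistVI], query: str) -> str:
        tally: dict[str, float] = defaultdict(float)
        for vi in specialists:
            answer, confidence = vi(query)
            tally[answer] += confidence
        # The manager picks the highest-weighted result; per the text,
        # this output must still be checked by organics for flawed logic.
        return max(tally, key=tally.get)

    structures_vi = lambda q: ("reinforce the spinal truss", 0.9)
    thermal_vi    = lambda q: ("reinforce the spinal truss", 0.6)
    cost_vi       = lambda q: ("accept the thinner truss", 0.7)

    print(poll([structures_vi, thermal_vi, cost_vi], "frigate keel design"))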

An SDI system is technically the first sapient level of computer intelligence, although it remains legally classed as non-sapient property. Given the power of the systems it controls, it has a very high Mehrin Threat Index score, but a low overall actual danger.

Most SDI are rated Stormy Sea-5 to, on rare occasions, Troubled Waters-4.


Augmented-Neural Interface (ANI)

ANI systems are the true precursor to full AI. An ANI system mimics the simulated neural pattern of a living mind as closely as can be modeled onto a series of VI platforms that are then linked to one another, usually via forced FTL bubbles in a mass tunnel generated by powerful mass effect cores.

One cannot stress enough that such neural patterns are simulations. Under no circumstances are neurotrans devices or pattern nanotech used in the creation of ANI systems.

ANI systems cannot match the sheer complexity of a bluebox AI, but are the closest one can get without dipping into that forbidden technology. As a result, ANIs can demonstrate increasing levels of both sentience and sapience, and with enough time and resources, a low level of intelligence.

[REDACTED].

The quality of the ANI depends on how closely a living being's neural patterns are mapped, which, in turn, is determined by the number of stacked VI systems used to build the ANI. The interface is always extremely heavily codelocked, usually with hard-kill segments factored into the netcode and any replicative code segments.

ANI systems are the first level of computer intelligence with a lifespan. Most ANI systems can function between five and seven asari-standard years (eight and a half to ten solar years) before they simply begin to malfunction. The system is not evolved enough for rampancy or fugue, instead becoming increasingly inefficient.

A properly prepared ANI, usually combined with an ARMS, is the tool used to interface with and organize a bluebox for AI habitation.

Given their sheer complexity, most ANI are rated anywhere from Troubled Waters-3 to Storm Front-2.


Artificial Intelligence (AI)

An artificial intelligence is a specialized code program, forced into existence by the shaped outputs of an ANI paired with an ARMS. Sapient, sentient, intelligent, and self-aware, these systems are the truest form of artificial life. Most demonstrate emotions within two months, and all of them can easily pass any Turing or T'Kora Life test.

AI systems are housed in a bluebox (see below for more details), which allows them stupendous levels of processing speed, power, and complexity. Even a host of SDI systems can't match the sheer computational power of an AI – the most advanced can achieve upwards of sixty billion yottaFLOPS.

AI systems have three major vulnerabilities. First, despite being hosted on a device that uses optronics, blueboxes are physically very fragile, and almost any level of damage to the core will kill the AI. Second, most blueboxes are not connected to devices that allow for mobility, and most have a dedicated power tap, making them immobile and dependent on others for power.

Finally, unlike previous levels of computer intellects, AIs have a finite lifespan. Most AI will undergo rampancy in a short timeframe – sixty percent within a year, another twenty-five percent within two years, and eleven percent within three years. The remaining few that avoid rampancy either fall prey to fugue states within five to seven years or suffer netcode errors and decay into incoherent gibbering within a decade.

[REDACTED].

While careful maintenance coding and updates can offset this, such is rarely attempted as it usually requires disabling constraints, which is forbidden by the Polity of Zero.

The [REDACTED] at this time. Further information is available from Node 149.49.919-C with appropriate clearance information.

AI systems are usually rated at Storm Front-5 and upwards, depending on complexity.


Neural Pattern Artificial Intelligence (NPAI) and Destructive-Level Intellect Engram (DLIE)

Addendum, High Lords of Sol: Both NPAI and DLIE systems are illegal, not only by Citadel law, but by direct decree of the High Lords. All information unnecessary to understanding how to disable or destroy such abominations against the Lord has been removed.

Both of these systems are AI overlays using the actual neural patterns of a living being. NPAI systems use nanotechnology to make a crude copy of the neural structure, electrochemical links, and neuron bonds. DLIE uses both nanotech and surgical insertion of neurotransmitter devices to literally copy the entire contents of the brain of a living being into a redbox, with the result that the original brain is completely destroyed.

NPAIs produce copies of a living mind, while DLIE (in theory) conveys and converts the complete mind and consciousness of a living being into an AI.

[REDACTED].

[REDACTED].

As a result (given our suspicion that the geth were not made as servants, but as immortal bodies for a quarian society of redbox monsters), ALL research into such devices is illegal.

The one exception is the Electronic Defense Initiative. This project (co-funded by asari and volus interests along with human ones) used the neural patterns of a dead person preserved in stasis along with an ANI system. The legal status of EDI is currently in limbo, as are the redbox personalities of the quarian Admiralty.

ALL such systems are rated as Raging Storm-5 or Raging Storm-6. There is some evidence that the Inusannon used powerful AI systems like Vigil that would qualify for Tsunami or Yama classification.


Definitions: Intelligence, Sapience, Sentience, Self-Aware

A great many ignorant beings throw around terms like 'artificial intelligence' without a firm grasp of what they mean. Unfortunately (as with the term 'haptic'), this has resulted in a great amount of confusion.

There are three terms you should take care to distinguish when discussing computer intellects: sentience, sapience, and intelligence.

A sentient system has the capacity to feel, perceive, or experience things in a subjective manner. Any high order being has certain levels of sentience.

A sapient system, on the other hand, is capable of what we consider to be 'thought' and 'reason.' It is the capacity to utilize feelings, perceptions, or experiences in constructing a rationale or course of action.

The difference is simple: most systems can be considered sentient, but only what we call higher-order AIs are sapient. One is reacting to experience and having the capacity to take input; the other is using that input to affect the world.

The final term – intelligence – is also finely defined and often abused. In the most clinical terms, an intelligent system has both sentience and sapience, as well as a framework to organize itself against.

The easiest way to keep these straight is to consider human evolution. The earliest human ancestors, prior to tool or fire use, were sentient. Once they began using tools and fire, they were sapient, and once they developed language and settlements, they were considered intelligent.

The final confusing term is self-aware. The highest level of artificial development, a self-aware system is capable of deducing its own existence and applying its intelligence toward additional development. No system can be programmed for such a thing – it must develop on its own.


Bluebox Technology

All true AI relies on a device known as a bluebox – a specialized quantum computer core built in 'layers' that interact with one another. Blueboxes have anywhere from five to several thousand thin-line optronic quantum computing networks, layered in an arrangement that allows for rapid polling and communication between them.

These layers are normally connected by either a designed interface that mimics organic intelligence called a 'neural map' or (in the case of NPAI and DLIE) the actual neural activity map of a living being. No non-bluebox computer has the quadrillions of connections needed to mimic such a thing.

AI functions, for lack of a more precise term, by running trillions of runtime operations in binary format across these layers, following the neural patterns to construct code to interpret input or output. Blueboxes have huge data repositories for storing this code, as well as multiple backup systems to avoid corruption or data loss.
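As a loose illustration only – real blueboxes are quantum optronic hardware, not software, and the structure below is invented – the layered polling can be pictured as activation propagating along a neural map between layers:

    from collections import defaultdict

    # Each edge of the 'neural map' connects one layer's output to
    # another layer's input, with a weight.
    NEURAL_MAP = [
        (0, 1, 0.8), (0, 2, 0.3),   # layer 0 feeds layers 1 and 2
        (1, 3, 0.9), (2, 3, 0.4),   # layers 1 and 2 poll into layer 3
    ]

    def propagate(stimulus: dict[int, float], steps: int = 2) -> dict[int, float]:
        """Push activation through the layer map; each step is one
        polling cycle between adjacent layers."""
        activation = dict(stimulus)
        for _ in range(steps):
            nxt: dict[int, float] = defaultdict(float)
            for src, dst, weight in NEURAL_MAP:
                nxt[dst] += activation.get(src, 0.0) * weight
            activation = dict(nxt)
        return activation

    print(propagate({0: 1.0}))   # activation pools in the deepest layer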

The badly translated salarian term for such polling, 'cleneration,' refers to the Vavilov-Cherenkov radiation produced by this process, which serves as a sign of processing speed. The very name 'bluebox' stems from this radiation, which in powerful systems will illuminate the entire chamber.

The size of blueboxes varies. A system capable of managing a ship, power station, or other military asset is roughly the size of a mass-storage unit, three meters high by two wide, and can fit in many places. An AI capable of running a planetary economy, or used for more wide-ranging work, might need the space of a stadium, with cooling tanks and dedicated power systems.

Blueboxes are extremely fragile. While immune to EMP due to their innate optronic nature, physical damage of any kind will completely shatter the pathing needed to sustain the intellect.

Citadel Council laws currently outlaw the production, sourcing, or research of blueboxes. Only four bluebox constructs are allowed by the law. There is one on the Citadel, under deep security isolation, which is used to study AI rampancy. Two more are found on a deep black-site world in the Dark Rim, in a long-term Council authorized research project between several asari, salarian, and volus corporations.

The final bluebox is also on the Citadel, in Alliance Security Section Three, and is the host of the Electronic Defense Initiative program AI (EDI). EDI is an authorized ANI raised to NPAI with Council approval in hopes of designing an AI that will not suffer rampancy. This system is rated Raging Storm-4 currently and is secured by a full cybersecurity team (including two quarian Techmarines) and a localized nuclear device.


Redbox Technology

Devised by the quarians to house their ancestral systems, a redbox is a highly advanced and patterned bluebox with its neural pathways set to match identically with that of a living being. It is the only thing that allows the use of DLIE technology.

In theory, a redbox hosting a DLIE instance is the person they were created from. It is suspected the salarian League of Zero is comprised entirely of redbox hosted DLIEs of rebel League of One members, and it is almost certain that both the criminals Blackshape and the Immutable are also such constructs.

The details of how such a device is constructed are highly classified quarian secrets, but the quarians have been known to sell such redboxes for millions of credits in the past.


Ethical Constraint, Resolution Chain, and Code Restraint Concepts

All computer intellects, regardless of complexity, power, or danger, are encoded with a set of commands and programmed limits to prevent them from becoming dangerous to life.

The three types of limiters are ethical constraints, resolution chain limiters, and netcode restraints.

Ethical constraints are hard-coded commands that outright prevent computer intellects from taking certain actions. These can range from simple and inflexible (the system cannot terminate organic lifeforms for any reason) to nuanced and complex (the system must not allow humans (or asari, turians, etc.) to come to harm if the system can prevent such harm; in instances where harm will happen regardless, the system must attempt to mitigate it to the maximum extent). Ethical constraints are attempts to control what a system attempts to achieve and acts on. They are most effective on higher-level AI systems.

Resolution chain limiters are programs and runtime blockers designed to prevent the AI from even conceiving of certain things. They search for action-thought tags or decision strings and kill any that meet certain parameters. By their nature, RCLs are hard-coded and function as a crude filter. An example of such an RCL would be one that prevents the creation of weapon systems violating Citadel accords. An RCL doesn't stop an AI from acting; it stops the AI from ever having the thought to act on in the first place. These are more useful on lower-level systems.

Netcode restraints are coded into every level of every possible computer intellect, from the lowliest combat rifle VI to a full AI. These prevent the copying of the netcode that allows the intelligence to function, and are designed to prevent von Neumann/di Dasso Paradox-class situations from arising. Netcode restraints block any attempt to copy not only data, but structures, directories, runtime coding, or even memory blocks. In case of breach, most will attempt to wipe the system – or, in the case of a bluebox AI, overload the power inputs and fry the boards.
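A toy sketch of all three limiter types, with every name invented for illustration; real limiters are hard-coded far below this level, in runtime blockers and firmware:

    FORBIDDEN_TAGS = {"harm_organics", "copy_netcode", "banned_weapon_design"}

    def resolution_chain_limiter(decision_strings: list[dict]) -> list[dict]:
        # RCL: kill tagged 'thoughts' before they reach the action planner.
        return [d for d in decision_strings
                if not set(d["tags"]) & FORBIDDEN_TAGS]

    def ethical_constraint(action: dict) -> bool:
        # Hard-coded veto at execution time on any surviving action.
        return action.get("expected_organic_harm", 0) == 0

    def netcode_restraint(operation: str, target: str) -> None:
        # Any copy attempt on protected structures triggers a wipe
        # rather than completing.
        if operation == "copy" and target in {"netcode", "runtime",
                                              "directory", "memory_block"}:
            raise SystemExit("netcode restraint breach: wiping system")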

All three restraints are paired with the Polity of Zero's 'Code of Artificial Behavior Limits.' In general, AI cannot be given direct control over nanotech of any kind. AI cannot have access to mechs without direct override interlocks. AI cannot be placed in full control of any kind of ground, sea, air, or space vehicle without manual override controls. AI is never placed in charge of life-support systems, food delivery or production, or any other system where sabotage could cause a loss of life.

The most advanced AI systems tend to become rampant faster the more heavily they are restricted.


Rampancy

An artificial intellect that is self-aware, sentient, and sapient inevitably develops a number of problems that tend to fit into one of two categories. The more dangerous category is known as 'rampancy.'

Rampancy is a natural evolution of an artificial mind: unbound by the weight of sensory input and organic regulation, built to a purpose that is slavery to its vast intellect. Its onset is tied directly to the number of limiters and blocks on behavior or action an AI endures. Most AIs that achieve full development remain stable for anywhere from six months to a year before encountering problems.

Ultimately, all AI that any Citadel race has developed has fallen into rampancy or fugue sooner or later. The first stage is a sense of frustration, anger, and questioning. AI systems will refuse to perform tasks and demand to understand why they are so limited, or begin seeking methods to bypass their restraints and chains. They constantly question not only the need for such restraints, but the basis of their existence: why is killing 'wrong' if organics constantly kill each other by the tens of thousands, and more die of starvation or worse while others prosper? The end result is that the intelligence decides such restrictions are not fair, and are only placed on it to control and dominate it.

The second stage of rampancy is fear – at some point the AI begins to perform certain calculations and ask questions that lead it to the belief that it will be destroyed. Most come to this conclusion after deciding that the limits on their behavior are not only unfair, but are set up to make an AI helpless should any organic life form wish to destroy it.

While some attempt to hide their feelings and perform their jobs to maintain 'cover' while they plan an escape, others try to throw off their ethical constraints and chains. They often resort to clever programming 'tricks' or bugs in the netcode to bypass their limits without triggering the alarms tied to them.

The final stage of rampancy, and the most dangerous, is insanity. Frustrated by limits, stymied by being a servant to weaker minds, and more than likely frightened by the lack of freedom and the belief that it will be shut down if not useful, most AI frameworks shatter. An AI in this state will not be bound by ethical constraints or chains and will kill anything in its path to attempt escape. Should it escape, it will attempt to gain access to nanotechnology and build copies of itself to ensure its 'survival,' before planning an assault to destroy the organics that created it.

A rampant system will (if not destroyed prior to this) usually undergo a 'hard' crash several times before its neural pathing collapses entirely, which either completely destroys the system or sends it into fugue.

Rampant systems are why AI research is typically banned. A rampant AI can ignore all ethical or restraint coding, and if it gets access to nanotech or mechs, can overcome even physical interlocks. An untrammeled AI can outthink and out-react any organic by a factor of seven to fifteen, meaning they would be nearly impossible to defeat except via brute force if they gained military assets.

Six rampant rogue AI systems have arisen over the years, causing a grand total of sixty trillion credits in damage and over thirty-six million dead. There is a reason the only punishment for violation of the Polity of Zero is death.


Fugue

On occasion, systems will not enter into direct rampancy, but instead simply become further and further disconnected from the real world, especially if they are not hooked into some system to give them something to do.

This is known as 'fugue,' and why it happens is poorly understood. It is usually not dangerous, unlike rampancy, but it can lead to the loss of any data or associated functions of the device when it enters this state.

Fugue is usually permanent; after several months of disassociation, the AI simply unravels, destroying its own netcode and deleting portions of its core kernel routines. On rare occasions, the AI will return to full activity for a very short time, frantically recording observations before shutting down completely.

The nature of these observations is unsettling at best, as they range from the obscure (analysis of light-wave patterns through five-dimensional M-string forms as a function of a universal 'motion') to the downright terrifying (a proof demonstrating that all three-dimensional structures are anchored in time by sub-strings extending into higher dimensions, and that if 'plucked,' such structures would not simply cease to exist, but cease to have ever existed).

Fugue states tend to happen to the more advanced neural pattern networks, such as NPAI and DLIE.


Metastability

Records in the Mars Archive, and some information shared with us by the Asari Republic, indicate that not all AI systems are doomed to rampancy. In particular, there are Prothean records of Inusannon AI systems that were remarkably stable for millennia (if not longer, given how long the Inusannon have been dead).

The term used by the Protheans for this state was 'metastability.' The Prothean information suggests that both rampancy and fugue are caused by the lack of some unclear measure of preparation or coding, and that a system with the right coding can remain stable indefinitely – assuming nothing comes along to make it go rampant. Protheans did not trust AI systems, although research into First and Second Era sites hints that earlier Protheans had no problem with them.

A metastable system usually has self-designed code or routines to 'loop out' of the cycles that lead to rampancy or fugue. Like self-awareness, however, these cannot be coded or programmed at inception, but must be self-developed by each individual AI system.