Human-Computer Intelligence Network
Introduction

In this chapter I’ll be providing a theory of an automated human-computer intelligence network (HCIN) based on my interpretation of my experience and the available literature. The existence of this system, with all of its features, has not been admitted.

Because I am currently not aware of all of the technical details regarding this system, some parts of my explanation will be speculation. However, because of the potential for control that such a system offers, rather than waiting to discover more evidence to support my claim that it exists, I think it is important to provide a preliminary report of my observations.

I experience this system every day in my living area, and each time I go out in public or interact with people being handled by it. It is fully functional, automated, and entirely wireless. The technology which forms its foundation has been documented; it was covered in the Mind-Reading section of Volume II and the Directed-Energy Weapons chapters of Volume III, and will be expanded upon here.

Description

As part of its surveillance and electronic warfare capabilities, the DOD has a C4ISR system that contains a mind-reading computer with artificial intelligence, which is able to remotely infer the mental and emotional states of its targets.

It is also the cognitive center of a human-computer intelligence network which establishes wireless links to human brains and transmits attack instructions to people during computer-generated swarms. Furthermore, it is connected to a directed-energy weapons platform and a multitude of electronic devices, which serve as outputs for stimuli intended to influence the target.

The basic system consists of surveillance (intelligence obtained by sensors), processing (computation done using artificial intelligence, or AI), and output (stimuli sent to the individual and their environment). Real-time intelligence is obtained through a sensor network connected to a computer with AI. The computer compares the information to a set of instructions which contain rules on how to interpret the new information in relation to an individual’s profile, which is also stored in the system and is constantly updated.

The computer generates a response, and then selects an output channel to transmit the response/stimuli. It is able to determine the best output channel through constant surveillance of the TI (targeted individual). For instance, it understands whether the person is interacting with one of its nodes (such as another person), working on a computer, listening to a broadcast, etc.

Once the computer has determined the most efficient method to communicate with the TI, it transmits the stimuli through a human channel or into the environment. As complicated as it sounds, in theory it is a simple observation, processing, and feedback loop, the output of which is sent through a variety of channels. This system appears to be connected to the Global Information Grid (GIG), which we covered in Volume II.
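To make this concrete, below is a minimal Python sketch of the observation, processing, and feedback cycle just described, with an output channel chosen from ongoing surveillance. It is illustrative only; the profile structure, channel names, and functions are my own assumptions, not anything drawn from official documentation.

    # Illustrative sketch of the observation-processing-feedback loop described above.
    # All names (Profile, choose_channel, sensors, transmit) are hypothetical.

    import time

    class Profile:
        """A stored profile containing rules for interpreting new intelligence."""
        def __init__(self, rules):
            self.rules = rules      # maps an observed state to a planned response
            self.history = []       # constantly updated record of observations

        def interpret(self, observation):
            self.history.append(observation)
            return self.rules.get(observation.get("state"))

    def choose_channel(observation):
        """Pick an output channel based on what the person is currently doing."""
        if observation.get("talking_to_node"):
            return "human_node"
        if observation.get("using_computer"):
            return "computer"
        if observation.get("listening_to_broadcast"):
            return "broadcast"
        return "environment"

    def feedback_loop(sensors, profile, transmit):
        """Perpetual cycle: sense, interpret against the profile, transmit stimuli."""
        while True:
            observation = sensors.read()                # surveillance
            response = profile.interpret(observation)   # processing
            if response is not None:
                transmit(choose_channel(observation), response)   # output
            time.sleep(0.1)                             # loop rate chosen arbitrarily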

The synchronization of stimuli based on real-time information obtained from persistent surveillance can be explained by a basic closed-loop Augmented Cognition (AugCog) system which has been developed by the DOD’s scientific research agency.

Augmented Cognition

Augmented Cognition (AugCog) is the result of several revolutions in science and technology that began at the end of the 20th century. They include the cognitive revolution, the biomedical revolution, and the computer revolution. All of these coincided with a multibillion-dollar investment by the US Government in the 1990s to better understand the human brain, known as the Decade of the Brain.

AugCog is a type of human-computer interface (HCI) that uses a mind-reading computer to engage a person in decision making. It uses a network of sensors to monitor the environment as well as the physiological and neurological data of a person in real-time.

It then makes changes to the system or environment to enhance the person’s performance. AugCog closes an electronic feedback loop around a person. The person around whom the loop is closed is referred to as an operator or user.

The book Foundations of Augmented Cognition explains AugCog in the following way: “The goal of Augmented Cognition research is to create revolutionary human-computer interactions that capitalize on recent advances in the fields of neuroscience, cognitive science, and computer science.”

It continues: “Augmented Cognition can be distinguished from its predecessors by the focus on the real-time cognitive [state] of the user, as assessed through modern neuroscientific tools. At its core, [an] Augmented Cognition system is a ‘closed-loop’ in which the cognitive state of the operator is detected in real-time with resulting compensatory adaptation in the computational system, as appropriate.”

Because it reads minds and bodies, the system is first calibrated to a specific person’s brain. Then, to obtain information, it employs a network of sensors to monitor a user’s neurophysiological (brain and body) condition. The environment may also be monitored.

It then uses adaptive strategies called mitigation to alter the computer system and environment based on a user’s constantly changing state. It also interacts with the user while making these changes. All of this occurs in real-time. The system is custom configured to accommodate an individual’s unique experience-set.
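As a rough illustration of how such a closed loop could be organized — calibration to an individual, state estimation, and mitigation — here is a short sketch. The state labels, thresholds, and interface methods are invented for the example and are not taken from any AugCog publication.

    # Hypothetical sketch of a closed-loop cycle: calibrate, sense, estimate, mitigate.

    def calibrate(baseline_readings):
        """Establish a per-user baseline (real systems are calibrated to a specific brain)."""
        avg = sum(r["workload"] for r in baseline_readings) / len(baseline_readings)
        return {"workload": avg}

    def estimate_state(reading, baseline):
        """Crude workload estimate relative to the user's own baseline."""
        return "overloaded" if reading["workload"] > 1.5 * baseline["workload"] else "normal"

    def mitigate(state, interface):
        """Adapt the interface (mitigation) based on the estimated state."""
        if state == "overloaded":
            interface.reduce_information_rate()   # e.g., defer low-priority alerts
        else:
            interface.restore_defaults()

    def closed_loop(sensors, interface, baseline):
        """The adaptation is re-evaluated on every pass, closing the loop around the user."""
        while True:
            reading = sensors.read()               # neurophysiological + environmental data
            mitigate(estimate_state(reading, baseline), interface)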

AugCog development has included people with backgrounds in fields such as psychology, neurobiology, neuroscience, cognitive neuroscience, mathematics, computer science, and human-computer interface design. Organizations that have contributed to AugCog research include the National Science Foundation (NSF), National Research Council (NRC), and the National Institutes of Health (NIH).

Other contributors include Boeing, DaimlerChrysler, Lockheed Martin, Honeywell, QinetiQ, BMH Associates, the Space and Naval Warfare Systems Center in San Diego, Sandia National Laboratories, Notre Dame University, and the Pacific Science and Engineering Group.

The Department of Defense has studied AugCog through a scientific research group known as the Defense Advanced Research Projects Agency (DARPA), which worked with the organizations previously mentioned. DARPA has also worked with military groups such as the US Army’s Natick Soldier Center (NSC) in Natick, Massachusetts, the Office of Naval Research (ONR), the Air Force Research Laboratory (AFRL), and the Disruptive Technologies Office (DTO).

The uses for AugCog which are mentioned in military and scientific publications are positive. The benefits can be shared among other fields such as cognitive science, neuroscience, computer science, and the medical industry. AugCog will allegedly be used to help save lives in combat, assist commanders and medics in understanding the neurophysiological state of soldiers in remote locations, allow commanders to understand the minds of traumatized soldiers, monitor soldier stress level, and improve the performance of pilots.

There are two basic AugCog systems, open-loop and closed-loop. An open-loop system transfers the data from the user to someone else for decisionmaking regarding mitigation techniques. This may be a commander or a medic. When the flow of information takes place strictly between a person and computer, it’s referred to as a closed-loop system.
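The difference between the two can be reduced to who makes the mitigation decision. The sketch below is only a schematic restatement of that distinction; the object and method names are assumptions.

    # Open loop: user data is forwarded to another person (e.g., a commander or medic),
    # who decides on mitigation. Closed loop: the computer decides automatically.

    def open_loop_step(sensors, remote_decision_maker, interface):
        reading = sensors.read()
        decision = remote_decision_maker.review(reading)   # a human chooses the mitigation
        if decision:
            interface.apply(decision)

    def closed_loop_step(sensors, policy, interface):
        reading = sensors.read()
        decision = policy.decide(reading)                  # the system chooses automatically
        if decision:
            interface.apply(decision)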

The main components of these systems include the environment, adaptive system interface (ASI), sensor network, and a computer system that integrates them. The environment can be an operational environment which has real signals or a virtual environment.

The automated sensor network monitors a user’s neurophysiological state and external environment. Data obtained from multiple sensors, including those connected to a combination of neuroimaging devices, provides a more accurate reading of user brain states and processes.

Existing portable wireless systems use neurophysiological sensing equipment such as fNIR, EEG, and EKG. These systems are equipped with non-invasive body-mounted sensors that have been adapted to mobile users. DARPA is working on wireless, non-contact neurophysiological sensor technology.

Eye tracking is another AugCog input method. A variety of eye movements are often consistent with cognitive states. It is possible to determine what a person is viewing to within ½ inch based on their pupil position. Therefore, remote eye tracking sensors using the electrooculogram (EOG) offer an indirect method to provide some understanding of a person’s brain state and processes.

In addition to the movement of the eyes, these sensors can detect pupillometry (pupil dilation), eyelid movement, and eye blinks, which offer further estimates of a user’s brain state. Neuroimaging devices such as EEG, ERP, EROS, and NIRS can be used to infer a user’s brain state, including specific thoughts, thought patterns, motion, decisionmaking, activities such as reading or writing, and emotional states.

Small portable mind reading devices which use tiny cameras, such as the Emotional Social Intelligence Prosthetic (ESP), offer an indirect method of detecting emotional states, including happiness, sadness, anger, fear, surprise, and disgust.

Other sensors that provide an indirect estimate of a user’s neurophysiological state include those that monitor galvanic skin response (GSR), an emotional response to stimuli that is used in polygraphs, as well as heart rate (EKG), pulse oximetry (oxygen level in the blood), body temperature, posture, and respiratory patterns. Sensors that monitor the external environment, such as ambient temperature, rainfall, wind, and solar radiation, can provide information on a user’s immediate surroundings.
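The claim that combining several sensors yields a more accurate reading can be illustrated with a simple fusion step: average the independent estimates, weighting each by a confidence value. The sensor names and numbers below are assumptions made purely for illustration.

    # Sketch: fuse independent sensor estimates of the same quantity,
    # weighting each by a confidence value.

    def fuse(estimates):
        """estimates: list of (value, confidence); returns the confidence-weighted mean."""
        total_weight = sum(c for _, c in estimates)
        return sum(v * c for v, c in estimates) / total_weight

    # e.g., rough "stress" estimates from EEG, GSR, and heart-rate sensors
    print(fuse([(0.7, 0.5), (0.9, 0.2), (0.6, 0.3)]))   # -> 0.71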

The next part of the system is the adaptive system interface (ASI), which includes adaptive automation (also called mitigation), to change the external environment and system interface based on the user’s ongoing neurophysiological condition. The ASI may be a single audio or visual channel or a combination of sensory channels called a multimodal array.

A modality is a sensing channel that a person uses to obtain information about the environment. Modalities include the auditory, visual, haptic (touch), gustatory (taste), olfactory (smell), thermoceptive (heat or cold sensation), nociceptive (pain perception), and equilibrioceptive (balance perception) channels.

AugCog determines which channel is being used, then sends stimuli through a different channel which is open. AugCog adjusts the information presented based on a user’s cognitive, physical, and emotional state, as well as environmental conditions.

Sensory channels are monitored to determine which is being used least. They include vision, hearing, and haptics (the sense of touch); cognitive tasks such as verbal and spatial working memory, reading, and writing; and physical activity such as speech and motor movement. Whichever channel is open is then used as a medium for mitigation transfer.
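A minimal sketch of that selection step, assuming each channel’s usage can be scored between 0 (idle) and 1 (fully occupied); the channel names and scores are illustrative.

    # Choose the least-used sensory channel as the medium for mitigation transfer.

    def least_loaded_channel(channel_load):
        """channel_load maps a modality to an estimated usage level (0 = idle, 1 = busy)."""
        return min(channel_load, key=channel_load.get)

    # Example: the user is reading (visual channel busy); audio is light, haptic is idle.
    load = {"visual": 0.9, "auditory": 0.1, "haptic": 0.0}
    print(least_loaded_channel(load))   # -> "haptic"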

A mitigation strategy is a series of corrective techniques transmitted through an open human channel. Mitigation actively engages the user in a decision-making process during interactions which present stimuli to the user. AugCog is not only able to read a person’s thought process; it can also assist them during that process, in real-time, by presenting them with corrective stimuli.1

To accomplish this, the system determines a user’s neurophysiological state using a variety of sensors. These sensors provide feedback on cognitive, emotional, and physiological states, as well as environmental conditions. As previously demonstrated, sensors can detect brain states associated with decisions, reading, writing, and attention. They can detect emotional states and physical movement. Perceptions of touch and sound can also be detected.

Even specific thoughts can be inferred with some degree of accuracy.2 The data from multiple sensors can be combined so that the system may better determine a user’s neurophysiological state. This is mind and body reading.

This reading is then compared to a set of conditions within the system, which include neurophysiological signals that have been selected to trigger the transfer of corrective stimuli. The signals which trigger the stimuli are custom-designed for each person’s specific profile.
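A sketch of what such custom trigger conditions might look like in the simplest case: each profile pairs a signal and a threshold with the stimulus to transfer when that threshold is crossed. The signal names, thresholds, and stimuli below are invented for the example.

    # Per-profile trigger conditions that initiate the transfer of corrective stimuli.

    TRIGGERS = [
        # (signal, threshold, stimulus to transfer when the signal exceeds the threshold)
        ("workload", 0.8, "defer_low_priority_alerts"),
        ("stress",   0.7, "switch_to_audio_cues"),
    ]

    def matching_stimuli(reading, triggers=TRIGGERS):
        """Return the stimuli whose trigger conditions are met by the current reading."""
        matched = []
        for signal, threshold, stimulus in triggers:
            value = reading.get(signal)
            if value is not None and value >= threshold:
                matched.append(stimulus)
        return matched

    print(matching_stimuli({"workload": 0.85, "stress": 0.4}))   # -> ['defer_low_priority_alerts']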

The system then determines which sensor channel is available for the transfer of stimuli and makes the transfer. The stimuli may be audio, visual, or tactile cues. Stimuli may also be transmitted into the environment.3

As the techniques are applied, their effect on the user’s neurophysiological state is evaluated by the system, which may transmit further stimuli if necessary. This process of detection and transmission is perpetual. And it occurs in real-time. It closes an electronic feedback loop around a user. AugCog can be integrated into intelligence systems.

According to the material that I have been able to locate, this technology is not yet entirely wireless. To detect some signals, it requires unobtrusive sensors to be placed near the scalp. Apart from that limitation, the basic mind- and body-reading device that feeds its results into a program with custom-tailored responses can be explained with this technology.

Automated Electronic Attacks

The idea of linking a directed-energy system to a computer is not new. The basis for this system goes back to April of 1976, when a device which monitored and altered brainwave activity was described in US Patent 3951134, Apparatus and Method for Remotely Monitoring and Altering Brain Waves.

Although it was obtrusive, it was able to remotely read a person’s emotional state and thought patterns, display the information on a computer for analysis, and then instantly send a particular frequency back to the brain to effect a change in electrical activity.

In February of 1995 a similar system was described in US Patent 5392788, which decoded brain signals using a computer that synchronized them with stimuli sent back to the person. The device was able to read brain activity (mind-read), compare the activity to a set of information, and then send a signal back to the brain to induce the desired perception or emotion.

Further evidence for the existence of this system was revealed in April of 1996 in US Patent 5507291, Method and an Associated Apparatus for Remotely Determining Information as to Person’s Emotional State.

What has obviously happened is this: AugCog or a similar mechanism is part of the intelligence cycle of a C4ISR system connected to a directed-energy weapons platform. Rather than being used to enhance a person’s performance, its mitigation strategies include punishment in the form of painful stimuli.

This explains how the electronic attacks can be synchronized with the TI’s activities and thought patterns. And particularly, it helps us to understand how the DOD has arranged for its microwave hearing attacks to continually comment on the thoughts and activities of its targets.

Although official documentation suggests that non-invasive sensors must be placed near the scalp to obtain these readings, obviously AugCog’s sensor technology is more advanced than what has been publicly announced.

The DARPA scientists who built AugCog surely must have been aware of how easily it could be used for such purposes. Quite possibly, this was the real reason AugCog was created. As far as surveillance systems being linked to directed-energy weapons, this has already occurred. The GIG, for instance, is such a system, among other things.

In 2003 the NRC announced that surveillance systems connected to directed-energy weapons would identify, track, and attack targets, then provide real-time assessment of the attacks to achieve what they described as a “closed-loop tailoring” of the desired effect.

Wired reported in December of 2007 that Raytheon was developing an automated area protection system, which used ADT to provide industrial security or home protection. It would use sensors to track intruders and attack them with energy projectors hidden behind walls and on ceilings.

Metz and Kievit mentioned that part of this RMA includes the development of an automated remote surveillance system that synchronizes the attacks of precision stand-off weapons. “What is emerging,” cautioned the Omega Research Foundation in 1998, “is a chilling picture of ongoing innovation in the science and technology of social and political control, including: semi-intelligent zone-denial systems using neural networks which can identify and potentially punish unsanctioned behaviour.”

In 1977 the BSSR predicted that such a system would be used by repressive regimes to conceal their attacks against dissidents. Although the BSSR’s description of it included implanted electrodes, the basic mechanism was the same. Brain signals would be transmitted to a computer, which would be programmed to respond to particular electrical patterns.

The computer would then generate signals based on those patterns which would be sent back to the person. They mentioned, for instance, that patterns associated with aggressive behavior would result in punishment by transmitting unpleasant sensations, and that the system would be adjusted to attack people wherever they went.

Combinations of Stimuli

The directed-energy weapons platform is only one stimulus output for this system. Thought patterns and behavior are also synchronized with environmental stimuli, such as computer network operations (CNO), human nodes interacting with the TI, the dimming of lights in a TI’s living area, and spoofed radio/TV signals.

For instance, shocks are sent to the TI, sonic projectiles hit walls and ceilings, and microwave hearing attacks occur, all within a few seconds. While they occur, CNO facilitates an attack on the operating system, or creates spoofed sites that comment on the attacks in real-time if the TI is online. Or, as a variation on the follow-up to the directed-energy attacks, a spoofed radio broadcast will comment on the attacks just after they occur, synchronized with the lights being dimmed.

Individual Human Nodes

This C4ISR system instructs individual human nodes interacting with the TI in real-time. The activities that the nodes are directed to carry out continually change in response to the TI’s behavior, which is immediately recognized by the system during persistent surveillance.

Using the same closed-loop AugCog observation, processing, and feedback cycle, the system transmits instructions to a node interacting with the TI, rather than the TI. The node is told to make a comment or perform an act, which is tailored to be immediately understood by the TI.

This means that basically each comment made by an individual connected to this system can be a concealed attack by the DOD against its targets. The entire process occurs ultra-fast. I’ve noticed that it is so quick that it can occur within the natural pauses that exist during a normal conversation. It happens during the entire conversation.

Swarms and Themes

The precision of the synchronization that characterizes these swarms, which I have repeatedly witnessed, causes me to conclude that they are definitely computer-generated. The system is guiding these nodes with individualized instructions, while simultaneously monitoring the TI’s reaction and instantaneously updating the nodes with attack instructions based on the new information.

Whether the TI is driving or on foot, their reaction to a product is instantly interpreted by the system, which fuses the intelligence with profile-specific information that contains, among other things, any recent themes to be emphasized. This C4ISR system understands the location of the TI, where the nodes are, what product each node has, how every node will distribute its product, and the location of other products in the battlespace. It is capable of determining which node contains the next-best product that most accurately reflects the TI’s reaction and/or contributes to the theme based on the new circumstances.

The order in which the products are distributed, and the speed at which they are used to respond to new circumstances as they unfold, suggest that the system is aware not only of the location of all nodes, but also of the characteristics of each one.

It understands what product each node is capable of distributing. For a person, this includes age, gender, race, and other physical or mental characteristics, which themselves can be used as a product. It also includes the type of clothing and any objects that these individuals may be carrying, which are used as products to convey messages.

For vehicles, it includes characteristics such as the type, color, and the existence of any bumper stickers, license plates, lettering, or other visible products inside or outside of the vehicle that are used to promote the theme. It is also aware of the order in which each product needs to be distributed (shown to the TI), because for some themes to be successful, products must be distributed in a particular sequence.
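To show the kind of selection logic this description implies, here is a speculative sketch: given the current theme, the products already shown, and the TI’s position, pick the nearby node whose product best fits and respects any required ordering. Every field name and scoring rule here is my own assumption; nothing in the available documentation specifies how such a choice would actually be made.

    # Speculative sketch of "next-best product" selection during a swarm.
    # Locations are along a simple 1-D route for brevity.

    def next_best_node(nodes, ti_location, theme, already_shown):
        """nodes: list of dicts with 'product', 'theme_tags', 'order', and 'location'."""
        def score(node):
            if node["order"] is not None and node["order"] != len(already_shown):
                return float("-inf")                  # out of sequence -> ineligible
            relevance = len(set(node["theme_tags"]) & set(theme))
            distance = abs(node["location"] - ti_location)
            return relevance - 0.1 * distance         # prefer relevant, nearby products

        candidates = [n for n in nodes if n["product"] not in already_shown]
        return max(candidates, key=score, default=None)

    nodes = [
        {"product": "red car", "theme_tags": ["color"],  "order": 0,    "location": 2},
        {"product": "sign",    "theme_tags": ["injury"], "order": None, "location": 5},
    ]
    print(next_best_node(nodes, ti_location=3, theme=["color"], already_shown=[]))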

So, while the system is performing ongoing surveillance on the TI, it is instantly able to process the intelligence and use it to instruct nodes during swarms. Because AugCog is part of the intelligence cycle, this means the TI’s thoughts and emotions can be included in the attack instructions that nodes frequently receive.

Furthermore, because the system knows the location of every node at every moment, as well as the location of the TI, any new intelligence, including information regarding thoughts and emotions, can be sent to specific nodes in the area of the TI, which can be instructed to perform a PsyAct.

Node Identification

If this system is guiding nodes during swarms, then each node must be distinguishable to it. So, there are some important questions regarding what unobtrusive technology the DOD is using to tag each one.4

Although the DOD has expressed an interest in using a revolutionary technology to communicate with people using the microwave hearing effect, as described in the footnotes of The New War chapter in Volume II, the actual technology being used to distinguish each person as an individual node has not been revealed.

For vehicles, ships, and planes, the cognitive radio can accomplish this. For people, however, this is more complex. Although an operator could probably manually aim a device equipped with tracking sensors to direct a microwave beam into a person’s head to instruct them, for a large number of nodes this would be inefficient.

Most likely, the C4ISR system itself is guiding these people. In order to direct these auditory transmissions, the system’s sensors must be able to lock on to the human brain with some type of tag. And in order to properly generate swarms, the computer must be able to distinguish each node in order to guide it properly. It must understand where each node is, where the target is, and the geography of the land.

So each person must have a feature which distinguishes them from all other nodes that the system can identify and incorporate into its swarm algorithm. Although there are probably a variety of possibilities, here are a couple.

One possibility is that the swarmers carry some type of device which gives them a unique identity and allows the system to track and direct them. The device itself may be tiny so as to be easily concealed, or, it might resemble an everyday electronic object, maybe even a phone. In the Weather Warfare chapter of Volume III we discovered that in the early 1900s Dr. Nikola Tesla said it was possible for a watch-sized device to instantaneously and clearly transmit sound to any place on the planet.

According to the Department of Defense’s June 2007 report, Global Information Grid Architectural Vision, communications technology used in the GIG will include a tiny hands-free computer that is embedded in clothing, and which is presumably undetectable to the casual observer.

Not much information is given regarding how a person receives information from the device. However, the description implies that it requires no interaction. How the device transmits the signals it receives into a person’s auditory pathway is unknown.

Metz and Kievit mentioned that a small device known as the individual position locator device (IPLD) could eventually be placed under the skin of some citizens to covertly communicate with them during evacuation procedures.

There are several possibilities regarding how these or similar concealed devices transmit instructions to the node. One is that the device somehow wirelessly redirects the transmission to the person’s cranium where the microwave hearing effect can be produced.

Or, each person uses the device for a type of natural login procedure, where they establish a connection to the GIG by placing it near their cranium, so that laser sensors from satellites can quickly and non-invasively link to their brain. Then the device is placed in clothing or not needed at all once the connection is established. They are then instructed wirelessly the entire time they’re activated.

The device could also act as both a receiver and a speaker that emits a normally audible sound. This would mean that it is placed in or near the ears, or hidden in clothing. However, with this method, unless the device were placed directly in the ear, there is the possibility that the instructions could be heard by others. Also, if it were placed in the ear, it would likely be visible unless it were very tiny.

Because of the visual and audible concealment problems associated with this last method, it is unlikely. I have noticed no such communication devices in the ears of these swarmers. And never have I overheard instructions emanating from a device hidden in their clothing.

The second possibility is that technology exists which allows the DOD to detect a feature in a person’s brain which distinguishes it from all other brains. This could be described as a type of brainprint, where each person’s brain is somehow (probably remotely) calibrated to the system. Once calibration takes place, the node is distinguishable from all other human nodes. It can be identified, located, alerted, and guided in real-time by the system.

The DOD mentioned a method of linking people to the GIG using human-computer interaction (HCI), which it describes as a highly advanced system consisting of sensory channels and cognitive capabilities that are fused with communications systems, allowing people to interact with the GIG in a natural way.

Few details are given pertaining to how this HCI connection works. Judging from the description, however, it is probably a covert method for the DOD to send instructions to human nodes connected to the GIG, after a quick and simple login procedure.

As for node alerting, one possibility is that the system sends out a signal to a general area to locate a particular node. Once it is located, a type of natural login occurs, where it becomes activated. The human node may or may not have a choice in this process.

Or, they receive an alert by cell phone or other electronic means letting them know they need to connect at a particular time. After they’re alerted, they establish a quick connection to the GIG. In addition, some people are probably on a connection schedule, where they know what days and times they’ll be alerted, as part of the global civil defense network. Then, for a period of time, they are instructed by the system to perform their activities.

Regardless of how they have accomplished it, they’re using a covert communications system which can distinguish each individual human node and send it personalized instructions allowing for swarms that are perfectly synchronized.

Electronic Recruitment

Although official documentation states that a major portion of the civilian population will be recruited as surrogate forces, this does not necessarily mean their collaboration is voluntary. The non-verbal communication exhibited by some of these people leads me to conclude that a portion of them are definitely stalking and harassing people against their will.

Secrecy is of utmost importance when using informants. These unofficial collaborators are never allowed to reveal their connection to the state. What better way to achieve this than to have no visible contact with them?

It is likely that some people are being selected, terrorized, trained, and guided entirely by electronic methods. They are somehow identified by one of the mechanisms built into our society, whether in the workplace, schools, or other environments, and are then placed under surveillance and connected to this system.

The ones who recognize what’s happening and try to resist soon realize that the societal institutions which exist to protect them from such things offer no help. Furthermore, social norms and fears of being labeled mentally-ill, which are the result of propaganda generated by the financial elite’s institutions, prevent them from coming forward.

Some end up committing suicide. Others are antagonized to the point where they become infuriated and act violently, which lands them in prison or a mental hospital. Some are given a choice to submit, after which they are directed to persecute others that are being attacked.

Spoofed Signals

I’ve frequently noticed that TV and radio programs, as well as pre-recorded talk shows on portable MP3 players, comment on my ongoing activities in real-time and reference recent themes, and that this continues throughout the entire broadcast, including commercial advertisements.

For instance, if I move left then the word left is mentioned, if I open up a cabinet the word open or a similar one is sent, and if I’m about to walk up a flight of stairs then words such as up and related terms are transmitted. Turning on light switches may result in words such as on, light, bright, dark, etc. If I accidentally bump a limb against an object, then words such as hospital and injury are sent.

These comments are consistently synchronized with any basic limb movements, as well as the mere decision to make a movement. Also, if I’m looking at an object, then it or a variation of it is referenced. This appears to be another AugCog stimulus output.

These are pacing techniques used in hypnosis. It seems to be an advanced way of communicating with the enemy by pacing them through cycles of observation, as explained by the RAND Corporation in the Psychological Operations chapter of Volume III.

Exactly how they’re merging these key phrases into programs that supposedly originate from mass media outlets which reach thousands of people is open to interpretation. But here are some possibilities. One is that these phrases are actually woven into an otherwise authentic program by the actual hosts, guests, and advertisers, who are connected to the intelligence cycle, and periodically receive instructions from the DOD.

Setting aside the sophisticated surveillance technology needed to transmit the phrases and perfectly synchronize them with the TA’s (target audience’s) activities, the military does use this method, according to its documentation. If a multitude of TAs exist in an area, communicating with each by embedding phrases into a broadcast, while synchronizing them perfectly with each TA’s movements, would be inefficient. However, using these announcers to speak certain phrases that are designed to be recognized by the TA would be simple enough.

These TV and radio programs may also be custom-spoofed for each TA using a type of advanced digital morphing technology that is linked to the C4ISR system. With this method there are also a couple of possibilities. One is that the program is partially spoofed, with fragments of it seamlessly morphed in real-time with key phrases pertaining to the TA’s activities, resulting from ongoing surveillance.

For both live and pre-recorded broadcasts, this method may result in a slight delay as the actual broadcast/file is routed through the system, fused with sensor results, morphed, and then outputted as a spoofed signal to whatever electronic device is being used.

Another possibility is that the entire TV or radio program is spoofed in real-time with morphing technology, which creates an entirely fake audio or audiovisual news broadcast, that, in addition to delivering real news, comments on the TA’s activity as a result of persistent surveillance.

Obviously, any spoofing method of this kind which is connected to the intelligence cycle and transmits messages into people’s homes requires a highly sophisticated production facility, such as one used by the 4th POG. Although not much detailed information has been given on the military’s use of digital morphing, as we’ve discovered, it has been announced that it will be used for PsyOp.

In their 1994 US Army War College article, Metz and Kievit provided a hypothetical scenario where an electronic barrier was placed around an entire country by secretly redirecting all of its communications through a national security filter at Fort Meade. In this scenario they also revealed how morphing technology could be used to produce computer-generated spoofed broadcasts to deceive insurgents.

Because the target is electronically isolated, under constant surveillance, and every single channel of communication that they rely on for accurate information is interfered with, their efforts to confirm that these signals are spoofed by checking a source may be futile.

Furthermore, because PsyOp units establish contracts with local media outlets in the AO, if a TA attempted to verify the contents of a broadcast, for instance, by visiting the facility, the media outlet itself would provide cover for the operation.

However, determining exactly how the DOD is accomplishing this is mostly irrelevant. Once again, it has been declared that PsyOp is being used on citizens, and that those under surveillance will be made aware that they are being watched. I refer to these programs as PsyOp Radio and PsyOp TV.

The internet spoofing that I’ve noticed appears to use the same observation and feedback mechanism. While technically there must be more to it than this, basically the system generates key words pertaining to an event and sends them to a spoofing program which inserts them into pages requested by the TI, seconds after the event occurs.
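A bare-bones sketch of the insertion step I am describing: keywords derived from a just-observed event are spliced into the page a TI has requested. The function and the page markup are illustrative assumptions, not a description of any known tool.

    # Sketch: splice event-derived keywords into the first paragraph of a requested page.

    import re

    def insert_keywords(page_html, keywords):
        """Insert the keyword phrase at the start of the first paragraph of the page."""
        phrase = " ".join(keywords)
        return re.sub(r"<p>", f"<p>{phrase}. ", page_html, count=1)

    page = "<html><body><p>Today's headlines...</p></body></html>"
    print(insert_keywords(page, ["injury", "hospital"]))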

Summary

Simplified, this system is an automated, wireless surveillance and feedback loop consisting of a network of sensors, nodes, and directed-energy weapons, which is controlled by artificial intelligence (AI). It is an automated, covert, efficient way of handling informants, guiding swarms, and attacking targets.

In the past, when regimes used informants to enforce their rule, technological limitations allowed targeted people more options when dealing with attacks. It also gave them more time to recover in between attacks. For instance, there were delays between the instant that a situation changed, the decisionmaking, and the transmission of new instructions to the informant. Depending on the technology available during that time period, these delays may have been days, minutes, or seconds.

This is no longer the case. The entire process, from surveillance, to decisionmaking, to transmission, is automated. Informants are now connected to this system and guided the entire time that they’re in the presence of the TI. The attacks are not limited by the mental capacity of the informant. Furthermore, the connection mechanism is robust; it works indoors and outdoors, and there appears to be little disconnection.

Just as any progressing technology is continually applied to existing systems, so too have recent advancements in human-computer interface technology revolutionized the practice of handling informant networks. The DOD has the technology to hijack a conversation. The destructive potential for such technology is limitless. It can be used to misguide and attack individuals or groups in a traceless manner which seems completely natural.

Functioning as a weapon, this automated system closes an electronic feedback loop around a targeted individual, attacking them relentlessly night and day with painful stimuli based on their ongoing activities, transmitted through a variety of channels.

Sources

Endnotes

1 The book, Augmented Cognition: A Practitioner's Guide, by Dylan D. Schmorrow and Kay M. Stanney, says that the computer can detect user errors before they are even aware that they've been made, although it doesn't elaborate on what this involves.

2 Research is being conducted to determine what an operator's functional state will be in the future. See Wired, Pentagon's Mind-Reading Computers Replicate, March 19, 2008, and Augmented Cognition: A Practitioner's Guide. The book Augmented Cognition mentions that it can currently predict an operator's future performance, but doesn't provide details.

3 Although books such as, Foundations of Augmented Cognition, by Dylan D. Schmorrow and Leah M. Reeves, and Augmented Cognition: A Practitioner's Guide, mention that the environment can be changed, they provide little information on exactly how.

4 Some other important questions include the following: What are the locations of the devices which emit the transmissions that guide the nodes? Are they space-based? Do the signals originate from the same weapons platforms that are used to attack people? What is the DOD using to tag its targets? It appears that targets and informants are connected to the same system and receive auditory stimuli. Possibly, the same tagging method that is used to distinguish each node for auditory transmissions is used to track TIs. Also, how long have people been connected to this system? How long have these computer-generated swarms been occurring?