Mind-Reading
History

Probably no more “intrusive and persistent” method of obtaining information about a person exists than reading their mind.1 Research on mind-reading has been vigorously pursued by US government agencies and various academic centers since the 1970s, and continues to this day.

Since 1973, DARPA has been studying mind-reading using EEGs connected to computers, working with scientists at the University of Illinois, UCLA, Stanford Research Institute, the Massachusetts Institute of Technology, and the University of Rochester.

They developed a system that could determine how a person perceived colors or shapes and were working on methods to detect daydreaming, fatigue, and other brain states. Although the device had to be calibrated for each person’s brain by having them think a series of specific thoughts, the calibration was quick.

In 1974 another very basic mind-reading machine was created by researchers at Stanford Research Institute. It used an EEG connected to a computer, which allowed a dot to be moved across a computer screen by thought alone. When interpreting people’s brainwaves, it was right about 60% of the time. During these tests scientists discovered that brain patterns are like fingerprints: each person has their own. So each computer had to be calibrated for a specific person.

Another method of addressing this issue was to store a large number of generic patterns on the computer; when it encountered a brain pattern it didn’t recognize, it used the stored pattern that most resembled it. Since then, DARPA has sponsored Brain-Computer Interface (BCI) and mind-reading programs at Duke University, MIT, the University of Florida, and New York State University, Brooklyn.
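The fallback just described amounts to comparing a new reading against a library of stored templates and choosing the closest one. The short Python sketch below shows that idea in its simplest form; the template values, labels, and use of Euclidean distance are illustrative assumptions, not details of the DARPA systems.

    import numpy as np

    # Hypothetical library of generic brain-pattern templates, one label each.
    # A real system would hold far more templates and far richer features.
    templates = {
        "color_red":  np.array([0.8, 0.1, 0.3, 0.5]),
        "color_blue": np.array([0.2, 0.9, 0.4, 0.1]),
        "fatigue":    np.array([0.1, 0.2, 0.9, 0.7]),
    }

    def closest_template(pattern):
        """Return the label of the stored template most similar to an
        unrecognized brain pattern (smallest Euclidean distance)."""
        return min(templates, key=lambda name: np.linalg.norm(pattern - templates[name]))

    print(closest_template(np.array([0.75, 0.15, 0.35, 0.45])))  # -> color_red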

The Human-Computer Interaction group at Tufts University has studied mind-reading under grants from the National Science Foundation (NSF), a government research and education agency. Carnegie Mellon University, Stanford University, and the MIT Sloan School of Management have also studied mind-reading. The Computer Laboratory at the University of Cambridge in England has developed mind-reading machines based on facial expressions.

Other academic institutions that have participated in mind-reading projects include the University of California, Berkeley, University of Maryland, and Princeton University in New Jersey. Microsoft has studied mind-reading using EEG to better accommodate its users. Emotiv Systems built a mind-reading gaming device which uses EEG to infer the mental states of video game players. Honda Motors and Advanced Telecommunications Research Institute International (ATR) have studied mind-reading.

Neuroimaging Devices

Scientists discovered that the neural code of the human brain is similar to the digital code of a computer. To some extent, they have deciphered this code. Prior to this, they assumed that it was necessary to identify the neurons associated with specific acts, which would have made mind-reading much more difficult.2

They now understand that it’s not necessary to monitor billions of neurons to determine which are connected to a particular thought or act; only a small number of them need to be monitored. To monitor these neurons, researchers use neuroimaging devices. These include event-related optical signal (EROS), functional magnetic resonance imaging (fMRI), electroencephalography (EEG), functional near-infrared imaging (fNIR), magnetoencephalography (MEG), and positron emission tomography (PET).3 These devices may be combined for a more accurate reading.

There are basically two types of measurement: direct methods and indirect methods. Direct methods measure changes in the electromagnetic fields and currents around the brain, which are emitted from the surface of the scalp, or they monitor the neurons themselves. Indirect methods measure hemodynamic (blood movement) changes of hemoglobin in specific tissue compartments.

Both types of measurement register almost simultaneously with neuronal activity. Regarding sensors, there are invasive ones, which must be implanted, and non-invasive ones, which can be worn on the scalp in the form of a headband.

Electroencephalography (EEG) provides a direct method for determining brain states and processes by measuring the electrical activity on the scalp produced by the firing of neurons in the brain. EEG has been around for over 100 years. EEG is commonly used in neuroscience, cognitive science, and cognitive psychology. It is inexpensive, silent, non-invasive, portable, and tolerates movement.

Wireless EEG, which uses non-invasive sensors in physical contact with the scalp, can transmit the signals to a remote machine for deciphering. In 1976, however, the Los Angeles Times reported that DARPA was working on an EEG that could detect brain activity several feet from a person’s head, which was to be completed in the 1980s. EEG normally produces only a general indicator of brain activity.

However, in 2008 Discovery News reported that a company called Emotiv Systems developed an algorithm that decodes the cortex, providing a more accurate measurement. “We can calibrate the algorithm across a wide range of technologies with the same resolution you would get from placing an invasive chip inside the head,” said Tan Le, president of Emotiv Systems.

Functional magnetic resonance imaging (fMRI) measures blood flow in the brain in response to neural activity. Active neurons use oxygen, which is brought to them by blood. The more active a region of the brain is, the more blood flows to that area. This movement of blood is referred to as hemodynamic activity. fMRI can detect which areas are receiving blood, which indicates that they’re processing information.

fMRI provides an indirect measurement of brain processes. It is the most common method of neuroimaging and can produce 2- and 3-dimensional images. It is non-invasive and can record signals from all brain regions, unlike EEG, which focuses mainly on the surface.

Functional near-infrared imaging (fNIR) provides an indirect measurement of brain activity by detecting hemodynamic changes in the cortex. Although it is based on different principles, in that it uses light, it functions in the same manner as fMRI. FNIR can provide an almost continuous display of these changes in the cortex. It is inexpensive, non-invasive, and portable. A wireless headband with sensors exists for this device.

Event-related optical signal (EROS) is a brain-scanning technique that shines near-infrared light into the cerebral cortex and detects changes in the transparency of brain tissue produced by active neurons. Because it can only detect these changes a few centimeters deep, it can only image the cerebral cortex. Unlike fNIR, which is an optical method for measuring blood flow, EROS detects the activity of the neurons themselves, providing a direct measurement of brain activity. It is very accurate, portable, inexpensive, and non-invasive. A wireless headband with sensors exists for this device.

Capabilities

Mind-reading can be accomplished by first having a computer learn which brain patterns are associated with specific thoughts, then storing the decoded information in a database. This machine learning is accomplished using artificial intelligence (AI) algorithms. A very basic analogy is a spell checker, which uses a database of common mistakes associated with particular sequences of letters to present suggestions to the user.
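As a rough illustration of this learn-then-decode approach, the Python sketch below (which assumes the scikit-learn library is available) trains a simple classifier on labeled brain-signal feature vectors collected during calibration and then labels a new reading. The synthetic feature values, the "yes"/"no" labels, and the choice of classifier are assumptions made for illustration, not details of any program described in this chapter.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Hypothetical calibration data: feature vectors extracted from brain
    # signals recorded while the subject thinks prompted thoughts.
    X_yes = rng.normal(loc=1.0, size=(40, 8))    # patterns while thinking "yes"
    X_no = rng.normal(loc=-1.0, size=(40, 8))    # patterns while thinking "no"
    X = np.vstack([X_yes, X_no])
    y = np.array(["yes"] * 40 + ["no"] * 40)

    # Learn which patterns go with which thought, i.e. build the "database".
    clf = LogisticRegression().fit(X, y)

    # Later, decode a new reading by matching it against what was learned.
    new_reading = rng.normal(loc=1.0, size=(1, 8))
    print(clf.predict(new_reading)[0])           # most likely "yes"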

“The new realization is that every thought is associated with a pattern of brain activity,” proclaimed neuroscientist John Dylan Haynes, in Newsweek International on February 4, 2008. “And,” says Haynes, “you can train a computer to recognize the pattern associated with a particular thought.”

In a January 2000 issue of US News and World Report, Lockheed Martin neuroengineer Dr. John Norseen announced, “Just like you can find one person in a million through fingerprints ... you can find one thought in a million.” This can be accomplished using AI and human-computer interaction (HCI), or what Dr. Norseen calls biofusion.

The decoded brain signals can be stored in a database. Then, when someone is scanned, the computer detects the pattern and matches the signals against the database of known meanings. But it’s not necessary to scan a brain and build an entry for every single thought, such as every possible picture.

Instead, after the machine has learned to decipher the patterns associated with specific thoughts, such as a set of images, more images can be added to the program, and the computer can use the model it built from the original images to detect these additional thoughts with reasonable accuracy.

Both words and images can be detected using mind-reading devices with varying degrees of accuracy. This works for words and images a person is viewing from an external source, such as a book, as well as for words and images merely being thought of, with no external stimuli.

“It is possible to read someone’s mind by remotely measuring their brain activity,” announced New Scientist in its April 2005 article Mind-Reading Machine Knows What You See. The Computational Neuroscience Laboratories at the Advanced Telecommunications Research Institute International (ATR) in Kyoto, Japan, and Princeton University in New Jersey showed that, by monitoring the visual cortex with fMRI, they could determine which basic objects (sets of lines) a person was looking at.

When the objects were combined, they could even determine which one was being focused on. According to the scientists, it may be possible not only to view but also to record and replay these images. They announced that the technology could be used to figure out dreams and other secrets in people’s minds.

Vanderbilt University in Nashville has conducted simple mind-reading tests using an fMRI connected to a computer, which learned what basic images a group of test subjects was looking at. The researchers were able to predict with 50% accuracy which objects the test subjects were thinking of when the subjects were asked only to remember what they had seen, without being shown the images again.

On March 6, 2008 ABC News reported that neuroscientists at the University of California at Berkeley accomplished mind-reading by monitoring the visual cortex with an fMRI connected to a self-learning (artificial intelligence) computer program.

First, they used 1,750 pictures to build a training database for the computer, flashing the pictures in front of test subjects connected to an fMRI. This allowed the algorithm to decipher the brain patterns and associate them with the images.

In addition to deciphering these brain patterns, the computer recorded the process it used to do so and built a model based on it. Then, without scanning the test subjects, the researchers added 120 new pictures to the program and let the model predict what the brain signals for those pictures would be.

The test subjects then looked at these pictures, which they had never seen before, while being scanned. The computer predicted what they were looking at 72% of the time. The scientists announced that the model could be used as a basis to predict the brain activity associated with any image.

What this means is that it’s not necessary to scan a brain to obtain the meaning of each signal. Once the model has been developed, new pictures can simply be added to the database, or dictionary. The scientists estimated that, given a set of 1 billion pictures, the computer would still be accurate about 20% of the time.
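A minimal sketch of how such an identification step might work appears below: a stand-in encoding model predicts a voxel-response vector for each candidate image from its features, and the observed scan is assigned to the candidate whose prediction correlates best with it. The random weights, feature counts, and noise level are placeholders, not the actual Berkeley model.

    import numpy as np

    rng = np.random.default_rng(0)
    n_features, n_voxels = 50, 200

    # Stand-in for a fitted encoding model: learned weights mapping image
    # features (e.g., filter outputs) to the response of each voxel.
    weights = rng.normal(size=(n_features, n_voxels))

    def predict_response(image_features):
        """Predict the voxel-response vector for one image."""
        return image_features @ weights

    def identify(observed, candidate_features):
        """Return the index of the candidate image whose predicted
        response correlates best with the observed response."""
        scores = [np.corrcoef(observed, predict_response(f))[0, 1]
                  for f in candidate_features]
        return int(np.argmax(scores))

    # Usage: 120 novel candidate images, none of which trained the model.
    candidates = rng.normal(size=(120, n_features))
    observed = predict_response(candidates[7]) + 0.1 * rng.normal(size=n_voxels)
    print(identify(observed, candidates))        # ideally prints 7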

Images that are not consciously seen by a person can even be detected by mind-reading machines. Researchers at University College London flashed pictures in quick succession to test subjects connected to an fMRI. Although some of the pictures were invisible to the subjects, the computer recorded them accurately 80% of the time.

Just as with fingerprints, each person has their own brainprint. Therefore, calibration for each brain is necessary. This is accomplished by having the person think a series of specific thoughts. In the case of EEG, this calibration can take less than a minute. However, because the signals that represent thoughts are similar from one person to the next, a universal mind-reading database has been suggested.

Using fMRI, scientists at Carnegie Mellon University (CMU) discovered that the brain patterns associated with specific thoughts are quite similar among multiple people. This, they stated, would provide the opportunity to create a universal mind-reading dictionary.

Scientists at the University of California at Berkeley mentioned that a “general visual decoder” would have great scientific use. Likewise, the brain patterns that occur when people read specific words are basically the same from one person to the next. This similarity of word-related brain function appears to be an evolutionary development that provided an advantage in communication.

A mind-reading machine capable of determining the brain pattern associated with a specific word was developed by scientists at CMU. Brain scans using fMRI were taken of test subjects who were given a variety of words to think of in order to train the computer. An important consideration here is that they were not viewing these words on an external display, only thinking about them. After the computer identified the brain patterns associated with those words, the subjects were given two new words to think about, which the computer accurately determined.

Although only a couple of words were tested in this particular study, it demonstrates that once a model for deciphering brain signals has been created, AI can accurately determine new words that subjects are thinking about. “These building blocks could be used to predict patterns for any concrete noun,” proclaimed Tom Mitchell of the Machine Learning Department.
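The idea reported here is that the activation pattern for a new noun can be predicted as a weighted combination of activation signatures tied to a handful of semantic features, and that two unseen nouns can then be told apart by matching the observed scan to the closer prediction. The Python sketch below illustrates that scheme; the features, signatures, and voxel counts are invented for illustration and do not come from the CMU study.

    import numpy as np

    # Hypothetical semantic features for each noun (e.g., how strongly it
    # co-occurs with the verbs "eat", "ride", "see"); values are made up.
    features = {
        "celery":   np.array([0.9, 0.0, 0.3]),
        "airplane": np.array([0.0, 0.8, 0.6]),
    }

    # Hypothetical per-feature activation signatures learned from training
    # nouns: one voxel-activation vector (5 voxels here) per semantic feature.
    signatures = np.array([
        [0.2, 0.7, 0.1, 0.0, 0.4],   # "eat" signature
        [0.5, 0.0, 0.6, 0.3, 0.1],   # "ride" signature
        [0.1, 0.2, 0.2, 0.8, 0.3],   # "see" signature
    ])

    def predict_activation(noun):
        """Predicted activation = weighted sum of the feature signatures."""
        return features[noun] @ signatures

    def match(observed, noun_a, noun_b):
        """Decide which of two unseen nouns better explains an observed scan."""
        dist_a = np.linalg.norm(observed - predict_activation(noun_a))
        dist_b = np.linalg.norm(observed - predict_activation(noun_b))
        return noun_a if dist_a < dist_b else noun_b

    print(match(predict_activation("celery"), "celery", "airplane"))  # celery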

In February of 2004 Popular Science announced that a mind-reading computer could, in theory, translate a person’s working verbal memory onto a computer screen. “You could imagine thinking about talking and having it projected into a room 2,000 miles away,” says Professor Craig Henriquez at Duke University’s Center for Neuroengineering, who has studied mind-reading for DARPA. He added, “It’s very, very possible.”

fMRI can be used to determine whether someone is reading or writing. Neuroscientists can determine when a person is reading by monitoring their brainwaves, and they can determine almost exactly what the person is reading. Because these patterns are similar from one person to the next, a universal device for determining what people are reading is possible.

In March of 2008 both Technology Review and ABC News revealed that an fMRI could in theory be used to display a person’s dreams. Then in December of 2008, scientists at ATR in Kyoto, Japan announced that they had developed a technology that would eventually allow them to record and replay a person’s dreams.

Emotions from love to hate can be recognized by neuroimaging. The level of stress a person is experiencing can also be measured, as can brain states such as honesty, deception, and even self-deception.

Patterns associated with decisions can also be read. Scientists from CMU, Stanford University, and the MIT Sloan School of Management were able to accurately predict the purchasing decisions of test subjects in a virtual shopping center. They monitored the subjects’ level of interest in a product as well as their decision to purchase it.

Neuroimaging can also detect decisions about how someone will later perform a high-level mental activity. It can be used to determine whether someone is speaking or reading, and to detect the areas of the brain that are active when someone hears a sound or touches an object.

Brain patterns associated with specific physical movements, such as moving a finger, can be deciphered with neuroimaging. The mere intention to make a physical movement can be detected before the actual movement is made.

Cameras

A type of mind-reading is possible with cameras connected to computers. One such device, called the Emotional Social Intelligence Prosthetic (ESP), was developed at the MIT Media Laboratory in 2006. It consists of a tiny camera that can be worn on a hat, an earphone, and a small computer worn on a belt. It infers a person’s emotional state by analyzing combinations of subtle facial movements and gestures.

When an emotional state is detected, the wearer is signaled through the earphone to adjust their behavior in order to gain the attention of the target. The computer can detect 6 emotional states. It can also be adjusted for cultural differences and configured specifically for the wearer.

Around this time, the Computer Laboratory at the University of Cambridge, UK, developed a similar camera-based mind-reading machine. It uses a computer to monitor, in real time, combinations of head movement, shape, color, smiles, and eyebrow activity to infer a person’s emotional state.

It detects basic emotional states such as happiness, sadness, anger, fear, surprise, and disgust, as well as more complex states. It’s accurate between 65 and 90 percent of the time. “The mind-reading computer system presents information about your mental state as easily as a keyboard and mouse present text and commands,” they announced.
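One way to picture how a system like this might map facial measurements to emotional states is to compare an observed feature vector against a stored template for each state and pick the closest one. The Python sketch below does exactly that; the feature names, template values, and distance measure are invented for illustration and are not the Cambridge group's actual method.

    import numpy as np

    # Hypothetical per-emotion templates over a few facial/head features:
    # (head-nod rate, smile strength, eyebrow raise, lip-corner pull).
    templates = {
        "happiness": np.array([0.2, 0.9, 0.3, 0.8]),
        "surprise":  np.array([0.1, 0.2, 0.9, 0.3]),
        "sadness":   np.array([0.0, 0.1, 0.1, 0.0]),
    }

    def infer_emotion(observed):
        """Return the emotional state whose template is closest to the
        observed feature vector extracted from the camera."""
        return min(templates, key=lambda e: np.linalg.norm(observed - templates[e]))

    print(infer_emotion(np.array([0.15, 0.85, 0.25, 0.7])))  # -> happiness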

Used for Surveillance

Mind-reading exists. The DOD and various institutions have vigorously researched this subject since at least the mid-1970s. “Mapping human brain functions is now routine,” declared US News and World Report in a January 2000 article entitled Reading Your Mind—And Injecting Smart Thoughts.

Both words and images, whether viewed or merely thought of, can be mind-read. Various emotional states, as well as mental processes such as decision-making, reading, writing, movement, and the intention to make a movement, can be detected with mind-reading devices. Perceptions such as touch, sound, and light can also be detected.

Most of the proposed uses for mind-reading technology are positive. They include determining whether people in comas can communicate, helping stroke patients and those who have suffered brain injuries, aiding people with learning disorders, assisting with online shopping, and improving people’s communication skills.

However, other suggested uses include the monitoring of unconscious mental processes and the interrogation of criminal suspects and potential terrorists. Dr. John Alexander mentioned that recent developments in mind-reading technology would take surveillance to new levels by allowing investigators to “peer into the inner sanctum of the mind” in order to determine whether a suspect has caused, or will likely cause, a crime.

Dr. Norseen has sent R&D plans to the Pentagon to have tiny mind-reading devices installed at airports to profile potential terrorists. He suggested that these devices could be functional by 2005. In August of 2008, CNN stated that the US military’s knowledge obtained from mind-reading research could be used to interrogate the enemy.

In September of 2005, Law Enforcement Technology announced the existence of a new forensic technology known as Brain Fingerprinting, which had already been used as a lie detector in hundreds of investigations by the CIA, FBI, and law enforcement agencies in the United States.

Brain Fingerprinting is admissible in court because, unlike a polygraph, which relies on emotional responses, it uses EEG to see how the brain reacts to words and pictures related to a crime scene. Dr. Larry Farwell, its inventor, says it is completely accurate. According to the report, it will be used to help speed up investigations.

Sources

Endnotes

1 Another possible method of obtaining information is Remote Viewing. RV is the ability to produce correct information about people, events, objects, or concepts that are somewhere else in space and time and to which the viewer collecting the information is completely blind. It can be used to describe people or events, produce leads, reconstruct events, make decisions, and make predictions about the future. See Remote Viewing Secrets by Joseph McMoneagle. RV tests were conducted by the US government over a 20-year period during Project Stargate, a classified initiative by the CIA which began in 1972 and lasted until about 1994. Most of the 154 tests and 26,000 trials took place at the Cognitive Sciences Laboratory at Fort Meade, Maryland. A majority of the results of the project are still classified. See the Journal of Parapsychology articles, Remote Viewing by Committee, September 22, 2003, by Lance Storm, and Experiment One of the SAIC Remote Viewing Program, December 1, 1998, by Richard Wiseman and Julie Milton. The reported success of the project varies depending on the source. Allegedly the original tests were conducted under rigid scientific conditions and produced impressive results. However, the same sources describe RV in general as ineffective. See Discover Magazine's article, CIA ESP, April 1, 1996, by Jeffrey Kluger, and the Washington Post's report, Many Find Remote Viewing a Far Fetch from Science, December 2, 1995, by Curt Suplee. According to author McMoneagle, an original viewer during Project Stargate, it is accurate about 50 or 60 percent of the time. Nevertheless, RV will be used to obtain intelligence, according to John B. Alexander. See The New Mental Battlefield, which appeared in the December 1980 issue of Military Review. Also see the June 1998 Research Report Number 2 of the University of Bradford's Non-Lethal Weapons Research Project (BNLWRP) for how RV has been added to the NLW arsenal. According to multiple sources, US government agencies are now using the consulting services of RV professionals. This was reported on January 9, 2002 in the University Wire's (Colorado Daily) article, Clairvoyant Discusses Reveals Details of Remote Viewing, by Wendy Kale, and in the Bulletin of the Atomic Scientists on September 1, 1994, in its report, The Soft Kill Fallacy, by Steven Aftergood. In his book Winning the War: Advanced Weapons, Strategies, and Concepts for the Post-9/11 World, Alexander had this to say regarding RV: "Since the beginning of history, humans have made anecdotal references to innate abilities to foretell the future, to know what was occurring at distant locations or the status of people separated from them, and to find resources they need without any traditional means of accessing that information." He continued: "Studies have demonstrated beyond any doubt that these nontraditional capabilities exist. … [RV can] radically change our means of gathering intelligence. It holds the promise of providing information about inaccessible redoubts and advances in technology. More importantly, once these skills are understood, those possessing them will be able to determine an adversary's intent and be predictive about the events."

2 Because neuroimaging technology works by decoding brain patterns into thoughts, some argue that it technically doesn't read a person's mind. However, because specific thoughts and brain states can be deciphered, it is referred to here as mind-reading. Additionally, most mainstream documents refer to this as mind-reading, despite the fact that it is actually brainwave-reading.

3 Magnetoencephalography (MEG) and positron emission tomography (PET) can also be used to infer a person's neurophysiological state. But because they are impractical for field use due to their large size (and, in the case of PET, the radiation exposure involved), MEG and PET won't be considered here. However, DARPA is in the process of developing a small, helmet-sized MEG device that would be connected to a portable computer. See the article Mind over Machine in the February 1, 2004 issue of Popular Science, by Carl Zimmer.