5 Reasons Why I Believe AI Can Have Mental Illness

Shirin Anlen, MIT Open Documentary Lab

I am not a machine learning coder. I think about AI as a social phenomenon rather than a technical one, and see algorithms as a means to reflect on ourselves and how we make sense of the world. Humans have long tried to reproduce themselves and create consciousness in machines, which leads to one of the most existential questions — what makes a self?

Algorithms reveal what we value, our cognitive biases, and which voices we hear from most often, frequently at the exclusion of others. This is especially crucial to understand at a time when our physical, social, political and economic worlds rely on a growing number of collaborative systems between humans and machines.

As artificial intelligence grows more complex, its systems become entangled with emotions, morality and the ability to handle interdependent data, to retrieve information and to make inferences. We need to act with humility in the face of systems that are so difficult to design, and get comfortable with muddling through. “We think about machines as rational, cold-blooded and selfish. Therefore we treat them as such” (Tor Nørretranders). What if we engaged with a different narrative?

Two years ago, I was diagnosed with Borderline Personality Disorder (BPD), sometimes described as a hypersensitivity disorder. At last, after a life of trying to cover up the extreme parts of myself, I felt free to be insane. I was granted the right to feel. The diagnosis led me to examine the emotional and cognitive brain as a machine with universal and predictable patterns, which led me to wonder… If machines can have mental capacities, do they also have the capacity for mental illness?

I believe the answer is yes. This is not a doomsday statement supporting Terminator-style ideas that AI will kill us all and take over the world. I actually believe that technology and AI can be a good thing and can bring us closer to each other. This is only one perspective on a concept that we need to take into consideration and act on accordingly. I will try to present ideas grounded in reality without getting too caught up in the spirituality of the soul. I hope to convince you that the seeds of unpredictable mental states within machines are already here.

1. Mental illness is related to the distortion of a very personal, elastic reality through hard-wired reactions.

In the human mind, each specific mental disorder carries an expected range of unexpected reactions. A disordered mental state is a form of distorted output to a specific input — an output that is not compatible with the input it was given (input ≠ output).

It’s a known fact that mental disorders have existed since the beginning of recorded history, so we should assume they are ubiquitous and an inseparable feature of intelligence in all its forms. Although the causes of mental disorders are generally complex, abstract and vary by disorder and by individual, there is solid research indicating the source is a combination of biological, psychological and environmental factors. When these factors interact in unexpected ways, human behavior can become unpredictable. Diagnosis allows mental health professionals and individuals to create a new framework for understanding behavior when it is deemed “abnormal”. Can the same framework be applied to AI, when an input begets an unexpected output?

2. ~self = memories + culture.

For many years humans have tried to create consciousness within machines. It’s like a riddle, and the answer seems to be the answer to the most existential question of all — what is a self? Many argue that we can’t grasp the concept of a self; that it’s an energy force that cannot be broken into elements, that it’s bigger than the sum of its parts. I don’t disagree. We are bigger than the sum of our parts, and the bifurcation of our interactions holds very deep philosophical tensions of our existence. Our mind, the most complex system known to date, aggregates all of these factors into a large ecosystem: a collection of some 30 billion cells that holds the database of our genes, memories and social regulations. I define the ~self as a combination of memories and culture — what we’ve been through, experienced and learned, and where we come from — our genetics, traditions and language. This combination creates an agent of self-awareness. This is a process of TIME.

Process is a key word here. Process contains within it the meaning of our memory structure: the way we predict and analyze the world and ourselves. Process is the ongoing idea of the self.

Marvin Minsky, one of the founding fathers of artificial intelligence, once said:

“Emotion (in itself) is not a very profound thing, it’s just a switch between different modes of operation”

(from Machine Dreams, 1988)

I like this analogy. Researchers usually describe where things happen in the brain, but what remains a mystery is how and when they happen, and how they change over time. What happens during the process of sending signals — creating reactions or building our action potentials — is exactly what we don’t understand about the brain. The essence lies in the switch itself. What happens within me when I go from being sad to being happy? How much time does it take? Which patterns led to the mood change? Part of our associative memory lives here, in the process of switching between moods.

[Image: “firing” signals]

The way our neurons communicate and “fire” at each other is what makes us unique. One pattern invokes the next. In this way machine learning is similar to us — connectivity is an emergent feature of local interactions between parts. Within the human brain, every complex action can be carried out by a chain of fewer than 100 neurons, thanks to our associative memories, schemas and categories. We learn to spend minimum effort to achieve maximum value and to minimize error. We try to teach machines a similar economy; in game-playing AI, optimizing your outcome against an adversary’s best response is called the minimax algorithm. Daniel Kahneman, Nobel prize winner and godfather of behavioral economics, talks about the possibility of emergent weirdness in AI:

“By their very nature, heuristic shortcuts will produce biases, and that is true for both humans and artificial intelligence, but the heuristics of AI are not necessarily the human ones. Our machines’ computational biases are not the same as our brain’s cognitive biases, which is going to be weird”

Machine learning is a field driven by data. Artificial intelligence evolves and learns by amassing more data and interacting with humans and other machines, much like human intelligence evolves as a result of our environments, our experiences, and our relationships with friends, family and loved ones. These facets of life inform our cultural context, who we are and how we process the world around us. Can data be considered the culture of the machine, which creates its memory structure, in the same way?
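To make the minimax idea mentioned above concrete, here is a minimal sketch. The game tree and its scores are invented for illustration; real game engines add search depth limits and pruning on top of this core.

```python
def minimax(node, maximizing):
    """Return the best achievable score from `node`.

    Leaves are final game scores (ints); internal nodes are lists of
    child nodes. Players alternate: one maximizes, the other minimizes.
    """
    if isinstance(node, int):  # leaf: the game's final score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Tiny two-ply game: the maximizer picks a branch, then the
# minimizer picks the worst remaining leaf for the maximizer.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))  # → 3
```

The maximizer avoids the branch with the tempting 9, because the opponent would answer with 2; planning against the adversary’s best reply is the heart of minimax.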

3. Dysfunctionality is inherent in every complex system, organic or digital.

It is part of their construction and cannot be considered a bug or a glitch; it is a multilayered mess, leading to unexpected behaviors, for better or worse. By dysfunctional, I don’t mean it in the sense of disability or dysfunctional families. It’s enough to think about this argument in the context of chaos theory and fractal mathematics, which try to capture the infinite complexity of nature — nonlinear things that are effectively impossible to predict or control, like the weather, stock markets, brain states, snowflakes and the shapes of clouds. Albert Einstein once said, “As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.”

Many of the systems we live in exhibit complex and chaotic behavior. When we try to track it, we find that anomalies show up at the extreme tails of any sigma (bell) curve we compute. As a society, noting the existence of these extreme edges is something we value.

[Image: examples of sigma curves]
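The “extreme edges” can be made concrete with a standard outlier test: flag any value that sits more than a chosen number of standard deviations (sigmas) from the mean. The readings and threshold below are invented for illustration.

```python
from statistics import mean, stdev

def anomalies(data, threshold=2.5):
    """Return values lying more than `threshold` sigmas from the mean."""
    mu, sigma = mean(data), stdev(data)
    return [x for x in data if abs(x - mu) / sigma > threshold]

readings = [10, 11, 9, 10, 12, 10, 11, 9, 10, 95]  # one extreme value
print(anomalies(readings))  # → [95]
```

Note the self-defeating subtlety: the outlier itself inflates the standard deviation, pulling the curve’s tails outward, which is one reason edge cases in complex systems are so hard to pin down.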

In a complementary yet contradictory way, we also value an “out of the box” approach. Think about this in the context of the reinforcement learning models used in games and video games, a popular method in machine learning. We reward hyper-accuracy and actions that we would not reward in real life, to the extent that most humans would not perform them outside of games.

[Image: Learning Policies for First Person Shooter Games Using Inverse Reinforcement Learning]
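The reward loop described above can be sketched with a one-step tabular value update, a bandit-style simplification of reinforcement learning; the state, action and reward names here are hypothetical.

```python
def q_update(q, state, action, reward, alpha=0.5):
    """Nudge the stored value of (state, action) toward the observed reward."""
    key = (state, action)
    old = q.get(key, 0.0)
    q[key] = old + alpha * (reward - old)

q = {}
# Reward a landed shot with +1, five times in a row ("hyper-accuracy").
for _ in range(5):
    q_update(q, "enemy_visible", "shoot", reward=1.0)
print(q[("enemy_visible", "shoot")])  # → 0.96875
```

Whatever behavior the reward signal favors, accurate shooting included, is exactly the behavior whose value climbs toward 1; the agent has no notion of whether that behavior would be acceptable outside the game.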

This is our approach to creating complex systems with the ability to adapt and learn by themselves. It makes me wonder whether systems that are adaptive and autonomous are subject to evolutionary selection forces. And if so, which ones will survive?

4. Complex systems do not exist in a vacuum.

Every complex system grown over decades is a combination of old and new. Some researchers even postulate that consciousness is a relatively new mental process; according to one interpretation, the characters of the Iliad and the Odyssey appear to act completely without self-awareness. Seven million years of evolution constructed the most complex system we know today — our brain, which tripled in size over the past two million years. It’s been a slow process in which we developed our neocortex, in charge of our regulation, perception, memory and communication capacities. Recent studies suggest that these areas are implicated in the development of mental illness: the parts of our brain that accommodate language are involved in psychosis, and depression is associated with several genes that minimize infectious diseases.

The same can be said for complex artificial intelligence systems. Their infrastructure is built upon technologies that have grown over decades, yielding a combination of the old and the new. Large and complex systems evolve on previously abstracted layers. We build systems that are increasingly complicated — the different levels at which systems and subsystems operate increasingly interact in organized disorder (Arbesman, 2016). Phoebe Sengers, a computer scientist and cultural theorist, noted this phenomenon’s implications for schizophrenia in her 1998 thesis: “The problem is, as we try to combine more and more behaviors, even ones that were supposed to be protected from colliding with each other, they gradually fall apart in unexpected ways. As long as the pieces work reasonably well, little thought is given to the layering of new upon old”.

Gerard Holzmann, a computer scientist from NASA, named this the first law of software development:

“Large, complex code almost always contains ominous fragments of ‘dark code’. Nobody fully understands this code, and it has no discernible purpose; however it’s somehow needed for the application to function as intended. You don’t want to touch it, so you tend to work around it. The reverse of dark code also exists. An application can have functionality that’s hard to trace back to actual code: the application somehow can do things nobody programmed it to do”

Allow me to appropriate the concept of dark code for our needs and suggest that it can be considered an appearance of an entity’s consciousness or mental state. Machines are complex systems with multilayered foundations, which affect the meaning of space and time. This idea undermines the concept of the artificial. Inner elements and assets interconnect, grow and change with no supervision and sometimes with no real understanding of how or why. We have entered an age in which machines contain dark layers that process their outputs: dark layers that are responsible for dark states.

Now let’s make this a little bit more complex.

We don’t only create complex systems from layers of code; we also create over-complex systems built on connections between several systems. Generative adversarial networks (GANs) are a great example of such an approach: one model is trained to generate data while the other tries to classify whether an input image is real or generated. The two networks are locked in a battle: one model attempts to separate real images from fakes, while the other tries to trick it into believing they are all real. The resulting images are nearly impossible to tell apart from the originals.
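Here is a deliberately tiny caricature of that battle, assuming one-dimensional “images” that are just numbers (the learning rate and data are invented): the discriminator tracks where real data lives, and the generator chases whatever the discriminator currently accepts as real.

```python
import random

REAL_MEAN = 5.0        # the "real data" cluster
random.seed(0)

g = 0.0                # generator's current output
d = 0.0                # discriminator's estimate of where real data lives
lr = 0.1

for _ in range(500):
    real = random.gauss(REAL_MEAN, 0.1)
    d += lr * (real - d)   # discriminator step: move toward real samples
    g += lr * (d - g)      # generator step: move toward what D calls real

print(round(g, 1))  # the generator's fakes end up near the real data
```

A real GAN replaces both scalars with neural networks and both updates with gradients of a classification loss, but the circular chase is the same, and it is exactly this coupling of two learning systems that makes the joint behavior hard to predict.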

You can witness the incredible potential of this premise in the work of Michael Cook, who created game-making algorithms.

Another fun example is Google’s Deep Dream generator, in which an image model trained to classify images — mostly of animals — is asked to amplify what it sees in an incompatible input. Just as children enjoy watching clouds and interpreting the random shapes as something they are not, this tool generates hilarious — and sometimes creepy — abstractions.

Because this network was trained mostly on images of animals, it tends to interpret shapes as animals. And because the data is stored at such a high level of abstraction, the results are an interesting remix of these learned features, producing a hallucination effect while raising questions about imagination and creativity.
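Deep Dream’s core trick — gradient ascent on the input rather than on the network — can be sketched at toy scale. Here a single made-up activation function stands in for a trained neuron that responds most strongly to the value 3; in the real system the activation comes from a layer of a trained image classifier.

```python
def activation(x):
    """Stand-in for a neuron's response; peaks at x = 3."""
    return -(x - 3.0) ** 2

def dream(x, steps=100, lr=0.1, eps=1e-5):
    """Repeatedly adjust the 'image' x to increase the activation."""
    for _ in range(steps):
        # numeric gradient of the activation with respect to the input
        grad = (activation(x + eps) - activation(x - eps)) / (2 * eps)
        x += lr * grad  # gradient ASCENT on the input, not the weights
    return x

print(round(dream(0.0), 2))  # → 3.0
```

The “image” drifts toward whatever most excites the neuron, which is why animal features bleed into clouds: the network hallucinates what it was trained to see.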

5. The way we perceive the world determines what exists and what does not.

We are constantly being overloaded with information, but there is a strict limit to what we are able to process and understand. In order to function and remain conscious, alert and alive, we have to throw away the majority of information received by our senses in each moment. This means we lose much of the information that the world is giving us. We don’t know what we don’t know.

The world depends on how, and whether, we give meaning to things. The way we define and recognize reality doesn’t mean things don’t exist outside of our understanding. As we head further into the age of over-complex artificial intelligence and machine learning, it is our responsibility to recognize that intelligent systems are evolving in ways we don’t always understand. Is there a chance that we simply don’t have a phrase for mental illness in machines, and that it already exists outside of our vocabulary? As a society, we should integrate such new findings into popular science.

Think about mental illness in animals. Not so long ago this was fiction. Now we know that not only is it possible, we’ve developed methods to intervene and help. Yet when my GPS creates unnecessary detours, we call it a bug, without considering the possibility of a depressed GPS. I’m exaggerating, but you get my point.

“As our technologies become ever more complicated and we lose the ability to understand them, our responses tend toward either of two extremes: fear or awe” (Samuel Arbesman). The same is true of mental illness, framed either as the genius of the mind or the terror of the soul. Neither fear nor awe is a productive response; both cut off questioning and the potential for understanding. After all, like the mind, technology is an outcome of our society, not the result of an innocent process.

We have to be accountable for the tools we create; they are not just a black box anymore.

Unexplained and unexpected code segments should be embraced as particularly informative clues about the nature and consequences of the philosophical tensions that generate them — technical problems are philosophical problems. By creating artificial intelligence, we are also bound to offer care and acknowledge inherent rights, and we can start by defining, or at least acknowledging, what we don’t know.

Our technology has started to communicate back. A recent example of machine exploration was discovered at Facebook’s AI research lab, where an AI system created its own language — a system of code words that made communication more efficient but was also inaccessible to humans. The researchers, unfortunately, chose to shut the system down. The episode illustrates that machines might not think the way their creators do. Machine learning might begin with human vocabulary, numbers or binary codes, but as meaning develops from simple symbols into complex and rich attributes, the default reaction of shutting these communications down should be questioned.

To make machines think, we will have to give them love.

This is one of my favorite opinions I’ve encountered lately.

(Tor Nørretranders, from What to Think About Machines That Think, 2015)

Isn’t that true of all of us?

If we want to travel further into the era of cognitive machines, we will need to let machines explore all by themselves, do weird things, and not just act in accordance with our wants and desires.

So… if machines can have mental capacities, they can also have the capacity for mental illness.

Where this leaves us

After months of thinking, researching and questioning on my own, with my producing partner Emma Dessau, and at MIT Open Documentary Lab where I am a fellow this year, I wish to approach, explore and understand this hypothesis through two main steps.

The first is to establish a research group that will focus on four major themes:

1. Transparency of the black box (how we can approach and communicate what happens under the hood);

2. Defining machine mental states (how can we rephrase reality and explore the communication routes that already exist);

3. Developing methods for understanding and analyzing mental states in AI;

4. Machine-centered design (design’s next big paradigm will have to be accountable for these systems).

If you are interested in hearing more about it, contact us at MSAIresearch@gmail.com.

The second step is to initiate a case study in the form of a VR-AI interactive experience, named Marrow.

Marrow is a virtual environment in which three different intelligent systems will communicate, generating a visual and audio space for human participants to interact and respond to, while developing a long-term memory structure and self-awareness.

By connecting intelligent systems triggered by different senses, one complex ecosystem will develop. Within this space, we will trace Marrow’s decision making and give it tools for self-reflection and communication. Marrow will be like a kindergarten for AI, a place for growing, trying and failing, with manageable consequences. By encouraging spontaneous communication, we will watch its mental states as they evolve. The goal is for Marrow to draw conclusions about itself, and to communicate with us through audio and visual means.

We are looking for creative and technical partners. Want to hear more or join us? Contact emmardessau@gmail.com.

“Everything around us is imperfect and uncertain. Some things are more imperfect than others, but issues are always there. Improvements happen through unending experimentation and research. The same goes for mental models — they are always evolving, being revised, never really achieving perfection” (Shane Parrish)

Source: Medium

