Dr. Susan Schneider is Founding Director of FAU’s Center for the Future Mind and co-director of the MPCR Lab at FAU’s new Gruber Sandbox, a large facility that builds AI systems drawing on neuroscience research and philosophical developments.

For the last 200,000 years of hominid evolution, our ancestors lived in an environment that was extremely dangerous and highly variable; humans had to be very good hunters, gatherers, and nomads.

That meant finding safe places every single night. Our consciousness has multiple functions, but maintaining safety and protecting the family group or the tribe was always crucial.

In contemporary terms, living in ancient Africa in dangerous times was more like living in a combat zone, like Ukraine, than like living in Beverly Hills, California. And we have to keep that in mind to understand the conditions to which we are adapted, both biologically and culturally.

When I started to think about consciousness, I imagined that dealing with the question of consciousness in depth would do much to humanize the relationships between conscious beings.

In the last ten years, I’ve wondered whether the opposite is happening. But there are always unexpected consequences of new science.

Global Workspace Theory came from Allen Newell’s cognitive architecture, which was itself inspired by humanistic goals. Most of us did not imagine the possibility of a dystopian outcome. The way things are going, I’m going to ask for a ticket on Elon Musk’s first trip to Mars! AI technology is developing much faster than human beings can cope with it. We have already physically weaponized AI, as shown in Ukraine and other parts of the world. This is bound to escalate in precision and destructiveness, in offensive as well as defensive AI.

From my point of view, the most urgent Ethical Concern these days is that we have, essentially, weaponized AI.

People are being killed in Ukraine in the tens of thousands, if not hundreds of thousands, by automated weapons essentially run by AI, weapons that learn new things via Deep Learning, for example. Are we heading toward another M.A.D. period (Mutually Assured Destruction)? The United States and the Soviet Union quickly realized that nuclear weapons changed everything, because there are no winners. Do we need a similar standoff doctrine for AI weapons? What is a rational outcome of these escalating pathways?

The other important Ethical Concern is AI-driven censorship and mass manipulation.

It’s obviously also possible to psychologically weaponize AI in terms of censorship, propaganda and mass manipulation. But maybe there are also ways to protect vulnerable people on the Internet against intelligent intrusions.

Information explosions are always disruptive, and it takes time for human beings to adapt to them.

I am also interested in a deeper understanding of higher states of consciousness (Baars, 2013).

Scientific advances in these technologies could also point toward much more positive outcomes than the Ethical Concerns mentioned above. Can we steer AI technologies toward a Mutually Assured Deterrence that makes for peace instead of war?

The Starlink network of low-Earth-orbit satellites is enabling cell phone and Internet access around the world, including closed societies like Russia and war zones like Ukraine. All the real news we get from there comes through systems like that, I think. It’s a Global Workspace with a broadcasting capacity! And now, of course, Murray Shanahan, Professor of Cognitive Robotics at Imperial College London, and other neural net experts have applied “deep networks” to GW architectures, so that the World Wide Web is really analogous to the cortex.

[Photo: Murray Shanahan with a robot]

As I watched the war in Ukraine develop, I realized that we are really seeing AI-driven killer robots, defensive robots, and robotic artillery and counter-artillery being tried out at full scale for the first time in history.

That’s why I want to have a discussion on the future of AI with Dr. Susan Schneider, who also holds the William F. Dietrich Distinguished Professorship at Florida Atlantic University. Susan recently completed a three-year project with NASA on the future of intelligence.

She held the Baruch S. Blumberg NASA/Library of Congress Chair in Astrobiology, Exploration, and Scientific Innovation at NASA and the Distinguished Scholar Chair at the Library of Congress. She now works with Congress on AI policy. 

Dr. Susan Schneider is the author of the book “Artificial You: AI and the Future of Your Mind,” in which she discusses the philosophical implications of AI and, in particular, the enterprise of “mind design.”

I believe that the apparent success of Ukraine in defending against one of the biggest armies in the world is partly due to AI-driven weapons. This is a horribly tragic war. Yet it is also a success for the less totalitarian countries of the world in protecting themselves. China may be looking at Russia’s problems in Ukraine and wondering whether China could survive AI defenses in Taiwan. So that genuine risk of war may suddenly have receded, in just the way the US-Soviet nuclear arms race made war much LESS likely. Mutually Assured Deterrence makes for peace instead of war. Arms balances deter wars. It’s a nasty fact, but it’s one that peaceful people should realize. Read history and that fact pops out.

How can we apply these hard-earned lessons to new generations of offensive and defensive technologies?

Global Workspace Theory (GWT) began with this question: “How does a serial, integrated and very limited stream of consciousness emerge from a nervous system that is mostly unconscious, distributed, parallel and of enormous capacity?”

GWT is a widely used framework for understanding the roles of conscious and unconscious processes in the functioning of the brain, as Baars first suggested in 1983.

GWT offers a set of explicit assumptions that can be tested, as many of them have been. These updated works by Bernie Baars, the recipient of the 2019 Hermann von Helmholtz Life Contribution Award from the International Neural Network Society, form a coherent effort to organize a large and growing body of scientific evidence about conscious brains.
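To make that architecture concrete, here is a minimal toy sketch in Python. It is my illustration, not any published GWT implementation, and the specialist names (“vision,” “hearing,” and so on) are invented: many parallel specialist processes compete for a limited-capacity workspace, and the single winning content is broadcast back to all of them.

```python
import random

class Specialist:
    """An unconscious, parallel processor that competes for workspace access."""

    def __init__(self, name):
        self.name = name
        self.heard = []  # broadcasts received from the workspace

    def propose(self):
        # Offer a candidate message with a salience score (random in this toy).
        return (random.random(), f"{self.name}: latest output")

    def receive(self, message):
        # Every specialist hears the single "conscious" broadcast.
        self.heard.append(message)


def workspace_cycle(specialists):
    """One GW cycle: parallel proposals, one serial winner, global broadcast."""
    _, winner = max(s.propose() for s in specialists)
    for s in specialists:
        s.receive(winner)  # global broadcast back to the parallel pool
    return winner  # the limited, serial stream: one content per cycle


specialists = [Specialist(n) for n in ("vision", "hearing", "memory", "planning")]
for _ in range(3):
    print(workspace_cycle(specialists))
```

In this sketch, the enormous parallel capacity lives in the pool of specialists, while the serial, limited stream is the one winning message per cycle, which is exactly the contrast the GWT question above points to.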
