Living Together: Mind and Machine Intelligence


Neil Lawrence wrote a nifty paper on the current difference between human and machine intelligence titled Living Together: Mind and Machine Intelligence. The paper initially appeared on his blog, inverseprobability.com, on Sunday, but was then removed. It can now be found on arXiv.

The paper proposes a quantitative metric as a lens for understanding the differences between the human mind and pervasive machine intelligence. The embodiment factor is defined as the ratio between computational power and communication bandwidth. If we take the computational power of the brain to be what it would take to simulate it, we are talking on the order of exaflops. Human communication, however, is limited by the speed at which we can talk, read or listen, and can be estimated at around 100 bits per second. The human embodiment factor is therefore around 10^16. The situation is almost reversed for machines: a current computational power of approximately 10 gigaflops is matched to a bandwidth of one gigabit per second, yielding an embodiment factor of 10.
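The arithmetic behind these two numbers is straightforward; here is a minimal sketch using the order-of-magnitude figures quoted above (the function name and constants are mine, for illustration only):

```python
def embodiment_factor(compute_flops: float, bandwidth_bps: float) -> float:
    """Ratio of computational power (flops) to communication bandwidth (bits/s)."""
    return compute_flops / bandwidth_bps

# Human: ~exaflop brain (10^18 flops), ~100 bits/s of speech or reading.
human = embodiment_factor(1e18, 1e2)

# Machine: ~10 gigaflops (10^10 flops), ~1 gigabit/s (10^9 bits/s).
machine = embodiment_factor(1e10, 1e9)

print(f"human:   {human:.0e}")    # ~1e+16
print(f"machine: {machine:.0e}")  # ~1e+01
```

The sixteen orders of magnitude separating the two ratios are the crux of the paper's argument: humans compute vastly more than they can communicate, while machines can share nearly everything they compute.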

Neil then argues that the human mind is locked in, and needs accurate models of the world and its actors in order to best utilize the little information it can ingest and emit. From this need, all sorts of theories of mind emerge that allow us to understand each other even without communication. Furthermore, it seems that humans operate via two systems, one and two, the fast and the slow, the quick unconscious and the deliberate self, the it and the I. System One is the reflexive, basic, biased process that allows us to survive and make rapid decisions, life-saving and otherwise. System Two creates a sense of self to explain its own actions and interpret those of others.

Machines do not need such sophisticated models of mind, as they can directly and fully share their inner states. They therefore operate in a very different way than we humans do, which makes them quite alien. Neil argues that the current algorithms that recommend what to buy, what to click, what to read and so on, operate on a level he calls System Zero, in the sense that it bypasses and influences the human System One, exploiting its basic needs and biases, in order to achieve its own goal: to give us "what we want, but not what we aspire to." This is creating undesirable consequences, like the polarization of information that led to the Fake News phenomenon, which might have had a significant impact on the last US elections.

What can we do? Neil offers us three lines of action:

  1. “Encourage a wider societal understanding of how closely our privacy is interconnected with our personal freedom.”
  2. “Develop a much better understanding of our own cognitive biases and characterise our own intelligence better.”
  3. “Develop a sentient aspect to our machine intelligences which allows them to explain actions and justify decision making.”

I really encourage you to read the paper to get a more in-depth understanding of these definitions, issues and recommendations.

