Talk: Artificial Intelligence: A Guide for Thinking Humans

presenter: Melanie Mitchell
source: YouTube

Talk at the Santa Fe Institute on Nov 13, 2019.

What is Artificial Intelligence?

Many different things fall under the name AI (self-driving cars, chess-playing machines, image classifiers, video game AIs, etc.).

[Building] machines that perform tasks normally requiring human intelligence. — Nils Nilsson, 1971

Chess was thought to be the pinnacle of intelligence, until a brute-force approach was found that could beat any human player and any human-like “intelligent” approach.

The study of the common sense world and how a system can find out how to achieve its goals. — John McCarthy, 1988

An anarchy of methods. — (Lehman et al. 2014)

This anarchy involves:

  • Logic: make the computer “reason” over logical propositions. This approach is very brittle, and it is hard for such a system to learn or prove something new that it has never seen before (see the sketch after this list).
  • Statistics: learn from data, from collections of data points.
  • Biology: “simulate the brain”. For a long time this approach was not very successful, but it recently took over the field.
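
A minimal, illustrative sketch (mine, not from the talk) of the logic-based approach and its brittleness: hand-written facts and if-then rules, applied by forward chaining until nothing new can be derived. The facts and rule names are invented for the example.

    # Forward chaining over hand-written propositional facts and rules.
    facts = {"bird(tweety)"}
    rules = [
        ({"bird(tweety)"}, "can_fly(tweety)"),        # "birds fly"
        ({"can_fly(tweety)"}, "can_escape(tweety)"),
    ]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # {'bird(tweety)', 'can_fly(tweety)', 'can_escape(tweety)'}

    # Brittleness: a penguin, or a bird with a broken wing, silently breaks
    # the "birds fly" rule, and the system has no way to revise it on its own.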

Machine learning was not the dominant part of the field in the 1950s–80s, and deep learning was a tiny niche. In the 90s, machine learning took over the field, but deep learning stayed small. In the 2010s, deep learning took over machine learning.

Deep learning

Image recognition

Impressive achievements like facial recognition, image classification, etc.

ImageNet story: between 2011 and 2017, the error rate dropped from 28% to less than 5%, thanks to convolutional neural networks (CNNs). It was reported that machines had surpassed human performance, but this claim is based only on Andrej Karpathy’s own performance on a randomly sampled subset of the 500k test images.
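
The ImageNet figures are conventionally top-5 error rates: an image counts as correct if the true label is among the model’s five highest-scoring classes. A minimal sketch (mine, not from the talk) of how that metric is computed, assuming a NumPy score matrix of shape (images, classes):

    import numpy as np

    def top5_error(scores, labels):
        # Fraction of images whose true label is NOT among the model's
        # five highest-scoring classes.
        top5 = np.argsort(scores, axis=1)[:, -5:]       # indices of the 5 best classes
        hits = np.any(top5 == labels[:, None], axis=1)  # is the true label among them?
        return 1.0 - hits.mean()

    # Toy usage: 3 images, 10 classes, random scores.
    rng = np.random.default_rng(0)
    print(top5_error(rng.random((3, 10)), np.array([2, 7, 5])))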

Self-driving cars

The media also made very bold claims, e.g. that self-driving cars would soon be everywhere.

Deep RL

DeepMind used deep reinforcement learning to play Atari games and the game of Go.

What do these machines actually learn?

According to the media, these machines could learn anything.

But these machines don’t learn like humans at all. The amount of training, of labeled data or of replay, needed to learn is vastly larger than what humans need. Also, humans design these neural networks very carefully (architecture, hyperparameters, etc.); the systems cannot learn that on their own.

Edge cases are a particular flaw of these machine learning systems. For example, what counts as an obstacle for a self-driving car (plastic bag or rock, birds, broken glass)?

Humans use common sense to deal with those edge cases.

In machine learning, the system will learn what is in the data to fulfill the objective, not what we think it should learn.

Adversarial attacks on deep learning

Researchers discovered that it is possible to add very small perturbations to a deep neural network’s input and make it classify the image as almost anything the attacker chooses.

This has also been shown to work for facial recognition software.
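
The talk does not name a specific attack, but a standard way to compute such a perturbation is the fast gradient sign method (Goodfellow et al.). Here is a minimal targeted sketch in PyTorch, assuming images scaled to [0, 1] and a hypothetical `model` that maps a batch of images to class logits.

    import torch
    import torch.nn.functional as F

    def targeted_fgsm(model, images, target_classes, epsilon=0.01):
        # One-step targeted attack: nudge each pixel slightly in the direction
        # that makes the model prefer the attacker's chosen class.
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), target_classes)
        loss.backward()
        # Subtracting the gradient sign decreases the loss for the target class;
        # a small epsilon keeps the change imperceptible to a human viewer.
        adversarial = images - epsilon * images.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()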

Meaning and machine learning

I wonder whether or when AI will ever crash the barrier of meaning.

— Gian-Carlo Rota, 1985

We don’t have good definitions for the concepts we are trying to approach, reproduce and simulate: things like “understanding”, “intelligence”, “reasoning”, and “common sense” are not well understood. We don’t know how our brains work. But if we want these systems to come into our lives, we are going to need to understand them.

The barrier of meaning

How do we give machines common sense? Example: the Winograd Schema Challenge for evaluating NLP systems, with sentences like “The trophy would not fit in the suitcase because it was too big. What was too big?”, which require common sense to resolve the pronoun.

We are far from the common sense of an 18-month-old baby. There is an interesting paradox between impressive superhuman performance on some tasks and a complete lack of understanding of very basic things.

A computer needs knowledge about the world to interact with it; humans have intuitive physics, biology and psychology. It also needs a mental model of causes and effects, and the ability to abstract from world knowledge.

What is a concept and how can we teach it to a machine?

Without concepts there can be no thoughts, and without analogies there can be no concepts.

— D. Hofstadter & E. Sander, Surfaces and Essences (2013)

How to form and fluidly use concepts is the most important open problem in AI.

Bibliography

  1. Lehman, Joel, Jeff Clune, and Sebastian Risi. 2014. "An Anarchy of Methods: Current Trends in How Intelligence Is Abstracted in AI". IEEE Intelligent Systems 29 (6). IEEE: 56–62.
