Common sense: the Chomsky/Piaget debates come to AI


mostlysignssomeportents:

In 1975, Noam Chomsky and Jean Piaget held a historic debate about the
nature of human cognition. Chomsky held that babies are born with a
bunch of in-built rules and instincts that help them build up the
knowledge they need to navigate the world; Piaget argued that
babies are effectively blank slates that acquire knowledge by
experiencing the world (including the knowledge that there is a thing
called “experience” and “the world”).

For most of AI’s history, Chomsky’s approach prevailed: computer
scientists painstakingly tried to equip computers with a baseline of
knowledge about the relationships between things in the world, hoping
that computers would some day build up from this base to construct
complex, powerful reasoning systems.

The current machine learning revolution can be traced to a jettisoning
of this approach in favor of a Piaget-style blank slate, where layers of
neural nets are trained on massive corpuses of data (sometimes labeled
by hand, but often completely unlabeled) and use equally massive computation
to make sense of the data, creating their own understanding of the
world.

Piaget-style deep learning has taken AI a long way in a short time, but it’s hitting a wall. It’s not just the weird and vastly entertaining local optima
that these systems get stuck in: it’s the huge corpuses of data needed
to train them and the inability of machine learning to generalize, so that one
model could bootstrap another and another.

The fall-off in the rate of progress in machine learning, combined with the
excitement that ML’s recent gains provoked, has breathed new life into
the Chomskyian approach to ML, and computer scientists all over the
world are trying to create “common sense” corpuses of knowledge that
they can imbue machine learning systems with before they are exposed to
training data.

This approach seems to be clearing some of the walls that stopped
Piaget-style ML. Some Chomskyian ML models have attained a high degree of
efficiency with much smaller training data sets.

Frequent Boing Boing contributor
Clive Thompson’s long piece on the state of the Chomsky/Piaget debate
in ML is an excellent read, and really comes to the (retrospectively)
obvious conclusion: it doesn’t really matter whether Chomsky or Piaget
is right about how kids learn, because each of them is right about how
computers learn – a little from Column A, a little from Column B.

https://boingboing.net/2018/11/13/naive-learning.html

At their core, AI and machine learning are hypotheses about how humans learn. That is to say, they are guesses: educated guesses by erudite people, but guesses all the same.