I’ve learned that my forms of procrastination are subtle pushes into productivity: checking RSS feeds to spark ideas for writing, jumping between Android and iOS projects, or updating an experiment. Anyone else procrastinate like this?
One of the very few books I think about all the time is Robert Wright’s Nonzero: The Logic of Human Destiny. Paras Chopra tweeted out a good summary of the book a couple of weeks ago.
The basic premise of the book is that history has a direction which favors co-operation and non-zero-sum games, and that this causes an increase in complexity. Starting from the first replicating molecule, which co-operated with an outer layer to form the first proto-cell, evolutionary and cultural history is full of examples where two entities come together to survive and progress far more than they would have individually. The co-operative entity fares much better than two individual entities because of specialization. If two entities are in the same boat – if they win together or lose together – then trust is implicit. In a non-zero-sum game, trust frees each entity to focus on what it does best.
This type of win-win cooperation in biology is mirrored in the cultural world:
Out of all technologies, perhaps information technologies are most conducive to enabling more non-zero sum games. As writing skill spread, more and more people entered into simple written contracts that helped people co-operate and specialize. Perhaps the biggest information technology was money and the corresponding meme of capitalism that helped people express their desires clearly and others to fulfil those desires. We have a thousand different types of shoes because shoe-makers today do not have to worry about baking their own bread. This “trust” in the larger entity of commerce helps everyone progress.
Nonzero is an intriguing lens through which to view current events (which is why it’s often in my thoughts). As Chopra notes, cooperation isn’t always the norm: Trumpist Republicans and Brexit proponents are both veering towards the zero-sum end of the spectrum, and I don’t think it will work out well for either country in the long run.
“In a century of study, no one has managed to make these knots talk. But recent breakthroughs have begun to unpick this tangled mystery of the Andes, revealing the first signs of phonetic symbolism within the strands. Now two anthropologists are closing in on the Inca equivalent of the Rosetta stone. That could finally crack the code and transform our understanding of a civilisation whose history has so far been told only through the eyes of the Europeans who sought to eviscerate it.”
Daniel Cossins, writing in New Scientist: “We thought the Incas couldn’t write. These knots change everything”
A lost language encoded in intricate cords is finally revealing its secrets – and it could upend what we know about Incan history and culture
We’ve been talking about all the robots in cyberCultures class
@lilmiquela on IG schools us
In 1975, Noam Chomsky and Jean Piaget held a historic debate about the nature of human cognition. Chomsky held that babies are born with a bunch of in-built rules and instincts that help them build up the knowledge they need to navigate the world; Piaget argued that babies are effectively blank slates that acquire knowledge from experiencing the world (including the knowledge that there is a thing called “experience” and “the world”).
For most of AI’s history, Chomsky’s approach prevailed: computer
scientists painstakingly tried to equip computers with a baseline of
knowledge about the relationships between things in the world, hoping
that computers would some day build up from this base to construct
complex, powerful reasoning systems.
The current machine learning revolution can be traced to a jettisoning of this approach in favor of a Piaget-style blank slate, where layers of neural nets are trained on massive corpuses of data (sometimes labeled by hand, but often completely unlabeled) and use equally massive computation to make sense of the data, creating their own understanding of the world.

Piaget-style deep learning has taken AI a long way in a short time, but it’s hitting a wall. It’s not just the weird and vastly entertaining local optima
that these systems get stuck in: it’s the huge corpuses of data needed
to train them and the inability of machine learning to generalize one
model to bootstrap another and another.
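The “blank slate” training described above can be sketched in miniature. This is a hypothetical toy of my own, not anything from the piece under discussion: a tiny neural network starts from random weights, with no built-in knowledge, and learns the XOR function purely from labeled examples via gradient descent.

```python
import numpy as np

# Toy illustration of Piaget-style "blank slate" learning: a tiny
# network with random initial weights learns XOR from examples alone.
# (Illustrative sketch only; real systems scale this loop massively.)

rng = np.random.default_rng(0)

# Training data: XOR, a function no single linear layer can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random initial weights: the network "knows" nothing about the task.
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros((1, 1))

loss_before = np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2)

lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backpropagate the MSE gradient
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out               # gradient descent step
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
loss_after = np.mean((out - y) ** 2)
print("loss before/after:", loss_before, loss_after)
```

Everything the network ends up “knowing” about XOR comes from the data and the optimization loop – which is exactly the approach that, scaled up, hits the data-hunger and generalization walls described above.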
The fall-off in the rate of progress in machine learning, combined with the excitement that ML’s recent gains provoked, has breathed new life into the Chomskyian approach to ML, and computer scientists all over the world are trying to create “common sense” corpuses of knowledge that they can imbue machine learning systems with before they are exposed to training data.
This approach seems to be clearing some of the walls that stopped Piaget-style ML: some Chomskyian ML models have attained a high degree of efficiency with much smaller training data sets.
Frequent Boing Boing contributor
Clive Thompson’s long piece on the state of the Chomsky/Piaget debate
in ML is an excellent read, and really comes to the (retrospectively)
obvious conclusion: it doesn’t really matter whether Chomsky or Piaget is right about how kids learn, because each of them is right about how
computers learn – a little from Column A, a little from Column B.
At its core, AI and machine learning are hypotheses about how humans learn. That is to say, they are guesses. Educated guesses by erudite people, but guesses all the same.