Accounting is a universal function upon which all life, and even Algorithmic Intelligence, is founded. Social Fabric is Social Accounting.
This article is a transcript of a 29-minute video, which is also embedded in the article.
Where does culture come from? How can we select for culture in evolution when it’s the individuals that reproduce? What you need is something that selects for the best cultures and the best groups, but also selects for the best individuals because they’re the things that transmit the genes.
The key to this distributed Thompson sampling way of solving the credit assignment problem is something we call social sampling. It’s very simply looking around you at what other people do, finding the things that are popular, and then copying them if they seem like a good idea to you. It sounds very simple, but if you look at what people do, and you look at how good it is mathematically, what they’re doing by finding out what’s popular, is they’re trying to find the best ideas out there. Idea propagation has this popularity function driving it, but individual adoption also has to do with figuring out how it works for the individual—a reflective attitude.
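To make this concrete, here is a minimal sketch of social sampling as distributed Thompson sampling. Everything in it (the Bernoulli payoffs, the agent count, the popularity weighting) is an illustrative assumption of mine; the transcript describes the idea, not an implementation.

```python
# A minimal sketch of "social sampling" as distributed Thompson sampling.
# The payoffs, agent count, and popularity weighting are all hypothetical;
# the transcript describes the idea, not an implementation.
import random

N_AGENTS, N_STRATEGIES, ROUNDS = 50, 5, 200
TRUE_PAYOFF = [0.2, 0.4, 0.5, 0.7, 0.3]       # hidden Bernoulli reward rates

# Each agent keeps a Beta(wins + 1, losses + 1) posterior per strategy.
wins = [[0] * N_STRATEGIES for _ in range(N_AGENTS)]
losses = [[0] * N_STRATEGIES for _ in range(N_AGENTS)]
current = [random.randrange(N_STRATEGIES) for _ in range(N_AGENTS)]

for _ in range(ROUNDS):
    # Social step: see which strategies are popular right now.
    popularity = [current.count(s) for s in range(N_STRATEGIES)]
    for a in range(N_AGENTS):
        candidate = random.choices(range(N_STRATEGIES),
                                   weights=[p + 1 for p in popularity])[0]

        # Reflective step: adopt the popular idea only if my own posterior
        # sample says it beats what I am doing now.
        def sample(s):
            return random.betavariate(wins[a][s] + 1, losses[a][s] + 1)

        if sample(candidate) > sample(current[a]):
            current[a] = candidate

        # Act, then learn from individual experience.
        s = current[a]
        if random.random() < TRUE_PAYOFF[s]:
            wins[a][s] += 1
        else:
            losses[a][s] += 1

print("final adoption counts:", [current.count(s) for s in range(N_STRATEGIES)])
```

Each agent first looks at what is popular, then keeps the popular idea only if its own posterior says it beats what it is already doing: the two-part recipe described above.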
When you put the two of them together, you get decision making that is pretty much better than anything else you can do. It’s a Bayesian optimal portfolio method. That’s pretty amazing, because now we have a mathematical recipe for doing with humans what all these AI techniques are doing with dumb computer neurons. We have a way of putting people together to make better decisions, given more and more experience.
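One plausible way to read "Bayesian optimal portfolio" in code is to treat popularity as the prior and personal experience as the likelihood of a Beta-Bernoulli model. All the numbers below are invented for illustration.

```python
# One reading of the "Bayesian optimal portfolio": popularity supplies the
# prior, personal experience supplies the likelihood. Numbers are invented.
adopters, population = 30, 50        # hypothetical: peers using the strategy
my_wins, my_losses = 4, 2            # hypothetical personal trials

k = 10                               # prior strength (a free assumption)
prior_a = 1 + k * adopters / population
prior_b = 1 + k * (1 - adopters / population)

# Posterior mean of a Beta-Bernoulli model combining both signals.
posterior_mean = (prior_a + my_wins) / (prior_a + prior_b + my_wins + my_losses)
print(f"estimated payoff of the popular strategy: {posterior_mean:.2f}")
```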
The idea of a credit assignment function, reinforcing “neurons” that work, is the core of current AI. And if you make those little neurons that get reinforced smarter, the AI gets smarter. So, what would happen if the neurons were people? People have lots of capabilities; they know lots of things about the world; they can perceive things in a human way. What would happen if you had a network of people where you could reinforce the ones that were helping and maybe discourage the ones that weren’t?
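Here is one sketch of such a network, using the classic multiplicative-weights (Hedge) update as a stand-in credit assignment rule; the participants, their advice, and the learning rate are all hypothetical.

```python
# A sketch of a "network of people" with credit assignment, modeled here
# as the multiplicative-weights (Hedge) update over human experts.
import math

experts = ["alice", "bob", "carol"]        # hypothetical participants
weights = {e: 1.0 for e in experts}        # how much the group trusts each
ETA = 0.5                                  # hypothetical learning rate

def aggregate(advice):
    """Weighted vote: the group decision leans toward trusted members."""
    return sum(weights[e] * advice[e] for e in experts) / sum(weights.values())

def reinforce(advice, outcome):
    """Credit assignment: boost people who were right, discount the rest."""
    for e in experts:
        loss = abs(advice[e] - outcome)    # 0 means perfectly right
        weights[e] *= math.exp(-ETA * loss)

# One round: everyone predicts a probability, and reality turns out to be 1.
advice = {"alice": 0.9, "bob": 0.2, "carol": 0.6}
print("group estimate:", round(aggregate(advice), 2))
reinforce(advice, outcome=1.0)
print("updated trust:", {e: round(w, 2) for e, w in weights.items()})
```

People whose advice pans out gain weight in the group decision; the rest are gently discounted rather than removed.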
That begins to sound like a society or a company. We all live in a human social network. We’re reinforced for things that seem to help everybody and discouraged from things that are not appreciated. Culture is something that comes from a sort of human AI, the function of reinforcing the good and penalizing the bad, but applied to humans and human problems. Once you realize that you can take this general framework of AI and create a human AI, the question becomes, what’s the right way to do that? Is it a safe idea? Is it completely crazy?
Sandy Pentland is director of the MIT Connection Science and Human Dynamics labs. He is a founding member of advisory boards for Google, AT&T, Nissan, and the UN Secretary General. He is the author of Social Physics and Honest Signals.
The big question that I’m asking myself these days is how can we make a human artificial intelligence? Something that is not a machine, but rather a cyber culture that we can all live in as humans, with a human feel to it. I don’t want to think small—people talk about robots and stuff—I want this to be global. Think Skynet. But how would you make Skynet something that’s really about the human fabric?
The first thing you have to ask is what’s the magic of the current AI? Where is it wrong and where is it right?
The good magic is that it has something called the credit assignment function. What that lets you do is take stupid neurons, these little linear functions, and figure out, in a big network, which ones are doing the work and encourage them more. It’s a way of taking a random bunch of things that are all hooked together in a network and making them smart by giving them feedback about what works and what doesn’t. It sounds pretty simple, but it’s got some complicated math around it. That’s the magic that makes AI work.
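For readers who want to see that magic in code, here is a toy backpropagation loop that learns XOR. The architecture, data, and learning rate are arbitrary choices; the point is only the feedback step, where the error is pushed back through the network and each weight is nudged in proportion to its share of the blame.

```python
# A toy credit assignment function: backpropagation pushes the error
# signal back through a network of simple sigmoid neurons and nudges each
# weight in proportion to how much it contributed to the mistake.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR target

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)      # hidden layer
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)      # output layer
lr = 0.5

for _ in range(5000):
    # Forward pass through "stupid" neurons (affine map + sigmoid).
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    # Backward pass: assign credit and blame for the error to each weight.
    d_out = out - y                        # cross-entropy + sigmoid gradient
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))            # typically approaches [0 1 1 0]
```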
The bad part of that is, because those little neurons are stupid, the things that they learn don’t generalize very well. If it sees something that it hasn’t seen before, or if the world changes a little bit, it’s likely to make a horrible mistake. It has absolutely no sense of context. In some ways, it’s as far from Wiener’s original notion of cybernetics as you can get because it’s not contextualized: it’s this little idiot savant.
But imagine that you took away these limitations of current AI. Instead of using dumb neurons, you used things that embedded some knowledge. Maybe instead of linear neurons, you used neurons that were functions in physics, and you tried to fit physics data. Or maybe you put in a lot of stuff about humans and how they interact with each other, the statistics and characteristics of that. When you do that and you add this credit assignment function, you take your set of things you know about—either physics or humans, and a bunch of data—in order to reinforce the functions that are working, then you get an AI that works extremely well and can generalize.
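A tiny example of that idea, on a falling-body dataset of my own invention: replace generic neurons with basis functions that encode the physics, and the least-squares fit reinforces the terms that carry the signal.

```python
# Sketch of "neurons that embed knowledge": fit falling-body data with
# physics-motivated basis functions (a velocity term and a gravity term)
# instead of generic units. Data and noise level are invented.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 50)                     # time samples (s)
y = 3.0 * t + 0.5 * 9.81 * t**2 + rng.normal(0.0, 0.05, t.size)

# Each column is a "neuron" with built-in structure: v0*t and (g/2)*t^2.
basis = np.column_stack([t, t**2])
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
print(f"v0 ~ {coef[0]:.2f} m/s, g ~ {2 * coef[1]:.2f} m/s^2")
```

Because the model embeds the true functional form, it also extrapolates outside the training range, which is exactly where the context-free neurons described above fall apart.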
You can read the entire article here.