As everyone from Nature to the BBC's Tech Tent throws the spotlight on the new self-taught AI AlphaGo Zero, here is what seems key to me as an observer after a weekend's read of the public information:
1) Thinking matters: The algorithm itself seems to be the key, more so than data or computing power, i.e. once you know the rules of the game, how you think probably matters more than how much you know or how much raw computation you can get done.
2) The ability to make intuitive predictions matters: Expert opinions seem to have an evolutionary edge over exhaustive deductive logic, i.e. the ability to form accurate intuitive predictions based on the rules of the game and the situation at hand matters much more than one might expect.
[Update: Yes, unlike the AlphaGo that beat Lee Sedol, AlphaGo Zero did not use any human input, so “expert opinion” was not useful, right?
Well, I’m not referring to data about “expert opinion” (i.e. data about human games played) but to the method of thinking itself, i.e. the choice of algorithm. Sometimes we humans do not have the knowledge, time, or resources to thoroughly analyse all possible courses of action. At such times, we make an expert judgement based on “intuition”. Some of these judgements are wrong, and we learn from our mistakes. This is why learners are encouraged to assess and predict intuitively and learn from their own mistakes, rather than be taught at every step (the availability of a teacher makes each step less risky, but the ability to learn on one’s own takes longer to develop).
Hence, the problem of learning is better solved by “learning to predict” (i.e. forming intuitive predictions or expert opinions) than by acquiring lots of knowledge about who decided what (i.e. data about what each expert did).
End of Update]
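To make “learning to predict” concrete, here is a minimal sketch of my own (an illustration, not DeepMind’s method): an agent repeatedly plays a trivial five-state random-walk game and refines its prediction of each state’s chance of winning purely from its own outcomes, with no expert data. Its mistakes drive the updates, in the style of classic temporal-difference (TD) learning.

```python
import random

# Toy sketch: learn to predict the win probability of each state in a
# 5-state random walk from self-generated experience alone (TD(0)).
# State 0 loses, state 6 wins; states 1..5 are non-terminal.
random.seed(0)

V = [0.5] * 7            # initial guesses for each state's value
V[0], V[6] = 0.0, 1.0    # terminal outcomes are known once reached
alpha = 0.1              # learning rate

for _ in range(5000):    # episodes of pure self-play, no expert data
    s = 3                # always start in the middle
    while 0 < s < 6:
        s2 = s + random.choice([-1, 1])
        # Nudge the prediction toward what actually happened next:
        # the "mistake" V[s2] - V[s] is exactly what drives learning.
        V[s] += alpha * (V[s2] - V[s])
        s = s2

# The true values are 1/6, 2/6, ..., 5/6; the estimates land nearby.
print([round(v, 2) for v in V[1:6]])
```

No table of expert decisions is stored anywhere; the agent only ever improves its own predictions, which is the point of the update above.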
3) AI is yet to self-learn complex rules: The really powerful AGI (one that self-learns the rules and then masters its behaviour) is probably yet to be developed, i.e. one that combines:
– something like AlphaGo Zero’s ability to learn complex behaviour on its own after it has been provided the rules of the game, and
– something like the original Q-learning algorithm, which figures out how a game works from its own experience and learns to play it well, albeit for less complex games
While this sounds easy to define, the dimensional implications of doing so might require significantly more learning time or a different way of thinking, e.g. perhaps building on metadata about learning itself, such as what the agent remembers about how it previously changed its understanding. When accomplished, such a powerful AI agent might end up teaching humans the art (or now science) of deciphering rules and then working them to one’s advantage – similar to how AlphaGo Zero already seems to have creatively imagined better ways of playing Go that were not previously known to human experts (talk from Prof. David Silver at DeepMind)
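For reference, tabular Q-learning can be sketched in a few lines. This toy corridor example is my own illustration (not DeepMind’s code): the learner’s update rule is never told the environment’s dynamics or rewards, so the agent has to discover how the game works purely by acting in it.

```python
import random

# Hedged sketch of tabular Q-learning on a toy corridor: states 0..4,
# actions move left (-1) or right (+1); reaching state 4 pays reward 1.
random.seed(1)

n_states, actions = 5, [-1, 1]
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2

def step(s, a):
    """Environment dynamics -- hidden from the learner's update rule."""
    s2 = max(0, min(n_states - 1, s + a))
    done = s2 == n_states - 1
    return s2, (1.0 if done else 0.0), done

for _ in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Q-learning update: move toward reward plus discounted best future.
        target = r if done else r + gamma * max(Q[(s2, x)] for x in actions)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right from every state.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)}
print(policy)
```

Note how the `step` function is opaque to the learner: only the observed transitions and rewards feed the update, which is what lets this family of algorithms cope with games whose rules are not given up front.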
4) AI is yet to learn to act in a multidimensional and evolving reality: Even an AI agent that can learn complex rules on its own will need to confront the boundaries of a changing game: changing rules, changing behaviours (and intentions) of other agents, and changing levers (i.e. ways of translating the agent’s thinking into actions) across different dimensions, instantly, if it is to be effective – all of which the human mind seems amazingly at ease performing naturally in familiar contexts. There is certainly a lot of progress needed, but technological development seems to happen at an exponential rate (interesting TED talk on this), and a breakthrough may not be too far away.
5) Governance and risk frameworks need to keep up at a faster pace: AI is definitely the space to watch. The use-cases of this cutting-edge technology can go far beyond protein folding (mentioned by Nature) and pathogen/cancer research, and I wonder whether effective governance and risk management frameworks for AI can keep up with the pace at which the technology seems to evolve. Or can the AI agent itself be used to support the definition and management of risks, and can that be done effectively?
Multiple efforts to spur research on AI governance have been initiated worldwide in recent years, and the effectiveness of risk frameworks and the 1st and 2nd lines of defense may need to be reassessed now that an AI agent can seemingly teach its human expert counterparts.
Disclaimer: This is NOT research or advice. This is merely a blog post reflecting my individual opinions viewed through my cognitive biases.
To share your comments, please message me directly via my LinkedIn profile.