Crypto Market Commentary
5 December 2019
Doc's Daily Commentary
The 12/4 ReadySetLive session with Doc and Mav is listed below.
Mind Of Mav
How One Board Game Move Began The AI Era
In March 2016, AlphaGo defeated Lee Sedol, one of the world's top Go players. According to Scientific American and other sources, most observers had expected superhuman computer Go performance to be at least a decade away.
Maybe this doesn't raise an eyebrow for you, but that's likely because I haven't yet conveyed the importance of that day, and why it was such a defining moment.
Let me paint the scene.
Go is often described as the most complex board game in the world. The number of legal board positions exceeds the number of atoms in the observable universe.
So, as many thought, there wasn't a chance that an AI would have the combination of intelligence and computational power to outwit human intuition, especially the intuition of one of the best players in the world.
Suffice it to say, Lee Sedol is a rockstar when it comes to Go: a champion many times over.
So, this match carried the pride of South Korea, Lee Sedol's home country, and the pride of a game that has been passed down through thousands of years of East Asian history, beginning in ancient China. The broadcast was watched by tens of millions of people.
And that's when tragedy struck.
The machine won 4 out of 5 games.
The victory is notable because the technologies at the heart of AlphaGo are the future. They’re already changing Google and Facebook and Microsoft and Twitter, and they’re poised to reinvent everything from robotics to scientific research. This is scary for some. The worry is that artificially intelligent machines will take our jobs and maybe even break free from our control—and on some level, those worries are healthy. We won’t be caught by surprise.
But there’s another way to think about all this—a way that gets us beyond the trope of human versus machine, guided by the lessons of one truly glorious move in a game of Go.
However, before we talk about that move, let me ask you a question: how would you design an artificial brain meant to best the world’s best player of any board game?
Chess, checkers, Go: it doesn't matter.
Stop and really think about it, because it’s important to the future of technology.
Thought it over? Good.
I’d wager that you’d probably think to teach a machine how to think, right?
Feed it possible moves and slowly guide it toward the desired result: a board-game monster that finds the best possible move in every scenario. This is the basis of machine learning: teaching machines to think independently.
Another variation you might have considered is something I studied at university: genetic algorithms, which loosely resemble Charles Darwin's theory of evolution.
Start with a bot. Have it attempt a task (say, discover, analyze, design, automate, measure, monitor, reassess). Keep the bots that show even 0.1% progress toward completing that task. Repeat this over and over until you have a bot that can sufficiently do what you want, likely hundreds or thousands of "generations" later, depending on the complexity of the task at hand.
It's fascinating stuff, because you quickly end up with bots that bear little resemblance to your original code, and you have no idea how they work. But they do.
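The loop described above can be sketched in a few lines. This is a hypothetical toy example, not anything AlphaGo actually used: the "task" here is just evolving a bitstring of all 1s, and the names (`fitness`, `mutate`, `evolve`) are mine.

```python
import random

random.seed(0)  # for reproducibility of this sketch

TARGET_LEN = 20  # length of the toy "bot" (a bitstring)

def fitness(bot):
    """Score a bot by how well it performs the task (here: count of 1s)."""
    return sum(bot)

def mutate(bot, rate=0.05):
    """Randomly flip bits, mimicking genetic variation."""
    return [b ^ 1 if random.random() < rate else b for b in bot]

def evolve(generations=200, pop_size=30):
    # Start with a population of random bots.
    population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the bots that show even slight progress toward the task...
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # ...and refill the population with mutated copies of the survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
```

After a couple hundred generations the best bot solves the toy task, even though no line of code ever said "make every bit a 1"; the behavior emerged from selection pressure alone.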
But that’s not what the team behind AlphaGo did.
You see, the team originally taught AlphaGo to play the ancient game using a deep neural network—a network of hardware and software that mimics the web of neurons in the human brain.
This technology already underpins online services inside places like Google and Facebook and Twitter, helping to identify faces in photos, recognize commands spoken into smartphones, drive search engines, and more. If you feed enough photos of a lobster into a neural network, it can learn to recognize a lobster. If you feed it enough human dialogue, it can learn to carry on a halfway decent conversation.
And, no surprise here, if you feed it 30 million moves from expert players, it can learn to play Go.
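In machine-learning terms, "feed it 30 million expert moves" means supervised learning: each board position is an input, and the expert's actual move is the label. Here's a deliberately tiny stand-in for that idea, with made-up position and move names; AlphaGo's real policy network was a deep neural net, not a lookup table like this.

```python
from collections import Counter, defaultdict

# Hypothetical miniature dataset of (position, expert_move) pairs.
expert_games = [
    ("corner_open", "take_corner"),
    ("corner_open", "take_corner"),
    ("center_open", "take_center"),
    ("corner_open", "approach"),
]

# "Training": count which move experts most often play in each position.
policy = defaultdict(Counter)
for position, move in expert_games:
    policy[position][move] += 1

def predict(position):
    """Return the move experts most often played from this position."""
    return policy[position].most_common(1)[0][0]
```

With enough examples, `predict` imitates expert play, which is exactly the ceiling of this first stage: it can only be as good as the humans it learned from.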
But here’s where it gets really interesting, as the team went further.
Using a second AI technique called reinforcement learning, they set up countless matches in which slightly different versions of AlphaGo played each other. As AlphaGo played itself, the system tracked which moves won the most territory on the board. By playing millions of these games against variations of itself, AlphaGo gradually discovered new strategies and improved.
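The shape of that self-play loop can be sketched with a toy game. Everything here is hypothetical and vastly simplified: the "game" is just that whoever plays closer to a hidden optimal move value claims more territory, and the "policy" is a single number rather than a neural network, but the structure (pit slightly different versions against each other, keep the winner) is the point.

```python
import random

random.seed(1)  # for reproducibility of this sketch

HIDDEN_OPTIMUM = 0.7  # the "best move" no version knows in advance

def territory(move):
    """Score a move: the closer to the hidden optimum, the more territory."""
    return 1.0 - abs(move - HIDDEN_OPTIMUM)

def self_play(rounds=500):
    champion = 0.1  # initial policy parameter (the move this version favors)
    for _ in range(rounds):
        # A slightly different version of the current player.
        challenger = champion + random.uniform(-0.05, 0.05)
        # The version whose move wins more territory becomes the new baseline.
        if territory(challenger) > territory(champion):
            champion = challenger
    return champion

final_policy = self_play()
```

No human games appear anywhere in this loop: the player improves purely by competing against perturbed copies of itself, which is how AlphaGo moved beyond imitating human play.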
Then the team took yet another step. They collected moves from these machine-versus-machine matches and used them to train a second neural network, one that taught the system to evaluate the likely outcome of each move, to look ahead into the future of the game.
They essentially combined the first approach I described (learning from examples) with something in the spirit of the second (iterative self-improvement across generations of players).
And it’s fascinating.
Why that matters was perfectly illustrated by the glorious Go move against Lee Sedol that I mentioned earlier.
As I said, AlphaGo learns from human moves, and then it learns from moves made when it plays itself. It understands how humans play, but it can also look beyond how humans play to an entirely different level of the game.
This is what happened with Move 37 of game two.
AlphaGo had calculated that there was a one-in-ten-thousand chance that a human would make that move. But when it drew on all the knowledge it had accumulated by playing itself so many times—and looked ahead in the future of the game—it decided to make the move anyway. And the move was genius.
Go has been around for over 2,500 years. Even with all that time and the countless hours and moves that have been played, every Go expert in the world watching the broadcast of AlphaGo vs. Lee Sedol reacted with a similar response: “huh?”
“That’s a very strange move,” said one commentator, himself a nine dan Go player, the highest rank there is. “I thought it was a mistake,” said another.
Despite the surprise, the move turned the course of the game. AlphaGo went on to win game two, and at the post-game press conference, Lee Sedol was in shock. “Yesterday, I was surprised. But today I am speechless. If you look at the way the game was played, I admit, it was a very clear loss on my part. From the very beginning of the game, there was not a moment in time when I felt that I was leading.”
Commentators would later go on to stress the beauty of that one-in-ten-thousand move that seemed to signify the beginning of the machine age.
But let me stress something.
This isn't human versus machine. It's human and machine. Move 37 was beyond what any of us could fathom, yes, but what has been China's response since the AlphaGo match in 2016?
They have become one of the world leaders in AI development and data analysis, pushing the boundaries of technology outwards and upwards. They immediately recognized the substantial need to pursue AI and data research, and the results have been frankly astonishing.
I think we often believe that AI exists to replace humans.
AI does not replace humans. It complements them.
Physical tools allow us to build, develop, organize, industrialize, and expand.
Now, for the first time, we will have tools of the mind. Digital minds for a digital era. The tools will augment us and work alongside us.
The proof of that is as simple as move 37 itself: beautiful intelligence advances even the very oldest aspects of society.
Please DM us with your email address if you are a full OMNIA member and want to be given full Discord privileges.
An Update Regarding Our Portfolio
We are pleased to share with you our Community Portfolio V3!
Add your own voice to our portfolio by clicking here.
We intend for this portfolio to be balanced between the Three Pillars of the Token Economy & Interchain:
Crypto, STOs, and DeFi projects
We will also make a concerted effort to draw from community involvement and make this portfolio community driven.
Here are our past portfolios for reference:
RSC Managed Portfolio (V2)
RSC Unmanaged Altcoin Portfolio (V2)
RSC Managed Portfolio (V1)