AlphaGo knows how to play Go extremely well now, or does it?
The Guardian reports that a new version of AlphaGo is now definitively the best Go player in the world, and that it learned to be that in just three days by playing against itself. That is impressive, and I am sure there are many applications of this machine, and of AI in general, that people have not yet thought of. In particular, the article claims that the machine “derived thousands of years of human knowledge of the game all in the space of three days”. That may be true in some sense, but if it did, all the knowledge it ‘derived’ is entirely implicit. The machine cannot communicate its knowledge, and we cannot extract it from the machine in any way other than by playing against it. In that sense it did not at all accomplish what humans accomplished with Go over the course of thousands of years. Our knowledge of Go is not implicit, but largely explicit: we can talk about it, discuss it, write it down, articulate it, and so on. AlphaGo can still do none of that. It beats us at Go, but it has no idea how it does it.