Does the classic "wear funky makeup" trick to fool face recognition even work against recent deep learning systems? I doubt it's nearly as effective anymore.
If you're referring to the recent "deep learning classifies this random noise as a dog" news, keep in mind that humans are far better suited to abstract tricks like wearing makeup than to learning the perfect static to fool an image recognition algorithm. Even with computer assistance and access to AlphaGo to train the perfect pathological boards, the high branching factor of Go makes it impossible for a human to remember what the relevant pathological board would be halfway through the game.
(Also, keep in mind that AlphaGo's self-play reinforcement learning is essentially training against as many pathological boards as possible. Image recognition isn't trained this way, which is part of why those "dog noise" attacks work.)
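For what it's worth, the "dog noise" attacks are usually generated with something like the fast gradient sign method: nudge every input pixel a tiny amount in the direction that most hurts the classifier. Here's a minimal sketch against a toy linear classifier (all weights and data here are made up for illustration; a real attack targets a deep network, but the gradient-sign idea is the same):

```python
import numpy as np

# Toy "trained" linear classifier: score = w . x
# Positive score -> class "dog", negative -> "not dog".
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # pretend-trained weights
x = rng.normal(size=64)   # an ordinary input

def predict(x):
    return "dog" if w @ x > 0 else "not dog"

label = predict(x)

# Fast gradient sign method: for a linear model the gradient of the
# score with respect to x is just w, so step each pixel by eps in the
# sign direction that pushes the score across the decision boundary.
# Pick eps just large enough to guarantee crossing, plus a margin.
eps = abs(w @ x) / np.abs(w).sum() + 0.01
step = -np.sign(w) if label == "dog" else np.sign(w)
x_adv = x + eps * step

# Each pixel moved by only eps, yet the prediction flips.
```

The point being: the attack needs gradient access and per-pixel precision, which is exactly what a human in makeup doesn't have.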
So I don't think you're right about this. I feel like the era of "makeup" tricks is finished (though maybe there's still one or two wins left from this approach). If Lee beats AlphaGo, I think it's because their strengths are truly similar.