This is a really cool dive into the interpretability of intermediate-layer neural network activations. I like how the authors identified a core debate in the ML community and constructed a relatively simple test space to better understand it and (hopefully) move the discussion forward.
All the authors are at OpenAI, and the very first sentence mentions computer vision and draws a parallel with biological neurons. So "This isn't immediately clear" should be read as "I only read the title and didn't even try to understand what the article is about."