In the sense of “There. That’s that.” It’s just too big. Machine learning might help, but machines don’t explain their decisions very well. If the brain is immensely complex, it may elude complete understanding in detail. Deep learning may survey it, but that won’t convey understanding to us. We may need to look at more comprehensive ways of knowing:
“I think the word ‘understanding’ has to undergo an evolution,” Lichtman said, as we sat around his desk. “Most of us know what we mean when we say ‘I understand something.’ It makes sense to us. We can hold the idea in our heads. We can explain it with language. But if I asked, ‘Do you understand New York City?’ you would probably respond, ‘What do you mean?’ There’s all this complexity. If you can’t understand New York City, it’s not because you can’t get access to the data. It’s just there’s so much going on at the same time. That’s what a human brain is. It’s millions of things happening simultaneously among different types of cells, neuromodulators, genetic components, things from the outside. There’s no point when you can suddenly say, ‘I now understand the brain,’ just as you wouldn’t say, ‘I now get New York City.’”

GRIGORI GUITCHOUNTS, “AN EXISTENTIAL CRISIS IN NEUROSCIENCE” AT NAUTILUS
Language, Lichtman argues, is not the correct tool for the kind of understanding required. Oddly, Guitchounts did come across a tool of sorts, a short story by Jorge Luis Borges (1899–1986). In the story, cartographers, seeking excellence, publish a map of an empire of such detailed accuracy and complexity that it is as big as the empire itself and entirely useless, ending as an awesome ruin.
Further reading on understanding the brain (or not):
We will never “solve” the brain. A science historian offers a look at some of the difficulties we face in understanding the brain. In a forthcoming book, historian Matthew Cobb suggests that we may need to be content with different explanations for different brain parts, and that the image of the brain as a computer is definitely on the way out.
Unexplainability and incomprehensibility of AI: In the domain of AI safety, the more accurate the explanation is, the less comprehensible it is. (Roman Yampolskiy)