Sometimes it seems as though it would be most convenient to be able to download knowledge straight into my head, Matrix-style. You know – plug yourself into a computer, run a quick program, and suddenly it’s all, “whoa, I know kung fu.”
Imagine for a moment that such technology were available. Imagine that instead of spending the next several years in a Ph.D. program, I could download and install everything I needed in minutes. What would that look like?
First of all, either everyone would suddenly know everything or – more likely, perhaps – inequality would be sharpened by the restriction of knowledge to those of the highest social strata.
It seems optimistic to imagine that knowledge would become free.
But, for the moment I’ll put aside musings about the social implications. What I really want to know is – what would such technology mean for learning?
I suppose it’s a bit of a philosophical question – would the ability to download knowledge obliterate learning or bring it to new heights?
I’m inclined to take such technology as the antithesis of learning. I mean that with no value judgment, but rather as a matter of semantics. Learning is a process, not a net measure of knowledge. Acquiring knowledge instantaneously thus virtually eliminates the process we call learning.
That seems like it may be a worthy sacrifice, though. Exchanging a little process to acquire vast quantities of human knowledge in the blink of an eye may be a fair trade.
All this, of course, assumes that knowledge is little more than a vast quantity of data. Perhaps more than a collection of facts, but still capable of being condensed down into a complex series of statistics.
There’s this funny thing, though – that is arguably not how knowledge works. At its simplest, this can be seen as the wistful claim that it’s not the destination, it’s the journey. But more profoundly –
Last week, the podcast The Infinite Monkey Cage had a show on artificial intelligence. While discussing the topic, guest Alan Winfield made a startling observation: in the early days of AI, we took for granted that a computer would find easy the same tasks that a person finds easy, and that, similarly, a computer would have difficulty with the same tasks a person finds difficult.
Playing chess, for example, takes quite a bit of human skill to do well, so it seemed like an appropriate challenge.
But for a computer, which can quickly store and analyze many possible moves and outcomes – playing chess is relatively easy. On the other hand, recognizing sarcasm in a human-produced sentence is nearly impossible. Indeed, this is one of the challenges of computer science today.
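That brute-force style of play can be sketched with a toy game – a made-up Nim variant, not real chess; the game, its rules, and the function name are my own illustration. The point is only that the computer “plays well” by exhaustively checking every move and outcome:

```python
# Toy game (hypothetical, for illustration): players alternate taking
# 1 or 2 stones from a pile; whoever takes the last stone wins.

def best_outcome(stones):
    """Return +1 if the player to move can force a win, -1 if not."""
    if stones == 0:
        return -1  # no stones left: the previous player took the last one and won
    # Try every legal move; the best I can do is whatever leaves my
    # opponent in the worst position.
    return max(-best_outcome(stones - take) for take in (1, 2) if take <= stones)

print(best_outcome(4))  # the player to move can force a win from 4 stones
print(best_outcome(6))  # from 6 stones, perfect play by the opponent wins
```

Nothing here resembles human intuition – the program simply enumerates the entire game tree, which is exactly the kind of task computers find easy and people find hard.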
All this is relevant to the concept of learning and Matrix-downloads because one groundbreaking area of artificial intelligence is machine learning – algorithms that help a computer learn from initial data to make predictions about future data.
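A minimal sketch of what that sentence means, using toy made-up numbers (the data and function names are purely illustrative): fit a simple model to initial examples, then predict a value the algorithm never saw.

```python
# Least-squares fit of a line y = slope * x + intercept to example data,
# then a prediction for an unseen input. Pure illustration, no real dataset.

def fit_line(xs, ys):
    """Learn slope and intercept from (x, y) training examples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

# "Initial data" – the algorithm has never been shown x = 5.
model = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.0, 8.1])
print(predict(model, 5))  # roughly 10 for this toy data
```

The model is tiny, but the shape of the process is the point: the program is not handed the answer for x = 5, it generalizes from what it has seen – which is the opposite of a download.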
The idea of downloadable knowledge implies that such learning is unnecessary – that we only need a massive input of all available data to make sense of it all. But a deeper understanding of knowledge and computing reveals that such technology is not only unlikely to emerge any time soon, it is not really how computers work, either.
There is something ineffable about learning, about the process of figuring things out and trying again and again and again. To say the process is invaluable is not merely some human platitude; it is a subtle point about the nature of knowledge.