I’ve been reading educational psychology literature on knowledge structures – a “representation of a person’s knowledge that includes both the definitions of a set of domain-specific concepts and the relations among those concepts,” as Dorsey defines it.
The basic premise here is that people not only store various concepts they’re familiar with, they store an entire network structure detailing the inter-relations between those concepts. Storing information in this way provides valuable heuristic short-cuts when it comes time to retrieve that information.
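The idea of a concept network with retrieval short-cuts can be made concrete with a tiny graph. This is a minimal sketch, not anything from the literature: the concepts, links, and the breadth-first `related` helper are all illustrative assumptions.

```python
from collections import deque

# Hypothetical knowledge structure: each concept maps to its directly
# related concepts (an adjacency list). Links here are invented examples.
network = {
    "force": {"mass", "acceleration", "newton"},
    "mass": {"force", "inertia", "kilogram"},
    "acceleration": {"force", "velocity"},
    "velocity": {"acceleration", "speed"},
    "inertia": {"mass"},
    "newton": {"force"},
    "kilogram": {"mass"},
    "speed": {"velocity"},
}

def related(concept, max_hops=2):
    """Breadth-first retrieval: collect every concept reachable
    within max_hops links of the starting concept."""
    seen = {concept}
    frontier = deque([(concept, 0)])
    found = set()
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for neighbor in network.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                found.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return found
```

Retrieval then amounts to walking outward from a cue: `related("force")` reaches `velocity` through `acceleration` in two hops, while `speed` sits three hops away and is not returned.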
This claim has direct implications for education and what it means to “learn.”
As Dorsey argues:
…Human knowledge embodies more than just declarative facts…the organization of knowledge stored in memory is of equal or greater significance than the amount or type of knowledge. The construct of knowledge structures implies that the relation between knowledge acquisition and performance in many domains requires not just a set of declarative facts, but a framework or a set of connections that leads to an understanding of when and how a set of facts applies in a given situation.
Having knowledge stored in network form not only allows for easy retrieval, it lays the foundation for problem-solving in the face of new challenges.
As Collins and Quillian argue, “it is by using inference that people can know much more than they learn.”
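That inference claim can be sketched in the spirit of Collins and Quillian's hierarchical network model, where a property stored once at a superordinate node is inherited by everything below it. The specific facts below are illustrative assumptions, not their experimental materials:

```python
# Each concept points up one "is-a" link to its superordinate category.
is_a = {
    "canary": "bird",
    "bird": "animal",
    "salmon": "fish",
    "fish": "animal",
}

# Properties are stored once, at the most general node they apply to.
properties = {
    "canary": {"is yellow"},
    "bird": {"has wings"},
    "animal": {"breathes"},
}

def knows(concept, prop):
    """A property holds if it is stored at the concept itself or
    inherited from any superordinate up the is-a chain."""
    while concept is not None:
        if prop in properties.get(concept, set()):
            return True
        concept = is_a.get(concept)  # climb one is-a link
    return False
```

Nothing in the table says a canary breathes, yet `knows("canary", "breathes")` is true by climbing canary → bird → animal: the system "knows" more facts than it explicitly stores.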
Interestingly, a core element of these systems is that they are self-defining. “Many words acquire most of their meaning through their use in sentences,” Preece argues. “In this respect, word meanings, or concepts, are like mathematical points: They have few qualities other than their relationships with other concepts.”

Shavelson similarly insists on a somewhat tautological definition, writing, “a concept, then, is a set of relations among other concepts.”
And Collins and Quillian argue:
An interesting aspect of such a network is that within the system there are no primitive or undefined terms in the mathematical sense; everything is defined by everything else so that the usual logistical (axiomatic) structure of mathematical systems does not hold. In this respect, it is like a dictionary.
In many of the papers I’ve been reading, these networks are elicited through word association: researchers provide subjects with a word and subjects provide as many associated words as possible.
Shavelson does this experiment with physics terms and compares the development of physics students and non-physics students. Over the course of the semester, the students in a physics class increased the number of words they could associate with a root physics term.
Shavelson also finds a sharp increase in the number of “constrained responses” – i.e., “if the term used in the response was an element in the defining equation for the special stimulus word. For example, the response term ‘mass,’ was scored as a constrained response to the special stimulus ‘force,’ since force equals mass times acceleration.”
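This scoring rule is simple enough to sketch directly. The table of defining equations below is an illustrative assumption (only the force example comes from Shavelson's quote), and `score_responses` is a hypothetical helper:

```python
# Map each stimulus term to the terms in its defining equation.
# Only the "force" entry is drawn from the quoted example above;
# the other equations are assumed for illustration.
defining_equations = {
    "force": {"mass", "acceleration"},   # F = m * a
    "momentum": {"mass", "velocity"},    # p = m * v
    "work": {"force", "distance"},       # W = F * d
}

def score_responses(stimulus, responses):
    """Count how many free-association responses are 'constrained',
    i.e. appear in the stimulus term's defining equation."""
    equation_terms = defining_equations.get(stimulus, set())
    return sum(1 for response in responses if response in equation_terms)
```

Given the stimulus “force” and the responses `["mass", "energy", "acceleration"]`, this scores 2: “mass” and “acceleration” appear in F = m · a, while “energy” does not.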
Validation of these networks is, of course, a non-trivial process. But scholars have been chipping away at this question for decades. It's still not clear how best to capture or model these knowledge structures, but the existing literature indicates that this is a meaningful way to approach human learning and understanding.