Category Archives: Network Analysis

Axelrod’s Cognitive Networks

Before introducing the cultural diffusion model he is now better known for, Axelrod proposed mapping individuals’ reasoning process as a causal network.

“A person’s beliefs can be regarded as a complex system,” he argued, and, “given a person’s concepts and beliefs, and given certain rules for deducing other beliefs from them” it is therefore possible to model how “a person would make a choice among alternatives” (Axelrod, 1976).

Axelrod called these networks of beliefs and causal relationships “cognitive maps,” and he engaged other scholars in deriving cognitive maps for select political elites using a detailed hand-coding procedure applied to a subject’s existing documents.

For Axelrod, the representation of beliefs as a network was a natural and obvious extension of how individuals reason. “People do evaluate complex policy alternatives in terms of the consequences a particular choice would cause, and ultimately of what the sum of these effects would be,” he argued. “Indeed, such causal analysis is built into our language, and it would be very difficult for us to think completely in other terms, even if we tried” (Axelrod, 1976).

Axelrod takes the nodes of these networks to be concepts, with directed edges between them indicating causal links. Importantly, the nodal concepts are not things but rather “variables that can take on different values.” This makes the cognitive map “an algebraic rather than a logical system.”
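
A cognitive map of this kind is, in modern terms, a signed directed graph, and the indirect effect of one concept on another can be read off by multiplying the edge signs along a causal path. A minimal sketch, using an invented, illustrative map rather than one of Axelrod’s coded examples:

```python
# A minimal sketch of an Axelrod-style cognitive map: concepts are
# variables (nodes) joined by signed causal edges. The net effect of
# one concept on another along a path is the product of edge signs.
# The concepts and edges below are hypothetical, for illustration only.

def path_effect(causal_map, path):
    """Multiply edge signs (+1/-1) along a path of concepts."""
    effect = 1
    for a, b in zip(path, path[1:]):
        effect *= causal_map[(a, b)]
    return effect

# Hypothetical map: more 'military spending' increases 'security',
# but also increases 'deficit', which decreases 'public support'.
causal_map = {
    ("military spending", "security"): +1,
    ("military spending", "deficit"): +1,
    ("deficit", "public support"): -1,
}

direct = path_effect(causal_map, ["military spending", "security"])
indirect = path_effect(causal_map,
                       ["military spending", "deficit", "public support"])
# direct is +1; indirect is -1 – the choice helps on one value
# dimension while hurting another, which is exactly the trade-off
# structure Axelrod's algebraic reading of the map is meant to expose.
```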

Axelrod saw great value in the approach of cognitive mapping – seeing the maps as tools for understanding decision-making and as resources capable of generating meaningful policy suggestions, and imagining how individuals’ maps could aggregate into a collective.


Computational Models of Belief Systems & Cultural Systems

While work on belief systems is similar to research on cultural systems – both use agent-based models to explore how complex systems evolve given a simple set of actor rules and interactions – there are important conceptual differences between the two lines of work.

Research on cultural systems takes a macro-level approach, seeking to explain if, when, and how distinctive communities of similar traits emerge, while research on belief systems uses comparable methods to understand if, when, and how distinctive individuals come to agree on a given point.

The difference between these approaches is subtle but notable. The cultural systems approach begins with the observation that distinctive cultures do exist, despite local tendencies for convergence, while research on belief systems begins from the observation that groups of people are capable of working together, despite heterogeneous opinions and interests.

In his foundational work on cultural systems, Axelrod begins, “despite tendencies towards convergence, differences between individuals and groups continue to exist in beliefs, attitudes, and behavior” (Axelrod, 1997).

Compare this to how DeGroot begins his exploration of belief systems: “Consider a group of individuals who must act together as a team or committee, and suppose that each individual in the group has his own subjective probability distribution for the unknown value of some parameter. A model is presented which describes how the group might reach agreement on a common subjective probability distribution for the parameter by pooling their individual opinions” (DeGroot, 1974).
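
DeGroot’s pooling process has a compact computational form: each agent repeatedly replaces its estimate with a weighted average of the group’s estimates, and for a row-stochastic, ergodic weight matrix the group converges to consensus. A minimal sketch, with a trust matrix of my own construction:

```python
# A minimal sketch of DeGroot-style opinion pooling. The weight matrix
# below is illustrative (not from DeGroot's paper); each row sums to 1
# and records how much agent i weights each peer's estimate.

def degroot_step(weights, opinions):
    """One round of pooling: new_i = sum_j w_ij * opinion_j."""
    return [sum(w * x for w, x in zip(row, opinions)) for row in weights]

W = [[0.6, 0.2, 0.2],
     [0.3, 0.4, 0.3],
     [0.2, 0.2, 0.6]]
x = [0.0, 0.5, 1.0]  # initial subjective estimates of the parameter

for _ in range(100):
    x = degroot_step(W, x)

# After repeated pooling, all three estimates agree (up to rounding):
# the group has reached a common subjective estimate.
```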

In other words, while cultural models seek to explain the presence of homophily and other system-level traits, models of belief systems more properly seek to capture deliberative exchange. The important methodological difference is that cultural systems model agent change as a function of similarity, while belief systems model agent change as a process of reasoning.



Computational Models of Cultural Systems

Computational approaches to studying the broader social context can be found in work on the emergence and diffusion of communities in cultural systems. Spicer makes an anthropological appeal for the study of such systems, arguing that cultural change can only be properly considered in relation to more stable elements of culture. These persistent cultural elements, he argues, can best be understood as ‘identity systems,’ in which individuals bestow meaning on symbols. Spicer notes that there are collective identity systems (i.e., culture) as well as individual systems, and chooses to focus his attention on the former. Spicer describes these systems in implicitly networked terms: identity systems capture “relationships between human beings and their cultural products” (Spicer, 1971). To the extent that individuals share the same relationships with the same cultural products, they are united under a common culture; they are, as Spicer says, “a people.”

Axelrod presents a more robust mathematical model for studying these cultural systems. Similar to Schelling’s dynamic models of segregation, Axelrod imagines individuals interacting through processes of social influence and social selection (Axelrod, 1997). Agents are described with n-length vectors, with each element initialized to a value between 0 and m. The elements of the vector represent cultural dimensions (features), and the value of each element represents an individual’s state along that dimension (traits). Two individuals with the exact same vector are said to share a culture, while, in general, agents are considered culturally similar to the extent to which they hold the same trait for the same feature. Agents on a grid are then allowed to interact: two neighboring agents are selected at random. With a probability equal to their cultural similarity, the agents interact. An interaction consists of selecting a random feature on which the agents differ (if there is one), and updating one agent’s trait on this feature to its neighbor’s trait on that feature. This simple model captures both the process of choice homophily, as agents are more likely to interact with similar agents, and the process of social influence, as interacting agents become more similar over time. Perhaps the most surprising finding of Axelrod’s approach is just how complex this cultural system turns out to be. Despite the model’s simple rules, he finds that it is difficult to predict the ultimate number of stable cultural regions based on the system’s n and m parameters.
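
Axelrod’s dynamics are simple enough to sketch directly. The toy implementation below uses a small grid, von Neumann neighbors, and arbitrary parameter choices (Axelrod’s own runs used, e.g., five features and ten traits), then counts the distinct cultures that remain after many interactions:

```python
import random

# A toy sketch of Axelrod's culture model. Parameters are illustrative.
random.seed(1)
SIZE, N_FEATURES, M_TRAITS = 5, 5, 3

# Each site holds a cultural vector: N_FEATURES features,
# each taking one of M_TRAITS traits.
grid = {(r, c): [random.randrange(M_TRAITS) for _ in range(N_FEATURES)]
        for r in range(SIZE) for c in range(SIZE)}

def neighbors(r, c):
    """Von Neumann (up/down/left/right) neighbors within the grid."""
    return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (r + dr, c + dc) in grid]

def similarity(a, b):
    """Fraction of features on which two agents share the same trait."""
    return sum(x == y for x, y in zip(a, b)) / N_FEATURES

for _ in range(50000):
    site = random.choice(list(grid))
    nbr = random.choice(neighbors(*site))
    a, b = grid[site], grid[nbr]
    if random.random() < similarity(a, b):   # choice homophily
        diffs = [i for i in range(N_FEATURES) if a[i] != b[i]]
        if diffs:                            # social influence
            a[random.choice(diffs)] = b[random.choice(diffs)]

# The number of distinct cultural vectors left on the grid is the
# quantity Axelrod found so hard to predict from n and m alone.
n_cultures = len({tuple(v) for v in grid.values()})
```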

This concept of modeling cultural convergence through simple social processes has maintained a foothold in the literature and has been slowly gaining more widespread attention. Bednar and Page take a game theoretic approach, imagining agents who must play multiple cognitively taxing games simultaneously. Their finding that in these scenarios “culturally distinct behavior is likely and in many cases unavoidable” (Bednar & Page, 2007) is notable because classic game-theoretic models fail to explain the emergence of culture at all: rather, rational agents simply maximize their utility and move on. In their simultaneous game scenarios, however, cognitively limited agents adopt the strategies that can best be applied across the tasks they face. Cultures, then, emerge as “agents evolve behaviors in strategic environments.” This finding underscores Granovetter’s argument about embeddedness (M. Granovetter, 1985): distinctive cultures emerge because regional contexts influence adaptive choices, which in turn influence an agent’s environment.

Moving beyond Axelrod’s grid implementation, Flache and Macy (Flache & Macy, 2011) consider agent interaction on the small world network proposed by Watts and Strogatz (Watts & Strogatz, 1998). This model randomly rewires a grid with select long-distance ties. Following Granovetter’s strength of weak ties theory (M. S. Granovetter, 1973), the rewired edges in the Watts-Strogatz model should bridge clusters and promote cultural diffusion. Flache and Macy also introduce the notion of the valence of interaction, considering social influence along dimensions of assimilation and differentiation, and taking social selection to consist of either attraction or xenophobia. In systems with only positively valenced interaction (assimilation and attraction), they find that the ‘weak’ ties have the expected result: cultural signals diffuse and the system tends towards cultural integration. However, introducing negatively valenced interactions (differentiation and xenophobia) leads to cultural polarization: deep disagreement between communities which themselves have high internal consensus.
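
The Watts-Strogatz construction underlying this work is itself brief: start from a ring lattice and rewire each edge with some probability p, producing the long-range shortcuts that act as ‘weak’ ties. A rough sketch with illustrative parameters:

```python
import random

# A sketch of Watts-Strogatz rewiring: a ring lattice whose edges are
# each rewired with probability p to a random endpoint. Parameters
# (100 nodes, k=2, p=0.1) are illustrative, not from the papers cited.
random.seed(2)

def ring_lattice(n, k):
    """Link each node to its k nearest neighbors on each side."""
    return {frozenset((i, (i + j) % n)) for i in range(n)
            for j in range(1, k + 1)}

def rewire(edges, n, p):
    """With probability p, replace one endpoint of an edge at random."""
    result = set()
    for edge in edges:
        i, j = tuple(edge)
        if random.random() < p:
            j = random.randrange(n)
            while j == i or frozenset((i, j)) in result:
                j = random.randrange(n)
        result.add(frozenset((i, j)))
    return result

# Most edges stay local; the few rewired ones bridge distant clusters.
edges = rewire(ring_lattice(100, 2), 100, p=0.1)
```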


Economics of Matching

A canonical problem in graph theory is that of matching – pairing people (or nodes) based on mutual preference. The classic example – framed, unfortunately, in a cis-heteronormative way – is known as the marriage problem. Assuming knowledge of the whole population, each man makes a rank-ordered list of appropriate female partners, and each woman similarly makes a rank-ordered list of appropriate male partners. The question, then, is whether we, as an all-knowing mathematician, can produce a matching in which no non-matched (opposite-sex) pair would prefer to be with each other rather than with the partners they are matched with.

The mathematical solution to “stable marriage matching” is elegant, and worth a post of its own at some point. For the moment, though, I was recently struck by the economic implications of this problem. That is, I had always considered it from the vantage point of the all-knowing observer, with the implicit understanding that such scope of vision is what makes the solution possible.
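
For a taste of that solution without the full post: the Gale-Shapley ‘deferred acceptance’ algorithm produces a stable matching by having one side propose in preference order while the other side tentatively holds the best offer received so far. A compact sketch with invented preference lists:

```python
# A sketch of Gale-Shapley deferred acceptance. Preference lists here
# are invented for illustration; 'proposers' and 'reviewers' stand in
# for the two sides of the classic marriage framing.

def stable_match(proposer_prefs, reviewer_prefs):
    """Return a stable matching {proposer: reviewer}."""
    free = list(proposer_prefs)                 # proposers not yet held
    next_choice = {p: 0 for p in proposer_prefs}
    held = {}                                   # reviewer -> proposer
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]   # p's next best option
        next_choice[p] += 1
        if r not in held:
            held[r] = p                         # tentatively accept
        elif rank[r][p] < rank[r][held[r]]:     # r prefers p: swap
            free.append(held[r])
            held[r] = p
        else:
            free.append(p)                      # p rejected; tries again
    return {p: r for r, p in held.items()}

proposers = {"a": ["x", "y"], "b": ["y", "x"]}
reviewers = {"x": ["b", "a"], "y": ["a", "b"]}
matching = stable_match(proposers, reviewers)   # {'a': 'x', 'b': 'y'}
```

No unmatched pair prefers each other to their assigned partners, which is exactly the stability condition described above.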

Roth’s 2008 article, What have we learned from market design?, brings a new perspective to the market failures that can result from the lack of such global coordination. Because it’s such an interesting story, I include below a long excerpt describing the history of today’s residency-matching program for medical school graduates:

The first job American doctors take after graduating from medical school is called a residency. These jobs are a big part of hospitals’ labor force, a critical part of physicians’ graduate education, and a substantial influence on their future careers. From 1900 to 1945, one way that hospitals competed for new residents was to try to hire residents earlier than other hospitals. This moved the date of appointment earlier, first slowly and then quickly, until by 1945 residents were sometimes being hired almost two years before they would graduate from medical school and begin work.

When I studied this in Roth (1984) it was the first market in which I had seen this kind of “unraveling” of appointment dates, but today we know that unraveling is a common and costly form of market failure. What we see when we study markets in the process of unraveling is that offers not only become increasingly early, but also become dispersed in time and of increasingly short duration. So not only are decisions being made early (before uncertainty is resolved about workers’ preferences or abilities), but also quickly, with applicants having to respond to offers before they can learn what other offers might be forthcoming. Efforts to prevent unraveling are venerable, for example Roth and Xing (1994) quote Salzman (1931) on laws in various English market from the 13th century concerning “forestalling” a market by transacting before goods could be offered in the market.

In 1945, American medical schools agreed not to release information about students before a specified date. This helped control the date of the market, but a new problem emerged: hospitals found that if some of the first offers they made were rejected after a period of deliberation, the candidates to whom they wished to make their next offers had often already accepted other positions. This led hospitals to make exploding offers to which candidates had to reply immediately, before they could learn what other offers might be available, and led to a chaotic market that shortened in duration from year to year, and resulted not only in missed agreements but also in broken ones. This kind of congestion also has since been seen in other markets, and in the extreme form it took in the American medical market by the late 1940’s, it also constitutes a form of market failure (cf. Roth and Xing 1997, and Avery, Jolls, Roth, and Posner 2007 for detailed accounts of congestion in labor markets in psychology and law). Faced with a market that was working very badly, the various American medical associations (of hospitals, students, and schools) agreed to employ a centralized clearinghouse to coordinate the market. After students had applied to residency programs and been interviewed, instead of having hospitals make individual offers to which students had to respond immediately, students and residency programs would instead be invited to submit rank order lists to indicate their preferences. That is, hospitals (residency programs) would rank the students they had interviewed, students would rank the hospitals (residency programs) they had interviewed, and a centralized clearinghouse — a matching mechanism — would be employed to produce a matching from the preference lists. Today this centralized clearinghouse is called the National Resident Matching Program (NRMP).


Deliberation in a Homophilous Network

The social context of a society is both an input and an output of the deliberative system. As Granovetter argued, “actors do not behave or decide as atoms outside a social context, nor do they adhere slavishly to a script written for them by the particular intersection of social categories that they happen to occupy. Their attempts at purposive action are instead embedded in concrete, ongoing systems of social relations” (Granovetter, 1985). This “problem of embeddedness” manifests in a scholarly tension between studying the role of individual agency and the structures that shape available actions.

Consider, for example, the presence of homophily in social networks. A priori, there is no reason to attribute such a feature to a single mechanism. Perhaps homophily results from individual preference for being with ‘like’ people, or perhaps it results primarily from the structural realities within which agents are embedded: we should not be surprised that high school students spend a great deal of time with each other.

From a deliberative perspective, widespread homophily is deeply disconcerting. Networks with predominantly homophilous relationships may indicate disparate spheres of association, even while maintaining a global balance on the whole. The linking patterns between an equal number of liberal and conservative blogs, for example, reveal distinctively separate communities rather than a more robust, crosscutting public sphere (Adamic & Glance, 2005).

Such homophily is particularly troubling as diversity of thought is arguably one of the most fundamental requirements for deliberation to proceed. Indeed, the vision of democratic legitimacy emerging from deliberation rests on the idea that all people, regardless of ideology, actively and equally participate (Cohen, 1989; Habermas, 1984; Mansbridge, 2003; Young, 1997). A commitment to this ideal has enshrined two standards – respect and the absence of power – as the only elements of deliberation which go undisputed within the broader field (Mansbridge, 2015). Furthermore, if we are concerned with the quality of deliberative output, then we ought to prefer deliberation amongst diverse groups, which typically identify better solutions than more homogeneous groups (Hong, Page, & Baumol, 2004). Most pragmatically, homophily narrows the scope of potential topics for deliberation. Indeed, if deliberation is to be considered as an “ongoing process of mutual justification” (Gutmann & Thompson, 1999) or as a “game of giving and asking for reasons” (Neblo, 2015), then deliberation can only properly take place between participants who, in some respects, disagree. In a thought experiment of perfect homophily, where agents are exactly identical to their neighbors, deliberation does not take place – simply because there is nothing for agents to deliberate about.


Collective Action and the Problem of Embeddedness

Divergent conceptions of homophily fall within a broader sociological debate about the freedom of an individual given the structural constraints of his or her context. As Gueorgi Kossinets and Duncan Watts argue, “one can always ask to what extent the observed outcome reflects the preferences and intentions of the individuals themselves and to what extent it is a consequence of the social-organizational structure in which they are embedded” (Kossinets & Watts, 2009). If our neighborhoods are segregated is it because individuals prefer to live in ‘like’ communities, or is it due to deeper correlations between race and socio-economic status? If our friends enjoy the same activities as ourselves, is it because we prefer to spend time with people who share our tastes, or because we met those friends through a shared activity?

The tension between these two approaches is what Granovetter called the “problem of embeddedness” (Granovetter, 1985), because neither the agent-based nor structural view captures the whole picture. As Granovetter argued, “actors do not behave or decide as atoms outside a social context, nor do they adhere slavishly to a script written for them by the particular intersection of social categories that they happen to occupy. Their attempts at purposive action are instead embedded in concrete, ongoing systems of social relations.”

The challenge of embeddedness can be seen acutely in network homophily research, as scholars try to account for both the role of individual agency and the structures which shape available options. In their yearlong study of university relationships, Kossinets and Watts observe that both agent-driven and structurally-induced homophily play integral roles in tie formation. Indeed, the two mechanisms “appear to act as substitutes, each reinforcing the observed tendency of similar individuals to interact” (Kossinets & Watts, 2009). In detailed, agent-based studies, Schelling finds that individual preference leads to amplified global results; that extreme structural segregation can result from individuals’ moderate preference against being in the minority (Schelling, 1971). Mutz similarly argues that the workplace serves as an important setting for diverse political discourse precisely because it is a structured institution in which individual choice is constrained (Mutz & Mondak, 2006).

Consider also Michael Spence’s economic model of gender-based pay disparity (Spence, 1973). Imagine an employee pool in which people have two observable characteristics: sex and education. An employer assigns each employee to a higher or lower wage by inferring the unobserved characteristic of productivity. Assume also that gender and productivity are perfectly uncorrelated. Intuitively, this should mean that gender and pay will also be uncorrelated; however, Spence’s game-theoretic model reveals a surprising result. After initial rounds of hiring, the employer will begin to associate higher levels of education with higher levels of productivity. More precisely, because an employer’s opinions are conditioned on gender as well as education, “if at some point in time men and women are not investing in education in the same ways, then the returns to education for men and women will be different in the next round.” In other words, Spence finds that there are numerous system equilibria and, given differing initial investments in education, the pay schedules for men and women will settle into different equilibrium states.

Here again, we see the interaction of agency and structure. Whether initial investments in education differed because of personal taste or as the result of structural gender discrimination, once a gender-based equilibrium has been reached, individual investment in education does little to shift the established paradigm. A woman today may be paid less because women were barred from educational attainment two generations ago. That inequity may be further compounded by active discrimination on the part of an employer, but the structural history itself is enough to result in disparity. Furthermore, this structural context then sets the stage for inducing gender-based homophily, as men and women could be socially inclined towards different workplaces or career paths.

Given these complex interactions, where past individual choices accumulate into future social context, it is perhaps unsurprising that teasing apart the impact of agency and structure is no small feat – one that is virtually impossible in the absence of dynamic data (Kossinets & Watts, 2009). Individuals embedded within this system may similarly struggle to identify their own role in shaping social structures. As Schelling writes, “people acting individually are often unable to affect the results; they can only affect their own positions within the overall results” (Schelling, 1971). Acting individually, we create self-sustaining segregated societies: opting into like communities and presenting our children with a narrow range of friends with whom to connect.

Yet the very role that individual actions play in building social structures indicates that individuals may work together to change that structural context. It is a classic collective action problem – if we collectively prefer diverse communities, then we must act collectively, not individually. In her extensive work on collective action problems, Elinor Ostrom finds that “individuals frequently do design new institutional arrangements – and thus create social capital themselves through covenantal processes” (Ostrom, 1994). Embeddedness presents a methodological challenge, but it need not be a problem; it simply reflects the current, changeable, institutional arrangement. That individual actions create the structures which in turn affect future actions need not be constraining – indeed, it illustrates the power which individuals collectively possess: the power to shape context, to create social structures, and to build social capital by working together to solve our collective problems.


Granovetter, M. (1985). Economic action and social structure: The problem of embeddedness. American journal of sociology, 481-510.

Kossinets, G., & Watts, D. J. (2009). Origins of homophily in an evolving social network. American journal of sociology, 115(2), 405-450.

Mutz, D. C., & Mondak, J. J. (2006). The Workplace as a Context for Cross‐Cutting Political Discourse. Journal of politics, 68(1), 140-155.

Ostrom, E. (1994). Covenants, collective action, and common-pool resources.

Schelling, T. C. (1971). Dynamic models of segregation. Journal of mathematical sociology, 1(2), 143-186.

Spence, M. (1973). Job market signaling. The Quarterly Journal of Economics, 87(3), 355-374.


Noncooperation and the Latency of Weak Ties

As Centola and Macy summarize, the key insight of Granovetter’s seminal 1973 work (Granovetter, 1973) is that ties which are “weak in the relational sense – that the relations are less salient or frequent – are often strong in the structural sense – that they provide shortcuts across social topology” (Centola & Macy, 2007). While this remains an important sociological finding, there are important reasons to be wary of generalizing too far: such ‘weak ties’ may not be sufficient for diffusion in complex contagion (Centola & Macy, 2007) and identification of such ties is highly dependent on how connections are defined and measured (Grannis, 2010).

Furthermore, recent studies probing just how far ‘the strength of weak ties’ can be taken allude to another underexplored concern: the latency of ties. For example, Grannis points to the oft glossed-over result of Milgram’s small world experiment (Milgram, 1967): 71% of the chains did not make it to their target. As Milgram explains, “chains die before completion because on each remove a certain portion of participants simply do not cooperate and fail to send the folder. Thus the results we obtained on the distribution of chain lengths occurred within the general drift of a decay curve.” Milgram and later Dodds et al. (Dodds, Muhamad, & Watts, 2003) correct for this decay by including in the average path length an estimation of how long uncompleted paths would be if they had in fact been completed. For his part, Grannis argues that the failure caused by such noncooperation is exactly the point: “it calls into question what efficiency, if any, could be derived from these hypothesized, noncooperative paths” (Grannis, 2010).

I call this a problem of latency because one can imagine that social ties aren’t always reliably activated. Rather, activation may occur as a function of relationship strength and task burden, or may simply vary stochastically. In their global search email task, Dodds et al. find that only 25% of self-registered participants actually initiated a chain, whereas 37% of subsequent participants – those who were recruited by an acquaintance of some sort – did carry on the chain (Dodds et al., 2003). They attribute this difference to the very social relations they are studying: who does the asking matters.

In their survey of non-participants, the authors further find that “less than 0.3% of those contacted claimed that they could not think of an appropriate recipient, suggesting that lack of interest or incentive, not difficulty, was the main reason for chain termination.” Again, this implies that not all asks are equal – the noncomplying participants could have continued the chain, but they chose not to. In economic terms, it seems that the activation cost – the cost of continuing the chain – was greater than the reward for participating.

One can imagine similar interactions in the job-search domain. Passing on information about a job opening may be relatively low-cost, while actively recommending a candidate for a position may come with certain risk (Smith, 2005). In many ways, the informational nature of a job search is reminiscent of ‘top-of-mind’ marketing: it is good if customers choose your product when faced with a range of options, but ideally they would think of you first; they would choose to purchase your product before even being confronted with alternatives. In the job-search scenario, unemployed people are often encouraged to reach out to as many contacts as they can, in order to keep their name top-of-mind so that these ‘weak ties’ – who otherwise may not have thought of them – do forward information when learning of job openings. Granovetter does not examine the job search process in detail, but his findings – that among people who found a new job through a contact, 55.6% saw that contact occasionally while another 27.8% saw that contact only rarely (Granovetter, 1973) – imply that information was most likely diffused by a job-seeker requesting information. In this case, the job seeker had to activate a latent weak tie before receiving its benefit.

Arguably, the concept of latency is built into the very definition of a weak tie – weak ties are weak because their latency makes them easier to maintain than strong, always-active ties. Yet the latency of weak ties, or more precisely, their activation costs, are generally not considered. In his detailed study of three distinct datasets, Grannis finds that a key problem in network interpretation is that connections’ temporal nature is often overlooked (Grannis, 2010). I would argue that a related challenge is that the observed relations are considered to always be active. Using Grannis’ example, there is nothing inherently wrong with the suggestion that ideas may flow from A to C over the course of 40 years; the problem comes in interpreting this as a simple network where C’s beliefs directly trace to A. Indeed, in the academic context, it’s quite reasonable to think that an academic ‘grandparent’ may influence one’s scholarly work – but that influence comes through in some ideas and not others; it comes through connections whose strength waxes and wanes. To consider these links always present, and always active, is indeed to neglect the true nature of the relationship.

Ultimately, Grannis argues that the core problem in many network models is that the phase transitions which govern global network characteristics are sensitive to local-level phenomena: once the average degree crosses 1, a giant component emerges. Given this sensitivity, it becomes essential to consider the latency of weak network ties. A candidate who doesn’t activate weak ties may never find a job, and a message-passing task for which participants feel unmotivated may never reach completion. In his pop-science article, Malcolm Gladwell argues that some people just feel an inherent motivation to maintain more social ties than others (Gladwell, 1999). Given such individual variation in the number of ties and the willingness to activate them, it seems clear that the latency of weak ties needs further study; otherwise, as Grannis warns, our generalizations could lead to “fundamental errors in our understanding of the effects of network topology on diffusion processes” (Grannis, 2010).
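
That phase transition is easy to verify by simulation: in an Erdős-Rényi-style random graph, the largest component stays negligible below average degree 1 and spans a large fraction of nodes above it. A quick illustrative check (parameters are my own, chosen only to show the two regimes):

```python
import random

# Illustrative check of the giant-component phase transition in a
# random graph: below average degree 1, components stay tiny; above
# it, one component spans a large fraction of the network.
random.seed(3)

def largest_component_fraction(n, avg_degree):
    """Build a random graph with the given average degree and return
    the fraction of nodes in its largest connected component."""
    p = avg_degree / (n - 1)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    seen, best = set(), 0
    for start in range(n):          # depth-first search per component
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for nbr in adj[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    stack.append(nbr)
        best = max(best, size)
    return best / n

below = largest_component_fraction(2000, 0.5)  # subcritical: tiny
above = largest_component_fraction(2000, 2.0)  # supercritical: giant
```

If weak ties fail to activate, the effective average degree drops, and a network that looks comfortably connected on paper can fall below this threshold in practice.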


Centola, D., & Macy, M. (2007). Complex contagions and the weakness of long ties. American journal of sociology, 113(3), 702-734.

Dodds, P. S., Muhamad, R., & Watts, D. J. (2003). An experimental study of search in global social networks. Science, 301(5634), 827-829.

Gladwell, M. (1999). Six degrees of lois weisberg.

Grannis, R. (2010). Six Degrees of “Who Cares?”. American journal of sociology, 115(4), 991-1017.

Granovetter, M. S. (1973). The strength of weak ties. American journal of sociology, 1360-1380.

Milgram, S. (1967). The small world problem. Psychology today, 2(1), 60-67.

Smith, S. S. (2005). “Don’t put my name on it”: Social Capital Activation and Job-Finding Assistance among the Black Urban Poor. American journal of sociology, 111(1), 1-57.


The Chasm and the Bridge: Modes of Considering Social Network Structure

In their respective work, Granovetter and Burt explore roughly the same phenomenon – heterogeneous connection patterns within a social network. However, they each choose different metaphors to describe that phenomenon, leading to differences in how one should understand and interpret social network structure.

Perhaps most famously, Granovetter argues for the ‘strength of weak ties,’ finding that it is the weak, between-group ties which best support information diffusion – as studied for the specific task of finding a job (Granovetter, 1973). For his part, Burt prefers to focus on ‘structural holes’: rather than considering a tie which spans two groups, Burt focuses on the void it covers. As Burt describes, “The weak tie argument is about the strength of relationships that span the chasm between two social clusters. The structural hole argument is about the chasm spanned” (Burt, 1995). Burt further argues that his concept is the more valuable of the two; that ‘structural holes’ are more informative than ‘weak ties.’ “Whether a relationship is strong or weak,” Burt argues, “it generates information benefits when it is a bridge over a structural hole.”

While Granovetter’s weak tie concept pre-dates Burt’s structural holes, Granovetter’s paper implies a rebuttal to this argument. Illustrating with the so-called ‘forbidden triad,’ Granovetter argues that in social networks your friends are likely to be friends with each other. That is, if person A is strongly linked to both B and C, it is unlikely that B and C have no connection. Because this forbidden triad is uncommon in social networks, Granovetter argues that “it follows that, except under unlikely conditions, no strong tie is a bridge.” This implies that Granovetter’s argument is not precisely about identifying whether a relationship is strong or weak, as Burt says, but rather about identifying bridges over structural holes. It is merely the fact that those bridges are almost always weak which then leads to Granovetter’s interest in the strength of a tie.

This seems to indicate that there is little difference between looking for weak ties or for structural holes: what matters for successful information exchange is that a hole is bridged, and it is only a matter of semantics whether you consider the hole or consider the bridge. Yet in Burt’s later work, he further develops the idea of a hole, building the argument for why this mode of thinking is important. He describes French CEO René Fourtou’s observation that the best ideas were stimulated by people from divergent disciplines. “Fourtou emphasized le vide – literally, the emptiness; conceptually, structural holes – as essential to coming up with new ideas: ‘Le vide has a huge function in organizations…shock comes when different things meet. It’s the interface that’s interesting…If you don’t leave le vide, you have no unexpected things, no creation.’” (Burt, 2004)

It is this last piece which is missing from Granovetter’s conception – Granovetter argues that bridges are valuable because they span holes; Burt argues that the holes themselves have value. You must leave le vide.

Hayek writes that the fundamental economic challenge of society is “a problem of the utilization of knowledge not given to anyone in its totality” (Hayek, 1945). If you consider each individual to have unique knowledge, the question of economics becomes how to best leverage this disparate knowledge for “rapid adaptation to changes in the particular circumstances of time and place.” With this understanding, any network which effectively disseminates information would be optimal for solving economic challenges.

Imagine a fully connected network, or one sufficiently connected with weak ties. In Granovetter’s model – assuming no limit to a person’s capacity to maintain ties – such a network would be sufficient for solving complex problems. If you have full, easy access to every other individual in the network, then you would learn about job openings or otherwise have the information needed to engage in complex, collective problem-solving. A weak tie only provides benefit if it brings information from another community; if it spans a structural hole.

In Burt’s model, however, such a network is not enough – an optimal network must contain le vide; it must have structural holes. Research by Lazer and Friedman (Lazer & Friedman, 2007) gives insight into how these structural holes add value. In an agent-based simulation, Lazer and Friedman examine the relationship between group problem-solving and network structure. Surprisingly, they find that those networks which are most efficient at disseminating information – such as a fully connected network – are better in the short run but have lower long-term performance. An inefficient network, on the other hand, one with structural holes, “maintains diversity in the system and is thus better for exploration than an efficient network, supporting a more thorough search for solutions in the long run.” This seems to support Burt’s thesis that it is not just the ability to bridge, but the very existence of holes which matters.
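The flavor of Lazer and Friedman’s result can be sketched in a few lines. The simulation below is a loose, simplified rendering of their copy-or-explore dynamic – not their actual NK-landscape model – in which agents imitate their best-scoring neighbor when one exists, and otherwise search locally. The scoring function and all parameters are arbitrary stand-ins:

```python
import math
import random

def simulate(neighbors, n=20, steps=100, seed=3):
    """Loose sketch of a copy-or-explore dynamic: each agent holds a
    candidate solution (a number scored by a rugged stand-in landscape);
    each step it copies its best-scoring neighbor if that neighbor is
    better, and otherwise perturbs its own solution locally."""
    rng = random.Random(seed)
    score = lambda x: math.sin(3 * x) + 0.3 * math.sin(7 * x)  # rugged stand-in
    sols = [rng.uniform(0, 10) for _ in range(n)]
    for _ in range(steps):
        new = list(sols)
        for i in range(n):
            best = max(neighbors(i, n), key=lambda j: score(sols[j]))
            if score(sols[best]) > score(sols[i]):
                new[i] = sols[best]                    # exploit: copy neighbor
            else:
                new[i] = sols[i] + rng.gauss(0, 0.05)  # explore locally
        sols = new
    return sum(score(x) for x in sols) / n  # mean performance

complete = lambda i, n: [j for j in range(n) if j != i]  # efficient network
ring = lambda i, n: [(i - 1) % n, (i + 1) % n]           # inefficient network

print("complete:", simulate(complete))
print("ring:    ", simulate(ring))
```

On a rugged landscape, the fully connected network tends to lock in early on whatever solution happens to look best, while the sparse ring preserves diversity longer – the mechanism behind the quoted finding.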

There are, of course, drawbacks to these structural holes as well. Burt finds that structural holes help generate good ideas but – as the work of Lazer and Friedman would imply – hurt their dissemination and adoption (Burt, 2004). So it remains to be seen whether the ‘strength of structural holes,’ as Burt writes, is sufficient to overcome their drawbacks. But regardless of the normative value of these holes, Burt is right to argue that this mode of thinking should sit side-by-side with Granovetter’s. For thorough social network analysis, it is not enough to consider the bridge; one must consider the chasm. Le vide matters.


Burt, R. S. (1995). Structural Holes: The Social Structure of Competition. Belknap Press.

Burt, R. S. (2004). Structural holes and good ideas. American Journal of Sociology, 110(2), 349-399.

Granovetter, M. S. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360-1380.

Hayek, F. A. (1945). The use of knowledge in society. The American Economic Review, 35(4), 519-530.

Lazer, D., & Friedman, A. (2007). The network structure of exploration and exploitation. Administrative Science Quarterly, 52(4), 667-694.



Epistemic Networks and Idea Exchange

Earlier this week, I gave a brief lightning talk as part of the fall welcome event for Northeastern’s Digital Scholarship Group and NULab for Texts, Maps, and Data. In my talk, I gave a high-level introduction to the motivation and concept behind a research project I’m in the early stages of formulating with my advisor Nick Beauchamp and my Tufts colleague Peter Levine.

I didn’t write out my remarks and my slides don’t contain much text, but I thought it would be helpful to try to recreate those remarks here:

I am interested broadly in the topic of political dialogue and deliberation. When I use the term “political” here, I’m not referring exclusively to debate between elected officials. Indeed, I am much more interested in politics as associated living; I am interested in the conversations between everyday people just trying to figure out how we live in this world together. These conversations may be structured or unstructured.

With this group of participants in mind, the next question is to explore how ideas spread. There is a useful model borrowed from epistemology that looks at spreading on networks. Considering social networks, for example, you can imagine tracking the spread of a meme across Facebook as people share it with their friends, who then share it with friends of friends, and so on.

This model is not ideal in the context of dialogue. Take the interaction between two people: if my friend shares a meme, there is some probability that I will see it in my feed and some probability that I won’t. But those are basically the only two options: either I see it or I don’t.
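That binary see-or-don’t-see dynamic is easy to formalize. The sketch below is a minimal independent-cascade model over a hypothetical friend graph, where each exposure is a single coin flip:

```python
import random

def cascade(adj, seeds, p=0.3, seed=42):
    """Independent-cascade spread: each newly exposed node gets one
    chance (probability p) to show the meme to each of its neighbors."""
    rng = random.Random(seed)
    seen = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for nb in adj.get(node, []):
                if nb not in seen and rng.random() < p:
                    seen.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return seen

# Hypothetical friend graph: me -> friends -> friends of friends.
adj = {"me": ["friend1", "friend2"],
       "friend1": ["fof1"], "friend2": ["fof2"],
       "fof1": [], "fof2": []}
print(cascade(adj, ["me"], p=1.0))  # with p=1, everyone eventually sees it
```

The entire model fits in one coin flip per exposure – which is precisely what makes it too coarse for dialogue, where the outcome of an exchange is not a simple seen/not-seen bit.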

With dialogue, I may understand you, I may not understand you, I may merely think I understand you, and so on. Furthermore, dialogue is a back-and-forth process. And while a meme is either shared or not shared, in the back and forth of dialogue there is no certainty that an idea is actually exchanged, or that a comment had a predictable effect.

This raises the challenging question of how to model dialogue as a process at the local level. This initial work considers an individual’s epistemic network – a network of ideas and beliefs which models a given individual’s reasoning process. The act of dialogue, then, is no longer an exchange between two (or more) individuals; it is an exchange between two (or more) epistemic networks.

There are, of course, a lot of methodological challenges and questions to this approach. Most fundamentally, how do you model a person’s epistemic network? There are multiple, divergent ways to do this, from which you can imagine getting very different – but equally valid – results.

The first method – which has been piloted several times by Peter Levine – is a guided reflection process in which individuals respond to a series of prompts in order to self-identify the nodes and links of their epistemic network. The second method involves the automatic extraction of a semantic network from a written reflection or discussion transcript.
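As a rough illustration of the second method, the sketch below builds a naive semantic network from a text by linking words that co-occur in the same sentence. This is a crude stand-in for real semantic-network extraction; the stop-word list and example text are made up:

```python
import itertools
import re
from collections import Counter

def cooccurrence_network(text):
    """Naive semantic-network sketch: nodes are content words, and an
    edge links each pair of words that co-occur in the same sentence."""
    stop = {"the", "a", "an", "of", "and", "is", "to", "in", "we", "our"}
    edges = Counter()
    for sentence in re.split(r"[.!?]", text.lower()):
        words = [w for w in re.findall(r"[a-z]+", sentence) if w not in stop]
        for u, v in itertools.combinations(sorted(set(words)), 2):
            edges[(u, v)] += 1
    return edges

text = "Dialogue builds trust. Trust supports community. Community shapes dialogue."
net = cooccurrence_network(text)
print(net)
```

A real pipeline would add lemmatization, better sentence segmentation, and weighting, but even this toy version yields a graph whose structure – which concepts a reflection connects – can be compared across people or over time.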

I am interested in exploring both of these methods – ideally with the same people, in order to compare both construction models. Additionally, once epistemic networks are constructed, through either approach, you can evaluate and compare their change over time.

There are a number of other research questions I am interested in exploring, such as what network topology is conducive to “good” dialogue and what interactions and conditions lead to opinion change.


Multivariate Network Exploration and Presentation

In “Multivariate Network Exploration and Presentation,” authors Stef van den Elzen and Jarke J. van Wijk introduce an approach they call “Detail to Overview via Selections and Aggregations,” or DOSA. I was going to make fun of them for naming their approach after a delicious south Indian dish, but since they comment that their name “resonates with our aim to combine existing ingredients into a tasteful result,” I’ll have to just leave it there.

The DOSA approach – and now I am hungry – aims to allow a user to explore the complex interplay between network topology and node attributes. For example, in company email data, you may wish to simultaneously examine assortativity by gender and department over time. That is, you may need to consider both structure and multivariate data at once.
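As a toy illustration of the kind of attribute-plus-structure question DOSA targets, the sketch below computes a crude mixing measure – the share of edges whose endpoints share a department label – over a hypothetical email network. (This is a simplified stand-in for a full assortativity coefficient; the names and labels are invented.)

```python
def same_attribute_fraction(edges, attr):
    """Crude mixing measure: share of edges whose endpoints share an
    attribute value (1.0 = perfectly assortative on that attribute)."""
    same = sum(1 for u, v in edges if attr[u] == attr[v])
    return same / len(edges)

# Hypothetical email network: who emailed whom, plus a department label.
dept = {"ana": "sales", "bob": "sales", "cara": "eng", "dev": "eng"}
edges = [("ana", "bob"), ("cara", "dev"), ("ana", "cara")]

print(same_attribute_fraction(edges, dept))  # 2 of 3 edges stay in-department
```

Answering even this one-attribute question already requires joining structure (the edge list) with node metadata (the labels) – multiply that by several attributes and time, and the need for a tool like DOSA becomes clear.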

This is a non-trivial problem, and I particularly appreciated van den Elzen and van Wijk’s practical framing of why this is a problem:

“Multivariate networks are commonly visualized using node-link diagrams for structural analysis. However, node-link diagrams do not scale to large numbers of nodes and links and users regularly end up with hairball-like visualizations. The multivariate data associated with the nodes and links are encoded using visual variables like color, size, shape or small visualization glyphs. From the hairball-like visualizations no network exploration or analysis is possible and no insights are gained or even worse, false conclusions are drawn due to clutter and overdraw.”

YES. From my own experience, I can attest that this is a problem.

So what do we do about it?

The authors suggest a multi-pronged approach which allows non-expert users to select nodes and edges of interest, simultaneously see a detail and infographic-like overview, and to examine the aggregated attributes of a selection.

Overall, this approach looks really cool and very helpful. (The paper did win the “best paper” award at the IEEE Information Visualization 2014 Conference, so perhaps that shouldn’t be that surprising.) I was a little disappointed that I couldn’t find the GUI implementation of this approach online, though, which makes it a little hard to judge how useful the tool really is.

From their screenshots and online video, however, I find that while this is a really valiant effort to tackle a difficult problem, there is still more work to do in this area. The challenge with visualizing complex networks is indeed that they are complex, and while DOSA gives a user some control over how to filter and interact with this complexity, there is still a whole lot going on.

While I appreciate the inclusion of examples and use cases, I would have also liked to see a user study evaluating how well their tool met the goal of providing a navigation and exploration tool for non-experts. I also think the issues of scalability with respect to attributes and selection that they raise in the limitations section are important topics which, while reasonably beyond the scope of this paper, ought to be tackled in future work.