Monthly Archives: October 2016

A Lesson from the West Area Computers

I really want to read Hidden Figures, the new book by Margot Lee Shetterly which chronicles “the untold story of the Black women mathematicians who helped win the space race.” If you aren’t as excited about this book as I am, you should be: it highlights the work and experiences of the West Area Computers – a group of black, female mathematicians who worked at NASA Langley from 1943 through 1958.

I haven’t gotten a chance to read it yet, but I was particularly struck by one incident I heard about on the podcast Science Friday and found recounted in Smithsonian Magazine:

But life at Langley wasn’t just the churn of greased gears. Not only were the women rarely provided the same opportunities and titles as their male counterparts, but the West Computers lived with constant reminders that they were second-class citizens. In the book, Shetterly highlights one particular incident involving an offensive sign in the dining room bearing the designation: Colored Computers.

One particularly brazen computer, Miriam Mann, took responding to the affront on as her own personal vendetta. She plucked the sign from the table, tucking it away in her purse. When the sign returned, she removed it again. “That was incredible courage,” says Shetterly. “This was still a time when people are lynched, when you could be pulled off the bus for sitting in the wrong seat. [There were] very, very high stakes.”

But eventually Mann won. The sign disappeared.

I love this story.

Not because it has a hopeful message about how determination always wins – but because it serves as a reminder of the effort and risk people of color face every day just in interacting with their environment.

The West Computers were tremendously good at their jobs and were respected by their white, male colleagues. I imagine many of these colleagues considered themselves open-minded, even radical for the day, for valuing the talent of their black colleagues.

When I hear the story about how Mann removed the “Colored Computers” sign every day, I don’t just hear a story of the valiant strength of one woman.

I hear a story of white silence.

I hear a story about how other people didn’t complain about the sign. I imagine they barely even noticed the sign. It didn’t affect them and never weighed upon their world.

John Glenn reportedly refused to fly unless West Area Computer Katherine Johnson verified the calculations first – such was his respect for her work.

And yet it never crossed anyone’s mind that a “Colored Computers” sign might not be appropriate.

That’s just the way the world was then.

And that makes me wonder – what don’t I see?

To me, this story is a reminder that people of color experience the world differently than I do – because people like me constructed the world I experience. There must be so many things every day that just slip past my notice, no matter how open-minded or progressive I’d like to be.

It’s easy to look back at the 1940s and see that a “Colored” sign is racist. What’s hard is to look at the world today and see that sign’s modern-day equivalent.

 


Election Modeling

I’ve been spending a lot of my time working on a class assignment in which we are asked to model the U.S. presidential election. The model is by necessity fairly rudimentary – I’m afraid I won’t be giving Nate Silver a run for his money any time soon – but it’s nonetheless been very interesting to think through the various steps and factors which influence how election results play out.

The basic approach is borrowed from the compartmental models of epidemiology. Essentially, you treat all people as statistically equivalent and allow for transitions between discrete compartments of behavior.

Consider a simple model of the flu: you start with a large pool of susceptible people and a few infectious people. With some probability, a susceptible person will come in contact with an infectious person and become infected. At some average rate, an infectious person will recover. Thus, you can separate people into compartments – Susceptible, Infectious, Recovered – and with average transition rates estimate the number of people in each compartment at each time step.
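
Concretely, a minimal discrete-time version of this might look like the sketch below (in Python; the parameter values are invented purely for illustration, not fit to any real disease):

    # Minimal discrete-time SIR model: Susceptible -> Infectious -> Recovered.
    # Parameter values are illustrative only.
    def sir_step(S, I, R, beta=0.3, gamma=0.1):
        """One time step; beta is the transmission rate, gamma the recovery rate."""
        N = S + I + R
        new_infections = beta * S * I / N   # susceptible people meeting infectious ones
        new_recoveries = gamma * I          # infectious people recovering
        return (S - new_infections,
                I + new_infections - new_recoveries,
                R + new_recoveries)

    S, I, R = 9990.0, 10.0, 0.0
    for t in range(100):
        S, I, R = sir_step(S, I, R)
    print(round(S), round(I), round(R))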

Of course, sophisticated epidemic models can be much more complicated than this, and trying to interpret the complexity of an electoral system through such a simple model has proven to be challenging.

First there’s the question of how to transfer this metaphor to electoral politics – what does it mean to be ‘susceptible’, ‘infectious’, or ‘recovered’ in this context?

But perhaps the piece I have found most interesting is trying to understand the system’s “initial conditions.” I am not an epidemiologist, but a simplified model of disease spreading where some people start susceptible and a few people start infected makes intuitive sense to me. We even worked out mathematically how moving people from “susceptible” to “recovered” via vaccination helps prevent a serious outbreak of a disease. (PSA: get your flu shot.)
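
To sketch that argument with standard SIR reasoning (the flu figure below is a commonly cited estimate, not something from our class): an outbreak can only grow while each infection produces more than one new infection, i.e. while R0 * S/N > 1. Vaccination moves people directly from Susceptible to Recovered, shrinking S/N, so once the vaccinated fraction p exceeds 1 - 1/R0 the outbreak fizzles out. For a flu-like R0 of about 1.3, that threshold is roughly 1 - 1/1.3, or about 23% of the population.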

But I’ve had a much harder time wrapping my head around what initial compartment a voter might belong in.

There’s an idealized version of politics in which all eligible voters start with a completely open mind – a clean slate ready to be filled with thoughtful judgements and reflections on the merits of each candidate’s policies.

But that’s not really how electoral politics works.

I, for example, have always been a staunch partisan, and while it perhaps would be better if I entered an election season as a clean slate – I always enter with a whole host of biases and preconceptions. The debates and TV ads were never going to change my mind.

So what has been most striking in the process is how little movement actually takes place – especially considering just how long this election has gone on.

When you take the partisan leaning of Independents into account, Pew estimates the current population of registered voters as 44% Republican/lean Republican, 48% Democrat/lean Democrat, and 8% no leaning/other party.

FiveThirtyEight’s weighted average of national polls shows some fluctuations over the last six months, but currently puts Clinton at 45.8%, Trump at 39.4%, and Johnson at 5.8%. That doesn’t map directly onto the raw partisan leanings, but it’s close enough to show that – in the epidemiological framework – relatively few transitions are happening.
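
To make that concrete, here’s a toy version of the voter model, seeding the compartments from those Pew leanings. The switching rates below are pure invention on my part – the point is just that tiny rates leave the compartments nearly unchanged:

    # Toy voter-compartment model seeded from the Pew partisan leanings.
    dem, rep, undecided = 0.48, 0.44, 0.08
    switch_rate = 0.001   # chance a partisan switches sides in a week (invented)
    decide_rate = 0.01    # chance an undecided voter picks a side in a week (invented)

    for week in range(26):  # roughly six months of campaigning
        dem_to_rep = switch_rate * dem
        rep_to_dem = switch_rate * rep
        newly_decided = decide_rate * undecided
        dem += rep_to_dem - dem_to_rep + newly_decided / 2
        rep += dem_to_rep - rep_to_dem + newly_decided / 2
        undecided -= newly_decided

    print("dem=%.3f rep=%.3f undecided=%.3f" % (dem, rep, undecided))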

In fact, the earliest FiveThirtyEight numbers, from June 8 of this year, put Clinton at 42%, Trump at 38%, and Johnson just shy of 8%. So I guess this makes me wonder:

…Couldn’t we have held this election back in June and saved ourselves the trouble?


Epistemic Networks and Idea Exchange

Earlier this week, I gave a brief lightning talk as part of the fall welcome event for Northeastern’s Digital Scholarship Group and NULab for Texts, Maps, and Networks. In my talk, I gave a high-level introduction to the motivation and concept behind a research project I’m in the early stages of formulating with my advisor Nick Beauchamp and my Tufts colleague Peter Levine.

I didn’t write out my remarks and my slides don’t contain much text, but I thought it would be helpful to try to recreate those remarks here:

I am interested broadly in the topic of political dialogue and deliberation. When I use the term “political” here, I’m not referring exclusively to debate between elected officials. Indeed, I am much more interested in politics as associated living; I am interested in the conversations between everyday people just trying to figure out how we live in this world together. These conversations may be structured or unstructured.

With this group of participants in mind, the next question is to explore how ideas spread. There is a great model borrowed from epidemiology that looks at spreading on networks. Considering social networks, for example, you can imagine tracking the spread of a meme across Facebook as people share it with their friends, who then share it with friends of friends, and so on.

This model is not ideal in the context of dialogue. Take the interaction between two people, for example. If my friend shares a meme, there’s some probability that I will see it in my feed and some probability that I won’t. But those are basically the only two options: either I see it or I don’t.

With dialogue, I may understand you, I may not understand you, I may think I understand you…etc. Furthermore, dialogue is a back and forth process. And while a meme is either shared or not shared, in the back and forth of dialogue, there is no certainty that an idea is actually exchanged or that a comment had a predictable effect.

This raises the challenging question of how to model dialogue as a process at the local level. This initial work considers an individual’s epistemic network – a network of ideas and beliefs which models a given individual’s reasoning process. The act of dialogue, then, is no longer an exchange between two (or more) individuals; it is an exchange between two (or more) epistemic networks.

There are, of course, a lot of methodological challenges and questions to this approach. Most fundamentally, how do you model a person’s epistemic network? There are multiple, divergent ways to do this, from which you can imagine getting very different – but equally valid – results.

The first method – which has been piloted several times by Peter Levine – is a guided reflection process in which individuals respond to a series of prompts in order to self-identify the nodes and links of their epistemic network. The second method involves the automatic extraction of a semantic network from a written reflection or discussion transcript.
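
To give a flavor of the second method, here is a very crude sketch – sentence-level co-occurrence only, far simpler than any extraction we would actually use – of building such a network with networkx:

    # Crude semantic-network extraction: link terms that co-occur in a sentence.
    # A real pipeline would involve parsing, stemming, entity resolution, etc.
    import itertools
    import networkx as nx

    STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "it", "that", "i"}

    def semantic_network(text):
        graph = nx.Graph()
        for sentence in text.lower().split("."):
            terms = [w.strip(",;:") for w in sentence.split()]
            terms = [w for w in terms if w and w not in STOPWORDS]
            # every pair of co-occurring terms gets an edge, weighted by count
            for u, v in itertools.combinations(set(terms), 2):
                weight = graph.get_edge_data(u, v, {"weight": 0})["weight"]
                graph.add_edge(u, v, weight=weight + 1)
        return graph

    g = semantic_network("Dialogue builds understanding. Understanding changes beliefs.")
    print(g.edges(data=True))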

I am interested in exploring both of these methods – ideally with the same people, in order to compare both construction models. Additionally, once epistemic networks are constructed, through either approach, you can evaluate and compare their change over time.

There are a number of other research questions I am interested in exploring, such as what network topology is conducive to “good” dialogue and what interactions and conditions lead to opinion change.


Multivariate Network Exploration and Presentation

In “Multivariate Network Exploration and Presentation,” authors Stef van den Elzen and Jarke J. van Wijk introduce an approach they call “Detail to Overview via Selections and Aggregations,” or DOSA. I was going to make fun of them for naming their approach after a delicious south Indian dish, but since they comment that their name “resonates with our aim to combine existing ingredients into a tasteful result,” I’ll have to just leave it there.

The DOSA approach – and now I am hungry – aims to allow a user to explore the complex interplay between network topology and node attributes. For example, in company email data, you may wish to simultaneously examine assortativity by gender and department over time. That is, you may need to consider both structure and multivariate data.

This is a non-trivial problem, and I particularly appreciated van den Elzen and van Wijk’s practical framing of why this is a problem:

“Multivariate networks are commonly visualized using node-link diagrams for structural analysis. However, node-link diagrams do not scale to large numbers of nodes and links and users regularly end up with hairball-like visualizations. The multivariate data associated with the nodes and links are encoded using visual variables like color, size, shape or small visualization glyphs. From the hairball-like visualizations no network exploration or analysis is possible and no insights are gained or even worse, false conclusions are drawn due to clutter and overdraw.”

YES. From my own experience, I can attest that this is a problem.

So what do we do about it?

The authors suggest a multi-pronged approach which allows non-expert users to select nodes and edges of interest, to simultaneously see a detailed view and an infographic-like overview, and to examine the aggregated attributes of a selection.
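
To illustrate what aggregating the attributes of a selection might look like in practice – this is my own toy sketch in Python/networkx, not the authors’ implementation – consider selecting nodes by attribute and summarizing the selection rather than drawing every node:

    # Illustrative only -- not the authors' DOSA implementation.
    import networkx as nx

    g = nx.Graph()
    g.add_node("alice", gender="f", dept="sales")
    g.add_node("bob", gender="m", dept="sales")
    g.add_node("carol", gender="f", dept="eng")
    g.add_edges_from([("alice", "bob"), ("bob", "carol")])

    # Select nodes by attribute, then summarize the selection instead of
    # drawing every node -- the "overview" half of detail-to-overview.
    selection = {n for n, d in g.nodes(data=True) if d["dept"] == "sales"}
    internal = [(u, v) for u, v in g.edges() if u in selection and v in selection]
    print(len(selection), "selected nodes,", len(internal), "internal edges")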

Overall, this approach looks really cool and very helpful. (The paper did win the “best paper” award at the IEEE Information Visualization 2014 Conference, so perhaps that shouldn’t be that surprising.) I was a little disappointed that I couldn’t find the GUI implementation of this approach online, though, which makes it a little hard to judge how useful the tool really is.

From their screenshots and online video, however, I find that while this is a really valiant effort to tackle a difficult problem, there is still more work to do in this area. The challenge with visualizing complex networks is indeed that they are complex, and while DOSA gives a user some control over how to filter and interact with this complexity, there is still a whole lot going on.

While I appreciate the inclusion of examples and use cases, I would have also liked to see a user design study evaluating how well their tool met their goal of providing a navigation and exploration tool for non-experts. I also think that the issues of scalability with respect to attributes and selection that they raise in the limitations section are important topics which, while reasonably beyond the scope of this paper, ought to be tackled in future work.


Facts, Power, and the Bias of AI

I spent last Friday and Saturday at the 7th Annual Text as Data conference, which draws together scholars from many different universities and disciplines to discuss developments in text as data research. This year’s conference, hosted by Northeastern, featured a number of great papers and discussions.

I was particularly struck by a comment from Joanna J. Bryson as she presented her work with Aylin Caliskan-Islam and Arvind Narayanan on A Story of Discrimination and Unfairness: Using the Implicit Bias Task to Assess Cultural Bias Embedded in Language Models:

There is no neutral knowledge.

This argument becomes especially salient in the context of artificial intelligence: we tend to think of algorithms as neutral, fact-based processes which are free from the biases we experience as humans. But such a simplification is deeply faulty. As Bryson argued, AI won’t be neutral if it’s based on human culture; there is no neutral knowledge.
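
To give a concrete flavor of the kind of embedding-association test the paper describes, here is a cartoon version, with tiny made-up vectors standing in for real word embeddings:

    # Cartoon embedding-association test: does a target word sit closer to one
    # attribute set than another? The toy vectors here are made up; the actual
    # work uses real word embeddings trained on human-generated text.
    import numpy as np

    def cosine(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

    def association(word_vec, attrs_a, attrs_b):
        """Mean similarity to set A minus mean similarity to set B."""
        return (np.mean([cosine(word_vec, a) for a in attrs_a])
                - np.mean([cosine(word_vec, b) for b in attrs_b]))

    pleasant = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]
    unpleasant = [np.array([0.1, 1.0]), np.array([0.2, 0.9])]
    flower = np.array([0.8, 0.3])  # toy vector for a "flower"-type word

    print(association(flower, pleasant, unpleasant))  # positive -> leans "pleasant"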

This argument resonates quite deeply with me, but I find it particularly interesting through the lens of an increasingly relativistic world: as facts increasingly come to be seen as matters of opinion.

To complicate matters, there is no clear normative judgment that can be applied to such relativism: on the one hand this allows for embracing diverse perspectives, which is necessary for a flourishing, pluralistic world. On the other hand, nearly a quarter of high school government teachers in the U.S. report that parents or others would object if they discussed politics in a government classroom.

Discussing “current events” in a neutral manner is becoming increasingly challenging if not impossible.

This comment also reminds me of the work of urban planner Bent Flyvbjerg who turns an old axiom on its head to argue that “power is knowledge.” Flyvbjerg’s concern doesn’t require a complete collapse into relativism, but rather argues that “power procures the knowledge which supports its purposes, while it ignores or suppresses that knowledge which does not serve it.” Power, thus, selects what defines knowledge and ultimately shapes our understanding of reality.

In his work with rural coal miners, John Gaventa further showed how such power dynamics can become deeply entrenched, so the “powerless” don’t even realize the extent to which their reality is dictated by those with power.

It is these elements which make Bryson’s comments so critical; it is not just that there is no neutral knowledge, but that “knowledge” is fundamentally controlled and defined by those in power. Thus it is imperative that any algorithm take these biases into account – because they are not just the biases of culture, but rather the biases of power.


Reflections from the Trenches and the Stacks

In my Network Visualization class, we’ve been talking a lot about methodologies for design research studies. On that topic, I recently read an interesting article by Michael Sedlmair, Miriah Meyer, and Tamara Munzner: Design Study Methodology: Reflections from the Trenches and the Stacks. After conducting a literature review to determine best practices, the authors realized that there were no best practices – at least none organized in a coherent, practical-to-follow way.

Thus, the authors aim to develop “holistic methodological approaches for conducting design studies,” drawn from their combined experiences as researchers as well as from their review of the literature in this field. They define the scope of their work very clearly: they aim to develop a practical guide to determine methodological approaches in “problem-driven research,” that is, research where “the goal is to work with real users to solve their real-world problems.”

Their first step in doing so is to define a 2-dimensional space in which any proposed research task can be placed. One axis looks at task clarity (from fuzzy to crisp) and the other looks at information location (from head to computer). These strike me as helpful axes for positioning a study and for thinking about what kinds of methodologies are appropriate. If your task is very fuzzy, for example, you may want to start with a study that clarifies the specific tasks which need to be examined. If your task is very crisp and can be articulated computationally…perhaps you don’t need a visualization study but can rather do everything algorithmically.

From my own experience of user studies in a marketing context, I found these axes a very helpful framework for thinking about specific needs and outcomes – and therefore appropriate methodologies – of a research study.

The authors then go into their nine-stage framework for practical guidance in conducting design studies and their 32 identified pitfalls which can occur throughout the framework.

The report can be distilled more briefly into five steps a researcher should go through in designing, implementing, and sharing a study. These five steps should feed into each other and are not necessarily neatly chronological:

  1. Before designing a study, think carefully about what you hope to accomplish and what approach you need. (The clarity/information-location axes are a tool for doing this.)
  2. Think about what data you have and who needs to be part of the conversation.
  3. Design and implement the study.
  4. Reflect on and share your results.
  5. Throughout the process, be sure to think carefully about goals, timelines, and roles.

Their paper, of course, goes into much greater detail about each of these five steps. But overall, I find this a helpful heuristic in thinking about the steps one should go through.


Text As Data Conference

At the end of this week, Northeastern will host the seventh annual research conference on “New Directions in Analyzing Text as Data.”

I’m very excited for this conference, which brings together scholars from many different universities and disciplines to discuss developments in text as data research. This year’s conference is co-hosted by David Smith and my advisor Nick Beauchamp, and I’ve been busily working on getting everything in order for it.

Here is the description from the conference website:

The main purpose of this conference is to bring together researchers from the social sciences, computer science and linguistics to investigate new approaches to utilizing text in social science research. Text has always been a valuable resource for research, and recent developments in automatic language-processing methodologies from the fields of information retrieval, natural language processing, and machine learning are creating unprecedented opportunities for searching, categorizing, and extracting social science information from text.

Previous conferences took place at Harvard University, Northwestern University, the London School of Economics, and New York University. Selection of participants and papers for the conferences is the responsibility of a team led by Nick Beauchamp (Northeastern) and David Smith (Northeastern), along with Ken Benoit (LSE), Yejin Choi (University of Washington), and Arthur Spirling (NYU).


Politics as Associated Living

When I consider ‘politics’ as a field, I’m generally referring to something much broader than simply electoral politics.

‘Electoral politics’ is a relatively narrow field, concerned with the intricacies of voting and otherwise selecting elected officials. Politics is much broader.

John Dewey argued that ‘democracy’ is not simply a form of government but rather, more broadly, a way of living. Similarly, I take ‘politics’ to mean not merely electoral details but rather the art of associated living.

The members of any society face a collective challenge: we have divergent and conflicting needs and interests, but we must find ways of living together. The ‘must’ in that imperative is perhaps a little strong: without political life to moderate our interactions we would no doubt settle into some sort of equilibrium, but I suspect that equilibrium would be deeply unjust and unpredictable.

The greatest detractors of human nature imagine a world without politics, a world without laws, to be a desolate dystopia, where people maim and murder because they can get away with it or simply because that’s what is needed to survive.

But even without such horrific visions of lawlessness, I imagine a world without thoughtful, associated living to be, at best, distasteful. It would be a society where people yell past each other, consistently put their own interests first, and deeply deride anyone with different needs or perspectives.

Unfortunately, this description of such a mad society may ring a little too true. It certainly sounds like at least one society with which I am familiar.

And this emphasizes why I find it so important to consider politics broadly as associated living. In this U.S. presidential election, I’ve heard people ask again and again: are any of the candidates worthy role models? Before the second presidential debate Sunday night, the discomfort was palpable: how did our electoral politics become so distasteful?

Those are good and important questions. But I find myself more interested in the broader questions: are we good role models in the challenging task of associated living? Do we shut down and deride our opponents, or try, in some way, to understand? If understanding is impossible, do we at least try to find ways of living together?

In many ways, the poisonous tone of our national politics is not that surprising. It reflects, I believe, a general loss of political awareness, of civic life. Not that the “good old days” were ever really that good. Political life has always been a little rough-and-tumble, and goodness knows we have many, many dark spots in our past.

But we should still aspire to be better. To welcome the disagreements that are inherent in hearing diverse perspectives, and to try, as best we can, to engage thoughtfully in the political life that is associated living.
