Category Archives: Computer Science

Interactive Machine Learning

For one of my class projects, I’ve been reading a lot about interactive machine learning – an approach which Karl Sims describes as allowing “the user and computer to interactively work together in a new way to produce results that neither could easily produce alone.”

In some ways, this approach is intuitive. Michael Muller, for example, argues that any work with technology has an inherently social dimension. “Must we always analyze the impact of technology on people,” he asks, “or is there just as strong an impact of people on technology?” From this perspective, any machine learning approach which doesn’t account for both the user and the algorithm is incomplete.

Jerry Fails and Dan Olsen fully embrace this approach, proposing a paradigm shift in the fundamental way researchers approach machine learning tasks. While classic machine learning models “require the user to choose features and wait an extended amount of time for the algorithm to train,” Fails and Olsen propose an interactive machine learning approach which feeds a large number of features into a classifier, with human judgement continually correcting and refining the results. They find this approach removes the need to pre-select features, reduces the burden of technical knowledge on the user, and significantly speeds up training.
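
To make that concrete, here is a minimal sketch of the kind of train-correct-retrain loop they describe, assuming a scikit-learn-style classifier. The dataset, the underlying concept, and the simulated human corrections are all invented placeholders rather than anything from Fails and Olsen’s system.

```python
# A minimal sketch of an interactive machine learning loop in the spirit of
# Fails and Olsen: feed many raw features to a fast classifier and let a
# human repeatedly correct its output. Everything here is a toy stand-in.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))           # many features, no manual selection
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # stand-in for the "true" concept

labeled = list(range(10))                # start from a handful of labels
clf = RandomForestClassifier(n_estimators=50, random_state=0)

for round_ in range(5):
    clf.fit(X[labeled], y[labeled])
    preds = clf.predict(X)

    # The "human" scans the current output and corrects a few of the most
    # obvious mistakes; here that judgement is simulated by adding true
    # labels for a handful of misclassified examples.
    mistakes = np.flatnonzero(preds != y)
    labeled.extend(i for i in mistakes[:5] if i not in labeled)

    print(f"round {round_}: {len(labeled)} labels, "
          f"accuracy {(preds == y).mean():.2f}")
```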


A Lesson from the West Area Computers

I really want to read Hidden Figures, the new book by Margot Lee Shetterly which chronicles “the untold story of the Black women mathematicians who helped win the space race.” In case you aren’t as excited about this book as I am: it highlights the work and experiences of the West Area Computers – a group of black, female mathematicians who worked at NASA Langley from 1943 through 1958.

I haven’t gotten a chance to read it yet, but I was particularly struck by one incident I heard about on the podcast Science Friday and found recounted in Smithsonian Magazine:

But life at Langley wasn’t just the churn of greased gears. Not only were the women rarely provided the same opportunities and titles as their male counterparts, but the West Computers lived with constant reminders that they were second-class citizens. In the book, Shetterly highlights one particular incident involving an offensive sign in the dining room bearing the designation: Colored Computers.

One particularly brazen computer, Miriam Mann, took responding to the affront on as her own personal vendetta. She plucked the sign from the table, tucking it away in her purse. When the sign returned, she removed it again. “That was incredible courage,” says Shetterly. “This was still a time when people are lynched, when you could be pulled off the bus for sitting in the wrong seat. [There were] very, very high stakes.”

But eventually Mann won. The sign disappeared.

I love this story.

Not because it has a hopeful message about how determination always wins – but because it serves as a reminder of the effort and risk people of color face every day just in interacting with their environment.

The West Computers were tremendously good at their jobs and were respected by their white, male colleagues. I imagine many of these colleagues considered themselves open-minded, even radical for the day, for valuing the talent of their black colleagues.

When I hear the story about how Mann removed the “Colored Computers” sign every day, I don’t just hear a story of the valiant strength of one woman.

I hear a story of white silence.

I hear a story about how other people didn’t complain about the sign. I imagine they barely even noticed the sign. It didn’t affect them and never weighed upon their world.

John Glenn reportedly refused to fly unless West Area Computer Katherine Johnson verified the calculations first – such respect he had for her work.

And yet it never crossed anyone’s mind that a “Colored Computers” sign might not be appropriate.

That’s just the way the world was then.

And that makes me wonder – what don’t I see?

To me, this story is a reminder that people of color experience the world differently than I do – because people like me constructed the world I experience. There must be so many things every day that just slip past my notice, no matter how open-minded or progressive I’d like to be.

It’s easy to look back at the 1940s and see that a “Colored” sign is racist. What’s hard is to look at the world today and see that sign’s modern-day equivalent.


Multivariate Network Exploration and Presentation

In “Multivariate Network Exploration and Presentation,” authors Stef van den Elzen and Jarke J. van Wijk introduce an approach they call “Detail to Overview via Selections and Aggregations,” or DOSA. I was going to make fun of them for naming their approach after a delicious south Indian dish, but since they comment that their name “resonates with our aim to combine existing ingredients into a tasteful result,” I’ll have to just leave it there.

The DOSA approach – and now I am hungry – aims to allow a user to explore the complex interplay between network topology and node attributes. For example, in company email data, you may wish to simultaneously examine assortativity by gender and department over time. That is, you may need to consider both structure and multivariate data.

This is a non-trivial problem, and I particularly appreciated van den Elzen and van Wijk’s practical framing of why this is a problem:

“Multivariate networks are commonly visualized using node-link diagrams for structural analysis. However, node-link diagrams do not scale to large numbers of nodes and links and users regularly end up with hairball-like visualizations. The multivariate data associated with the nodes and links are encoded using visual variables like color, size, shape or small visualization glyphs. From the hairball-like visualizations no network exploration or analysis is possible and no insights are gained or even worse, false conclusions are drawn due to clutter and overdraw.”

YES. From my own experience, I can attest that this is a problem.

So what do we do about it?

The authors suggest a multi-pronged approach which allows non-expert users to select nodes and edges of interest, simultaneously see a detail and infographic-like overview, and to examine the aggregated attributes of a selection.
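
Their tool is a full GUI, but the core selection-and-aggregation idea can be sketched roughly in code. The snippet below is my own toy approximation, not the authors’ implementation; the network and the “dept” attribute are made up.

```python
# A rough sketch of the selection-and-aggregation idea behind DOSA (not the
# authors' code): group nodes by an attribute, then collapse each group into
# an aggregate node carrying member counts and inter-group edge weights.
import networkx as nx

G = nx.karate_club_graph()  # stand-in for an email network
for n in G.nodes:
    G.nodes[n]["dept"] = "sales" if n % 2 == 0 else "engineering"

def aggregate_by(G, attr):
    """Collapse a selection: one aggregate node per attribute value."""
    H = nx.Graph()
    groups = {}
    for n, data in G.nodes(data=True):
        groups.setdefault(data[attr], []).append(n)
    for value, members in groups.items():
        H.add_node(value, size=len(members))
    for u, v in G.edges:
        gu, gv = G.nodes[u][attr], G.nodes[v][attr]
        if gu != gv:  # this sketch keeps only edges that cross groups
            w = H.get_edge_data(gu, gv, default={}).get("weight", 0)
            H.add_edge(gu, gv, weight=w + 1)
    return H

overview = aggregate_by(G, "dept")
print(dict(overview.nodes(data=True)))
print(list(overview.edges(data=True)))
```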

Overall, this approach looks really cool and very helpful. (The paper did win the “best paper” award at the IEEE Information Visualization 2014 Conference, so perhaps that shouldn’t be that surprising.) I was a little disappointed that I couldn’t find the GUI implementation of this approach online, though, which makes it a little hard to judge how useful the tool really is.

From their screenshots and online video, however, I find that while this is a really valiant effort to tackle a difficult problem, there is still more work to do in this area. The challenge with visualizing complex networks is indeed that they are complex, and while DOSA gives a user some control over how to filter and interact with this complexity, there is still a whole lot going on.

While I appreciate the inclusion of examples and use cases, I would have also liked to see a user design study evaluating how well their tool met their goal of providing a navigation and exploration tool for non-experts. I also think that the issues of scalability with respect to attributes and selection that they raise in the limitations section are important topics which, while reasonably beyond the scope of this paper, ought to be tackled in future work.


Facts, Power, and the Bias of AI

I spent last Friday and Saturday at the 7th Annual Text as Data conference, which draws together scholars from many different universities and disciplines to discuss developments in text as data research. This year’s conference, hosted by Northeastern, featured a number of great papers and discussions.

I was particularly struck by a comment from Joanna J. Bryson as she presented her work with Aylin Caliskan-Islam and Arvind Narayanan on A Story of Discrimination and Unfairness: Using the Implicit Bias Task to Assess Cultural Bias Embedded in Language Models:

There is no neutral knowledge.

This argument becomes especially salient in the context of artificial intelligence: we tend to think of algorithms as neutral, fact-based processes which are free from the biases we experience as humans. But such a simplification is deeply faulty. As Bryson argued, AI won’t be neutral if it’s based on human culture; there is no neutral knowledge.
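
The measurement behind this claim can be sketched simply. The snippet below is a toy illustration in the spirit of their embedding-association approach, not their actual statistic; the word vectors are random placeholders purely to show the shape of the computation.

```python
# A minimal sketch of measuring association bias in word embeddings: compare
# how similar a target word is to one attribute set versus another. The toy
# vectors below are invented solely for illustration.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(word_vec, attrs_a, attrs_b):
    """Mean similarity to attribute set A minus mean similarity to set B."""
    return (np.mean([cosine(word_vec, v) for v in attrs_a])
            - np.mean([cosine(word_vec, v) for v in attrs_b]))

rng = np.random.default_rng(1)
emb = {w: rng.normal(size=50) for w in
       ["career", "family", "male", "female", "he", "she"]}

male_attrs = [emb["male"], emb["he"]]
female_attrs = [emb["female"], emb["she"]]

for target in ["career", "family"]:
    score = association(emb[target], male_attrs, female_attrs)
    print(f"{target}: association score {score:+.3f}")
```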

This argument resonates quite deeply with me, and I find it particularly interesting through the lens of an increasingly relativistic world – one in which facts come to be seen as matters of opinion.

To complicate matters, there is no clear normative judgment that can be applied to such relativism: on the one hand this allows for embracing diverse perspectives, which is necessary for a flourishing, pluralistic world. On the other hand, nearly a quarter of high school government teachers in the U.S. report that parents or others would object if they discussed politics in a government classroom.

Discussing “current events” in a neutral manner is becoming increasingly challenging if not impossible.

This comment also reminds me of the work of urban planner Bent Flyvbjerg, who turns an old axiom on its head to argue that “power is knowledge.” Flyvbjerg’s argument doesn’t require a complete collapse into relativism, but rather holds that “power procures the knowledge which supports its purposes, while it ignores or suppresses that knowledge which does not serve it.” Power, thus, selects what defines knowledge and ultimately shapes our understanding of reality.

In his work with rural coal miners, John Gaventa further showed how such power dynamics can become deeply entrenched, to the point where the “powerless” don’t even realize the extent to which their reality is dictated by those with power.

It is these elements which make Bryson’s comments so critical; it is not just that there is no neutral knowledge, but that “knowledge” is fundamentally controlled and defined by those in power. Thus it is imperative that any algorithm take these biases into account – because they are not just the biases of culture, but rather the biases of power.


Reflections from the Trenches and the Stacks

In my Network Visualization class, we’ve been talking a lot about methodologies for design research studies. On that topic, I recently read an interesting article by Michael Sedlmair, Miriah Meyer, and Tamara Munzner: Design Study Methodology: Reflections from the Trenches and the Stacks. After conducting a literature review to determine best practices, the authors realized that there were no best practices – at least none organized in a coherent, practical-to-follow way.

Thus, the authors aim to develop “holistic methodological approaches for conducting design studies,” drawn from their combined experiences as researchers as well as from their review of the literature in this field. They define the scope of their work very clearly: they aim to develop a practical guide to determine methodological approaches in “problem-driven research,” that is, research where “the goal is to work with real users to solve their real-world problems.”

Their first step in doing so is to define a 2-dimensional space in which any proposed research task can be placed. One axis looks at task clarity (from fuzzy to crisp) and the other looks at information location (from head to computer). These strike me as helpful axes for positioning a study and for thinking about what kinds of methodologies are appropriate. If your task is very fuzzy, for example, you may want to start with a study that clarifies the specific tasks which need to be examined. If your task is very crisp, and can be articulated computationally…perhaps you don’t need a visualization study but can rather do everything algorithmically.

From my own experience of user studies in a marketing context, I found these axes a very helpful framework for thinking about specific needs and outcomes – and therefore appropriate methodologies – of a research study.

The authors then go into their nine-stage framework for practical guidance in conducting design studies and their 32 identified pitfalls which can occur throughout the framework.

The report can be distilled into five broader steps a researcher should go through in designing, implementing, and sharing a study. These steps should feed into each other and are not necessarily neatly chronological:

  1. Before designing a study, think carefully about what you hope to accomplish and what approach you need. (The clarity/information location axes are a tool for doing this.)
  2. Think about what data you have and who needs to be part of the conversation.
  3. Design and implement the study.
  4. Reflect on and share your results.
  5. Throughout the process, be sure to think carefully about goals, timelines, and roles.

Their paper, of course, goes into much greater detail about each of these five steps. But overall, I find this a helpful heuristic in thinking about the steps one should go through.


Node Overlap Removal by Growing a Tree

I recently read Lev Nachmanson, Arlind Nocaj, Sergey Bereg, Leishi Zhang, and Alexander Holroyd’s article on “Node Overlap Removal by Growing a Tree,” which presents a really interesting method.

Using a minimum spanning tree to deal with overlapping nodes seems like a really innovative technique. It made me wonder how the authors came up with this approach!

As outlined in the paper, the algorithm begins with a Delaunay triangulation on the node centers – more information on Delaunay triangulations here – but it’s essentially a maximal planar subdivision of the graph: e.g., you draw triangles connecting the centers of all the nodes.

From here, the algorithm finds the minimum spanning tree, where the cost of an edge is defined so that the greater the node overlap, the lower the cost. The minimum spanning tree, then, finds the maximal overlaps in the graph. The algorithm then “grows” the tree, increasing its cost by lengthening edges; starting at the root, the lengthening propagates outwards. The algorithm repeats until no overlaps exist on the edges of the triangulation.
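
A simplified sketch of that pipeline might look like the following. This is my own toy rendering of the idea, not the authors’ algorithm: node sizes are uniform, the cost function is simplified, and a single growth pass stands in for the paper’s iterative scheme.

```python
# Toy sketch: Delaunay-triangulate node centers, build a minimum spanning
# tree whose edge costs shrink as overlap grows, then lengthen tree edges
# outward from the root to separate overlapping nodes.
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
pos = rng.uniform(0, 10, size=(30, 2))   # node centers
radius = 1.0                             # uniform node radius for simplicity

# 1. Delaunay triangulation of the node centers.
tri = Delaunay(pos)
G = nx.Graph()
for simplex in tri.simplices:
    for i in range(3):
        u, v = int(simplex[i]), int(simplex[(i + 1) % 3])
        dist = np.linalg.norm(pos[u] - pos[v])
        overlap = max(0.0, 2 * radius - dist)
        # Greater overlap -> lower cost, so the MST follows the overlaps.
        G.add_edge(u, v, cost=dist - overlap)

# 2. Minimum spanning tree over those costs.
mst = nx.minimum_spanning_tree(G, weight="cost")

# 3. "Grow" the tree: push each child away from its parent until the pair
#    no longer overlaps (one pass here; the paper iterates until no
#    overlaps remain along the triangulation edges).
root = 0
for parent, child in nx.bfs_edges(mst, root):
    d = pos[child] - pos[parent]
    dist = np.linalg.norm(d)
    if 1e-9 < dist < 2 * radius:
        pos[child] = pos[parent] + d / dist * (2 * radius)

overlapping = sum(np.linalg.norm(pos[u] - pos[v]) < 2 * radius - 1e-9
                  for u, v in mst.edges)
print("overlapping tree edges after one growth pass:", overlapping)
```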

Impressively, this algorithm runs in O(|V|) time per iteration, making it fast as well as effective.


Representing the Structure of Data

To be perfectly honest, I had never thought much about graph layout algorithms. You hit a button in Gephi or call a networkx function, some magic happens, and you get a layout. If you don’t like the layout generated, you hit the button again or call a different function.

In one of my classes last year, we generated our own layouts using eigenvectors of the Laplacian. This gave me a better sense of what happens when you use a layout algorithm, but I still tended to think of it as a step which takes place at the end of an assignment; a presentation element which can make your research accessible and look splashy.
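
For reference, that exercise boils down to something like the sketch below, assuming networkx and numpy: the eigenvectors associated with the two smallest nonzero eigenvalues of the Laplacian become the x and y coordinates.

```python
# A minimal spectral layout: use eigenvectors of the graph Laplacian as
# node coordinates.
import numpy as np
import networkx as nx

G = nx.les_miserables_graph()
L = nx.laplacian_matrix(G).toarray().astype(float)

# eigh returns eigenvalues in ascending order; index 0 is the trivial
# constant eigenvector, so the coordinates come from indices 1 and 2.
eigvals, eigvecs = np.linalg.eigh(L)
coords = eigvecs[:, 1:3]

layout = {node: coords[i] for i, node in enumerate(G.nodes)}
print({n: layout[n].round(3) for n in list(G.nodes)[:5]})
```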

In my visualization class yesterday, we had a guest lecture by Daniel Weidele, PhD student at University of Konstanz and researcher at IBM Watson. He covered the fundamentals of select network layout algorithms but also spoke more broadly about the importance of layout. A network layout is more than a visualization of a collection of data; it is the final stage of a pipeline which attempts to represent some phenomenon. The whole modeling process abstracts a phenomenon into a concept, and then represents that concept as a network layout.

When you’re developing a network model for a phenomenon, you ask questions like “who is your audience? What are the questions we hope to answer?” Daniel pointed out that you should ask similar questions when evaluating a graph layout; the question isn’t just “does this look good?” You should ask: “Is this helpful? What does it tell me?”

If there are specific questions you are asking your model, you can use a graph layout to get at the answers. You may, for example, ask: “Can I predict partitioning?”

This is what makes modern algorithms such as stress optimization so powerful – it’s not just that they produce pretty pictures, or even that the layouts appropriately disambiguate nodes, but that they actually represent the structure of the data in a meaningful way.

In his work with IBM Watson, Weidele indicated that a fundamental piece of their algorithm design process is building algorithms based on human perception. For a test layout, try to understand what a human likes about it, try to understand what a human can infer from it – and then try to understand the properties and metrics which made that human’s interpretation possible.


Large Graph Layout Algorithms

Having previously tried to use force-directed layout algorithms on large networks, I was very intrigued by Stefan Hachul and Michael Junger’s article Large Graph-Layout Algorithms at Work: An Experimental Study. In my experience, trying to generate a layout for a large graph results in little more than a hairball and the sense that one really ought to focus on just a small subgraph.

With the recent development of increasingly sophisticated layout algorithms, Hachul and Junger compare the performance of several classical and more recent algorithms. Using a collection of graphs – some relatively easy to lay out and some more challenging – the authors compare runtime and aesthetic output.

All the algorithms strive for the same aesthetic properties: uniformity of edge length, few edge crossings, non-overlapping nodes and edges, and the display of symmetries – which makes aesthetic comparison measurable.

Most of the algorithms performed well on the easier layouts. The only one which didn’t was their benchmark Grid-Variant Algorithm (GVA), a spring-layout which divides the drawing area into a grid and only calculates the repulsive forces acting between nodes that are placed relatively near to each other.

For the harder graphs, they found that the Fast Multipole Multilevel Method (FM3) often produced the best layout, though it is slower than High-Dimensional Embedding (HDE) and the Algebraic Multigrid Method (ACE), which can both produce satisfactory results. Ultimately, Hachul and Junger recommend as practical advice: “first use HDE followed by ACE, since they are the fastest methods…if the drawings are not satisfactory or one supposes that important details of the graph’s structure are hidden, use FM3.”

What’s interesting about this finding is that HDE and ACE both rely solely on linear algebra rather than the physical analogies of force-directed layouts. FM3, on the other hand – notably developed by Hachul and Junger – is force-directed.

In ACE, the algorithm minimizes the quadratic form of the Laplacian, x^T L x, finding the eigenvectors of L that are associated with the two smallest nonzero eigenvalues. Using an algebraic multigrid algorithm to calculate the eigenvectors makes the algorithm among the fastest tested for smaller graphs.

By far the fastest algorithm was HDE, which takes a really interesting two-step approach: it first approximates a high-dimensional k-clustering solution, and then projects those clusters into 2D space by calculating the eigenvectors of the covariance matrix of the resulting embedding. The original paper describing the algorithm is here.
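
A rough sketch of that two-step idea, using graph distances to a handful of farthest-first pivots followed by a PCA projection, might look like this. It is my own simplification, not the original algorithm.

```python
# Toy high-dimensional-embedding (HDE-style) layout: embed each node by its
# graph distances to a few pivots, then project to 2D with PCA.
import numpy as np
import networkx as nx

def hde_layout(G, m=10):
    nodes = list(G.nodes)
    pivots = [nodes[0]]
    dist_rows = []
    for _ in range(m):
        d = nx.single_source_shortest_path_length(G, pivots[-1])
        dist_rows.append([d[n] for n in nodes])
        # Next pivot: the node currently farthest from all chosen pivots
        # (a simple farthest-first / k-centers heuristic).
        mins = np.min(np.array(dist_rows), axis=0)
        pivots.append(nodes[int(np.argmax(mins))])

    X = np.array(dist_rows, dtype=float).T  # n x m high-dimensional embedding
    X -= X.mean(axis=0)                     # center each pivot-distance column
    cov = X.T @ X / len(nodes)
    eigvals, eigvecs = np.linalg.eigh(cov)
    proj = X @ eigvecs[:, -2:]              # top two principal components
    return {n: proj[i] for i, n in enumerate(nodes)}

layout = hde_layout(nx.connected_watts_strogatz_graph(500, 6, 0.1, seed=3))
print(len(layout), "positions computed")
```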

Finally, the slower but more aesthetically reliable FM3 algorithm improves upon classic force-directed approaches by relying on an important assumption: in large graphs, you don’t necessarily have to see everything. In this algorithm, “subgraphs with a small diameter (called solar systems) are collapsed,” resulting in a final visualization which captures the structure of the large network with the visual ease of a smaller network.


The Effects of Interactive Latency on Exploratory Visual Analysis

In their paper, Zhicheng Liu and Jeffrey Heer explore “The Effects of Interactive Latency on Exploratory Visual Analysis” – that is, how user behavior changes with system response time. As the authors point out, while it seems intuitively ideal to minimize latency, effects vary by domain.

In strategy games, “latency as high as several seconds does not significantly affect user performance,” most likely because tasks which “take place at a larger time scale,” such as “understanding game situation and conceiving strategy” play a more important role in affecting the outcome of a game. In a puzzle game, imposed latency caused players to solve the puzzle in fewer moves – spending more time mentally planning their moves.

These examples illustrate perhaps the most interesting aspect of latency: while it’s often true that time delays will make users bored or frustrated, that is not the only dimension of effect. Latency can alter the way a user thinks about a problem, consciously or unconsciously shifting strategies toward whatever seems more time-effective.

Liu and Heer focus on latency affecting “knowledge discovery with visualizations,” a largely unexplored area. One thing which makes this domain unique is that “unlike problem-solving tasks or most computer games, exploratory visual analysis is open-ended and does not have a clear goal state.”

The authors design an experimental setup in which participants are asked to explore two different datasets and “report anything they found interesting, including salient patterns in the visualizations, their interpretations, and any hypotheses based on those patterns.” Each participant experienced an additional 500ms of latency in one of the datasets. The authors recorded participant mouse clicks, as well as 9 additional “application events,” such as zoom and color slider adjustments, which capture user interaction with the visualization.
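
As a hypothetical illustration of that manipulation (not Liu and Heer’s actual system), one could wrap each interaction handler so that it logs the event and, in the delayed condition, adds the extra 500 ms before the visualization responds:

```python
# Hypothetical sketch: instrument interaction handlers to log events and
# inject an extra 500 ms of latency in the delayed condition.
import time
import functools

EXTRA_LATENCY_S = 0.5   # the added 500 ms condition
event_log = []          # (event name, timestamp) pairs

def instrument(event_name, delayed=False):
    """Wrap a handler so it is logged and, if delayed, slowed down."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(*args, **kwargs):
            event_log.append((event_name, time.time()))
            if delayed:
                time.sleep(EXTRA_LATENCY_S)
            return handler(*args, **kwargs)
        return wrapper
    return decorator

@instrument("zoom", delayed=True)
def on_zoom(level):
    return f"rendered at zoom level {level}"

@instrument("color_slider", delayed=True)
def on_color_slider(value):
    return f"recolored with threshold {value}"

print(on_zoom(2), "|", on_color_slider(0.7), "|",
      len(event_log), "events logged")
```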

The authors also used a “think aloud protocol” to capture participant findings. As the name implies, a think-aloud methodology asks users to continually describe what they are thinking as they work. A helpful summary of the benefits and downsides of this methodology can be found here.

Liu and Heer find that latency does have significant effects: latency decreased user activity and coverage of the dataset, while also “reducing rates of observation, generalization and hypothesis.” Additionally, users who experienced the latency earlier in the study had “reduced rates of observation and generalization during subsequent analysis sessions in which full system performance was restored.”

This second finding lines up with earlier research which found that a delay of 300ms in web searches reduced the number of searches a user would perform – a reduction which would persist for days after latency was restored to previous levels.

Ultimately, the authors recommend “taking a user-centric approach to system optimization” rather than “uniformly focusing on reducing latency” for each individual visual operation.


Analytic Visualization and the Pictures in Our Head

In 1922, American journalist and political philosopher Walter Lippmann wrote about the “pictures in our head,” arguing that we conceptualize distant lands and experiences beyond our own through a mental image we create. He coined the word “stereotypes” to describe these mental pictures, launching a field of political science focused on how people form, maintain, and change judgements.

While visual analytics is far from the study of stereotypes, in some ways it relies on the same phenomenon. As described in Illuminating the Path, edited by James J. Thomas and Kristin A. Cook, there is an “innate connection among vision, visualization, and our reasoning processes.” Therefore, they argue, the full exercise of reason requires “visual metaphors” which “create visual representations that instantly convey the important content of information.”

F. J. Anscombe’s 1973 article Graphs in Statistical Analysis makes a similar argument. While we are often taught that “performing intricate calculations is virtuous, whereas actually looking at the data is cheating,” Anscombe elegantly illustrates the importance of visual representation through his now-famous Anscombe’s Quartet. These four data sets all have the same statistical measures when considered as a linear regression, but the visual plots quickly illustrate their differences. In some ways, Anscombe’s argument perfectly reinforces Lippmann’s argument from five decades before: it’s not precisely problematic to have a mental image of something; but problems arise when the “picture in your head” does not match the picture in reality.

As Anscombe argues, “in practice, we do not know that the theoretical description [linear regression] is correct, we should generally suspect that it is not, and we cannot therefore heave a sigh of relief when the regression calculation has been made, knowing that statistical justice has been done.”

Running a linear regression is not enough. The results of a linear regression are only meaningful if the data actually fit a linear model. The best and fastest way to check this is to actually observe the data; to visualize it to see if it fits the “picture in your head” of linear regression.
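
Anscombe’s point is easy to reproduce: using the quartet’s published values, a few lines of numpy confirm that the summary statistics and fitted lines agree almost exactly, even though the scatter plots look nothing alike.

```python
# Anscombe's quartet, computed directly: four datasets whose means,
# variances, correlations, and regression lines match to two decimals.
import numpy as np

x123 = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x123, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x123, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x123, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

for name, (x, y) in quartet.items():
    x, y = np.array(x), np.array(y)
    slope, intercept = np.polyfit(x, y, 1)
    print(f"{name}: mean_y={y.mean():.2f} var_y={y.var(ddof=1):.2f} "
          f"r={np.corrcoef(x, y)[0, 1]:.2f} "
          f"fit: y = {intercept:.2f} + {slope:.2f}x")
```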

While Anscombe had to argue for the value of visualizing data in 1973, the practice has now become a robust and growing field. With the rise of data journalism, numerous academic conferences, and a growing focus on visualization as storytelling, even a quiet year for visualization – such as 2014 – was not a “bad year for information visualization” according to Robert Kosara, Senior Research Scientist at Tableau Software.

And Kosara finds even more hope for the future. With emerging technologies and a renewed academic focus on developing theory, Kosara writes, “I think 2015 and beyond will be even better.”
