While I’m relatively new to the computer science domain, one thing that’s notable is the field’s obsession with predictive accuracy. Particularly within natural language processing, the primary objective of most scholars – or, perhaps, more exactly, the requirement for being published – seems to be producing methods that edge past the accuracy of existing approaches.
I’m not really in a position to comment on the benefit of such a driver, but as an outsider, this focus is striking. I have to imagine there are good historical reasons why the field evolved this way; that the mentality of constantly pushing towards incremental improvement has been an important factor in the great breakthroughs of computer science.
Yet I can’t help feeling that, in this quest for computational improvement, something important is being left behind.
There are compelling arguments that the social sciences erred in abandoning their humanistic roots in favor of emulating the fashionable fields of science; that in grasping for predictive measures, social science has failed its duty towards the most critical concerns of what is right and good. Perhaps, after all, questions of such import should not be solely the domain of philosophy departments.
It seems a similar objection could be raised towards computer science; and no doubt someone I’m not aware of has raised these concerns. Such an approach would go beyond the philosophical literature on moral issues in computer science, probing more deeply into questions of meaning, interpretation, and structure.
Wittgenstein questioned fundamentally what it means for two people to communicate. Austin argues that words themselves can be actions. And there is, of course, a long tradition in many cultures of words having power.
These topics, while intrinsic to natural language, seem scarcely embraced by current approaches to natural language processing. Much better to show a two-point increase in predictive accuracy.
And to a certain extent, this dismissal is fair. While I myself have a fondness for Wittgenstein, I imagine computer science wouldn’t advance far if, instead of developing algorithms, practitioners spent all their time wondering – if you tell me you are in pain, do I understand you because I, too, have had my own experiences of pain? How can I know what ‘pain’ means to you?
Yet, while Wittgenstein’s Philosophical Investigations may be too far afield, it does highlight some practical issues. Perhaps metaphysical concerns about what it means to communicate can be safely disregarded, but this still leaves questions about what it looks like to communicate. That is, it seems reasonable to assume that miscommunication does happen, but what happens to dialogue plagued by such problems? What does it look like when people talk past each other or when they recognize a miscommunication and take steps to resolve it? Can an algorithm distinguish and properly parse these differences? Remembering, of course, that a human, perhaps, cannot.
In a recent review of literature around the natural language processing task of argument mining, I was struck by the value of a 1987 paper focused on understanding the structure of a single speech act. It invoked nothing like Wittgenstein’s level of abstraction, and yet it brought an important element of theory to the computational task of parsing a single argument.
I couldn’t find – and perhaps I missed it – any similar paper exploring the complex interactions of dialogue. Of course, much work has been done in this area by deliberation scholars – but that effort is not easily translated into the mechanized logic of algorithms.
In short, there seems to be a divide – a common one, I’m afraid, in the academy. In one field, theorists ask, what does it mean to deliberate? What makes good deliberation? And in another they ask, what algorithms can recognize arguments? What algorithms accurately predict stance?
And, while both pursue important work, the fields fail to learn from each other.