Mimicking Deliberation

In 1950, pioneering computer scientist Alan Turing described an “imitation game” which has since come to be known as the Turing Test. The test is a game played between three agents: two humans and a computer. Human 1 asks a series of questions; human 2 and the computer respond.

The game: human 1 seeks to correctly identify the human respondent while human 2 and the computer both try to be identified as human.
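To make the setup concrete, here is a minimal sketch of the game's structure in Python. The interface is entirely my own illustration – the ask, answer, and identify_human methods, the anonymous labels, the number of questions – none of it is anything Turing specified:

```python
import random

def imitation_game(interrogator, human, machine, num_questions=5):
    """One round of Turing's imitation game.

    The interrogator questions two hidden respondents over a text
    channel and must decide which one is the human. The agent
    interface here (ask/answer/receive/identify_human) is hypothetical.
    """
    # Hide the respondents behind anonymous labels, in random order.
    respondents = {"A": human, "B": machine}
    if random.random() < 0.5:
        respondents = {"A": machine, "B": human}

    for _ in range(num_questions):
        question = interrogator.ask()
        for label, respondent in respondents.items():
            answer = respondent.answer(question)
            interrogator.receive(label, answer)

    # The machine "wins" if the interrogator picks it as the human.
    guess = interrogator.identify_human()  # returns "A" or "B"
    return respondents[guess] is machine
```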

Turing describes this test in order to answer the question: can machines think?

The game, he argues, can empirically replace the philosophical question. A computer which could regularly be identified as human based on its command of language would indeed “think” in all practical meanings of the word.

Turing goes on to address the many philosophical, theological, and mathematical objections to his argument – but that is beyond the scope of what I want to write about today.

Regardless of what the test indicates about sentience, it quickly became a sort of gold standard in natural language processing – could we, in fact, build a computer clever enough to win this game?

Winning the game, of course, requires a detailed and nuanced grasp of language. In what order should words properly appear? What elements of a question ought a respondent repeat? How do you introduce new topics or casually refer back to past ones? How do you interact naturally, gracefully engaging with your interlocutor?

Let’s not pretend that I’ve fully mastered such social skills.

In this way, designing a Turing-successful machine can be seen as a mirror of ideal speaking. The winner of the Turing game, human or machine, will ultimately be the player who responds most properly – accepting a nuanced definition of “proper” which incorporates human imperfection.

This makes me wonder – what would a Turing Test look like specifically in the context of political deliberation? That is, how would you program ideal dialogue?

Of course, the definition of ideal dialogue itself is much contested – should each speaker have an exactly measured amount of time? Should turn-taking be intentionally delineated or occur naturally? Must a group come to consensus and make a collective decision? Must there be an absence of conflict or is disagreement a positive signal that differing views are being justly considered?
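Here is exactly where programming sharpens the question: any implementation would have to turn each of those contested questions into an explicit, inspectable parameter. A rough sketch of what such a configuration might look like – the field names and defaults are my own invention, not drawn from the deliberation literature:

```python
from dataclasses import dataclass

@dataclass
class DeliberationNorms:
    """Hypothetical knobs an 'ideal dialogue' program would have to set.

    Each field corresponds to one of the contested questions above;
    writing them down forces the contested choices into the open.
    """
    equal_speaking_time: bool = True       # exactly measured turns, or not?
    structured_turn_taking: bool = False   # delineated vs. naturally occurring turns
    require_consensus: bool = False        # must the group reach a collective decision?
    disagreement_as_signal: bool = True    # conflict as evidence views are being heard

# Any concrete instance is already a stance on what "ideal" means.
norms = DeliberationNorms(require_consensus=True)
```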

These questions are richly considered in the deliberation literature, but they take on a different aspect somehow in the context of the Turing Test.

Part of what makes deliberative norms so tricky is that people are, indeed, so different. A positive, safe, productive environment for one person may make another feel silenced. There are intersecting layers of power and privilege which are impossible to disentangle.

But programming a computer to deliberate is different. A machine enters a dialogue naively – it has no history, no sense of power nor experience of oppression. It is the perfect blank slate upon which an idealized dialogue model could be placed.

This question is important because attempts to conceive of ideal dialogue run the risk of making a dangerous misstep. In the days when educated white men were the only ones allowed to participate in political dialogue, ideal dialogue was easier. People may have held different views, but they came to the conversation with generally equal levels of power and with similar experiences.

In trying to broaden the definition of ideal dialogue to incorporate the experiences of others who do not fit that mold, we run the risk of considering this “other” as a problematizing force. If we could just make women more like men; if we could make people of color “act white,” then the challenges of diverse deliberation would disappear.

No one would intentionally articulate this view, of course, but there’s a certain subversive stickiness to it which has a way of creeping into certain models of dialogue. A quiet, underlying assumption that “white” is the norm and all else must change to accommodate that.

Setting out to program a computer changes all that. It’s a dramatic shift of context which upends all those norms.

Frankly, I hardly know what an ideal dialogue machine might look like – but it seems a question worth considering.
