Category Archives: Unpopular Opinions

Reclaiming ‘Crazy’

The word ‘crazy’ has the remarkable power to instantly render invalid whatever person, perspective, or practice it is applied to.

It suggests behavior that is illogical or irrational; that is so unpredictable as to defy the bounds of ‘normal’ human reason. It therefore invalidates through implicit othering — crazy people cannot be reasoned with, their behaviors can be neither interpreted nor explained, their beliefs carry little more meaning than noise.

Perhaps this is why ‘crazy’ is typically used as a pejorative.

Yet, the beliefs and behaviors that are deemed to be ‘crazy’ change over time. They are continually interpreted and reinterpreted to fit the narratives of the day. Madness, in other words, is a social construct.

Foucault documents this in detail, pointing to stories of the mad, insane, and crazy that seem absurd to our modern sensibilities. Scientifically-defended theories of hard bile and hot blood, concerns over contagious epidemics of women’s ‘hysteria,’ illness interpreted as a failure of morality.

Again and again in the West, cognition and behavior have been interpreted through a narrow normative lens: anyone who thinks or acts outside this framework is taken to be crazy.

‘Crazy’ then, is perhaps better understood not as a property of a person, but as a property of society. To call something crazy is to place it outside the bounds of standard social norms, to say that it is too far out there to be reasoned with rationally. It is the intellectual equivalent of throwing up your hands and declaring there is nothing to be done — a reasonable person simply cannot engage with crazy.

Yet, its very nature as a social construct raises the question: who determines what is crazy? Creative works are full of stories in which those deemed mad are perhaps the only reasonable ones. The French film King of Hearts, for example, contrasts the world created by asylum inmates with the brutal and senseless killing of World War I.

I find myself particularly drawn to the word ‘crazy’ because it is inexplicably gendered. It’s not quite as causal as the relationship between old and spry — but women are much more likely to be described as ‘crazy’ and the word has a long history of being used to discredit women and their experiences.

Given my description of ‘crazy’ above, this makes sense — if you can’t reason with someone who is crazy, if you can’t meaningfully interpret their words or actions, then you are free to dismiss their claims. There is simply nothing to be done. In this sense, the epithet intrinsically provides authority to the person using the word while diminishing the power of the person it’s applied to. It’s actually quite a brilliant tactical maneuver.

For this reason, many people prefer to avoid the word ‘crazy.’ There are other good reasons to avoid it, too — as you may have already inferred from the shaky language of this piece, ‘crazy’ has a deeply problematic tendency to casually lump together several different concepts. It dismisses mental health challenges, disparages neurodiversity, and glibly ostracizes any deviance from the supposed norm.

Yet — as someone who is ‘crazy’ along multiple of these dimensions — I find the word can give me power, too.

I wrote above that ‘crazy’ locates a person outside the bounds of the ‘norm.’ I think that’s true, but — I don’t find that the word itself places a normative judgement on that positioning. That is, we interpret ‘crazy’ to be bad because we implicitly assume that being outside the norm is bad. We accept that crazy people cannot be reasoned with because we implicitly assume that people who are outside the norm cannot be reasoned with. We feel embarrassed or ashamed when labeled as ‘crazy’ because we implicitly assume that falling within the norm is good.

I reject those claims.

For one thing, I don’t really believe in ‘normal.’ We are all crazy. But more deeply — what we generally take to be ‘normal’ only refers to an idealistic conception of a small slice of humanity. Why should any of us fall over ourselves trying to fit into a norm that doesn’t exist?

I refuse to feel shame for who I am.

In that sense, I find being labeled crazy to be quite freeing, actually. Oh, you thought you could diminish me by saying that I exist outside the norm? Oh, no no no, my friend – this is where I thrive.

Being crazy means being free to discover and create yourself, it means not worrying about conforming to the norm, and it means not letting anyone dictate your truth for you.

To be clear, there are still plenty of other things to worry about. I hardly mean to suggest that nothing is true and everything is permitted. Rather, the types of things one ought to worry about — being good, compassionate, respectful — are very different from trying to be ‘normal’ or trying to fit someone else’s mold of who you should be.

And that, perhaps, is the best thing about accepting the mantle of crazy: it gives other people permission to be crazy, too. When we shy away from talking about mental health, when we assume a neurotypical view, when we accept ‘crazy’ as a personal fault, we implicitly reinforce the idea that these are somehow shameful or wrong.

Embracing and even showcasing those pieces of ourselves not only can be personally fulfilling, it implicitly sends the message: None of us should have to hide who we are.

So that is why I frequently choose to refer to myself as ‘crazy,’ why I tend to talk about my thoughts, actions, choices, and diagnoses with such levity. I cannot hide who I am, and more than that — I don’t want anyone else to do so either.

So, though it may defy all norms and reason, I will continue to describe myself with that word. I will continue to think my crazy thoughts, act on my crazy impulses, and aim to be the best person I can be with no regrets for the fact that that person will never be ‘normal.’ And I will do my best to create spaces where others feel they can genuinely do the same. I feel no shame or hesitation in this commitment; it is simply who I am: a total crazy person.

Me.


Polls and Partisanship

A recent poll found that 37% of evangelicals are more likely to vote for GOP Senate candidate Roy Moore following allegations of sexual misconduct against him.

That makes for a nice clickbait headline.

It sounds appalling, and it is appalling, though perhaps not for the reasons one might think.

First, some details on the poll itself: it was fielded by JMC Analytics – a firm, for what it’s worth, given a ‘C’ rating by FiveThirtyEight. It was a landline poll, with a 4.1% margin of error.

The question alluded to in the lede read: “Given the allegations that have come out about Roy Moore’s alleged sexual misconduct against four underage women, are you more or less likely to support him as a result of these allegations?”

Among all respondents, 29% responded that they were more likely to vote for Moore, a number which rises to 37% when considering the responses of self-identified evangelicals. (Incidentally, 28% of evangelicals said the allegations made them less likely to vote for Moore.)

It’s further notable that there is no gender variation in response to this question. 28% of men and 30% of women report being more likely to vote for Moore, 39% of men and 37% of women report being less likely, and 33% of men and 34% of women say it makes no difference.
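As a rough sanity check on that claim: with the reported 4.1% margin of error, each gender subgroup’s own sampling error is even larger, so those one- and two-point gaps are statistical noise. Here is a minimal back-of-the-envelope sketch in Python, with the sample size inferred from the reported margin via the standard proportion formula rather than taken from JMC’s documentation:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p on a sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Infer the overall sample size implied by the reported 4.1% margin,
# assuming the worst case p = 0.5. This is an inference, not JMC's figure.
n = round((1.96 / 0.041) ** 2 * 0.25)  # roughly 571 respondents
print(f"implied sample size: {n}")

# Reported gender splits on the "more likely to vote for Moore" question,
# assuming roughly half the sample is of each gender.
men, women = 0.28, 0.30
print(f"men:   28% ± {margin_of_error(men, n // 2):.1%}")
print(f"women: 30% ± {margin_of_error(women, n // 2):.1%}")
# Each subgroup margin comes out around ±5%, more than twice the
# 2-point gap between men and women: no meaningful gender difference.
```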

The poll asks no questions about why respondents are more or less likely to vote for Moore, though JMC’s results summary gestures towards a possible explanation:

Those more likely to support Moore over the allegations favor him over Jones 84-13%. However, the numbers are just as polarized (81-9% for Jones) among those who say the incident makes them less likely to support Moore.

This poll result isn’t about religion, it’s about partisanship.

That’s not to say those who support Moore are just dirty partisans who need to get their priorities in order. Indeed, if I may venture a guess, I’d imagine that supporters who find themselves on the side of “more likely” interpret this whole thing as a partisan stunt meant to weaken the Republican party.

Importantly, such a view does not intrinsically require doubting victims’ legitimacy – indeed, it might better be interpreted as doubting our collective democratic legitimacy. It’s not a sign of a healthy democracy when people – of all parties – imagine our national politics to have the cloak and dagger character of House of Cards.

That makes me sad.

It makes me sad that we’re so caught up in the politics of partisanship that we can’t engage seriously with the real work of democracy; of working together to figure out how we all get by in this messy world.

Headlines and memes which indicate that Republicans or Evangelicals support child molestation do a disservice to democracy.

They make me tired. We have serious work to do.

And some of that serious work stems from the fact that there are terrible people in all parties. Seriously, there are terrible, abusive men everywhere. Everywhere. We can’t pretend that such abuse is relegated to one party, one state, or one denomination.

The first step, as they say, is admitting we have a problem.

With any hope, there is a great reckoning coming, as we finally start listening to women, and believing women, and building a non-patriarchal society where such terrible abuse isn’t built into the fabric.

But as part of that reckoning, we’ll need to figure out how to collectively respond when abuses by celebrities, politicians, and other men of power, come to light. Neither steadfast solidarity nor internet-mob panic seems the optimal way to go.

Personally, I’d like to see Alabama Republicans given the opportunity to replace Roy Moore on the ballot. Turns out he’s a terrible person. That happens sometimes. Reschedule the general if you need to. The system should support voter choice, not constrain it. I’d like to see a system which allowed voters to respond to this issue in a thoughtful, responsible way.

After all, while I wish this abuse were an isolated incident, if we’re being honest with ourselves, we’d know – this is going to happen again, and it could happen with a candidate from any party.


Freedom of Speech

While I was offline for most of the weekend, there was a bit of excitement generated by a 10-page, misogynist manifesto published internally by a male employee at a certain well-known tech company. The employee has since been fired.

I’ll admit that I haven’t read the entire controversial post. Quite frankly, I’m not sure I need to. I’ve heard it all before and I have better things to do with my time than read 10 pages of arguments for why I’m unfit to do the type of work that I do.

The short version of his argument is that women aren’t cut out for STEM fields, but the strategic message of his post is that we collectively shouldn’t silence “uncomfortable” arguments just because we happen to disagree with them. A virtuous society should welcome dissenting opinions whether they are distasteful or not.

I have no interest in engaging with the misogynist message of his post. I flatly reject his arguments, and others – such as in this post by Yonatan Zunger – have already offered detailed refutations of his points.

But I study citizens and civil society; I am interested in the ways we work together or don’t work together to co-create the world around us. So I am much more interested in the broader questions: in a society (or company) with many different people with many different views, what is the role of dissent? To what extent must speech be safeguarded? What social or institutional responses are appropriate regulators of speech, if any?

These are all important questions with non-obvious answers. I certainly don’t have any simple answers today.

I am inclined to agree with J. L. Austin, though, that words can be actions. Performative speech acts or rhetic acts are not mere sounds or words without meaning: they have real impact. As Austin writes:

Saying something will often, or even normally, produce certain consequential effects upon the feelings, thoughts, or actions of the audience, or of the speaker, or of other persons: and it may be done with the design, intention, or purpose of producing them…

Words have consequences, and thus we must take them seriously.

Words can do real harm.

So I think it’s unfair to say that anyone who feels harmed by another’s words should simply toughen up; they are not just words.

But the power of words to do harm also emphasizes why their freedom is so essential: words and ideas can be dangerous to corrupt, authoritarian regimes. Words have real power for harm and for good and their silencing should not be taken lightly.

But here’s the thing that’s struck me about this particular case – the details of which are obscured and a bit fuzzy:

Was this employee a good worker and teammate who got along just fine until one day he unleashed 10 pages of thought he knew his colleagues would hate?

That’s entirely possible, but I imagine a somewhat different scenario.

In his post, Zunger expresses pure disdain for the views of the employee, writing “What I am is an engineer, and I was rather surprised that anyone has managed to make it this far without understanding some very basic points about what the job is.”

How did someone make it this far, indeed? I can’t help but wonder: was this really the first sign that the employee held so many of his colleagues in such low esteem? Was it the first indication that he had an entirely backwards view of what engineering really is?

I suspect not.

I have to admit I am disappointed, though not surprised, that he was so quickly fired. It just feels petty. It feels small.

It feels like the action to take to clean up a PR mess which comes at the same time your company is being investigated for systematically underpaying female employees.

And that’s the thing – words do matter. Pretending they don’t exist doesn’t wish the thoughts away. Sure, this one employee whose notable outburst went public can be swept under the rug and tidied up for a discerning public; but his words don’t go away. The culture that spawned those words, which allowed them to flourish, doesn’t change much as a result.

That’s not to say all vitriol should be labeled ‘free speech’ and allowed to run rampant; as noted, these words do harm and that harm should be taken seriously. But that’s why it’s important to have allies. Real allies, who will speak out when they hear something, who won’t laugh at bad jokes, who will pick up on the small things and provide constructive criticism.

We can’t pretend that a misogynist manifesto is the product of one guy at one company and we can’t pretend that his wrong and offensive views will just go away. The misogyny in tech is rampant, the misogyny in our culture unbearable. We should talk about these scandals when they make the news, sure, but the real work must be done at the ground level, every day. The real work begins long before it escalates to 10-page manifestos.


Gendered Creative Teams: The Challenge of Quantification

I recently had the privilege of being an invited speaker at the Gendered Creative Teams workshop hosted by Central European University and organized by Ancsa Hannák, Roberta Sinatra, and Balázs Vedres.

It was a truly remarkable gathering of scholars, researchers, and activists, featuring two full days of provocations and rich discussion.

Perhaps one of the most interesting aspects of the conference was that most of the attendees did not come from a scholarly background focusing on gender, but rather came at the topic originally through the dimension of creative teams. The conference, then, provided an opportunity to think more deeply about this latent – but deeply salient – dimension of the work.

Because of this, one of the ongoing themes of the conference – and one which particularly stuck with me – focused on the subtle ways in which the patriarchy shapes the creation and distribution of knowledge.

As some of you may know, I am fond of quoting Bent Flyvbjerg’s axiom: power is knowledge.

As he elaborates:

…Power defines physical, economic, ecological, and social reality itself. Power is more concerned with defining a specific reality than understanding what reality is. …Power, quite simply, produces that knowledge and that rationality which is conducive to the reality it wants. Conversely, power suppresses that knowledge and rationality for which it has no use.

This presents a troubling challenge to the enlightenment ideal of rationality. As scientists and researchers, we have a duty and a commitment to rationality; a deep desire to do our best to discover the Truth. But as human beings, living in and shaped by our societies, we may simultaneously be blind to the assumptions and biases which define our very conception of reality.

If you’re skeptical of that view, consider how the definition of “race” has changed in the U.S. Census over time. The ability to choose your own race – as opposed to having it selected for you by the interpretation of a census interviewer – was only introduced in 1960. The option to report more than one race was only added in 2000.

These changes reflect shifting social understandings of what race is and who gets to define it.

We see a similarly problematic trend around the social construction of gender. Who gets to define a person’s gender? How many genders are there? These are non-trivial questions, and as researchers we have a responsibility to push beyond our own socialized sense of the answers.

Indeed, quantitative analysis may prove to be particularly problematic – there’s just something so reassuring, so confidence-inducing, about numbers and statistics.

As Johanna Drucker warns of statistical visualizations:

…Graphical tools are a kind of intellectual Trojan horse, a vehicle through which assumptions about what constitutes information swarm with potent force. These assumptions are cloaked in a rhetoric taken wholesale from the techniques of the empirical sciences that conceals their epistemological biases under a guise of familiarity. So naturalized are the Google maps and bar charts generated from spread sheets that they pass as unquestioned representations of “what is.”

As a quantitative researcher myself – and one who is quite fond of visualizations – I don’t take this as an admonition to shun quantitative analysis altogether. Rather, I take it as a valuable, humanistic complication of what may otherwise go unobserved or unsaid.

Drucker’s warning ought to resonate with all researchers: our scholarship would be poor indeed if everything we presented was taken as wholesale truth by our peers. Research needs questioning, pushback, and a close evaluation of assumptions and limitations.

We know that our studies – no matter how good, how rigorous – will always be a simplification of the Truth. No one can possibly capture all of reality in a single snapshot study. Our goal then, as researchers, must be to try and be honest with ourselves and critical of our assumptions.

As Amanda Menking commented during the conference – it’s okay if you need to simplify gender down from something that’s experienced uniquely for everyone and provide narrow man/woman/other:___ options on a survey. There are often good reasons to make that choice.

But you can’t ignore the fact that it is a choice.

If you choose to look at a gender binary, ask yourself why you made that choice and explain in at least a sentence or two why you did.

Similarly, there are often good reasons to use previously validated survey measures: such approaches can provide meaningful comparison to earlier work and are likely to be more robust than quickly making up your own questions on the day you’re trying to get your survey live.

But, again, such decisions are a choice.

If you use such measures you should know who created them, what context defined them, and you should critically consider the implicit biases which may be buried in them.

All methodological choices have an impact on research – that’s why we constantly need replication and why we all carry a healthy list of future work. Of course we still need to make these choices – to do otherwise would leave us paralyzed, unable to do any research at all – but we have to acknowledge that they are choices.

Ignoring these complications may be an easier path, especially when it comes to aspects which are so well socialized into the broader population. But that easier path reduces scholarship to the level of pop-science: a quick, flashy headline that glosses over the real complications and limitations inherent in any single study.

You don’t have to solve all the complications, but you do have to acknowledge them. To do otherwise is just bad science.


Optimism and Futility

People often tell me that they find my writing optimistic. Indeed, this is a primary reason people frequently give me for why they enjoy my writing. It’s just so optimistic. Well, not saccharine-sweet, over-the-top optimistic, but optimistic nonetheless.

I find this hilarious.

I wouldn’t self-identify as an optimist, and those who know me are likely to be familiar with my habit of giving a big teenage eye roll to concepts like ‘hope’ while periodically ranting about why hope is not required. But perhaps I’m an optimist despite myself.

Or perhaps I simply spend too much time reading Camus, who famously argues that we must find joy and meaning in futile and hopeless labor. Indeed, we must imagine Sisyphus happy.

We live in dark times. Every day the news seems to get worse, and our social challenges run so deep and come from so many directions that it seems nearly impossible that we could even begin to tackle them at all.

But that is no reason not to try.

And this, I suppose, is why I get labeled an optimist. Given the choice between action and paralyzed grief, I’d choose action every time. It’s really the only choice there is.

I’d like to think that the moral arc of the universe bends towards justice; that if we work hard enough and fight forcefully enough we can indeed leave this world a little better than we found it.

But the truth is, none of that matters. It hardly matters if all this amounts to is hopeless and futile labor because that is all there is – inaction isn’t a viable option.

All that is left is to return to our rock, to keep on pushing even when we know that there is no point. We keep on fighting for justice – ceaselessly, tirelessly working towards that vision; straining with all our might – because to do otherwise is untenable. As Camus writes, the struggle itself toward the heights is enough to fill a man’s heart.

Indeed, one must imagine Sisyphus happy.


On Violence and Protest

I’ve been thinking a lot recently about the role of violence in social movements. Such violence could take many forms, from punching Nazis to property damage.

Conventional wisdom among the mainstream left is that such violence isn’t a good tactic: not only is it morally problematic, it is typically unsuccessful.

In his biography of Gandhi, Bhikhu Parekh describes Gandhi’s utility argument against violence, which went hand in hand with his moral argument against violence:

Gandhi further argued that violence rarely achieved lasting results. An act of violence was deemed to be successful when it achieved its immediate objectives. However, if it were to be judged by its long-term consequences, our conclusion would have to be very different. Every apparently successful act of violence encouraged the belief that it was the only effective way to achieve the desired goal, and developed the habit of using violence every time one ran into opposition. Society thus became used to it and never felt compelled to explore an alternative. Violence also tended to generate an inflammatory spiral. Every successful use blunted the community’s moral sensibility and raised its threshold of violence, so that over time an increasingly larger amount became necessary to achieve the same results.

There are some compelling points in that argument, but it fails to address the larger question: is violence never a justifiable means for social change, either morally or pragmatically?

After all, Gandhi’s level of commitment to non-violence may not be the example we want to follow. In an extreme example of pacifism, Gandhi wrote of Jews in World War II Germany:

And suffering voluntarily undergone will bring [Jews] an inner strength and joy which no number of resolutions of sympathy passed in the world outside Germany can…The calculated violence of Hitler may even result in a general massacre of the Jews by way of his first answer to the declaration of such hostilities. But if the Jewish mind could be prepared for voluntary suffering, even the massacre I have imagined could be turned into a day of thanksgiving and joy that Jehovah had wrought deliverance of the race even at the hands of the tyrant. For to the godfearing, death has no terror. It is a joyful sleep to be followed by a waking that would be all the more refreshing for the long sleep.

In contrast to Gandhi’s view, there are many reasons to think violence in response to genocide may be permissible – or should even be encouraged.

My friend Joshua Miller recently reflected on this question, writing:

…in many ways, the canonization of Gandhi and Martin Luther King have served to create an artificial standard of non-violence that no social movement can ever really achieve and that neither the Civil Rights movement nor the Indian independence movement actually achieved. Plus, if violent repression by the police goes unmentioned in the media but activist violence becomes a regular topic of debate, then it will appear that the only violence is coming from the activists. 

I particularly appreciate his insight regarding the ‘canonization’ of Gandhi and King – they both deserve praise for their work and impacts, but we tend to enshrine them as peaceful activists who could do no wrong; who should be emulated at all costs. Malcolm X, on the other hand, is pushed to the wayside, his story less often told. Yet he did have an important and lasting impact on the American civil rights movement; could King’s pacifism have succeeded without Malcolm X’s radicalism?

I have no easy answers to these questions; indeed, such easy answers do not exist. But I think we owe it to ourselves to think through these questions – is violent protest ever morally justified? If it can be morally justified at times, is it ever pragmatically justified? Do our collective memories of history really capture what happened, or do we tell ourselves a simpler, softer story – do we only remember the way we wish it had happened?

Perhaps, as Camus wrote, there is no sun without shadow, and it is essential to know the night.


Hope and Utopia

There is a common sentiment that hope is required for social action. We must hold on to hope. We must not give in to despair.

Perhaps it is simply the contrarian in me, but I cannot help but sigh when hearing these exhortations. We must hold on to hope? Why?

On the surface, I suppose it seems like a perfectly reasonable thing to say. So reasonable, in fact, that people often don’t take the time to justify the claim. We must hold on to hope as surely as we must see that the sky is blue – it is just the way things are.

This only makes me question harder.

In The Task of Utopia, Erin McKenna defends the value of utopian visions, repeating several times throughout the book, “utopian visions are visions of hope.” By which she means that they “challenge us to explore a range of possible human conditions.”

Hope is required, then, because only hope can inspire us to imagine that things might be different and only hope can motivate us to work towards those visions.

Importantly, McKenna advocates against static, end-state models of utopia, in which “hope” essentially becomes shorthand for “hope that a (near) perfect future is possible and achievable.”

Instead, McKenna articulates her hopeful vision as a process:

If one can get beyond trying to achieve final perfect end-states and accept that there are instead multiple possible futures-in-process, one has taken the first step in understanding the responsibility each of us has to the future in deciding how to live our lives now.

In this way, “hope” is a sort of future-awareness. It is not a feeling or an emotion per se; minimally, hope is a sense that there will be a future self which our present self has some power to shape.

I generally take the term “hope” to be somewhat more optimistically inclined, but even under this broad definition, I still find myself skeptical of hope as a necessity.

Consider the character of Jean Tarrou from Albert Camus’ The Plague. After the city of Oran is quarantined following a deadly outbreak of plague, Tarrou organizes volunteers to help the sick and try to fight off the plague.

One could argue that he had hope in the manner described above – perhaps he imagined a future in which the city was no longer wracked by disease; perhaps he imagined his actions could play a role in creating that future. Such a future-vision combined with a sense of agency could be described as hope.

But it is exactly this story which motivates me to be skeptical of hope as a required element of social change.

The situation in Oran is desperate. There is every reason to think that all the city’s inhabitants will eventually succumb to the plague. Perhaps Tarrou’s efforts may stave off some deaths for a time, but in the middle of the novel it is reasonable to believe that Tarrou’s efforts will make no real difference. Either way, the outcome will be the same.

Many of Oran’s inhabitants seem to feel this way. In the face of almost certain death, people celebrate wildly at night, finally free of the taboos and inhibitions which had previously kept them more orderly. They had lost a vision of the future in which their actions played a part. They had lost hope.

Yet there is no reason to think that Tarrou felt any differently. Faced with almost certain death, accepting of the knowledge that his actions would make no difference, Tarrou still works to fight the plague.

He has no hope; it is simply what you do.

In The Myth of Sisyphus, another piece by Camus, he snarkily comments of Sisyphus’ labor that “the gods had thought that there is no more dreadful punishment than futile and hopeless labor.”

But life is futile and hopeless labor. This is, in fact, the essence of being alive. “The struggle itself toward the heights is enough to fill a man’s heart,” Camus writes.

Hope is not required.

I am heavily persuaded by McKenna’s process-model of utopia, but find hope to be a somewhat superfluous element. Her vision requires the imagination to conceive of possible futures, and it takes the agency to act in seeking those possible futures, but it does not require hope that those futures are achievable nor hope that one’s efforts will have impact.

In fact, I imagine the process-model as thriving better without hope. This vision finds that the future is and always will be imperfect. Perfection is neither desirable nor achievable. Abandoning hope means accepting the future as flawed, accepting ourselves as flawed. Most of us will probably have no impact, and most of us will never witness the futures we dream of. But that lack of hope is not a reason not to act – indeed, in abandoning such hope, our actions and our choices are all that we have left.


Discontent of the Commons

In a session on “The Politics of Discontent” at this year’s Frontiers of Democracy conference, democracy scholar Alison Staudinger proposed considering “discontent” as a common pool resource. I am deeply intrigued by this idea, and interested to understand just what that might mean.

In 1968, ecologist Garrett Hardin popularized the concept of the “Tragedy of the Commons,” describing the game-theoretic prisoner’s dilemma which communities of people face when utilizing some common resource:

Picture a pasture open to all. It is to be expected that each herdsman will try to keep as many cattle as possible on the commons. Such an arrangement may work reasonably satisfactorily for centuries because tribal wars, poaching, and disease keep the numbers of both man and beast well below the carrying capacity of the land. Finally, however, comes the day of reckoning, that is, the day when the long-desired goal of social stability becomes a reality. At this point, the inherent logic of the commons remorselessly generates tragedy.

As a rational being, each herdsman seeks to maximize his gain….the rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another; and another… But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy. Each man is locked into a system that compels him to increase his herd without limit–in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons. Freedom in a commons brings ruin to all.
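Hardin’s logic can be made concrete with a toy simulation. This is my own illustrative sketch, not Hardin’s, and the parameters are arbitrary: each herdsman keeps the full benefit of an added animal but bears only a fraction of the shared cost of overgrazing, so adding is always individually rational, and the commons is overrun.

```python
# Toy model of Hardin's commons. Purely illustrative; numbers are arbitrary.
HERDSMEN = 10
CAPACITY = 100  # animals the pasture can sustain

def utility(my_herd: int, total: int) -> float:
    benefit = my_herd                       # private gain: one unit per animal
    overgrazing = max(0, total - CAPACITY)  # collective cost, shared equally
    return benefit - overgrazing / HERDSMEN

herds = [10] * HERDSMEN  # start exactly at carrying capacity
for year in range(1, 6):
    for i in range(HERDSMEN):
        total = sum(herds)
        # Adding an animal yields +1 privately but costs at most 1/10th of a
        # unit in shared overgrazing, so every herdsman always adds.
        if utility(herds[i] + 1, total + 1) > utility(herds[i], total):
            herds[i] += 1
    print(f"year {year}: {sum(herds)} animals on a commons that sustains {CAPACITY}")
```

Each actor’s choice is locally optimal at every step, yet the herd grows without limit: exactly the remorseless logic Hardin describes.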

This idea has been applied to a wide variety of resources which can be broadly categorized along two spectrums: excludable and subtractable. As the names suggest, excludable indicates whether or not people can be easily excluded from a resource while subtractable indicates whether use of a resource by one person restricts use of the resource by another.

The clothes I am wearing are both excludable and subtractable – I can prevent your use of them, and you cannot use them while I am using them. Wikipedia is non-excludable and non-subtractable – I cannot prevent your use and my use does not diminish yours. If Wikipedia added a paywall or if a State blocked its use, it would become excludable.

Seen as a common pool resource, discontentment would seem to fit in this non-excludable, non-subtractable category. I cannot stop you from feeling discontent, and I can be discontent without infringing on your ability to also be discontent.

Yet, this is perhaps not the most helpful framework. The political challenges we face today are not so much that people feel discontent – rather, the challenge lies in the causes and repercussions of that discontent.

It is a fundamental aspect of a pluralist society that not everyone will agree all of the time. We each have different needs and wants, and our desired outcomes will at times be in conflict. We can’t all get what we want.

Under a simple definition, then, a person is discontent if they do not get their way. Since not everyone in a pluralistic society can simultaneously have their way, it is intrinsic that some portion of people will be discontent with any given issue.

This presents at least two possible social challenges connected to discontent. If discontent is inequitably and systemically distributed, those who experience more discontent will have reason to see the system as unjust. And if people experience the system as treating them unjustly, they will have reason to try to change the system – minimizing their own discontentment while making someone else more discontent.

Here, discontent seems to no longer be a resource – rather, it can be better interpreted as the absence of a resource.

The word that comes to mind here is power.

People with power can get the outcomes they desire, minimizing their discontent; people without power are subject to the whims of those with power – increasing the likelihood that they will not get the outcomes they desire and increasing their discontent.

Power, I would argue, is an excludable and subtractable resource. Those with power have certainly been known to exclude others from acquiring power, and if I have power, it does, I think, diminish your ability to have power.

This model unites people from all sides of the political spectrum who feel discontent under current systems and institutions. Some may feel they are losing power; some may never have had much power in the first place.

And the highest elites may feel most secure in the continuance of their power if everyone else is busy fighting over who gets whatever scraps are left.

Elinor Ostrom, the brilliant economist who argued that the drama of the commons need not be a tragedy, traveled around the world empirically studying communal and institutional management of common pool resources.

In Covenants, Collective Action, and Common-Pool Resources, Ostrom argues that conflict and destruction arise when “those involved act independently owing to a lack of communication or an incapacity to make credible commitments.” On the other hand, if members of a community “can communicate, agree on norms, monitor each other, and sanction noncompliance to their own covenants, then overuse, conflict, and the destruction of [common pool resources] can be reduced substantially.”

Managing common pool resources, then, is difficult but not impossible.

“If those who know the most about local time-and-place information and incentives are given sufficient autonomy to reach and enforce local covenants,” she argues, “they frequently are able to devise rules well tailored to the problems they face.”

In addition to this autonomy of the people, communication is essential:

“When symmetric subjects are given opportunities to communicate and devise their own agreements and sanctioning arrangements, then the outcomes approximate optimality,” Ostrom writes. “These findings are surprising for many theorists, because the capacity to communicate without an external enforcer for monitoring and sanctioning behavior inconsistent with covenantal agreements is considered to be mere ‘cheap talk’ having no impact on the strategic structure of the game.”

In seeing the rise of populism, in watching discontented people making bad political decisions, in seeing the mismanagement of a common pool resource, the liberal impulse is often to solve the problem through stronger regulation – to create institutions nominally managed by the people which can step in with rules and authority in order to overcome the destructive self-interest and poorly-informed actions of individual actors.

But perhaps Ostrom’s work on common pool resources ought to give us pause – “the people” may not collectively be wise, but they have the ability to surprise us; to work out their differences and to successfully self-manage in ways that external enforcing institutions could never accomplish.


Self-Skepticism

I have complained before about the common solution to the so-called “confidence gap” – that those with less confidence (typically women) should simply behave more like their confident (typically male) peers.

There’s a whole, complex, gender dynamic to this conversation, but even putting that issue aside, I have a hard time accepting that the world would be better if more people were arrogant.

Of course, those advocating for this shift don’t call it arrogance, preferring the positive term of confidence, but there is a fine line between the two. If a person lacks the confidence to share a meaningful insight, that is a problem. But it is just as problematic – perhaps even more problematic – when someone with unfounded confidence continually dominates the conversation.

Confidence is not intrinsically good.

Thinking before you speak, questioning your own abilities – these are good, valuable traits. It’s only at their extreme of paralyzing inaction that these traits become problematic. Similarly, confidence is appropriate in moderation, but quickly becomes tiring at its own extreme of arrogance.

Finding a balance between the two is a skill we all ought to work on.

Unfortunately, there doesn’t seem to be a good word for the opposite of over-confidence. Modesty is one, but it doesn’t quite capture the concept I’m trying to get at. Modesty is a trait of accomplished people who could reasonably be arrogant but manage not to be. Can you be modest while sincerely unsure of yourself?

I’ve started using the term self-skepticism: a sort of healthy self-critique.

The word skeptic has a somewhat complicated etymological history, but is derived in part from the Greek skeptesthai meaning, “to reflect, look, view.” This is the same root as the word “scope.”

It implies a certain suspension of belief – an ability to step back and judge something empirically rather than biased by what you already believe. And, it implies that skeptical inquiry is a valuable process of growth. The skeptic neither loves nor hates the subject they are skeptical about – rather, they hope to get at a better, deeper understanding through the process of inquiry.

Applied to one’s self, then – though perhaps more typically called by the general term of self-reflection – self-skepticism can be seen as the process of trying to become a better person through healthy skepticism of yourself as you currently are.

This, to me, lacks the judgement implied by “lacking confidence,” while embracing that we are all flawed and imperfect in our own ways – though we can always, always work to become better.


Embracing Behinity

Throughout the week, I’ve been reflecting on Sándor Szathmári’s great work of social satire, Voyage to Kazohinia. The work critiques a number of social institutions, but largely seems to focus on a broader question: is an ideal society one at equilibrium or one which embraces extremes?

Szathmári presents this question by introducing us, through the shipwrecked Englishman Gulliver, to two contrasting societies: the brilliant, efficient and loveless Hins and the backwards, chaotic, and destructive Behins.

Given the Hins’ complete lack of love, art, and unique character, one might be inclined to favor the mad but passionate world of the Behins, though Szathmári clearly seems to favor the ordered society of the Hins.

Following the principle of kazo – mathematical clarity – the Hins naturally act “so that the individual, through society, reaches the greatest possible well-being and comfort.” The Behins, on the other hand, are “kazi” – a term for the irrationality which captures everything not kazo.

While I have commented this week on the arguments favoring both types of communities and on reasons why we might want to force a choice between the two rather than just rejecting the premise all together, I have yet to actually answer the question for myself.

On this topic, I have found myself greatly torn.

On the one hand, the peaceful, equitable, and rational world of the Hins is clearly the more reasonable of the two societies. Nearly every logical thought argues in its favor.

Yet the Hins’ lack of art, of passion, of love seems too much to bear. It nearly seems worth sacrificing peace and equity for these peculiarities that make us so deeply human.

Furthermore, being generally inclined to favor unpopular opinions makes me want to argue for the Behinistic perspective on principle. If the kazo world of the Hins is so clearly the rational choice, the troublemaker and contrarian in me just has to push against it.

This instinct is quite clearly kazi.

Additionally, that proud desire to be kazi in the face of all reason strikes me as potentially little more than an arrogantly American trait.

One of my Japanese teachers once told me that she couldn’t understand why Americans took such pride in being individualistic. We fancy ourselves as standing up against the crowd, as being brave radicals willing to boldly buck conventional norms. My teacher just laughed. You think doing what you want is hard? Doing what’s best for others is harder.

As something of an aside here, I would be remiss if I didn’t mention that in addition to being a clever critique of western society at large, Szathmári’s novel brilliantly satirizes the west’s Orientalism.

The Hins – whose philosophy I previously compared to Lao Tzu’s – encapsulate everything “the west” thinks of “the east.” They do not, of course, reflect any real culture existing in the world, but our English Gulliver views them exactly as he might if he had found himself among any of the real peoples of East Asia.

Gulliver comments that “the Behins respected the Hins very much even though they loathed them,” a sentiment which perfectly encapsulates Gulliver’s own attitude. He is impressed by their efficiency and technological innovations, but hates their uniformity and dispassion.

This duality epitomizes the sentiments of Orientalism, and is particularly reminiscent of western views of Japan around the second world war, when Kazohinia was written. It is no accident that Gulliver was being deployed to Japan when he was shipwrecked.

The Behins, on the other hand, represent the west as it is, stripped of the vainglory in which it sees itself. One could also make a strong argument that the Behins represent eastern views of the west, but either way Szathmári seems to write in the hopes of convincing his Behinistic western audience to be a little less kazi – using our own stereotypes to highlight our failings and the true ideal we neglect.

And thus I come to my final conclusion. While I put little stock in the gross over-generalizations of cultures, whether as a product of my culture or a product of my experiences, I find myself irreparably kazi. I know rationally that the kazo life is better, but I cannot accept it; I could not survive.

Like Foucault, I’m inclined to find that madness is little more than a social construct and, like Lewis Carroll, I’m inclined to believe we are all mad here.

The whole world is kazi, and – while I’d like to work to make the world a little more kazo – I’m no less Behin than anyone else.

Ironically, it would be kazi to assume otherwise. Throughout Gulliver’s time among the Behins he finds people who rightly mock the foolish beliefs and invented norms of their kazi peers. The greatest error comes, though, when these Behins don’t recognize the same foolishness within themselves. They simply substitute one kazi belief for another.

To not recognize one’s own Behinity, then, seems the height of madness.

At the end of the novel, Szathmári tells us about a certain kind of Behin “whose only Behinity is that he doesn’t realize among whom he lives; for it could not be imagined, could it, that someone aware of the Behinistic disease would still want to explain reality to them?”

I take this as a direct appeal to the reader: having been enlightened as to the Behinistic disease and possibly identifying Behinistic traits within ourselves, we are urged to move beyond our kazi instincts and embrace the better path of kazo. The Hins, we learn, were once Behins themselves.

This is, perhaps, a wise argument, but, in typical fashion, I find myself siding with Camus. The world is indeed absurd and the only thing left is to embrace that absurdity.
