This semester I’m teaching Programming with Data for Social Science, and my students have recently started reading Matt Salganik’s excellent book, Bit by Bit: Social Research in the Digital Age. The book gives a detailed and thoughtful overview of the many challenges and opportunities of computational social science.
One dimension Salganik introduces early on is the difference between “custommade” and “readymade” data. Borrowed from the art world, these phrases suggest different origins: custommades are intentionally created with a specific purpose in mind, while readymades are repurposed.
Traditional social science methods tend to be custommades — you design your experiments, surveys, and sampling approach with a specific research question in mind. Data science, on the other hand, relies more on readymades — data that are the detritus of some other goal.
Both of these approaches have their strengths and weaknesses, and each is appropriate in different contexts. Understanding this is a key piece of the art of computational social science.
Speaking of art, as I mentioned, the terms “readymade” and “custommade” come from the art world, and Salganik illustrates this metaphor by comparing two specific works of art:
[Marcel] Duchamp is best known for his readymades, such as Fountain, where he took ordinary objects and repurposed them as art. Michelangelo, on the other hand, didn’t repurpose. When he wanted to create a statue of David, he didn’t look for a piece of marble that kind of looked like David: he spent three years laboring to create his masterpiece. David is not a readymade; it is a custommade.
This excerpt doesn’t quite do justice to Duchamp’s work. Because Duchamp is somewhat less well known than Michelangelo, I’m afraid the metaphor is incomplete: you may understand Fountain as a readymade without really appreciating what that means.
In the spring of 1917, Duchamp, with the help of several friends, notoriously submitted a porcelain urinal to an unjuried exhibition held by the Society of Independent Artists in New York. Purchased from a store that sold plumbing fixtures, this object, which was titled Fountain and signed “R. Mutt,” was rejected by a vote of the organizers, touching off a fierce debate.
Duchamp, who had been one of the organizers, resigned in protest.
In May of that year, the avant-garde magazine, The Blind Man — published by Duchamp and his friends — ran an editorial defending the work:
They [said] any artist…may exhibit.
Mr. Richard Mutt sent in a fountain. Without discussion this article disappeared and was never exhibited.
What were the grounds for refusing Mr. Mutt’s fountain: –
Some contended it was immoral. Vulgar.
Others, it was plagiarism, a plain piece of plumbing.
Now, Mr. Mutt’s fountain is not immoral, that is absurd, not more than a bath tub is immoral. It is a fixture that you see every day in plumber’s store windows.
Whether Mr. Mutt with his own hands made the fountain or not has no importance. He CHOSE it. He took an ordinary article of life, placed it so that its useful significance disappeared under the new title and point of view – created a new thought for that object.
While it may seem somewhat tangential to computational social science, the story of Mr. Mutt’s urinal makes me more fully appreciate the concept of readymade data.
Is it vulgar? Is it mundane? The art of Duchamp’s Fountain is in that very debate itself: he took an everyday item and made it worthy of public discussion, encouraging us to question conventional wisdom and to ask questions we didn’t even know we had.
Similarly, there are important concerns about readymade data – questions of ethics and meaning, which should be rigorously debated. But that debate itself is an integral part of the scientific endeavor. The art of this work is in engaging critically with these concerns.
We may now be so inundated with data, so used to our movements and habits being passively tracked, that it’s easy to forget: there’s something profoundly radical about repurposing found data for social science research.
We CHOOSE it, we place it so that its useful significance disappears under the new point of view. In reimagining and reinterpreting these data we bring new knowledge into the world; we create a new thought for that object.
There is no shortage of pithy, pseudo-inspirational, questionably attributed social justice quotes.
Margaret Mead’s apocryphal “Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it’s the only thing that ever has” is a favored target among friends of mine, complete with a compilation of suggestions for making the expression more accurate.
I, for one, though, have always found myself intrigued by the quote commonly attributed to Mahatma Gandhi: “be the change you wish to see in the world.”
It’s the kind of thing I like to glibly quip in lieu of a well-placed do it yourself.
This is in line with how the phrase is commonly interpreted: if you want to see change in the world, you have to be the change: you have to engage in the work and make the change happen.
It’s this DIY spirit that makes “be the change” a favored expression among service organizations.
Of course, there’s no evidence that Gandhi ever said this.
Rather, the writing that comes closest to this sentiment comes from an Indian Opinion article published on August 9, 1913, in which Gandhi wrote:
If we could change ourselves, the tendencies in the world would also change. As a man changes his own nature, so does the attitude of the world change towards him….We need not wait to see what others do.
This suggests a more quixotic vision — it’s not just about doing the work, it’s about fundamentally reorienting your relationship to the world in order to force the world to fundamentally reorient its relationship to you.
In the Salt March, for example, Gandhi used the traditional practice of producing salt from saltwater to protest British regulation and monopolization of salt production. It was more than civil disobedience — it was an act intentionally designed to bring the world the protestors wanted to life.
“Be the change,” then, is more than a call to service or an admonition to do the work. It is a challenge to unapologetically interact with the world as if it were the world you would have it be: to normalize realities by treating them as normal, to relentlessly tilt at windmills until the world accepts the truths you see.
There is something lovely in this sentiment, something inspirational in this vision of living in the world you want to live in, of building a better world by modeling a better world.
Yet, as with many things — reality is far more complicated, and we would be wise to critically interpret Gandhi in the context of his broader personal and philosophical approach.
I would certainly be remiss to point to Gandhi’s words without acknowledging his deep anti-black racism and concerning sexual interest in young girls — but beyond those failings, the core commitment to non-violence for which Gandhi is so lauded is arguably problematic in its own right.
Indeed, it’s something of an understatement to say that Gandhi believed in non-violence. Rather, he believed in the transcendence of unshakeable virtue; that pureness of body and spirit could confront the most vile of evils; that suffering voluntarily brings such an inner strength as to provide the greatest thanksgiving, joy and deliverance — no matter what the cost.
That last sentence, incidentally, is taken largely from a November 26, 1938 piece titled simply, The Jews, in which Gandhi wrote: “If the Jewish mind could be prepared for voluntary suffering, even the massacre I have imagined could be turned into a day of thanksgiving and joy that Jehovah had wrought deliverance of the race even at the hands of the tyrant.”
So you can see, perhaps, why I would argue that Gandhi’s commitment to satyagraha went too far and even represents a moral failing when taken to its extremes.
It is also worth noting that the Indian Opinion passage above which served as the inspiration for “be the change,” comes from an article (p. 242) in which Gandhi essentially argues that love is the cure for snake-bites:
…one of the best defences against snake-bite is to have only as much as we need of wholesome food…to avoid anger and fear and, even when bitten by a snake, not to fall dead with fear before even a remedy has been tried. One should have confidence in the potent effect of the purity of one’s life and ultimately take courage in the thought that the length of one’s days is that ordained by God.
If only we could “rid ourselves of all enmity towards any living creatures, the latter also cease to regard us with hate,” Gandhi argues.
But regardless of whether I love or hate a snake — it may still bite.
Ultimately, though, the interpretation of “be the change you wish to see in the world” comes down to a question of power.
Power isn’t just about the ability to control or coerce others; it is, in a sense, more fundamentally about the ability to control reality – to control the topics that get covered, the questions that get asked, and the perspectives that are considered. Power determines the bounds of normal and the imagination of what is possible. Power permeates our lived experience.
What’s inspirational about “be the change” is that it serves as a reminder that you have power, that your mere existence provides a pathway for shaping our shared experience of reality. “Be the change” is a proclamation that only you get to decide the kind of person you will be in this world; you get to decide what kind of world will be built from having a person like you in it.
The trouble is — there are far too many people who don’t get to decide. There are far too many people whose mere existence is under attack, who are met with hate and fear and violence just for the radical act of existing in this world.
You don’t get to “be the change” if you have to fight simply to “be.”
“Be the change,” then, is perhaps better interpreted as a statement of privilege; a commitment to allyship.
It is not enough to talk about making the world more just or more equitable; it is not even enough to engage in “the work” — though that’s certainly an important step.
No. If you have the privilege to be the change, if you have even a modicum of power over the tendencies of the world — then you hold that power in your every interaction, your every choice, your every experience.
Earlier this week, the United States Supreme Court issued two stays of injunctions, allowing a Trump administration policy barring transgender people from serving in the military to go into effect while cases against the policy proceed in lower courts.
If you’ve noticed some coverage of this case describing it as a “partial ban,” that’s because there are two exceptions baked into the now-implemented policy, which you can read in its entirety.
First, openly transgender individuals who are currently serving in the military will be allowed to continue their service, though there’s a caveat about “deployability” which I will return to shortly.
Just two and a half years ago, on June 30, 2016, the Obama administration lifted the long-standing ban on transgender personnel. Known as the “Carter policy” since it was officially announced and implemented by then Defense Secretary Ashton B. Carter, the move came after the release of a commissioned RAND Corporation study which found that allowing transgender personnel to serve openly would have “minimal impact on readiness and health care costs.”
The same study estimated that between 1,320 and 6,630 transgender people were already serving on active duty. A more recent study, released by the Palm Center in 2018, puts that number at closer to 14,700.
In the time since the ban was lifted, a portion of those service members — an estimated 900 active duty personnel — have begun the process of transitioning. It’s worth noting here that there are service members who came out before the Carter policy took effect, and it’s entirely possible that more have been serving openly under the Carter policy without transitioning. However, I haven’t been able to find any estimates on either of those populations.
The 2018 memo describing the Trump administration’s new policy on transgender service members indicates that those who began serving openly since the Carter policy will be exempt from the ban: “The reasonable expectation of the Service members that the Department would honor their service on the terms that then existed cannot be dismissed…[they] may continue to receive all medically necessary treatment, to change their gender marker…and to serve in their preferred [sic] gender, even after the new policy commences.”
However, the new policy also includes an exception to the exemption: “the Service member…may not be deemed to be non-deployable for more than 12 months or for a period of time in excess of that established by Service policy (which may be less than 12 months).”
On its face, this caveat seems reasonable — it is the Department of Defense’s standard policy that “Service members who are considered non-deployable for more than 12 consecutive months will be evaluated for a retention determination by their respective Military Departments.”
Yet, the new policy on service by transgender individuals seems to have little leeway, whereas the broader policy for all military personnel allows that the “Secretaries of the Military Departments may retain Service members who are non-deployable in excess of 12 consecutive months, on a case-by-case basis, if determined to be in the best interest of the Service.” It’s entirely unclear to me, however, which of these policies would take precedence.
Furthermore, the 12-months non-deployable clause can be “gamed.” While I am not aware of any formal data on this, service member Cathrine Schmid suggests that some transgender military personnel are already being considered non-deployable for longer than necessary “as a sort of backdoor ban.”
If such manipulation seems unlikely, consider that in 2017 — before the announcement of the Trump administration’s policy — a similar backdoor move was used to discharge Riley Dosh, the first openly transgender graduate of West Point.
…As I prepared for my graduation from West Point, I was handed a memo from the Pentagon that said despite completing every requirement asked of me, I would not be allowed to commission as an officer. The reason? Despite the lifting of the ban on trans military troops by the Obama administration — one reason why I came out — I still required a medical waiver to become an officer. Both the previous administration and West Point supported my commission, but the Trump administration did not. Unable to receive a commission, I was discharged upon graduation. I remain the last person to be discharged for my gender identity. Two months after my graduation, President Trump tweeted that transgender individuals would no longer be allowed to serve in the military.
While technically allowing current personnel to continue serving, the new policy creates a mechanism through which transgender troops can be forced out, and it certainly makes it clear they are not welcome.
But — I’ve only discussed one of the exceptions to the ban on transgender military service.
In a move reminiscent of don’t ask, don’t tell, the other exemption is for a sort of Schrödinger’s transgender person: a person who is somehow simultaneously trans and not trans. As the “New Transgender Policy” memo outlines:
Transgender persons who have not transitioned to another gender and do not have a history or current diagnosis of gender dysphoria — i.e., they identify as a gender other than their biological sex but do not currently experience distress or impairment of function in meeting the standards associated with their biological sex — are eligible for service, provided that they, like all other persons, satisfy all mental and physical health standards and are capable of adhering to the standards associated with their biological sex. This is consistent with the Carter policy, under which a transgender person’s gender identity is recognized only if the person has a diagnosis or history of gender dysphoria.
There’s a lot to follow in this paragraph, but it’s important to understand it. This argument is the most insidious part of the ban on transgender service members.
Gender dysphoria, as defined by the American Psychiatric Association, “involves a conflict between a person’s physical or assigned gender and the gender with which he/she/they identify…People with gender dysphoria may often experience significant distress and/or problems functioning associated with this conflict between the way they feel and think of themselves and their physical or assigned gender.”
A formal diagnosis of gender dysphoria from a medical professional is the first step in seeking medically-necessary care related to transgender health. However, it’s important to note that there’s a difference between experiencing gender dysphoria and seeking out a medical professional in order to receive a diagnosis of dysphoria.
The diagnosis serves as a gateway for additional medical care but does not alone demarcate who is transgender. A person may be transgender whether they receive a medical diagnosis of dysphoria or not.
Furthermore, while I’ll save the American Psychiatric Association’s history of gender diagnoses for another post, the inclusion of gender dysphoria as a psychological condition is not without controversy. As Dr. Daphna Stroumsa explains:
The inclusion of gender identity and transgender-related matters in the [American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders] reflects an inherent problem. Although diagnostic coding is necessary to facilitate access to medical and surgical transition care, the pathologizing and stigmatizing suggested by its designation as a mental disorder is not. Such designation gives rise to an inherent contradiction in terms: what is presented as a mental condition has recognized medical and surgical treatment. These treatments are aimed not at affecting or changing mental state but rather at addressing the physical components that lead to the dysphoria. Such logic makes GID or gender dysphoria a unique case of surgically treatable mental illness, which is an oxymoron.
In other words, while it represents a problematic pathologizing of the transgender experience, the existence of “gender dysphoria” as a validated medical condition helps transgender people receive the health care they need.
Now, let’s go back to that policy memo. “Transgender persons who have not transitioned to another gender and do not have a history or current diagnosis of gender dysphoria…are eligible for service.”
There are precisely two types of people who do not have “a history or current diagnosis of gender dysphoria”: people who do not experience gender dysphoria and people who suppress, hide, or otherwise bury their experience of gender dysphoria.
Those who do not or have never experienced gender dysphoria are not relevant to this ban at all — they are not transgender. Service members who are transgender, then, may only continue to serve if they steadfastly avoid disclosing or seeking any support for their experiences of dysphoria.
In other words, they can be transgender…as long as they’re not transgender.
…[The policy states that] trans people can serve as long as we don’t transition, are comfortable in our assigned sex at birth, and have no health care needs related to transition. But this is what defines a person as trans and why the ban is definitionally a ban on trans service. Yet, this idea that we can suppress our trans-ness or that our identity is less real than the identities of non-trans people is behind much of the anti-trans rhetoric we see in government policy and public discourse.
And this is why I argue that this clause is the most insidious part of the ban on military service by transgender personnel.
It’s bad enough that the policy will bar good people from serving their country. It’s bad enough that current service members will be forced out or may choose to leave rather than serve in the face of such bigotry. It’s bad enough that this discrimination is being justified by unfounded concerns about cost and cohesion.
But the most disturbing thing is that at its core, this policy suggests that transgender people have the choice to not be transgender; that their identity is not real, that it can be easily suppressed. This policy suggests that the concept of “transgender” doesn’t truly exist.
And that is unconscionable.
Transgender people exist, and they have existed for far longer than we’ve had the word. It is no passing fad or modern invention; it is a fundamental piece of the human experience — a piece we’ve collectively been suppressing and deriding for far too long. If the existence of transgender people seems “new,” it is because we’ve only recently begun to grapple with a truth that has always been there.
The ban on transgender personnel serving in the military is just one element in a coordinated attack on the transgender community; an attack that aims to roll back the clock, to shove a whole population of people back in the closet and permanently lock them in there.
We cannot let that happen.
A new study from the CDC estimates that 2 percent of high schoolers — around three hundred thousand Americans — are transgender. And presumably, that number is depressed by significant underreporting. These young people are real. They exist. We cannot let them grow up in a world that would deign to tell them otherwise.
The transgender community will not be erased.
Here are a few things you can do to help:
Donate to one of these great organizations — or at the very least sign up for their mailing lists.
The word ‘crazy’ has the remarkable power to instantly render invalid whatever person, perspective, or practice it is applied to.
It suggests behavior that is illogical or irrational; that is so unpredictable as to defy the bounds of ‘normal’ human reason. It therefore invalidates through implicit othering — crazy people cannot be reasoned with, their behaviors can be neither interpreted nor explained, their beliefs carry little more meaning than noise.
Perhaps this is why ‘crazy’ is typically used as a pejorative.
Yet, the beliefs and behaviors that are deemed to be ‘crazy’ change over time. They are continually interpreted and reinterpreted to fit the narratives of the day. Madness, in other words, is a social construct.
Foucault documents this in detail, pointing to stories of the mad, insane, and crazy that seem absurd to our modern sensibilities. Scientifically-defended theories of hard bile and hot blood, concerns over contagious epidemics of women’s ‘hysteria,’ illness interpreted as a failure of morality.
Again and again in the West, cognition and behavior have been interpreted through a narrow normative lens: anyone who thinks or acts outside this framework is taken to be crazy.
‘Crazy’ then, is perhaps better understood not as a property of a person, but as a property of society. To call something crazy is to place it outside the bounds of standard social norms, to say that it is too far out there to be reasoned with rationally. It is the intellectual equivalent of throwing up your hands and declaring there is nothing to be done — a reasonable person simply cannot engage with crazy.
Yet, its very nature as a social construct raises the question: who determines what is crazy? Creative works are full of stories in which those deemed mad are perhaps the only reasonable ones. The French film King of Hearts, for example, contrasts the world created by asylum inmates with the brutal and senseless killing of World War I.
I find myself particularly drawn to the word ‘crazy’ because it is inextricably gendered. It’s not quite as causal as the relationship between old and spry — but women are much more likely to be described as ‘crazy,’ and the word has a long history of being used to discredit women and their experiences.
Given my description of ‘crazy’ above, this makes sense — if you can’t reason with someone who is crazy, if you can’t meaningfully interpret their words or actions, then you are free to dismiss their claims. There is simply nothing to be done. In this sense, the epithet intrinsically provides authority to the person using the word while diminishing the power of the person it’s applied to. It’s actually quite a brilliant tactical maneuver.
For this reason, many people prefer to avoid the word ‘crazy.’ There are other good reasons to avoid it, too — as you may have already inferred from the shaky language of this piece, ‘crazy’ has a deeply problematic tendency to casually lump together several different concepts. It dismisses mental health challenges, disparages neurodiversity, and glibly ostracizes any deviance from the supposed norm.
Yet — as someone who is ‘crazy’ along multiple of these dimensions — I find the word can give me power, too.
I wrote above that ‘crazy’ locates a person outside the bounds of the ‘norm.’ I think that’s true, but — I don’t find that the word itself places a normative judgement on that positioning. That is, we interpret ‘crazy’ to be bad because we implicitly assume that being outside the norm is bad. We accept that crazy people cannot be reasoned with because we implicitly assume that people who are outside the norm cannot be reasoned with. We feel embarrassed or ashamed when labeled as ‘crazy’ because we implicitly assume that falling within the norm is good.
I reject those claims.
For one thing, I don’t really believe in ‘normal.’ We are all crazy. But more deeply — what we generally take to be ‘normal’ only refers to an idealistic conception of a small slice of humanity. Why should any of us fall over ourselves trying to fit into a norm that doesn’t exist?
I refuse to feel shame for who I am.
In that sense, I find being labeled crazy to be quite freeing, actually. Oh, you thought you could diminish me by saying that I exist outside the norm? Oh, no no no, my friend – this is where I thrive.
Being crazy means being free to discover and create yourself, it means not worrying about conforming to the norm, and it means not letting anyone dictate your truth for you.
To be clear, there are still plenty of other things to worry about. I hardly mean to suggest that nothing is true and everything is permitted. Rather, the types of things one ought to worry about — being good, compassionate, respectful — are very different from trying to be ‘normal’ or trying to fit someone else’s mold of who you should be.
And that, perhaps, is the best thing about accepting the mantle of crazy: it gives other people permission to be crazy, too. When we shy away from talking about mental health, when we assume a neurotypical view, when we accept ‘crazy’ as a personal fault, we implicitly reinforce the idea that these are somehow shameful or wrong.
Embracing and even showcasing those pieces of ourselves not only can be personally fulfilling, it implicitly sends the message: None of us should have to hide who we are.
So that is why I frequently choose to refer to myself as ‘crazy,’ why I tend to talk about my thoughts, actions, choices, and diagnoses with such levity. I cannot hide who I am, and more than that — I don’t want anyone else to do so either.
So, though it may defy all norms and reason, I will continue to describe myself with that word. I will continue to think my crazy thoughts, act on my crazy impulses, and aim to be the best person I can be with no regrets for the fact that person will never be ‘normal.’ And I will do my best to create spaces where others feel they can genuinely do the same. I feel no shame or hesitation in this commitment, it is simply who I am: a total crazy person.
I’ll start today with a somewhat bold claim: science cannot exist without humanism.
Note that I’m not merely saying that science is improved by humanists or that it might be wise to have ethics keep pace with our technological advances. To be clear, I would argue for both those points as well; but my claim here goes deeper:
Science cannot exist without humanism.
In other words, the thing we call “science” can only properly exist through a critical examination of the myriad ways in which humans create and interpret the world around them. The humanities are not some nice add-on or a means to slap an “interdisciplinary” sticker on your work — they are, indeed, an intimate part of the scientific process itself.
To clarify, when I use the word science here, I more properly mean good science — science which is self-critical, methodical, and dogged in its pursuit of genuine understanding. There are, unfortunately, far too many things which would claim the mantle of science while definitively being bad science. Most notably, this includes some truly horrific medical experiments, but there are also more innocuous examples of bad science covering issues of replication, statistical techniques, questionable methodological choices, and even outright fraud.
My argument, then, is that the humanist orientation is a primary factor in differentiating between good science and bad science. I’m not sure I would go so far as to argue that it’s a sufficient condition, but I’ll argue here that it is a necessary condition. Science cannot exist without humanism.
I have done little so far to explain precisely what I mean by science and precisely what I mean by humanism, so let’s back up about two thousand years in order to elaborate.
Aristotle argues for three fundamental types of knowledge: techné, episteme, and phronesis. While not everyone may be familiar with these classifications, these categories still very much underlie the Western conception of knowledge, especially, perhaps, within academia.
Techné, or technical knowledge, is the province of professional schools. Doctors, lawyers, and MBAs are educated in the techné of their trades. Episteme is the domain of the sciences. Closest to our modern interpretation of “knowledge,” episteme is the slow, methodical discovery of universal truths. Finally, phronesis is the core concern of the humanities. In, perhaps, a sign of our collective devaluing of this work, phronesis is the least tied to our modern understanding of knowledge and thus is the most difficult to explain.
Often translated as “practical wisdom,” phronesis is inherently action oriented. One of Aristotle’s core virtues, it is the ability to determine the right action in any context and to unquestioningly follow through on that action. It is about being virtuous, but perhaps more subtly about knowing what is virtuous.
McEvilley, who argues for the translation “mindfulness,” quotes Epicurus in describing phronesis:
“[Phronesis] patiently searches out the motives for every act of grasping and fleeing, and banishes those beliefs through which the greatest tumult enters the mind.”
While the word defies a simple English translation, you can see, perhaps, why I associate phronesis with the humanities: it is the knowledge of critical analysis, of situating ethical judgements in the context in which they occur. It is the work of perpetually asking the question, what should be done?
When Thomas More, Erasmus, and others began arguing for humanist approaches which centered human — as opposed to godly — agency as a force in the world, this naturally drew on earlier conceptions of phronesis.
Now, these categories of knowledge aren’t perfectly split in the academy. Tenure track pressures of publishing, service, and teaching encourage a certain techné of their own — though someone considered brilliant in their field can often get away with a poor demonstration of techné. Additionally, there have been some rather spirited discussions about a technical/humanist divide in philosophy, though here even the technical side — epitomized by metaphysics and epistemology — may still be more phronesis than techné. And Flyvbjerg has argued that trying to be episteme is the largest failing of modern social science — that to have meaning, social science must strive to be less like physics and more concerned with the phronetic questions of how to build the Good Society.
Yet, despite various intra-disciplinary battles, these types of knowledge have become largely separated from each other — and that divide is punctuated by a clear hierarchy of value. The war between episteme and phronesis is especially fraught, as episteme is broadly valued as a public good while phronesis is devalued as an indulgent exercise in self-reflection.
This divide is particularly striking in our so-called “post-truth” world, which nevertheless maintains a strong positivist mentality. While you may be surprised to learn that we’re living in a “positivist” era, in the philosophical sense the term roughly refers to the assertion that some things are demonstrably factual and everything else is a matter of opinion.
This is, arguably, a core scientific tenet — if you can measure something, if you can systematically test different hypotheses, you can demonstrate whether something is factually true or not. If you cannot do these things, you can make no rational argument as to the truth or validity of a given claim.
The positivist view implicitly devalues humanistic work. Anything that cannot be proven is subjective, and anything that is subjective is hardly worth rigorous study. Anybody may have a mere opinion.
Yet the positivist claim also overlooks a core humanist tenet — everything we observe, measure, and interpret is done through the lens of human experience. Even in the hardest of the hard sciences we are biased by what questions we think to ask, what funding we can get to pursue those questions, what methods we choose to apply, what works we choose to cite, what interpretations we find in our results, and whose scholarship we choose to value. Science is, fundamentally, a human endeavor.
If anything, the increasing tendency of “factual” things to be interpreted as “opinion” should only serve to emphasize the permeability of the positivist line. We cannot maintain a positivist system if we cannot even agree on what qualifies as factual.
Perhaps the easy way out of this bind is to belittle those who do not see the facts that we do, who, as far as we can tell, refuse to be properly thoughtful and educated. The challenge here, then, is differentiating a noble heretic who fights for Truth against a biased system from a troublesome troll who maliciously spreads misinformation cloaked in “factual” arguments. History has seen no shortage of either type of agent, and both are greeted with equal scorn in their time.
The truth of tomorrow is not necessarily the truth of today.
That’s not to descend into total relativism and claim there is no such thing as truth and that all of reality is merely a matter of opinion. Rather, I would argue, truly good science requires remaining constantly skeptical. A good scientist interrogates the biases of their data, methods, and fundamental way of thinking — and that inherently means being skeptical of our individual and collective ability to accurately determine what is “true.”
This is not at all easy to do — we are each products of and contributors to our collective social context, and it is arguably impossible to entirely separate ourselves from that context. Given that this challenge comes at the bottom of an ever-growing to-do list of practical career pressures, the whole task becomes even more daunting.
So while we each ought to seek to be humanists in our scientific endeavors, perhaps we’d do well to be glad that there are whole departments of scholars engaging seriously in this difficult work; questioning which parts of our received reality are deeply true and which parts are warping our precious scientific perceptions.
We cannot continue to pretend that science can be separated from the human experience, that it is somehow immune from the biases and fallibility of the humans who conduct it. We must recognize that the humanities are a public good and, indeed, provide the very foundation which allows for our work.
So when I argue that science needs humanism, this is what I mean: all scientific endeavors are prone to error, and we cannot fully, scientifically, assess their truth-claims without first understanding the possible scope and implications of those errors. While we might prefer to separate the order of the scientific process from the messiness of human systems, aiming to do so is fundamentally bad science; it discards too many relevant variables. Good science requires self-skepticism; it requires an awareness of what is missing as much as it requires an awareness of what is there. Science needs phronesis; it needs to examine what is right as much as it needs to examine what is true.
In July of 2013, I started writing publicly every (work) day. Then, after four and a half years, in November 2017, I stopped.
There are a lot of reasons why I started writing — and a lot of reasons why I let the habit go.
I was re-finding myself in 2013. After my father passed away in early 2012, I was absolutely shattered. I spent at least a year and a half just wandering the void; existing in the world without really living in it.
When at last I was ready to start thinking about picking up the pieces, I found I had become a very different person than I had been before. More caring, more compassionate, more acutely aware of the silent struggles we’ve all gotten so good at hiding from the world. And I felt more strongly than ever the need to put my own voice, skills, and energy to work towards the ongoing task of repairing the world.
This was a quandary for me. I’d long been committed to social justice; to doing what I could to make the world just a little bit better than I found it. But, at the same time, I had come to deeply internalize the belief which was consistently reinforced through so many of my experiences in the world: my voice didn’t matter. I didn’t matter.
I had aimed to put my time and energy towards good work simply because that was the right thing to do. It was laughable to think that anything I could do would ever amount to anything or that anyone would ever care for my opinion or insight.
It’s the sort of paradox which only makes sense within the bounded logic of one’s own head. I’d worked hard to elevate the agency of others; I’d argued that the voices and perspectives of all people are critical to building a more just world; I’d put so much of myself into advocating for these ideals — but I had never really believed them. How could I, if I didn’t believe in myself?
In my first post back in 2013, I described this challenge in relation to my plan to start writing publicly:
My struggle with blogging is that…in many ways, it requires a lot of ego. Well, I would say ego, but another may generously say “agency.” It requires standing up and saying, “I do have something to say, and I believe it’s worth your time to listen.” And that can be a lot to muster.
I see this challenge more broadly in the idea of being an active citizen, of truly engaging in public life…Even in smaller acts of engaging. To actively contribute to your community means believing that you have something to actively contribute.
Over the years, this sense of egoism continued to be the hardest struggle for me. Finding time and topics weren’t always easy, but those paled in comparison to the more fundamental challenge of constantly putting myself out there. Of acting like I had something worth saying even when I felt as though I were nothing at all.
But it was a good habit. It made me a better writer. It made me a better thinker. And doing all this writing publicly helped me find my voice. It helped me discover who I am and showed me that, indeed — words do matter. Much to my surprise, I found that sometimes even my poor, broken words could help.
So I kept writing.
As foolish, egotistical, and self-important as it seemed. I kept writing.
But things changed over the years. I got busier with graduate school, I had other writing tasks I needed to prioritize, I needed to pass my qualifying exams and propose my dissertation. I have no end to my list of practical excuses.
There are reasons and there are reasons, though. Fundamentally, I was scared. I started meeting strangers who would seek me out to tell me how much they loved the way I write; who would tell me that I had somehow managed to put into words something they had been thinking or feeling. I started getting more pushback on every sloppy mistake I made as I rushed to fulfill my self-imposed quota of posting every single day. I started to more deeply appreciate the consequences of my words as actions — while it still seems impossible to imagine, I found that my voice did have power.
As I grappled with these issues in mid-2017, I reflected:
In some ways, public writing feels even more egotistical than before. Being a doctoral student raises the stakes of self-importance; I’m declaring a value for my contributions through my occupation before I even open my mouth. Doctoral students may be nobody in the fiefdoms of academia; but it remains a fairly fancy calling to the rest of the world. I can hardly consider myself to be a nobody while laying claim to the capacity to someday contribute to human knowledge.
This was a lot to take in. How could my voice matter? In what universe would people begin by assuming I was possessed by a comfortable air of self-confidence? What did it mean for me — a person holding so much privilege in this world — to be taking up space?
My writing started to feel like less of an exercise of civic duty and self-discovery and more of a venue for self-aggrandizement.
At the same time, I was becoming less impressed with the quality of my writing overall. I’d gotten tired, lazy — relying on stale tropes of self-righteousness without thorough thought or depth. This tone was popular in some circles, but it did little to advance the sort of dialogue I want to pursue. It didn’t reflect the sort of writer, scholar, or person I wanted to be.
So I stopped.
I’d once needed to find myself by writing in public; then I needed to find myself by reflecting in private.
But I’ve missed this. I’ve missed the intentional thought that comes from public writing. I’ve missed the ongoing learning I’ve gained through on- and offline conversations about my posts. I’ve missed hearing thoughtful criticism of my views and my writing — I remain grateful to every person who has trusted me enough to tell me when they think I’m wrong or when I could have expressed myself better. I’ve missed making time to think about things beyond what’s required of me.
Over the last few weeks, I’ve continually caught myself “writing in my head” as I used to do all the time. I’m not quite sure where that voice went in the fervor and anxiety of the past year, but I’ve started to realize that I need and value this space. Something has changed in me once again, it seems.
All of this is to say: I’m back. I won’t be posting every day, but I will be posting regularly — at least once a week.
I will write about science, math, social justice, and democratic theory. I will write about mental health and graduate school and random facts I picked up somewhere. I will write about whatever I need to say that week.
As always, I invite your thoughtful reflections as I continue this journey. We will certainly not always agree, but I will value your perspectives and consider your arguments seriously and genuinely.
They say that democracy is dead — that people can’t talk about anything of import any more. But I don’t believe that. I refuse to believe that. Democracy’s not dead — it’s only resting.
I was very honored to receive Northeastern’s Outstanding Graduate Student Award in the area of community service. As part of that award, I was asked to write a statement of my personal philosophy regarding service. To be honest, I found the prompt challenging as I don’t really consider most of my efforts “service” in the traditional sense — I’d be more inclined towards Harry Boyte’s term of public work — nevertheless, here is what I wrote:
This world is what we make it. Our societies, our monuments, our every day encounters – these are the product of human energy and interaction. In a very real sense, we build this world; we shape it in ways both great and terrible. As individuals, we are limited and finite, but together our collective capacity spans the long arc of human civilization. With this awesome power weighing upon our collective shoulders, we are left with a seeming simple but important question:
What should we do?
The brevity of this question belies its depth; each word has an important role to play:
What: What are the specific actions to be taken?
Should: What are the right actions and what are the right criteria for determining those actions?
We: Literally you and I. The humans writing and reading this letter. We each have a role to play in shaping the world around us. Our voices, perspectives, and actions matter. And of course:
Do: It is not enough to determine the appropriate actions, we must actually take them.
I like this question because it gives agency to both individuals and the communities to which they belong. As members of a society we should neither act with blind individualism – doing whatever we want whenever we want it – nor should we completely withdraw from public life, abdicating our responsibility to add our unique ideas and perspectives to the collective challenge of tackling complicated problems.
We each have a responsibility to share our voices; to roll up our sleeves and engage in the work; but perhaps even more importantly – we have a responsibility to ensure that the voices of those around us are heard; to build spaces where everyone can participate.
This duality is important because as individuals we play different roles in different contexts. As a first-generation-to-college woman in a STEM discipline, I’ve spent much of my life being told that my voice didn’t matter, that I didn’t matter. Yet, as a highly educated white person, I still benefit from a lot of power and privilege. All of those identities are integral to who I am, and they each come into play in different settings – sometimes I need to be loud and vocal, and sometimes I’d do better to let others speak. At the end of the day, it isn’t about me – it’s about the strength of our collective endeavors.
This essay is supposed to present my personal philosophy of service. As you may have gathered by now, I have a hard time with that prompt. To me, the word “service” invokes images of parachuting in for short-term efforts – ideally under the auspices of someone from the community who actually knows what’s needed. There is nothing wrong with that type of service; it’s important work if done well. But I prefer Harry Boyte’s term “public work.” We are each members of many, overlapping communities and our collective work is needed to build and maintain those communities. It is “service” insofar as it is service to the collective good, but it is work – it is the time, energy, and thought that goes into co-creating our shared world.
My personal philosophy, then, is to perpetually ask, answer, and act on the question of “what should we do?” I put my energy towards building relationships of mutual trust, I put my time towards the collective work we agree must be done, and I put my financial resources towards causes I don’t personally have the expertise to support. I do my best to be a good citizen of my many communities – to listen, learn from, and support others while they listen, learn from, and support me. I try to build spaces where everyone knows they are welcome, where conflict doesn’t fester, and where everyone accepts each other’s good intentions. To engage to the best of my ability in the unglamorous, every day tasks of associated life.
John Dewey writes that we must all “learn to be human” – that we must each develop “an effective sense of being an individually distinctive member of a community; one who understands and appreciates its beliefs, desires and methods, and who contributes to a further conversion of organic powers into human resources and values.” I am continually learning to be human. I just want to get good things done.
Listening to an interview yesterday with Susan Stryker, Associate Professor of Gender & Women’s Studies and author of the (recently updated) book Transgender History, I was struck by the core of her argument:
Transgender people have always been around, it’s just that now they are more visible than they used to be.
And they are visible – just last week, five openly transgender candidates won local or state elections. But such recent visibility shouldn’t be confused with “newness.” This isn’t some hot modern trend, but an intrinsic element of human nature that can be traced back throughout western civilization.
And perhaps paradoxically, at a time when advocacy for gay and lesbian rights has come so far, when same-sex marriage is legal nationwide – transphobia and transmisogyny are on the rise.
Stryker argues that this is the result of visibility – being out has serious costs in a world that prefers you stay hidden.
It’s a double-bind, really – there is well documented evidence that staying closeted results in real psychological and physical damage, yet the costs of being open – individually and collectively – are high.
This week is Transgender Awareness Week, an opportunity, as GLAAD says, to “raise the visibility of transgender and gender non-conforming people, and address the issues the community faces.”
Yet, it’s no accident that this week culminates with Transgender Day of Remembrance – a day to recognize and mourn those who have lost their lives to anti-transgender violence.
Visibility has its costs.
And that, perhaps, is what makes some of the critiques of the transgender community seem so laughably strange to me. Transgender people are harassed, harmed, and go through a whole lot of difficult stuff in the process of becoming themselves. Why on earth would anyone put themselves through that if the costs of remaining hidden weren’t higher than the costs of being seen?
Visibility has its costs, yes, but it’s also critically important.
It’s important for individuals because of the real harm caused by staying hidden, and it’s important for communities because this is how things change. Because as long as the norm continues to go unchallenged, more people will have to remain hidden; more harm will be done.
I am so impressed by the work of my transgender brothers and sisters. I don’t know where they find the strength to engage in this difficult work, to face such tremendous hate, every day.
Transgender Awareness Week is an opportunity for the transgender community to be visible, yes, but it’s also an opportunity for all of us cisgender allies to ask ourselves, seriously and critically, what we have done to make a difference. What have we done to elevate the voices of transgender people, and what have we done to lower the cost of visibility, to educate and inform ourselves and those around us?
It sounds appalling, and it is appalling, though perhaps not for the reasons one might think.
First, some details on the poll itself: it was fielded by JMC Analytics, a firm given a ‘C’ rating by FiveThirtyEight, for what that’s worth. It was a landline poll with a 4.1% margin of error.
The question alluded to in the lede read: “Given the allegations that have come out about Roy Moore’s alleged sexual misconduct against four underage women, are you more or less likely to support him as a result of these allegations?”
Among all respondents, 29% responded that they were more likely to vote for Moore, a number which rises to 37% when considering the responses of self-identified evangelicals. (Incidentally, 28% of evangelicals said the allegations made them less likely to vote for Moore.)
It’s further notable that there is no gender variation in response to this question: 28% of men and 30% of women report being more likely to vote for Moore, 39% of men and 37% of women report being less likely, and 33% of men and 34% of women say it makes no difference.
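As a rough sanity check on that 4.1% figure: for a simple random sample, the conventional 95% margin of error is about 1.96·√(p(1−p)/n), with p = 0.5 as the worst case. The poll’s sample size isn’t quoted here, so the sketch below back-solves an illustrative n from the reported margin — purely an assumption on my part, and real landline polls apply weighting, so this is only an approximation.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a simple random sample of size n.

    Uses the worst-case proportion p = 0.5, as pollsters conventionally report.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Back-solve the sample size implied by the reported 4.1% margin (illustrative):
n_implied = round((1.96 * 0.5 / 0.041) ** 2)
print(n_implied)                       # 571 respondents
print(round(margin_of_error(575), 3))  # 0.041, i.e. about 4.1%
```

Worth noting: subgroup splits (evangelicals, men vs. women) draw on smaller samples with correspondingly larger margins, so differences of a few points between those splits should be read cautiously.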
The poll asks no questions about why respondents are more or less likely to vote for Moore, though JMC’s results summary gestures towards a possible explanation:
Those more likely to support Moore over the allegations favor him over Jones 84-13%. However, the numbers are just as polarized (81-9% for Jones) among those who say the incident makes them less likely to support Moore.
This poll result isn’t about religion, it’s about partisanship.
That’s not to say those who support Moore are just dirty partisans who need to get their priorities in order. Indeed, if I may venture a guess, I’d imagine that supporters who find themselves on the side of “more likely” interpret this whole thing as a partisan stunt meant to weaken the Republican party.
Importantly, such a view does not intrinsically require doubting victims’ legitimacy – indeed, it might better be interpreted as doubting our collective democratic legitimacy. It’s not a sign of a healthy democracy when people – of all parties – imagine our national politics to have the cloak-and-dagger character of House of Cards.
That makes me sad.
It makes me sad that we’re so caught up in the politics of partisanship that we can’t engage seriously with the real work of democracy; of working together to figure out how we all get by in this messy world.
Headlines and memes which indicate that Republicans or Evangelicals support child molestation do a disservice to democracy.
They make me tired. We have serious work to do.
And some of that serious work stems from the fact that there are terrible people in all parties. Seriously, there are terrible, abusive men everywhere. Everywhere. We can’t pretend that such abuse is relegated to one party, one state, or one denomination.
The first step, as they say, is admitting we have a problem.
With any hope, there is a great reckoning coming, as we finally start listening to women, and believing women, and building a non-patriarchal society where such terrible abuse isn’t built into the fabric.
But as part of that reckoning, we’ll need to figure out how to collectively respond when abuses by celebrities, politicians, and other men of power come to light. Neither steadfast solidarity nor internet-mob panic seems the optimal way to go.
Personally, I’d like to see Alabama Republicans given the opportunity to replace Roy Moore on the ballot. Turns out he’s a terrible person. That happens sometimes. Reschedule the general if you need to. The system should support voter choice, not constrain it. I’d like to see a system which allowed voters to respond to this issue in a thoughtful, responsible way.
After all, while I wish this abuse were an isolated incident, if we’re being honest with ourselves, we’d know – this is going to happen again, and it could happen with a candidate from any party.
Skeptics of the democratic ideal of self governance often point to the almost laughable impracticality of the vision. People are simply bad at being knowledgeable and making well-informed judgements.
Notably, this concern needn’t inherently be a slight. While the most elitist of skeptics will judgmentally decry the dreadful specter of “the masses” for perceived failings of willful ignorance or stupidity, some scholars offer a more nuanced view.
Consider, for example, the post-WWI writing of journalist Walter Lippmann. While his rhetorical flourishes reasonably earned him a reputation as an elitist and a technocrat, the full thread of his argument is much more subtle.
Lippmann – who had been intimately acquainted with propaganda efforts during the war – was notoriously concerned about giving too much power to “the public;” that “uninformed, sporadic mass of men” who will “arrive in the middle of the third act and will leave before the last curtain, having stayed just long enough perhaps to decide who is the hero and who the villain of the piece.”
But despite the colorful imagery, his argument wasn’t that the vast mass of men were too lazy or stupid to be entrusted with the vital task of democracy. Rather, his argument was simply that no single person could ever have the capacity to be all-knowledgeable on all things.
There is just too much.
Reasonably lacking in the time to perfectly master all of human knowledge, every single person is left to make the best decisions they can by drawing heavily from existing knowledge, perceptions, and instincts.
Lippmann, incidentally, coined the word “stereotype” to describe the phenomenon.
As social psychologists will tell you, “stereotyping” is not inherently bad. As beings constantly bombarded by information, we literally couldn’t function if we constantly had to reconstruct our basic understanding of everyday objects and encounters. We couldn’t live without heuristics.
But they can also become problematic if we grow too rooted in our thinking, if we don’t have or take the time to periodically push past our heuristics.
Political polarization is just one example of this. It is too easy, too easy, to heuristically label people who agree with you as “good” and people who disagree with you as “bad.” A mild version of this may be helpful in some cases of electoral politics – knowing that a candidate of party X supports the political platform I generally support is arguably meaningful information. But it most certainly becomes problematic when this heuristic labeling seeps into our everyday life and everyday encounters.
Markus Prior argues that polarization is an outcome of an increasingly efficient media environment. When people aren’t all “accidentally” exposed to the same evening news – as they were when the evening news was literally the only thing on TV – people tend to self-select into separate, biased news spheres.
Perhaps worse, they self-select out of news consumption altogether. After all, there are far more enjoyable things to watch than the constant depressing drudgery of current events.
This causes a perfect storm for polarization – most people are generally uninformed, and when they poke their heads up to get a sense of what’s going on, they make quick judgements inferred from a media outlet specially curated to cater to their existing beliefs.
There’s a reasonable amount of psychological and political literature to reinforce this story, but, I think, we lose something if we forget the Lippmann view.
The problem, Lippmann would argue, is not the stereotypes themselves, it’s the thoughtless and broad application of them which results from not having enough time to do otherwise.
In other words, while the wide variety of media options may lend themselves to polarization, the constant, 24-hour avalanche of news coverage is perhaps a bigger problem. It is literally impossible to keep up, to take it all in and study every issue in a thoughtful, non-biased way.
In the absence of time for such activity, and buried in our own personal pressures of work and life, we adapt as best we can by making quick, vaguely informed decisions motivated largely by our pre-existing beliefs.
It’s not that “the public” can’t be trusted, Lippmann would argue, it’s that we all put too much faith in our own ability to rise above such challenges. It is always “other people” who are politically foolish. We – and the people we agree with – are, of course, more enlightened.
As if anyone has the ability to keep up with all the news.