Monthly Archives: September 2017

Robot Humor

Text processing algorithms are notoriously bad at handling humor. The subtle, contradictory humor of irony and sarcasm can be particularly hard to automatically detect.

If, for example, I wrote, “Sharknado 2 is my favorite movie,” an algorithm would most likely take that statement at face value. It would find the word “favorite” to be highly correlated with positive sentiment. Along with some simple parsing, it might then reasonably infer that I was making a positive statement about an entity of type “movie” named “Sharknado 2.”
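That face-value reading is easy to sketch as a toy lexicon-based sentiment scorer. The word lists below are invented for illustration, not drawn from any real sentiment lexicon, but they show why "favorite" drags the score positive regardless of intent:

```python
# Toy lexicon-based sentiment scorer: irony-blind by design.
# The word lists are invented for this example.
POSITIVE = {"favorite", "great", "love", "best"}
NEGATIVE = {"terrible", "worst", "hate", "awful"}

def naive_sentiment(text: str) -> int:
    """Count positive words minus negative words in a text."""
    tokens = text.lower().replace(".", "").split()
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

print(naive_sentiment("Sharknado 2 is my favorite movie"))  # positive score
print(naive_sentiment("Sharknado 2 is a terrible movie"))   # negative score
```

Nothing in the first sentence's surface form distinguishes sincerity from sarcasm, so the scorer can only ever return the literal reading.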

Yet, if I were indeed to write “Sharknado 2 is my favorite movie,” you, a human reader, might think I meant the opposite. Perhaps I mean “Sharknado 2 is a terrible movie,” or, more generously, “Sharknado 2 is my favorite movie only insofar as it is so terrible that it’s enjoyably bad.”

This broader meaning is not indicated anywhere in the text, yet a human might infer it from the mere fact that…why would Sharknado 2 be my favorite movie?

There was nothing deeply humorous in that toy example, but perhaps you can see the root of the problem.

Definitionally, irony means expressing meaning “using language that normally signifies the opposite,” making it a linguistic maneuver which is fundamentally difficult to operationalize. A priori, how can you tell when I’m being serious and when I’m being ironic?

Humans are reasonably good at this task – though, suffering from resting snark voice myself, I do often feel the need to clarify when I’m not being ironic.

Algorithms, on the other hand, perform poorly on this task. They just can’t tell the difference.

This is an active area of natural language processing research, and progress is being made. Yet it seems a shame for computers to be missing out on so much humor.

I feel strongly that, should the robot uprising come, I’d like our new overlords to appreciate humor.

Something would be lost in a world without sarcasm.


The Nature of Failure

I had the pleasure of attending a talk today by Dashun Wang, Associate Professor at Northwestern’s Kellogg School of Management. While one of our lab groups is currently studying the ‘science of success,’ Wang – a former member of that lab – is studying the nature of failure.

Failure, Wang argued, is much more ubiquitous than success. Indeed, it is a “topic of the people.”

It is certainly a topic those of us in academia can relate to. While people in all fields experience failure, it can perhaps more properly be considered as a way of life in academia. The chances of an average doctoral student navigating the long and winding road to success in academia are smaller than anyone wants to think about. There aren’t enough jobs, there’s not enough funding, and the work is really, really hard. More than that, it’s ineffable: how do you know when you’re ‘generating knowledge’? What does that look like on an average day?

Mostly it looks like failure.

It looks like not knowing things, not understanding things, and not getting funding for the things about which you care most. It looks like debugging for hours and it looks like banging your head against the wall.

It looks like a lot of rejections and a lot of revise & resubmits.

Those successful in academia – subject, as they are, to survivorship bias – often advise becoming comfortable with the feeling of failure. With every paper, with every grant, simply assume failure. It is even becoming common for faculty to share their personal CV of Failures as a way to normalize the ubiquity of failure in academia.

But, Wang argues, failure is the key to success.

I suppose that’s a good thing, since, as he also points out, “in life you don’t fail once, you fail repeatedly.”

Failure is a thinning process, no doubt – many people who experience significant failure never come back from it. But a series of failures is no guarantee of future failure, either.

People who stick with it, who use failures as an opportunity to improve, and who learn – not just from their most immediate failure but from their history of failure – can, with time, luck, and probably more failures, eventually succeed.


So Long and Thanks for All the Fish (Or, A Tribute to Cassini)

At 7:55 EDT this morning, the Cassini spacecraft sent its final message to Earth before plunging into Saturn’s atmosphere. Reaching speeds over 77,200 miles (124,200 kilometers) per hour, Cassini experienced temperatures hotter than the surface of the sun, causing the spacecraft to char and break apart, its elemental components ultimately diluting in the atmosphere of the gas giant. As NASA poetically put it, Cassini is now a part of the planet it studied.

It sent data back to Earth right up until the end.

It may seem strange that the spacecraft was slated for destruction while previous missions, such as Voyagers 1 and 2, continue, with both probes still heading deeper into space after 40 years of exploration. Yet no such fate was appropriate for Cassini.

Among the most remarkable findings of the Cassini mission came from Saturn’s moons: icy Enceladus was found to have a global ocean and geyser-like jets spewing water vapor. Saturn’s largest moon, Titan, was discovered to have seas of liquid methane and an ocean of water and ammonia deep below the surface. Both moons provide promising avenues of research for that most thrilling and elusive topic: life itself.

Allowing Cassini to live out the rest of its life naturally, transmitting data well past when it had depleted its fuel reserves, would have put both those moons at risk. Cassini had to devote its final fuel to its own safe disposal.

It seems a strange thing to find the destruction of a spacecraft so moving. Cassini was a machine: it felt nothing, desired nothing. It undertook an impressively successful mission and now, nearly 20 years after its launch from Cape Canaveral, it was time for that mission to come to an end.

Yet don’t we all wish to live and die so nobly? To make profound contributions through our efforts and to gracefully exit in a poetic blaze at the appropriate time?

It is beautiful to think of Cassini – a spacecraft I have loved and followed for over a decade – reduced to dust and becoming one with the planet to which it devoted much of its existence; and doing so in service to the remarkable moons with which it was intimately and uniquely acquainted.

If we are to believe Camus, all our fates are absurd; the workman toils every day at the same tasks. Yet, in itself, this fact need not be tragic.

Truly, there is no meaning in the destruction of a spacecraft which has served well its purpose. Yet it is in these moments – when we find beauty and profoundness in the absurd; when we ascribe nobility to practical acts which mean nothing – these are the moments of consciousness. When we experience wonder generated from the mere act of living. The struggle itself is enough to fill a man’s heart.

So thank you, Cassini, for your decades of service. Thank you for the rich array of data you have shared with us, and thank you to the many, many people who made this mission possible. Because of you, I – and the rest of humanity – have seen and learned things we would have never experienced otherwise. There can be no greater gift than that.



Normalizing the Non-Standard

I recently read Eisenstein’s excellent What to do about bad language on the internet, which explores the challenge of using Natural Language Processing on “bad” – e.g., non-standard – text.

I take Eisenstein’s use of the normative word “bad” here somewhat ironically. He argues that researchers dislike non-standard text because it complicates NLP analysis, but it is only “bad” in this narrow sense. Furthermore, while the effort required to analyze such text may be frustrating, efforts to normalize these texts are potentially worse.

It has been well documented that NLP approaches trained on formal texts, such as the Wall Street Journal, perform poorly when applied to less formal texts, such as Twitter data. Intuitively this makes sense: most people don’t write like the Wall Street Journal on Twitter.

Importantly, Eisenstein quickly does away with common explanations for the prevalence of poor language on Twitter. Citing Drouin and Davis (2009), he notes that there are no significant differences in the literacy rates of users who do or do not use non-standard language. Further studies also dispel the notions that users are too lazy to type correctly, that Twitter’s character limit forces unnatural contractions, and that phone auto-correct has run out of control.

In short, most users employ non-standard language because they want to. Their grammar and word choice intentionally convey meaning.

In normalizing this text, then, in moving it towards the unified standards on which NLP classifiers are trained, researchers explicitly discard important linguistic information. Importantly, this approach has implications not only for research, but for language itself. As Eisenstein argues:

By developing software that works best for standard linguistic forms, we throw the weight of language technology behind those forms, and against variants that are preferred by disempowered groups. …It strips individuals of any agency in using language as a resource to create and shape their identity.

This concern is reminiscent of James C. Scott’s Seeing Like a State, which raises deep concerns about the power of a centralized, administrative state. In order to function effectively and efficiently, an administrative state needs to be able to standardize certain things – weights and measures, property norms, names, and language all have implications for taxation and distribution of resources. As Scott argues, this tendency towards standardization isn’t inherently bad, but it is deeply dangerous – especially when combined with things like a weak civil society and a powerful authoritarian state.

Scott argues that state imposition of a single, official language is “one of the most powerful state simplifications,” which lays the groundwork for additional normalization. The state process of normalizing language, Scott writes, “should probably be viewed, as Eugen Weber suggests in the case of France, as one of domestic colonization in which various foreign provinces (such as Brittany and Occitanie) are linguistically subdued and culturally incorporated. …The implicit logic of the move was to define a hierarchy of cultures, relegating local languages and their regional cultures to, at best, a quaint provincialism.”

This is a bold claim, yet not entirely unfounded.

While there is further work to be done in this area, there is good reason to think that the “normalization” of language disproportionately affects people who are outside the norm along other social dimensions. These marginalized communities – marginalized, incidentally, because they fall outside whatever is defined as the norm – develop their own linguistic styles. Those linguistic styles are then in turn disparaged and even erased for falling outside the norm.

Perhaps one of the best-documented examples of this is Su Lin Blodgett and Brendan O’Connor’s study on racial disparity in natural language processing. As Eisenstein points out, it is trivially impossible for Twitter to represent a coherent linguistic domain – users around the globe use Twitter in numerous languages.

The implicit pre-processing step, then, before even normalizing “bad” text to be in line with dominant norms, is to restrict analysis to English-language text. Blodgett and O’Connor find that tweets from African-American users are over-represented among the tweets thrown out for being non-English.

Dealing with non-standard text is not easy. Dealing with a living language that can morph in a matter of days or even hours (#covfefe) is not easy. There’s no getting around the fact that researchers will have to make difficult calls in how to process this information and how to appropriately manage dimensionality reduction.

But the worst thing we can do is to pretend that it is not a matter of concern; to begin our work by thoughtlessly filtering and normalizing without giving significant thought to what we’re discarding and what that discarded data represents.


Graduate Workers Need a Union

Last night I attended a great panel hosted by the Graduate Employees of Northeastern University (GENU), a union of research and teaching assistants. The union is currently working towards holding its first election and becoming certified with the National Labor Relations Board (NLRB), an independent federal agency which protects employees’ rights to organize and oversees related laws.

Those of you immersed in academic life may have noticed a recent increase in organizing efforts among graduate workers at many institutions – this is due to a 2016 ruling by the NLRB that “student assistants working at private colleges and universities are statutory employees covered by the National Labor Relations Act.”

In other words, graduate employees have the right to organize.

Those not immersed in academic life, or less familiar with graduate education, might find this somewhat surprising. As someone said to me when I told them about this panel, “wait, you’re an employee? Aren’t you a student?”

Well, yes. I am an employee and a student. These two identities and lives are complexly intertwined and can be difficult to distinguish – when am I a worker and when am I a learner?

Often I am both simultaneously.

But I think about the perspective of the student program staff at the college where I worked for several years before starting my PhD. Collectively, we made a lot of student employment decisions – hiring student workers to help around the office and selecting paid student fellows to work at local organizations. Those students – primarily undergraduates – were workers, too, but every decision we made was centered around the question: how will this improve the student’s education?

That is, their student identity was always centered. Work expectations always deferred to course expectations. We looked to hire students who were prepared to learn a lot from their experiences, and we created structured mentorship and other activities to ensure student learning was properly supported and enhanced. The work was good work which needed to be done, but the primary purpose of these opportunities was always to create space for students to learn.

Graduate student work is…a bit more complicated. I have been fortunate in my own graduate experience, but I couldn’t even begin to enumerate the horror stories I’ve heard from other graduate employees whose work is most definitely work.

Even assuming good faculty members and good departments, the entire structure of American higher education is designed to exploit graduate students as cheap labor. Their labor may serve to enhance the undergraduate experience, but is rarely designed to enhance their own.

This problem is exacerbated by the fact that graduate student workers have virtually no power, while the faculty, department, and administration they serve have a great deal of power over them. For graduate workers, simply “getting another job” – a difficult undertaking in any vocation – is often not a possibility. International students are particularly vulnerable, as their visa status could be taken away in a heartbeat.

As several of the panelists mentioned last night, many graduate students simply try to “keep their head down” in the face of this power imbalance. Stay quiet, don’t complain, and do your best to keep focused on the research you’re passionate about.

This is a reasonable coping response, but the reality is that silence never fixes a problem, and sometimes trouble will find you no matter how hard you try to avoid it.

Nearly all of the panelists had a story of someone who was unfairly targeted for termination, who was entirely taken by surprise when a department in which they “had no problems” suddenly had a serious problem with them.

Without a union these become the isolated stories of isolated individuals. They are personal problems to be worked out and ignored at the local level. In the absence of clear rules and expectations, they will happen again, and again, and again – in good departments and bad – with very little recourse for the individuals involved and with no resulting structural change to prevent it from happening again.

Unions build collective power. They build the ability of a people to come together, to share their ideas and concerns, and to work together with a common voice in order to achieve mutually-agreed upon outcomes.

As one of the panelists from a faculty union described, forming a union was a clarifying experience. It brought the community together and generated a clear, shared understanding of common problems and collective solutions. It created venues for enabling structural and policy changes that had been deeply needed for years.

Perhaps most fundamentally, it is important to understand that a union is not some abstract, outside thing. It is a living thing. It is the workers. It is a framework which allows us to work together, learn together, and build together. It is formed from our voices in order to address our concerns and to protect our interests.

We are the union.

And graduate student workers need a union.

The live-stream of the event, which focused specifically on STEM workers, can be seen here. Particularly for those at Northeastern, please check the GENU-UAW website, Facebook page, and Twitter.


Social and Algorithmic Bias

A commonly lamented problem in machine learning is that algorithms are biased. This bias can come from different sources and be expressed in different ways, sometimes benignly and sometimes dramatically.

I don’t disagree that there is bias in these algorithms, but I’m inclined to argue that in some senses, this is a feature rather than a bug. That is: all methodological choices are biased, all data are biased, and all models are wrong, strictly speaking. The problem of bias in research is not new, and the current wave of despair is simply a reframing of this problem with automated approaches as the culprit.

To be clear, there are serious cases in which algorithmic biases have led to deeply problematic outcomes. For example, when a proprietary, black-box algorithm regularly suggests stricter sentencing for black defendants and those suggestions are taken to be unbiased, informed wisdom – that is not something to be taken lightly.

But what I appreciate about the bias of algorithmic methods is the visibility of their bias; that is – it gives us a starting point for questioning, and hopefully addressing, the inherent social biases. Biases that we might otherwise be blind to, given our own personal embedding in the social context.

After all, strictly speaking, an algorithm isn’t biased; its human users are. Humans choose what information becomes recorded data and they choose which data to feed into an algorithm. Fundamentally, humans – both specific researchers and through the broader social context – chose what counts as information.

As urban planner Bent Flyvbjerg writes: Power is knowledge. Those with power not only hold the potential for censorship, but they play a critical role in determining what counts as knowledge. In his ethnographic work in rural Appalachia, John Gaventa similarly argues that a society’s power dynamics become so deeply entrenched that the people embedded in that society no longer recognize these power dynamics at all. They take for granted a shared version of fact and reality which is far from the unbiased Truth we might hope for – rather it is a reality shaped by the role of power itself.

In some ways, algorithmic methods may exacerbate this problem – as algorithmic bias is applied to documents resulting from social bias – but a skepticism of automated approaches opens the door to deeper conversations about biases of all forms.

Ted Underwood argues that computational algorithms need to be fundamentally understood as tools of philosophical discourse, as “a way of reasoning.” These algorithms – even something as seemingly benign as rank-ordered search results – deeply shape what information is available and how it is perceived.

I’m inclined to agree with Underwood’s sentiment, but to expand his argument broadly to a diverse set of research methods. Good scientists question their own biases and they question the biases in their methods – whether those methods are computational or not. All methods have bias. All data are biased.

Automated methods, with their black-box aesthetic and hopefully well-documented Git pages, may make it easier to do bad science, but for good scientists, they convincingly raise the specter of bias, implicit and explicit, in methods and data.

And those are concerns all researchers should be thinking about.



Bag of Words

A common technique in natural language processing involves treating a text as a bag of words. That is, rather than restrict analysis to preserving the order in which words appear, these automated approaches begin by simply examining words and word frequencies. In this sense, the document is reduced from a well-ordered, structured object to a metaphorical bag of words from which order has been discarded.
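The reduction is simple to make concrete. A minimal sketch, using Python’s standard library, shows how thoroughly order is discarded:

```python
from collections import Counter

# Reduce a text to a bag of words: tokenize, then keep only word frequencies.
def bag_of_words(text: str) -> Counter:
    return Counter(text.lower().split())

# Because order is discarded, sentences with very different meanings
# can yield identical bags.
a = bag_of_words("the dog bit the man")
b = bag_of_words("the man bit the dog")
print(a == b)  # True: indistinguishable as bags of words
```

The two sentences describe opposite events, yet once reduced to bags they are the same object – exactly the information loss the approach accepts.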

Numerous studies have found the bag of words approach to be sufficient for most tasks, yet this finding is somewhat surprising – even shocking, as Grimmer and Stewart note – given the reduction of information represented by this act.

Other pre-processing steps for dimensionality reduction seem intuitively less dramatic. Removing stop words like “the” and “a” seems a reasonable way of focusing on core content words without getting bogged down in the details of grammar. Lemmatization, which reduces words to a common base form, also makes sense – assuming it’s done correctly. Most of the time, it doesn’t matter much whether I say “community” or “communities.”
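These two steps can be sketched in a few lines. The stopword list and lemma table below are tiny hand-written stand-ins for illustration; real pipelines use far larger lexicons (e.g., WordNet-based lemmatizers):

```python
# Sketch of stop word removal and lemmatization. The stopword list and
# lemma table are invented, minimal stand-ins for real lexicons.
STOPWORDS = {"the", "a", "an", "of", "and", "is", "are"}
LEMMAS = {"communities": "community", "running": "run"}

def preprocess(text: str) -> list[str]:
    tokens = text.lower().split()
    tokens = [t for t in tokens if t not in STOPWORDS]  # drop stop words
    return [LEMMAS.get(t, t) for t in tokens]           # map to base forms

print(preprocess("The communities are running a program"))
# ['community', 'run', 'program']
```

Notice how little of the sentence’s content is lost here compared to discarding word order entirely.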

But reducing a text – which presumably has been well-written and carefully follows the rules of its language’s grammar – seems surprisingly profane. Do you lose so little when taking Shakespeare or Homer as a bag of words? Even the suggestion implies a disservice to the poetry of language. Word order is important.

Why, then, is a bag of words approach sufficient for so many tasks?

One possible explanation is that computers and humans process information differently. For a human reading or hearing a sentence, word order helps them predict what is to come next. It helps them process and make sense of what they are hearing as they are hearing it. To make sense of this complex input, human brains need this structure.

Computers may have other shortcomings, but they don’t feel the anxious need to understand input and context as it is received. Perhaps bag of words works because – while word order is crucial for the human brain – it provides unnecessary detail for the processing style of a machine.

I suspect there is truth in that explanation, but I find it unsatisfactory. It implies that poetry and beauty are relevant to the human mind alone – that these are artifacts of processing rather than inherent features of a text.

I prefer to take a different approach: the fact that bag of words models work actually emphasizes the flexibility and beauty of language. It highlights the deep meaning embedded in the words themselves and illustrates just how much we communicate when we communicate.

Linguistic philosophers often marvel that we can manage to communicate at all – the words we exchange may not mean the same thing to me as they do to you. In fact, they almost certainly do not.

In this sense, language is an architectural wonder; a true feat of achievement. We convey so much with subtle meanings of word choice, order, and grammatical flourishes. And somehow through the cacophony of this great symphony – which we all experience uniquely – we manage to schedule meetings, build relationships, and think critically together.

Much is lost in translating the linguistic signal between me and you. We miss each other’s context and reinterpret the subtle flavors of each word. We can hear a whole lecture without truly understanding, even if we try.

And that, I think, is why the bag of words approach works. Linguistic signals are rich, they are fiercely high-dimensional and full of more information than any person can process.

Do we lose something when we reduce dimensionality? When we discard word order and treat a text as a bag of words?

Of course.

But that isn’t an indication of the gaudiness of language; rather it is a tribute to its profound persistence.


Opinion Change

While there are differing views on whether or not a person’s opinions are likely to change, there’s a general sense of “opinion change” as some clear and discrete thing: one moment I think X, and the next moment I think Y…or perhaps, more conservatively, not X.

Coming to opinion change from a deliberation background, I’m not at all convinced that this is the right framework to be thinking in.

Perhaps in a debate the goal is to move your opponent from one discrete position to another, or to convincingly argue that your discrete position is better than another. But in deliberation – which very well may include aspects of debate – the very notion of “opinion change” seems misplaced.

I think of deliberation more as a process of collaborative storytelling: you don’t know the ending a priori. You create the ending, collectively and uniquely. A different group would tell a different story.

As the story unfolds, you may shift your voice and alter your contributions, but the X -> Y model of “opinion change” doesn’t seem to fit at all. 

The challenge, perhaps, is that standard conceptions of opinion change take it as a zero-sum game. One person wins and another person loses. Or no one changes their mind and the whole conversation was a waste.

But deliberation isn’t like that. It is creative and generative. It is a collective endeavor through which ideas are born, not a competitive setting with winners and losers. In deliberation, all participants leave changed from the experience. They come to think about things in new ways and have the opportunity to look at an issue from a new perspective.

They may or may not leave with the same policy position they had going in, but either way, something subtle has changed. A change that may affect their future interactions and future judgments.

Standard conceptions of “opinion change” as a toggle switch are just too narrow to capture the rich, transformative interplay of deliberation.


The Uses of Anger

“Every woman has a well-stocked arsenal of anger,” says Audre Lorde in The Uses of Anger, her 1981 keynote talk at the National Women’s Studies Association Conference.

I have been thinking a lot about this piece recently. It feels sharply relevant today, 36 years after it was written.

Every woman has a well-stocked arsenal of anger. An arsenal built from fear; from the constant slights and dismissals; from living and functioning in a world which takes us for granted, insists we are not enough, and half-heartedly feigns distress over the violence used against us. Every woman has a well-stocked arsenal of anger.

I know I do.

Lorde argues this anger is a strength, that it has powerful, transformative uses. Anger, she argues, leads to change.

Importantly, in conflicts between the oppressed and their oppressors, there are not “two sides.” The anger of the oppressed leads to growth while the hatred of the oppressors seeks destruction. As Lorde writes:

Hatred is the fury of those who do not share our goals, and its object is death and destruction. Anger is the grief of distortions between peers, and its object is change.

Anger is the grief of distortions between peers. Anger arises when you and I fail to understand each other, when we fail to listen genuinely and to acknowledge each other’s experience. Anger arises when the world insists that your perceptions and experiences aren’t real.

It’s gaslighting on a societal scale.

But anger has its uses, Lorde says. “Anger is loaded with information and energy.”

Anger, articulated with precision and “translated into action in the service of our vision and our future is a liberating and strengthening act of clarification.”

Anger at the distortions between peers creates space for us to clarify and remove those distortions; to genuinely accept the experiences of others.

This is particularly important in the context of gender because the experiences of women vary radically across numerous dimensions of race, class, and identity.

In order to successfully use our anger, we must “examine the contradictions of self, woman, as oppressor.”

Lorde is diplomatic on the topic, recognizing that she, too – a lesbian woman of color – has at times taken on the role of oppressing other women. But drawing on my own identity, I’m inclined to be more direct here: white women, and particularly white cis women have played a long and important role in building and maintaining systems of white supremacy and cisnormativity.

We have suffered our slings and arrows, no doubt, and with good reason our personal arsenals are well-stocked with anger. Yet we, too, are oppressors. We have oppressed our sisters directly and indirectly, intentionally and unintentionally. Recognizing this is, as Lorde describes, a painful process of translation. But it is a process we must undertake; a process we must engage in order to radically change the systems of power, privilege, and oppression we are embedded in; the systems which oppress us and our neighbors.

Furthermore, Lorde argues that anger can bring about this change – guilt at our own complicity does nothing:

I have no creative use for guilt, yours or my own. Guilt is only another way of avoiding informed action, of buying time out of the pressing need to make clear choices, out of the approaching storm that can feed the earth as well as bend the trees.

Guilt is a proxy for impotence; for inaction. But anger is transformative. As Lorde writes:

…The strength of women lies in recognizing differences between us as creative, and in standing to those distortions which we inherited without blame but which are now ours to alter. The angers of women can transform differences through insight into power. For anger between peers births change, not destruction, and the discomfort and sense of loss it often causes is not fatal, but a sign of growth.


Labor and Civics

Earlier this week, we celebrated Labor Day in the United States – a day which only became a national holiday in the wake of the Pullman Strike; a dark ordeal in which 30 American workers were killed by U.S. Federal Troops.

Of course, most of the world celebrates the contributions of labor on May 1 – International Workers’ Day. But that was a bit too radical for the American palate, so sensible moderates – such as President Grover Cleveland, who authorized the use of force against American civilians during the Pullman Strike – consolidated on the September date.

And now, as Americans celebrate the unofficial end of summer and try to remember rules about when it is appropriate to wear white, they are encouraged to also remember the contributions of the American worker and the progress made by labor unions. The 8-hour work day, the 5-day work week, safety in the workplace; these are just a few of the things which labor unions have given us.

But the contributions of unions run deeper than that; indeed they are at the very core of our democracy.

In classical Greek thought, laborers could not be citizens. While there was surely an elitist air to this view, it was driven more directly by a practical belief: citizenship is work.

To be a citizen in the classic sense was not merely to be the recipient of certain guarantees and protections – e.g., rights of safety, security, and redress – it was to contribute wholly to the improvement and wellbeing of your society.

In a practical sense, a citizen could not engage in physical labor because he (yes, “he”) must devote his time and energy to the real work of citizenship. Any other vocation would reduce and ultimately remove his ability to work as a citizen.

Of course, such a view was also elitist and absurd – a society cannot function without laborers and a system in which laborers are excluded from citizenship automatically creates an irreparable class system.

But, on the other hand, the Greeks had a point: citizenship is work, and one cannot engage in that work if they are wholly consumed with other responsibilities.

Neither a person who has to work 3 jobs just to make ends meet nor a high-powered executive who responds to emails in the middle of the night will be in a position to contribute to the work of citizenship.

Labor unions provide protections for workers. They serve as collective bargaining units which give the collective of workers more power than a single worker alone. They provide a vital role in ensuring safe, just, and productive workplaces.

But more deeply, they provide the foundation for democratic engagement – both as a venue where everyday people are empowered to share their voice, and as a tool for ensuring that people who work – most of us, quite frankly – have the space to engage in the hard work of citizenship.

In short, our democracy would not function without labor unions, and when we weaken them, we weaken our democracy.