Monthly Archives: March 2016

The AP and Nazi Germany

Harriet Scharnberg, a German historian and Ph.D. student at the Institute of History at Martin Luther University of Halle-Wittenberg, made waves yesterday with the release, in the journal Studies in Contemporary History, of her paper, Das A und P der Propaganda: Associated Press und die nationalsozialistische Bildpublizistik.

The paper finds that, prior to the expulsion of all foreign media in 1941, the AP collaborated with Nazi Germany, signing the Schriftleitergesetz (editors’ law), which forbade the employment of “non-Aryans” and effectively ceded editorial control to the German propaganda ministry.

These are claims which the AP vehemently denies:

AP rejects the suggestion that it collaborated with the Nazi regime at any time. Rather, the AP was subjected to pressure from the Nazi regime from the period of Hitler’s coming to power in 1933 until the AP’s expulsion from Germany in 1941. AP staff resisted the pressure while doing its best to gather accurate, vital and objective news for the world in a dark and dangerous time.

AP news reporting in the 1930s helped to warn the world of the Nazi menace. AP’s Berlin bureau chief, Louis P. Lochner, won the 1939 Pulitzer Prize for his dispatches from Berlin about the Nazi regime. Earlier, Lochner also resisted anti-Semitic pressure to fire AP’s Jewish employees and when that failed he arranged for them to become employed by AP outside of Germany, likely saving their lives.

Lochner himself was interned in Germany for five months after the United States entered the war and was later released in a prisoner exchange.

Regardless of which finding presents a more accurate historical truth, I find this controversy quite fascinating.

According to the Guardian, the AP was the only western news agency able to stay open in Hitler’s Germany, while other outlets were kicked out for refusing to comply with Nazi regulations.

This exclusivity lends credence to the claim that the news agency did, in some way, collaborate – since it seems improbable that the Nazis would have allowed it to continue without some measure of compliance. It also suggests a shameful reason for this compliance: choosing to stay, even under disagreeable terms, was a smart business decision.

But it also highlights the interesting challenge faced by foreign correspondents covering repressive regimes.

For German news media, it was a zero-sum game: either comply with the Schriftleitergesetz or face charges of treason – a charge that would likely have serious repercussions for one’s family as well.

The AP, from what I can tell, seems to have skirted some middle ground.

By their account, the AP did work with a “photo agency subsidiary of AP Britain” which, in 1935 “became subject to the Nazi press-control law but continued to gather photo images inside Germany and later inside countries occupied by Germany.”

While images from this subsidiary were supplied to U.S. newspapers, “those that came from Nazi government, government-controlled or government–censored sources were labeled as such in their captions or photo credits sent to U.S. members and other customers of the AP, who used their own editorial judgment about whether to publish the images.”

The line between collaboration and providing critical information seems awfully fuzzy here.

Critics would claim that the AP was simply looking out for its own bottom line, sacrificing editorial integrity for an economic advantage. The AP, however, seems to argue that it was a difficult time and they did what they had to do to provide the best coverage they could – they did not collaborate, but they played by the rules just enough to maintain the access needed to share an important story with the world.


Education, Democracy, and The Establishment

Last week, drawing on the work of Walter Lippmann, I raised several concerns about the inclusion of popular voice in democracy.

In some ways, these concerns seem at odds – what is democracy if not the free governing of the people by the people? To reduce the voice of ‘the people’ in any political system is to draw it away from democracy and, perhaps more critically, to violate democratic ideals.

It cannot be denied that there is a tension here. A tension between the noble goal of empowering everyday citizens and the truly hard work of governing itself.

What good is allowing the people to govern if ‘the people’ are not truly fit to govern?

At its core, this debate boils down to one of education versus problem solving. Myles Horton, educator, organizer, and longtime director of the Highlander Folk School, spoke about this debate through the lens of organizing:

If the purpose is to solve the problem, there are a lot of ways to solve the problem that are so much simpler than going through all this educational process…But if education is to be part of the process, then you may not actually get that problem solved, but you’ve educated a lot of people. You have to make that choice.

If you’re a community organizer whose goal is to solve a problem in the community, you may need ‘the people’ in the sense that you need the strength of their support; you need the power that comes from numbers. Any good community organizer would also want the identification of the problem and definition of a solution to come from the community; but this is still a somewhat shallow form of engagement.

An organizer, working in partnership with the community they are organizing, guides the direction of action; provides professional feedback and support on what strategies and tactics are most likely to succeed. This type of organizing is more empowering than what community members might experience otherwise and can lead directly to much-needed positive outcomes in the community.

But it is not education.

Horton describes a particularly memorable scene in which, gun to his head, he refused to tell a community member what action to take. “Go ahead and shoot if you want to, but I’m not going to tell you,” he recalls.

In recollecting the moment, Horton explains his reasoning. If he had told him what to do, “all would be lost.”

He saw himself not as an organizer, trying to work towards a just system, but rather as an educator, developing citizens capable of building their own just systems.

From this, I find that theorists such as Lippmann are right: if we want a political system which most fairly distributes resources, which is just and thoughtful in its approach, the broad and unfiltered inclusion of the mass of public voices is not the best way to accomplish that goal.

But such a concern overlooks a critical point: is that indeed our goal?

If instead we want a political system which empowers every person to participate; which truly believes that all people – all people – have a right and responsibility to engage in public work; if we want a society that truly values the input, insights, and voice of every single member – that is a different goal to work for.

And, indeed, such an educational approach is not the best way to achieve immediate political goals.

If you want to change policy, engage the people; if you want to change systemic structures, educate the people.

Of course, all this hardly settles the debate: if no amount of education and preparation could prepare ‘the people’ to govern, such efforts would find long-term as well as short-term failure.

As a matter of practicality, one can argue this course without degrading the people too much. That is, to say that ‘the people’ are unalterably unfit for the lofty task we set them to is not intrinsically to claim that commoners are too stupid, lazy, or uncaring for this task.

The world is a complicated place. With the constant influx of information and the deep histories that have brought us to the societies we have today, no individual person could be expected to have all the knowledge and expertise needed to justly rule.

Considering that this task would be deeply challenging for even an idealized world leader, whose sole task is to consider such issues and whose efforts are supported by a staff of experts – you can hardly expect an average person, whose time and worries are reasonably devoted to other matters, to be up to the task.

Arguing this path isn’t an insult to the common man; it is rather a recognition of the impossible goal society’s ideals have set for them.

The challenge that I see is that we find ourselves caught between these two paths. It is a sort of pseudo-democracy, in which we comfort ourselves that we, the people, are the ones to govern, but in which we each deem the majority of our peers as unfit for the task.

In this way, we can always blame the “them”: if political engagement were restricted only to those who are correct (like us), then we could have the ideal government we long for. Such disenfranchisement would be the most efficient way to achieve our ends, but – knowing how unjust it would be if “they” were to disenfranchise “us” – we instead settle into a deep melancholia for the world.

And, if one thing is certain, such political ennui fulfills its own unfortunate goal – to maintain the status quo and cement the standing of those with the most power; effectively disenfranchising both the “us” and the “them.”


A Brief History of Saint Days

So, I went down a bit of a rabbit hole this morning trying to figure out answers to what I thought were somewhat straightforward questions. First, when did people in various western European countries stop celebrating their Saints’ day – or name day, if you will – and second, how did the various reorganizations of the liturgical calendar affect name day celebrations?

I rather thought there would be plenty of information and resources to explore these questions, but I’m afraid I’ve merely found fragments.

The Catholic Church has celebrated feast days for important saints nearly since its inception. St. Martin of Tours, born in 316 in Sabaria (now Szombathely, Hungary), is thought to be the first saint – or at least the first not to die as a martyr.

Saint days quickly became a staple of the early Catholic church. As Christian Rohr has argued, these were not just days of religious observance, but were deeply steeped in the symbols and politics of their times:

When the feudal and the chivalrous system had been fully established during the High Middle Ages these leading social groups had to find an identity of their own by celebrating courtly feasts. So, they distinguished themselves from the rest of the people. Aristocratic festival culture, consisting of tournaments, courtly poetry and music, but also of expensive banquets, was shown openly to the public, representing the own personality or the own social group in general. Town citizens and craftsmen, however, were organized in brotherhoods and guilds; they demonstrated their community by celebrating common procession, such as on the commemoration day of the patron saint of their town or of their profession.

These courtly feasts were “held on high religious celebration days” – over half took place on Whitsunday. For craftsmen, Rohr points to the French city of Colmar, where “the bakers once stroke for more than ten years to receive the privilege to bear candles with them during the annual procession for the town patron.”

And, somewhere amid these deeply interwoven strands of religion, economics, and power, people began celebrating their own Saints’ day. That is, as most people shared a name with one of the saints, that saint’s feast day would have special significance for them.

It’s unclear to me exactly when or how this came about. Most references I read about these name day celebrations simply indicate that they have “long been popular.”

Name day celebrations today – though generally more secular in their modern incarnation – take place in a range of “Catholic and Orthodox countries…and [have] continued in some measure in countries, such [as] the Scandinavian countries, whose Protestant established church retains certain Catholic traditions.”

But here’s the interesting thing: at least based on Wikipedia’s list of countries where name day celebrations are common, the practice is much more common in Eastern Orthodox countries than in Roman Catholic ones.

Now, the great East–West Schism – which officially divided the two churches – took place in 1054. My sense – though I’ve had trouble finding documentation of this – is that celebrating one’s saints’ day was a common practice in both east and west at that time. Name day celebrations do take place in the western European countries of France, Germany, and – importantly – Italy, which seems to indicate that the difference in name day celebration rates is not merely a reflection of an east-west divide.

It’s entirely unclear to me what led to this discrepancy. One theory is that this is a by-product of the Reformation – during which time, at least in the UK, various laws banned Catholics from practicing.

But I also find myself wondering about the effects of various reorganizations of the (Roman Catholic) liturgical calendar – i.e., the calendar of Saint Days and other religious festivals. The calendar has been adjusted many times over the years, including as recently as 1969, when Pope Paul VI, explaining that “in the course of centuries the feasts of the saints have become more and more numerous,” justified the new calendar:

…the names of some saints have been removed from the universal Calendar, and the faculty has been given of re-establishing in regions concerned, if it is desired, the commemorations and cult of other saints. The suppression of reference to a certain number of saints who are not universally known has permitted the insertion, within the Roman Calendar, of names of some martyrs of regions where the proclaiming of the Gospel arrived at a later date. Thus, as representatives of their countries, those who have won renown by the shedding of their blood for Christ or by their outstanding virtues enjoy the same dignity in this same catalogue.

Most notably and controversially, Saint Christopher was deemed not to be part of the official Roman tradition, though celebration of his feast day is still permitted under some regional calendars. If you’re curious, you can read a list of the full changes made to the liturgical calendar in 1969.

Many of these changes, such as the removal of Symphorosa and her seven sons, likely had little effect on anyone’s name day celebration. But, by mere probability, I would think that at some point over the years, someone had their Saint removed from the liturgy – which I imagine would probably be a rather disarming event. Though I suspect that wasn’t a big enough factor in diminishing the strength of the celebration over time.

Well, that is all that I have been able to find out. I have many unanswered questions and many more which keep popping up. If you have some expertise in Catholic liturgy and have any theories or answers, please let me know. Otherwise, I suppose, it will remain another historical mystery.

 


The Easter Rebellion and Lessons From Our Past

I had planned today to write something commemorating the centenary of Ireland’s Easter Rising; the quickly-crushed insurrection which paved the way for the Irish Free State.

But such reflections seem somewhat callous against the grim backdrop of current world events.

Just this weekend, a suicide bomber killed at least 70 – mostly children – in an attack on a park in Lahore, Pakistan.

I debated this morning whether to write about that instead. Whether to grieve the mounting death toll from attacks around the world, or whether to question, again, our seemingly preferential concern for places like Brussels and Paris. Or perhaps to highlight the inequities evident in such headlines as CNN’s In Pakistan, Taliban’s Easter bombing targets, kills scores of Christians.

The majority of those killed were Muslim.

Perhaps these details hardly matter; it is all of it a horror.

But if I were to write about every global tragedy, these pages would find room for little else. There is no end to suffering, no limit of atrocity.

Perhaps I should write instead about Radovan Karadzic, the former Bosnian Serb leader, who – twenty years after orchestrating the ethnic cleansing of Srebrenica – was just convicted of genocide, war crimes and crimes against humanity by a United Nations tribunal.

Of course, such news also serves as a reminder that Omar al-Bashir, the current, sitting president of Sudan, is wanted by the International Criminal Court (ICC) for war crimes and crimes against humanity. He is also widely considered to be a perpetrator of genocide, though the ICC demurred from making that charge. The ICC issued its arrest warrant in 2009, citing numerous crimes committed since 2003. Bashir won reelection in 2010 and again in 2015.

It is all too much.

Perhaps I should write about the Easter Rising – a notable event for my own family – after all.

In the midst of World War I, on Easter Monday 1916, 1,600 Irish rebels seized strategic government buildings across Dublin. From the city’s General Post Office, Patrick Pearse and the other leaders of the rising issued a Proclamation of the Provisional Government of the Irish Republic:

We declare the right of the people of Ireland to the ownership of Ireland and to the unfettered control of Irish destinies, to be sovereign and indefeasible. The long usurpation of that right by a foreign people and government has not extinguished the right, nor can it ever be extinguished except by the destruction of the Irish people.

The overwhelming superiority of British artillery soon put an end to the provisional government. Over 500 people were killed; more than half were civilians. In The Rising, historian Fearghal McGarry argues that Irish rebels attempted to avoid needless bloodshed, while the British troops, according to one British soldier, “regarded, not unreasonably, everyone they saw as an enemy, and fired at anything that moved.”

During the fighting, the British artillery attacks were so intense that the General Post Office (GPO) was left as little more than a burnt-out shell. As an aside, the GPO housed generations of census records and other government documents – making my mother’s efforts to recreate my family tree permanently impossible.

After the rebellion had been crushed, fifteen people identified as leaders were executed by firing squad the following week.

This week is rightly a time of commemoration and celebration in Ireland. The brutality of the British response galvanized the Irish people – among whom the uprising had initially been unpopular. The tragedy of the Easter Rising thus led to Irish freedom and, after many more decades, ultimately to peace.

It’s a long and brutal road, but amid all the world’s horrors, confronted by man’s undeniable inhumanity to man, perhaps it is well to remember: we do have the capacity for change.


Populism and Democracy

Yesterday, I discussed some of the concerns Walter Lippmann raised about entrusting too much power to “the people” at large.

Such concerns are near blasphemy in a democratically-spiritual society, yet I consistently find myself turning towards Lippmann as a theorist who eloquently raises critical issues which, in my view, have yet to be sufficiently addressed.

At their worst, Lippmann’s arguments are interpreted as rash calls for technocracy: if “the people” cannot be trusted, only those who are educated, thoughtful, and qualified should be permitted to voice public opinions. In short, political power should rightly remain with the elites.

I find that to be a misreading of Lippmann and a disservice to the importance of the issues he raises.

In fact, Lippmann’s primary concern was technocracy – government by an elite caring solely for their own interests, whose power ensured their continued dominion. Calling such a system “democracy” merely creates an illusion of the public’s autonomy, thereby only serving to cement elites’ power.

I do not dispute that Lippmann finds “the public” wanting. He clearly believes that the population at large is not up to the serious tasks of democracy.

But his charges are not spurious. The popularity of certain Republican candidates and similarly fear-mongering politicians around the world should be enough to give us pause. The ideals of democracy are rarely achieved; what is popular is not intrinsically synonymous with what is Good.

This idea is distressing, no doubt, but it is worth spending time considering the possible causes of the public’s failures.

One account puts this blame on the people themselves: people, generally speaking, are too lazy, stupid, or short-sighted to properly execute the duties of a citizen. This would be a call for some form of technocratic or meritocratic governance – perhaps those who don’t put in the effort to be good citizens should be plainly denied a voice in governance.

Robert Heinlein, for example, suggests in his fiction that only those who serve in the military should be granted the full voting rights of citizenship. “Citizenship is an attitude, a state of mind, an emotional conviction that the whole is greater than the part…and that the part should be humbly proud to sacrifice itself that the whole may live.”

Similarly, people regularly float the idea of a basic civics test to qualify for voting. You aren’t permitted to drive a car without proving you know the rules of the road; you shouldn’t be allowed to vote unless you can name the branches of government.

Such a plan may seem reasonable on the surface, but it quickly introduces serious challenges. For generations in this country, literacy tests have been used to disenfranchise poor voters, immigrants, and people of color. And even if such disenfranchisement weren’t the result of intentional discrimination – as it often was – the existence of any such test would be biased in favor of those with better access to knowledge.

That is – those with power and privilege would have no problems passing such a test while our most vulnerable citizens would face a significant barrier. To make matters worse, these patterns of power and privilege run deeply through time – a civics test for voting quickly goes from a tool to encourage people to work for their citizenship to a barrier that does little but reinforce the divide between an elite class and non-elites.

And this gives a glimpse towards another explanation for the public’s failure: perhaps the problem lies not with “the people” but with the systems. Perhaps people are unengaged or ill-informed not because of their own faults, but because the structures of civic engagement don’t permit their full participation.

Lippmann, for example, documented how even the best news agencies fail in their duty to inform the public. But the structural challenges for engagement run deeper.

In Power and Powerlessness, John Gaventa documents how poor, white coal miners regularly voted in local elections – and consistently voted for those candidates supported by coal mine owners. These were often candidates who actively sought to crush unions and worked against workers’ rights. Any fool could see they did not have the interest of the people at heart…but the people voted for them anyway, often in near-unanimous elections.

To the outsider, these people seem stupid or lazy – the type whose vote should be taken away for their own good. But, Gaventa argues, to interpret it that way is to miss what’s really going on:

Continual defeat gives rise not only to the conscious deferral of action but also to a sense of defeat, or a sense of powerlessness, that may affect the consciousness of potential challengers about grievances, strategies or possibilities for change….From this perspective, the total impact of a power relationship is more than the sum of its parts. Power serves to create power. Powerlessness serves to re-enforce powerlessness.

In the community Gaventa studied, past attempts to exercise political voice dissenting from the elite had led to people losing their jobs and livelihoods. If I remember correctly, some had their homes burned and some had been shot.

It had been some time since such retribution had been taken, but Gaventa’s point is that it didn’t need to be. Elites had established their control so thoroughly, so completely, that poor residents did what was expected of them with hardly a thought. They didn’t need to be threatened so rudely; their submission was complete.

Arguably, theorists like Lippmann see a similar phenomenon happening more broadly.

If you are deeply skeptical of the system, you might believe it to be set up intentionally to minimize the will of the people. In the States at least, our founding fathers were notoriously scared of giving “the people” too much power. They liked the idea of democracy, but also saw the flaws and dangers of pure democracy.

In Federalist 10, James Madison argued:

From this view of the subject it may be concluded that a pure democracy, by which I mean a society consisting of a small number of citizens, who assemble and administer the government in person, can admit of no cure for the mischiefs of faction. A common passion or interest will, in almost every case, be felt by a majority of the whole; a communication and concert result from the form of government itself; and there is nothing to check the inducements to sacrifice the weaker party or an obnoxious individual. Hence it is that such democracies have ever been spectacles of turbulence and contention; have ever been found incompatible with personal security or the rights of property; and have in general been as short in their lives as they have been violent in their deaths. Theoretic politicians, who have patronized this species of government, have erroneously supposed that by reducing mankind to a perfect equality in their political rights, they would, at the same time, be perfectly equalized and assimilated in their possessions, their opinions, and their passions.

To give equal power to all the people is to set yourself up for failure; to leave nothing to check “an obnoxious individual.”

Again, there is something very reasonable in this argument. I’ve read enough stories about people being killed in Black Friday stampedes to know that crowds don’t always act with wisdom. And yet, from Gaventa’s argument I wonder – do the systems intended to check the madness of the crowd rather work to reinforce power and inequity, making the nameless crowd that much more wild when an elite chooses to whip them into a frenzy?

Perhaps this system – democracy but not democracy, populism but not populism – is self-reinforcing; a poison that encourages the public – essentially powerless – to use what power they have to support those crudest of elites who prey on fear and hatred to advance their own power.

As Lippmann writes in The Phantom Public, “the private citizen today has come to feel rather like a deaf spectator in the back row …In the cold light of experience he knows that his sovereignty is a fiction. He reigns in theory, but in fact he does not govern…”


On Public Opinion

Walter Lippmann was notoriously skeptical of “the people.”

The Pulitzer Prize-winning journalist was all too familiar with the art of propaganda, with the ease with which elites could shape so-called “public opinion.”

In 1920, Lippmann – who had worked for the “intelligence section” of the U.S. government during the first World War – published a 42-page study on “A Test of the News” with collaborator Charles Merz.

“A sound public opinion cannot exist without access to the news,” they argued, and yet there is “a widespread and a growing doubt whether there exists such an access to the news about contentious affairs.”

That doubt doesn’t seem to have diminished any in the last hundred years.

Civic theory generally imagines an ideal citizen to be one who actively seeks out the news and possesses the sophistication to stay informed of current events without bias. But debate over the practicality of that ideal is moot if even such an ideal citizen cannot gain access to accurate and unbiased news.

Lippmann and Merz sought to empirically measure the quality of the news by examining over three thousand articles published in the esteemed New York Times during the Russian Revolution (1917-1920).

What they found was disheartening:

From the point of view of professional journalism the reporting of the Russian Revolution is nothing short of a disaster. On the essential questions the net effect was almost always misleading, and misleading news is worse than none at all. Yet on the face of the evidence there is no reason to charge a conspiracy by Americans. They can fairly be charged with boundless credulity, and an untiring readiness to be gulled, and on many occasions with a downright lack of common sense.

Whether they were “giving the public what it wants” or creating a public that took what it got, is beside the point. They were performing the supreme duty in a democracy of supplying the information on which public opinion feeds, and they were derelict in that duty. Their motives may have been excellent. They wanted to win the war; they wanted to save the world. They were nervously excited by exciting events. They were baffled by the complexity of affairs, and the obstacles created by war. But whatever the excuses, the apologies, and the extenuation, the fact remains that a great people in a supreme crisis could not secure the minimum of necessary information on a supremely important event.

And lest we think such failures are relegated to history, consider the U.S. media’s coverage leading up to the Iraq War. Here, too, it seems fair to say that whatever the motives of media, they were indeed derelict in their duty.

Such findings gave Lippmann a deep sense of unease for “popular opinion.”

“The public,” he writes in The Phantom Public (1925), “will arrive in the middle of the third act and will leave before the last curtain, having stayed just long enough perhaps to decide who is the hero and who the villain of the piece.”

The public makes its judgements on gut instinct and imperfect knowledge. Most do not understand a situation in full detail – they know neither the history nor the possible implications of their views. They are consumed with the details of their own daily lives, raising their eyes to politics just long enough to briefly consider what might be best for them in that moment.

Such a system is sure to end in disaster – with public opinion little more than a tool manipulated by elites.

As Sheldon Wolin describes in Political Theory as Vocation, such a system would be ‘democracy’ in name but not in deed:

The mass of the population is periodically doused with the rhetoric of democracy and assured that it lives in a democratic society and that democracy is the condition to which all progressive-minded societies should aspire. Yet that democracy is not meant to realize the demos but to constrain and neutralize it by the arts of electoral engineering and opinion management. It is, necessarily, regressive. Democracy is embalmed in public rhetoric precisely in order to memorialize its loss of substance. Substantive democracy—equalizing, participatory, commonalizing—is antithetical to everything that a high-reward, meritocratic society stands for.

This is the nightmare Lippmann sought to avoid – but it is also the undeniable reality he saw around him.

In elevating “the voice of the people” to “the voice of god,” our founders not only made a claim Lippmann considers absurd, but paved the way for a government of elites, by elites, and for elites – all in the hollow, but zealously endorsed, name of “the people.”


How Human Brains Give Rise to Language

Yesterday, I attended a lecture by Northeastern psychology professor Iris Berent on “How Human Brains Give Rise to Language.” Berent, who works closely with collaborators in a range of fields, has spent her career examining “the uniquely human capacity for language.”

That’s not to say that other animals don’t have meaningful vocalizations, but, she argues, there is something unique about the human capacity for language. Furthermore, this capacity cannot simply be attributed to mechanical differences – that is, human language is not simply a product of the computational power of our brains or of our oral and aural processing abilities.

Rather, Berent argues, humans have an intrinsic capacity for language. That is, as Steven Pinker describes in The Language Instinct,  “language is a human instinct, wired into our brains by evolution like web-spinning in spiders or sonar in bats.”

While this idea may seem surprising, in some ways it is altogether reasonable: humans have specialized organs for seeing, breathing, processing toxins, and more – is it really that much more of a jump to say that the human brain is specialized, that the brain has a specialized biological system for language?

Berent sees this not as an abstract, philosophical question, but rather as one that can be tested empirically.

Specialized biological systems exhibit an invariant, universal structure, Berent explained. There is some variety among human eyes, but fundamentally they are all the same. This logic can be applied to the question of innate language capacity: if language is specialized, we would expect to find universal principles – we would expect what Noam Chomsky called a “universal grammar.”

In searching for a universal grammar, Berent doesn’t expect to find such a thing on a macro scale: there’s no universal rule that a verb can only come after a noun. But rather, a universal grammar would manifest in the syllables that occur – or don’t occur – across the breadth of human language.

To this end, Berent constructs a series of syllables which she expects will be increasingly difficult for human brains to process: bl > bn > bd > lb.

That is, it’s universally easier to say “blog” than to say “lbog,” with “bnog” and “bdog” having intermediate difficulty.

One argument for this is simply the frequency of such constructions – in languages around the world “bl” occurs more frequently than “lb.”

Of course, this by no means proves the existence of an innate, universal grammar, as we cannot account for the socio-historical forces that shaped modern language, nor can we be sure such variance isn’t due to the mechanical limitations of human speech.

Berent’s research, therefore, aims to prove the fundamental universality of such syllables – showing that there is a universal hierarchy of what the human brain prefers to process.

In one experiment, she has Russian speakers – who do use the difficult “lb” construction – read such a syllable out loud. She then asks speakers of languages without that construction (in this case English, Spanish, and Korean) how many syllables the sound contained.

The idea here is that if your brain can’t process “lbif” as a syllable, it will silently “repair” it to the 2-syllable “lebif.”

In numerous studies, she found that as listeners went from hearing syllables predicted to be easy to syllables predicted to be hard, they were in fact more likely to “repair” the word. Doing the experiment with fMRI and Transcranial Magnetic Stimulation (TMS) further revealed that people’s brains were indeed working harder to process the predicted-harder syllables.

All this, Berent argues, is evidence that a universal grammar does exist. That today’s modern languages are more than the result of history, social causes, or mechanical realities. The brain does indeed seem to have some specialized language system.

For myself, I remain skeptical.

As Vyvyan Evans, Professor of Linguistics at Bangor University, writes, “How much sense does it make to call whatever inborn basis for language we might have an ‘instinct’? On reflection, not much. An instinct is an inborn disposition towards certain kinds of adaptive behaviour. Crucially, that behaviour has to emerge without training…Language is different…without exposure to a normal human milieu, a child just won’t pick up a language at all.”

Evans points instead to a simpler explanation for the emergence of language – cooperation:

Language is, after all, the paradigmatic example of co‑operative behaviour: it requires conventions – norms that are agreed within a community – and it can be deployed to co‑ordinate all the additional complex behaviours that the new niche demanded…We see this instinct at work in human infants as they attempt to acquire their mother tongue…They are able to deploy sophisticated intention-recognition abilities from a young age, perhaps as early as nine months old, in order to begin to figure out the communicative purposes of the adults around them. And this is, ultimately, an outcome of our co‑operative minds. Which is not to belittle language: once it came into being, it allowed us to shape the world to our will – for better or for worse. It unleashed humanity’s tremendous powers of invention and transformation.


The Hardest Problems are the Easiest to Ignore

I was somewhat surprised this morning – though perhaps I should not have been – to find coverage of terrorist attacks in Brussels to be the sole focus of the morning news.

I wasn’t surprised by the news of an attack somewhere in the world – a grim reality we’ve all grown sadly accustomed to – but I was surprised at the intensity of coverage. Broadcast morning news coverage isn’t, you see, my typical source for international news.

Suddenly it was all they could talk about.

Where was this attention when a suicide bomber attacked a busy street in Istanbul over the weekend? Or when three dozen people died in the Turkish capital of Ankara last week?

Even from a wholly self-interested perspective, recent attacks in Turkey seem noteworthy as the EU increasingly relies on Turkey to address the Syrian refugee crisis.

But even as I wondered why Belgium elicited so much more concern than Turkey, I felt the sinking sense of an answer.

Where, indeed, was the coverage of attacks in Beirut just days before the now more infamous attacks in Paris?

On its surface, this bias in coverage and compassion seems most obviously to be one of culture, or cultural perspective, for lack of a better word. Perhaps people in France and Belgium are perceived to be “more like us” than people in Lebanon or Turkey. The disparity is essentially racism with an international flavor.

Another theory would be one of newsworthiness – Turkey, Lebanon, and many places in the Middle East regularly suffer from terrorist attacks. In a cold sense of the word, such an attack is not news – it is expected.

Such an explanation, though, has the ring of a hollow excuse. The sort of defense you come up with when accused of something unseemly. And the two ideas – that we show greater concern for those in Western Europe because they are “more like us” and that we are more interested in unexpected events – are not entirely unrelated.

In the States, people of color die every day in our cities. And most often, their deaths go unreported and unremarked on by society at large. A murder in a white suburb, though, is sure to grab headlines.

Neighbors grapple to make sense of the shocking news. Things like this don’t happen here. This is a safe community.

It’s not that suburbs are intrinsically safer, I would argue, but rather that we, as a society, would never allow violence in suburbs to rise to the levels it has within the inner city. Suburbs are already where our wealthy residents live, but in addition to that privilege, we collectively treat them with more time, attention, and care.

Violence in suburbs and attacks in western cities are shocking reminders that we’ve been ignoring the wounds of this world. That we’ve pushed aside our responsibility to confront seemingly intractable challenges, closing our eyes and hoping those ills only affect those who are different.

All this reminds me of Nina Eliasoph’s thoughtful book, Avoiding Politics: How Americans Produce Apathy in Everyday Life.

Working with various civic groups, Eliasoph notes how volunteers eagerly tackle seemingly simple problems while avoiding the confrontation that comes from the most complex issues. In one passage, Eliasoph describes the meeting of a parents group in which one of the attendees was “Charles, the local NAACP representative” and “parent of a high schooler himself.”

He said that some parents had called him about a teacher who said “racially disparaging things” to a student…Charles said that the school had hired this teacher even though he had a written record in his file of having made similar remarks at another school. Charles also said there were often Nazi skinheads standing outside the school yard recruiting at lunchtime.

The group of (mostly white) parents quickly shut Charles down, responding, “And what do you want of this group. Do you want us to do something.” Eliasoph notes this was delivered not “as a question, but with a dropping tone at the end.”

Afterwards, Eliasoph quotes the meeting minutes:

Charles Jones relayed an incident for information. He is investigating on behalf of some parents who requested help from the NAACP.

The same minutes contained “half of a single-spaced page” dedicated to “an extensive discussion on bingo operations.”

Eliasoph’s other interactions with the group indicate that its members aren’t intentionally racist – rather, they are well-meaning citizens for whom the deep challenge of race relations seems too much to handle; they would rather make progress on bingo.

And this is where the cruelest twist of power and privilege comes in: it is easy to ignore these hard problems, to brush them off as unavoidable tragedies, to simply shake your head and sigh – all of this is easy, as long as it’s not happening to you.


Jessica Jones and the Banality of Evil

Most of the characters in Marvel’s Netflix show Jessica Jones are not very Good – in the deeper, capital-G sense of the word.

They’re not very good people.

Some are certainly worse than others, and some are even moderately good, but few, if any, stand out as paragons of virtue. Indeed, the main villain of the story – Zebediah Killgrave, who uses his powers of mind-control to manipulate people for his violent and disturbing ends – is hardly the tale’s only bad guy.

He is simply the most powerful.

Early on in the season, Jones’ friend Trish Walker laments Kilgrave’s egoism: “Men and power, it’s seriously a disease.”

Kilgrave is dangerous not because he’s a depraved, disturbed individual – but rather it is his power which makes him dangerous. Another man with the same power might be just as villainous, and Kilgrave without his powers would be just another unremarkable man.

Indeed, over the course of the season we see this transformation through power take place in Officer Will Simpson, who spirals out of control as he becomes increasingly reliant on a drug that boosts his adrenaline.

It’s not just the drug that makes Simpson a menace: his personality had always veered towards anger and violence. Rather, the addition of a superhuman ability transforms him from unremarkably disagreeable to near-supervillain status.

Yes, all women, the whole season seems to scream.

In many ways, these themes remind me of Hannah Arendt’s famous reflections on the “banality of evil,” from Eichmann in Jerusalem.

While in no way defending Eichmann – who was clearly immoral and depraved – in the end, Arendt finds him wholly unremarkable – a bureaucratic man whose terrible acts were driven by his own uncaring quest for power. In the setting of Nazi Germany, Eichmann unleashed great evil – but without the power of his position and context, he would have been just another unremarkable, power-hungry man.

As Arendt writes:

In the face of death he had found the cliché used in funeral oratory. Under the gallows, his memory played him the last trick; he was “elated” and he forgot that this was his own funeral. It was as though in those last minutes he was summing up the lesson that this long course in human wickedness had taught us – the lesson of the fearsome, word-and-thought-defying banality of evil.


The Confidence Man

In 1849, the New York Herald reported on the arrest of a gentleman by the name of William Thompson.

I use the term ‘gentleman’ here broadly. As the Herald reported:

For the last few months a man has been traveling about the city…he would go up to a perfect stranger in the street, and being a man of genteel appearance, would easily command an interview. Upon this interview he would say after some little conversation, “have you confidence in me to trust me with your watch until to-morrow;” the stranger at this novel request, supposing him to be some old acquaintance not at that moment recollected, allows him to take the watch, thus placing “confidence” in the honesty of the stranger, who walks off laughing and the other supposing it to be a joke allows him so to do. In this way many have been duped…

To those who had heard of these strange interactions, Thompson was known as the “Confidence Man.”

He was, in fact, the first “confidence man” – a term which has since been colloquially shortened to “con man.”
