What line did Putin cross in Crimea?

Posted March 30th, 2014 by Thanassis Cambanis and filed in Writing


[The Internationalist column published in The Boston Globe Ideas.]

WHAT HAPPENED IN UKRAINE over the past month left even veteran policy-watchers shaking their heads. One day, citizens were serving tea to the heroic demonstrators in Kiev’s Euromaidan, united against an authoritarian president. Almost the next, anonymous special forces fighters in balaclavas were swarming Crimea, answering to no known leader or government, while Europe and the United States grasped in vain for ways to influence events.

Within days, the population of Crimea had voted in a hastily organized referendum to join Russia, and Russia’s president, Vladimir Putin, had signed the annexation treaty formally absorbing the strategic peninsula into his nation.

As the dust settles, Western leaders have had to come to terms not only with a new division of Ukraine, but also with its unsettling implications for how the world works. Part of the shock is in Putin’s tactics, which blended an old-fashioned invasion with some degree of democratic process within the region, and added a dollop of modern insurgent strategies for good measure.

Vladimir Putin at the Plesetsk cosmodrome launch site in northern Russia./PRESIDENTIAL PRESS SERVICE VIA REUTERS


But when policy specialists look at the results, they see a starker turning point. Putin’s annexation of Crimea is a break in the order that America and its allies have come to rely on since the end of the Cold War—namely, one in which major powers intervene militarily only when they have an international consensus on their side, or failing that, when they’re not crossing a rival power’s red lines. It is a balance that has kept the world free of confrontations between its most powerful militaries, and it has, in particular, given the United States, the mightiest power of all, an unusually wide range of motion in the world. As that balance crumbles, policymakers are scrambling to figure out both how to respond and just how far an emboldened Russia might go.


“WE LIVE IN A DIFFERENT WORLD than we did less than a month ago,” NATO Secretary General Anders Fogh Rasmussen said in March. Ukraine could witness more fighting, he warned; the conflict could also spread to other countries on Russia’s borders.

Up until the Crimea crisis began, the world we lived in looked more predictable. The fall of the Berlin Wall a quarter century ago ushered in an era of international comity and institution building not seen since the birth of the United Nations in 1945. International trade agreements proliferated at a dizzying speed. NATO quickly expanded into the heart of the former Soviet bloc, and lawyers designed an International Criminal Court to punish war crimes and constrain state interests.

Only small-to-middling powers like Iran, Israel, and North Korea ignored the conventions of the age of integration and humanitarianism—and their actions had only regional impact, never posing a global strategic threat. The largest powers—the United States, Russia, and China—abided by what amounted to an international gentleman’s agreement not to use their militaries for direct territorial gains or to meddle in a rival’s immediate sphere of influence. European powers, through NATO, adopted a defensive crouch. The United States, as the world’s dominant military and economic power, maintained the most freedom to act unilaterally, as long as it steered clear of confrontation with Russia or China. It carefully sought international support for its military interventions, even building a “Coalition of the Willing” for its 2003 invasion of Iraq, which was not approved by the United Nations. The Iraq war grated on other world powers that couldn’t embark on military adventures of their own; but despite the irritation the United States provoked, American policymakers and strategists felt confident that their country was obeying the unspoken rules.

If the world community has seemed bewildered by how to respond to Putin’s moves in Crimea over the last month, it’s because Russia has so abruptly interrupted this narrative. Wielding Russia’s incontestable military might, and with the backing of Ukrainians in one corner of the country, Putin took over a chunk of territory featuring the valuable warm-water port of Sevastopol. The boldness of the move simply bypassed the sanctions and other delicate instruments that had become established as the tools of persuasion. Suddenly, it seemed, there was no way to halt Russia without outright war.

Some analysts say that Putin appears to have identified a loophole in the post-Cold War world. The sole superpower, the United States, likes to put problems in neat, separate categories that can be dealt with by the military, by police action or by international institutions. When a problem blurs those boundaries—pirates on the high seas, drug cartels with submarines and military-grade weapons—Western governments don’t know what to do. Today, international norms and institutions aren’t configured to react quickly to a legitimate great power willing to use force to get what it wants.

“We have these paradigms in the West about what’s considered policing, and what’s considered warfare, and Putin is riding right up the middle of that,” said Janine Davidson, a senior fellow at the Council on Foreign Relations and former US Air Force officer who believes that Putin’s actions will force the United States to update its approach to modern warfare. “What he’s doing is very clever.”

For obvious reasons, a central concern is how Putin might make use of his Crimean playbook next. He could, for example, try to engineer an ethnic provocation, or a supposedly spontaneous uprising, in any of the near-Russian republics that threatens to ally too closely with the West. Mark Kramer, director of Harvard University’s Project on Cold War Studies, said that Putin has “enunciated his own doctrine of preemptive intervention on behalf of Russian communities in neighboring countries.”

There have been intimations of this approach before. In 2008, Russian infantry pushed into two enclaves in neighboring Georgia, citing claims—which later proved false—that thousands of ethnic Russians were being massacred. Russia quickly routed the Georgian military and took over Abkhazia and South Ossetia. Today the disputed enclaves hover in a sort of twilight zone; they’ve declared independence but are recognized only by Moscow and a few of its allies. Ever since then, Georgian politicians have warned that Russia might do the same thing again: It could seize a land corridor to Armenia, or try to absorb Moldova, the rest of Ukraine, or even the Baltic States, the only former Soviet republics to join both NATO and the European Union.

Others see Putin’s reach as limited at best to places where meaningful military resistance is absent and state control is weak. Even in Ukraine, Russia experts say, Putin seemed content to wield influence through friendly leaders until protests ran the Ukrainian president out of town and left a power vacuum that alarmed Moscow. Thomas Graham, a former Bush administration official, said it would be a long shot for Putin to move his military into other republics: There are few places with Crimea’s combination of an ethnic Russian enclave, an absence of state authority, and little risk of Western intervention.

The larger worry, of course, is who else might want to follow Russia’s example. China is the clearest concern, and from time to time has shown signs of trying to throw its weight around its region, especially in disputed areas of the South China Sea. But so far it has been Chinese fishing boats and coast guard vessels harassing foreign fishermen, with the Chinese navy carefully staying away in order not to trigger a military response. For the moment, at least, Putin seems willing to upend this delicately balanced world order on his own.


THE INTERNATIONAL community’s flat-footed response in Crimea raises clear questions: What should the United States and its allies do if this kind of land grab happens again—and is there a way to prevent such moves in the first place?

“This is a new period that calls for a new strategy,” said Michael A. McFaul, who stepped down as US ambassador to Russia a few weeks before the Crimea crisis. “Putin has made it clear that he doesn’t care what the West thinks.”

So far the international response has entailed soft-power pressure designed to have an effect over the long term. The United States and some European governments have imposed limited economic sanctions targeting some of Putin’s close advisers, and Russia has been kicked out of the G-8. There’s talk of reinvigorating NATO to discourage Putin from further adventurism. So far, though, NATO has turned out to be a blunt instrument: great for unifying its members to respond to a direct attack, but clumsy at projecting power beyond its boundaries. As Putin reorients away from the West and toward a Greater Russia, it remains to be seen whether soft-power deterrents matter to him at all.

Beyond these immediate measures, American experts are surprisingly short on specific suggestions about what more to do, perhaps because it’s been so long since they’ve had to contemplate a major rival engaging in such aggressive behavior. At the hawkish end, people like Davidson worry that Putin could repeat his expansion unless he sees a clear threat of military intervention to stop him. She thinks the United States and NATO ought to place advisers and hardware in the former Soviet republics, creating arrangements that signal Western military commitment. It’s a delicate dance, she said; the West has to be careful not to provoke further aggression while creating enough uncertainty to deter Putin.

Other observers in the field have made more modest economic proposals. Some have urged major investment in the economies of contested countries like Ukraine and Moldova, at the scale of the post-World War II Marshall Plan, and a long-term plan to wean Western Europe off Russian natural gas supplies, through which Moscow has gained enormous leverage, especially over Germany.

Davidson, however, believes that a deeper rethink is necessary, so that the United States won’t get tied up in knots or outflanked every time a powerful nation like Russia uses the stealthy, unpredictable tactics of non-state actors to pursue its goals. “We need to look at our definitions of military and law enforcement,” she said. “What’s a crime? What’s an aggressive act that requires a military response?”

McFaul, the former ambassador, said we’re in for a new age of confrontation because of Putin’s choices, and both the United States and Russia will find it more difficult to achieve their goals. In retrospect, he said, we’ll realize that the first decades after the Cold War offered a unique kind of safety, a de facto moratorium on Great Power hardball. That lull now seems to be over.

“It’s a tragic moment,” McFaul said.

Medical care is now a tool of war

Posted February 22nd, 2014 by Thanassis Cambanis and filed in Writing


[Originally published in The Boston Globe Ideas section.]

BEIRUT — The medical students disappeared on a run to the Aleppo suburbs. It was 2011, the first year of the Syrian uprising, and they were taking bandages and medicine to communities that had rebelled against the brutal Assad regime. A few days later, the students’ bodies, bruised and broken, were dumped on their parents’ doorsteps.

Dr. Fouad M. Fouad, a surgeon and prominent figure in Syrian public health, knew some of the students who had been killed. And he knew what their deaths meant. The laws of war—in which medical personnel are allowed to treat everybody equally, combatants and civilians from any side—no longer applied in Syria.

“The message was clear: Even taking medicine to civilians in opposition areas was a crime,” he recalled.

As the war accelerated, Syria’s medical system was dragged further into the conflict. Government officials ordered Fouad and his colleagues to withhold treatment from people who supported the opposition, even if they weren’t combatants. The regime canceled polio vaccinations in opposition areas, allowing a preventable disease to take hold. And it wasn’t just the regime: Opposition fighters found doctors and their families a soft target for kidnapping; doctors always had some cash and tended not to have special protection like other wealthy Syrians.

Doctors began to flee Syria, Fouad among them. He left for Beirut in 2012. By last year, according to a United Nations working group, the number of doctors in Aleppo, Syria’s largest city, had plummeted from more than 5,000 to just 36.

Since then, Fouad has joined a small but growing group of doctors trying to persuade global policy makers—starting with the world’s public health community—to pay more urgent attention to how profoundly new types of war are transforming medicine and public health. In a recent article in the medical journal The Lancet, Fouad and a team of researchers looked closely at the conflicts in Iraq and Syria and found that the impact of what they call the “militarization of health care” in modern wars goes far beyond the safety of combat zone doctors, ensnaring even uninvolved civilians, with effects that can persist for years.

Other groups have begun focusing on the change as well. The International Committee of the Red Cross and Doctors Without Borders have documented and condemned disruptions of medical care by combatants. The entirety of the most recent issue of the journal Public Health is dedicated to a critical assessment of the failure of the World Health Organization to adapt to the new realities of conflict.

Fouad and his Lancet coauthors say—reasonably—that any new global policy norms for wartime health care ultimately need to be hashed out in the security and political realms, not by doctors. But doctors, especially public-health specialists, have a crucial role to play: They gather the data and define the issues that drive much of global health policy. And as war has become a free-for-all, dissolving the rules that long protected medical care, Fouad and his coauthors suggest that their own field has been slow to awaken to the importance of that change.

“To be honest, we are stuck in this problem, and we don’t know what to do,” said Omar Al-Dewachi, a physician and anthropologist at the American University of Beirut, and the lead author of the Lancet paper. “The first thing is to start a conversation, and come up with new tools.”

What will replace the current system is far from clear, they say, but it’s time to start figuring it out: Right now, war has a quarter-century head start.


UNTIL RECENTLY, medical care was something of a bright spot in the history of conflict. Major European powers, shocked by the suffering and grisly deaths of their soldiers in the Crimean War, agreed in 1864 to the First Geneva Convention. It granted medical workers a special neutral status on the battlefield, and upheld the right of all wounded to medical care regardless of nationality.

It was the first article of international humanitarian law and became the cornerstone of all subsequent Geneva Conventions. When we talk about “crimes against humanity” and “war crimes,” we’re usually referring to the body of law that arose over the next century and a half, built on the narrow foundation of neutral, universal medical care for combatants in the battle zone. There were always breakdowns and violations, but the laws of war were remarkably effective at limiting abuse, establishing taboos, and shaming the worst offenders.

That relative comity disappeared with the end of the Cold War. While the rival superpowers were locked in their global standoff, they had an incentive to promote the laws of war; they didn’t want their own fighters mistreated if there were another world war. But with the United States and the Soviet Union no longer facing off, small wars across the globe flared with new ferocity and fewer scruples.

The wars of the 1990s spread in shocking new ways, with widespread torture, starvation, and genocidal murder campaigns. Rather than fighting other soldiers, armed groups often concentrated on battling civilians. The Geneva Conventions barely figured for the combatants in the former Yugoslavia, Somalia, Rwanda, the Congo, and Afghanistan. The United States contributed to that decline after 9/11 when it suspended Geneva Convention protections for prisoners in the “war on terror,” and normalized drone strikes against targets in civilian areas.

The protections around medical care started to collapse as well. Dr. Jennifer Leaning, director of Harvard University’s FXB Center for Health and Human Rights, has worked in conflict zones for decades and has surveyed the eroding conditions of medical care. Increasingly, she found, the biggest victims in armed conflicts weren’t the combatants but the civilian populations suffering in scorched-earth or ethnic cleansing campaigns in which doctors and hospitals became explicit, rather than incidental, targets.

The final strike against medical neutrality, Leaning says, came in the last decade during America’s wars in Iraq and Afghanistan. Insurgents targeted anyone connected to the “Western” side of the conflict, even local health care workers treating patients in public hospitals. The CIA used a polio inoculation campaign to gather information in its hunt for Osama bin Laden; ever since, Pakistani mullahs have condemned vaccination workers. By the time civil war broke out in Syria, the equal right to medical care in combat zones existed only on paper.

“What is now happening is the violation of deeply held legal norms that have taken 150 years of work,” Leaning said in an interview. “That is what is appalling.”

It’s been commonplace in the last decade in Iraq and Syria for militias to enter hospitals with guns drawn, and order doctors to treat their comrades instead of civilians. In the early 1990s in Mogadishu, such behavior was an oddity. In Baghdad in 2006, Shia death squads took over entire hospitals and infiltrated the health ministry, denying health care to Sunnis and even hunting down rivals in their sickbeds.

Doctors are also starting to document how a war-torn region’s health problems can continue even when dramatic violence subsides. Once a functioning health care system is destroyed, it can take years or decades to rebuild. Al-Dewachi worked as a physician in Iraq after the 1991 Gulf War, and has had a close view of how a war’s medical impact can persist and spread. With Iraq’s hospital system in shambles and doctors constantly emigrating to safer places, patients have flowed over borders, often seeking medical treatment at great cost in the relatively stable hospitals of Beirut. Even when Iraq is supposedly calm, the stream of patients never abates, he said. “It’s an invisible story of the war,” Al-Dewachi said. “The long-term effects continue even when the fighting stops.”


WITH THE OLD SYSTEM broken, what should replace it? This is where it gets hard. Stateless rebels and insurgent groups, by definition, aren’t signatories to any international agreements. And the entire shape of modern warfare looks nothing like the formal battlefields that gave rise to the Geneva Conventions.

“We have to build new tools, new concepts, new institutions, that adapt to this concept of conflict,” Fouad said.

On the ground, under fire, health workers have improvised solutions. One common response has been to withdraw completely, returning only if combatants agree to respect the neutrality of clinics. At various times, groups as tough as Doctors Without Borders and the Red Cross have temporarily shut down operations when they were targeted in vicious conflict zones. Some aid groups have used private diplomacy to negotiate protected, equal access to government and rebel areas.

Leaning notes that some medical-aid groups have resorted to armed guards for clinics and vaccine workers, while other health care workers have evolved to function like military medics, embedded with combat forces and providing care on the run.

As for the longer-term effects, the recent Lancet paper suggests some ways for the public health community to rethink its approach to medical care in war zones—starting with its definition of what counts as a war zone.

Health care is normally a massive undertaking that operates through fixed channels—governments, national budgets, and clinics, with clear borders and supply chains. The paper suggests it’s time to scrap this notion when it comes to war zones: One facet of modern conflict is that it obeys no geographical limits. The researchers suggest that the global health community adopt a notion of shifting “therapeutic geographies” that acknowledges people caught in modern conflicts may change where they live—and where they get health care—from day to day, week to week.

That concept, abstract as it sounds, would mark a significant departure in global public health. The World Health Organization, the single most important international body dealing with health matters, still operates almost entirely through diplomatic channels, dealing only with the sovereign government even in complex, multisided conflicts like Syria’s. That means that when the regime wants to isolate a rebel province, WHO can’t vaccinate people there and other UN agencies might not be allowed to deliver emergency food aid. Health organizations and other humanitarian agencies will have to work with nonstate actors and militias, as well as governments, if they want to be able to operate throughout a war-affected area.

Public health research can also put more energy into measuring the human toll of war beyond the battlefield. Part of the recent Lancet paper is a strong call for doctors to start quantifying the effects of modern war on health, looking broadly at its full impact. “At this point, we need to just pay attention and describe what’s going on,” said Al-Dewachi.

The effects of better data could be political as well as medical, the authors suggest: A clear picture of the full health impact of war might well change the justification for future “humanitarian interventions.”

Today, Fouad’s former home of Aleppo is largely a ghost town, its population displaced to safer parts of Syria or across the border to Turkey and Lebanon. The city’s former residents carry the medical consequences of war to their new homes, Fouad said—not just injuries, but effects as varied as smoking rates, untreated cancer, and scabies. Wars like those in Syria and Iraq don’t follow the old rules, and their effects don’t stop at the border.

The researchers are energized by their quest to reorient the public health field, but they betray a certain world weariness when asked what might replace the current order, and provide better care for the millions harmed by today’s boundary-less wars.

“If I knew,” Al-Dewachi said, “I would be involved with it.”

A revolutionary playwright: Saadallah Wannous

Posted January 12th, 2014 by Thanassis Cambanis and filed in Writing


Steve Wakeem, as Sheikh Qassim, the Mufti, delivers a religious decree. Photo: ALEXY FRANGIEH

[Originally published in The Boston Globe Ideas section.]


Citizens in Damascus were up in arms. An autocrat’s impetuous power grabs and flagrant infidelity had split the city’s ruling clans, while fundamentalist clerics issued blanket fatwas against “immoral behavior.”

The year was 1880. The contretemps quickly subsided, a footnote in Ottoman history. But in December it formed the backbone of a gripping play that delivered a stark critique of power and conservative social mores in the Arab world.

Staged for the first time in English by the American University of Beirut, “Rituals of Signs and Transformations,” a 1994 play by the late Syrian playwright Saadallah Wannous, made a splash even in a city known for its relatively freewheeling culture. In literary circles, Wannous has long been considered a giant of Arab literature, but his work has rarely been performed in the region where he lived and worked.

Before his death in 1997, Wannous achieved renown not just as a Syrian dissident writer, but as a playwright on a par with Bertolt Brecht and Wole Soyinka. Politically, his plays in Arabic were akin to Vaclav Havel’s in Iron Curtain Czechoslovakia: They used the thin fictions of the theater to offer social criticism that would be otherwise unthinkable.

Like Havel, Wannous always saw himself as more than a playwright: He spent his life articulating a critique of authoritarianism, religious hypocrisy, and social repression. Up until his death, he was convinced that his plays were laying the groundwork for a complete reinvention of Arab society.

Syrian playwright Saadallah Wannous reads a message at UNESCO’s World Theatre Day.



Today, something of a rediscovery of Wannous’s work is underway: “Rituals” has been performed recently in Cairo and Paris, and a collection of his plays has just been published by the City University of New York. Despite being staged in English translation, every performance of “Rituals” this month played to a full house of people eager to see Wannous’s ribald skewering of official sanctimony and social rigidity.

But the social revolution the playwright hoped for is still far off—and the eager audience for his rarely performed work is evidence of the immense hunger for an honest intellectual dialogue about the crisis in Arab society. Today, even in the arts, dissent like Wannous’s remains unusual here. And that work like his is so compelling, and yet hard to find, is a testament to just how narrow the scope of the region’s political dialogue has become.


WANNOUS WAS BORN in a poor rural village in 1941, a member of the Alawite minority whose members would come to control the Syrian government, and came of age during the 1960s, the peak years of both the Cold War and Arab nationalism. Syria tried to solve its domestic problems by uniting with Egypt, part of a doomed project to create a single “pan-Arab” government for the Middle East and North Africa. It failed, leaving Syria and the rest of the Arab world to seek protection as clients of the Cold War powers. Damascus fell squarely in the Soviet camp, and the authoritarian state it created was modeled directly on those of its Eastern bloc counterparts.

In an irony that many in those Communist states would have recognized, Wannous drew a paycheck for much of his life from the very regime that he excoriated in his plays. After studying journalism in Cairo, he held jobs as a critic and a government bureaucrat, while writing plays on his own time. He edited cultural coverage for government newspapers, and established his own journals when his critical views made that impossible. During a stint in Paris as a cultural journalist, he met key writers of his time, including Jean Genet and Eugène Ionesco. Wannous ended up in charge of the Syrian government’s theater administration, which followed a Soviet model, generously supporting an arts scene that buttressed the regime’s values.

By the 1970s, the Assad regime had consolidated power and was sponsoring anti-Israel militants around the region while avoiding direct conflict on its own border. Wannous rejected wholesale the primacy of the ruling Alawite minority, and his moral stances, presented in unusually vital dialogue and characters drawn as human beings rather than archetypes, established him as a major Arab writer of his generation.

Wannous’s work shares themes with other global dissident literature: In his play “The King is the King,” for example, a beggar successfully takes the place of the monarch, putting the lie to the claim that there’s anything unique about a divine ruler—or the dictator of Syria, for that matter. Every night at curtain call during its Damascus run, the director placed the stage-prop crown on Wannous’s head, to thundering applause.

“To this day, I don’t understand why it was allowed,” his daughter Dima said in an interview.

During his lifetime, Wannous himself said that he was allowed to write and live in Syria only so that the regime could pretend to the world that it tolerated free speech. He took full advantage of that liberty, savaging the failures of Arab nationalism and the autocrats who spawned it. He broke several taboos in “The Rape,” which as an epilogue featured a character named Wannous discussing the prospects for healing with an Israeli psychiatrist—a conversation that in real life would have been illegal.

As a body of work, his plays amounted to an argument that Arab society needed to break out of the political and social constraints that kept it locked in place. Confronted by his region’s stagnation and powerlessness, Wannous preserved a kind of optimism: The solution, he believed, lay within the Arab world and its citizens. He chafed at the convention of writing in classical Arabic, and wished writers felt free to reach more people by addressing their audience in colloquial language. He himself wrote his first drafts in the colloquial and then translated them into classical Arabic.

He bucked convention in other ways, too. In “Rape,” he depicts an Israeli soldier whose crimes against Palestinians distort his own psyche. But, crucially, he portrays other Israeli characters with empathy—not everyone is a villain—and he suggests there is value in engagement between Arabs and their Israeli enemies.

In “Rituals of Signs and Transformations,” which many critics consider his masterpiece, Wannous took aim at all the Arab world’s sacred cows in one shot. In the play, the mufti of Damascus—the city’s top religious official—feuds with a local rival, the naqib, leader of the nobility. But then the naqib is arrested cavorting with a prostitute, threatening the authority of all the religious leaders. The mufti sets aside his religious principles and schemes to save his erstwhile enemy. The only completely honorable characters are the naqib’s wife, who divorces him and chooses to work as a prostitute herself, and a local tough who’s dumped by his male lover when he decides he wants to be open about his love. A policeman who tries to enforce the law and tell the truth is thrown in jail and branded insane.

The play was never performed in Syria during Wannous’s lifetime. The official excuse was that the sets were too bulky to import from Beirut, where a truncated version had been staged, but it couldn’t have helped that the real-life mufti of Aleppo issued a fatwa against the play. Even in Beirut, the director avoided state censorship by calling the mufti by another name.

That might sound like an evasion from another era, but little has changed. The director of last month’s production also changed the title of the mufti, using the less-religious “sheikh.” Lebanon’s censor regularly shuts down plays, concerts, and other performances; to reduce the likelihood of censorship, the AUB producers didn’t charge for tickets. During the final performance, emboldened by the show’s success, some of the actors reverted to Wannous’s original language, calling the character “mufti.”

“I told them that if we got fined, they would have to pay it,” said the director, Sahar Assaf.


SEEN TODAY, “Rituals” still falls afoul of numerous taboos in the Middle East, from its attack on authority and religious hypocrisy to its unvarnished portrayal of child abuse, rape, and the persecution of gay people and women who seek equal rights. The audiences squirmed as much during the tender love scene between two men as during the soliloquies that reveal the mufti as a power-hungry schemer and another vaunted religious scholar as a serial child molester.

The continued power of Wannous’s work illustrates both his success as an artist and his failure, at least so far, to unleash the societal transformation for which he yearned. He aspired to a society where individuals could live free from tyranny, including the tyranny of his own Alawite sect. These are still not opinions that establishment Syrians are expected to voice aloud. His daughter Dima, a journalist and fiction writer, last summer wrote approvingly about the spread of the war to Alawite areas. “Now they too will know what it means to be Syrian,” she said. She drew death threats from her father’s relatives.

“He thought his plays would transform Arab society, and all his life he was disappointed that they did not,” said Robert Myers, the playwright and AUB professor who cotranslated “Rituals.” “He thought his plays would ignite a revolution in thinking.”

One of Wannous’s best-known lines comes not from a play but from the speech he delivered for UNESCO’s World Theater Day in 1996. It was the first time an Arab had been granted the honor, and it’s fair to assume Wannous knew he was speaking for posterity. “We are condemned to hope,” he said, “and what is taking place today cannot be the end of history.” At the time, he was speaking of the decrepit state of theater in the Arab world, but the phrase has lived on as a wider slogan.

Today the tradition Wannous embodied—the high-profile artist-intellectual driving a national conversation—has itself almost vanished. Throughout the Arab world, the publishing business has long been in decline. Millions watch high-end Arabic television serials, but only a few thousand attend theater productions. Most of the great dissident intellectuals are dead, and none in the new generation have achieved political influence. “Today’s intellectuals didn’t start the revolutions. That’s why they have to follow the people instead of leading them,” said Dima Wannous. She finds it particularly galling that sectarian religious fundamentalists have come to dominate the uprising in Syria, which she has been forced to flee because of threats.

Yet it’s also possible to see in Syria a testament to the staying power of Wannous’s ideas. The dream of a secular democratic state, the local initiatives to deliver food and health care, and especially, the early days of the revolt that pitted nonviolent citizens against a pitiless regime, all hark back to the ideals of Wannous—namely, his rejection of received authority and his belief that if something better is to come, it will arise from within. In 2011, when demonstrators first marched demanding the resignation of Bashar Assad, the signs some brandished read: “We are condemned to hope.”

Is Dubai the future of cities?

Posted December 1st, 2013 by Thanassis Cambanis and filed in Writing

Burj Khalifa soars above the other buildings in Dubai.

[Originally published in The Boston Globe Ideas section.]

BEIRUT, Lebanon — The Palestinian poet and filmmaker Hind Shoufani moved to Dubai for the same reasons that have attracted millions of other expatriates to the glitzy emirate. In 2009, after decades in the storied and mercurial Arab capital cities of Damascus and Beirut and a sojourn in New York, she wanted to live somewhere stable and cosmopolitan where she also could earn a living.

Five years later, she’s won a devoted following for the Poeticians, a Dubai spoken-word literary performance collective she founded. The group has created a vibrant subculture of writers, all of them expats.

To its critics—and even many of its fans—“culture” and “Dubai” barely belong in the same sentence. The city is perhaps the world’s most extreme example of a business-first, built-from-the-sand boomtown. But Shoufani and her fellow Poeticians have become a prime exhibit in a debate that has broken out with renewed vigor in the Arab world and among urban theorists worldwide: whether the gleaming boomtowns of the Gulf are finally establishing themselves as true cities with a sustainable economy and an authentic culture, and, in the process, creating a genuine new path for the Middle East.

Burj Khalifa, the world’s tallest tower.


This is a question of both economic interest and huge sentimental importance. The Arab world is already home to a series of capitals whose greatness reaches deep into antiquity. The urban fabric and dense ancient quarters of Baghdad, Damascus, Cairo, and Beirut have long nourished Arab culture and politics. But, racked by insurrection, unemployment, and fading fortunes, they have also begun to seem, to many observers, less like templates for the future than cities mired in the past.

The Dubai debate broke out again in October when Sultan Al Qassemi, a widely read gadfly and member of one of the United Arab Emirates’ ruling families, wrote a provocative essay arguing that the new Gulf cities, Dubai most notable among them, had once and for all eclipsed the ancient capitals as the “new centers of the Arab world.” A flurry of withering essays, newspaper articles, and denunciations followed. “I touched a sensitive nerve,” Al Qassemi said in an interview.

His critics object that Dubai is hardly a model—as they point out, 95 percent of the city’s population is not even naturalized, but made up of expatriates with limited rights. And there’s another problem as well. Every one of the Gulf boomtowns—besides Dubai, they include Abu Dhabi, Qatar, Manama, and Kuwait City—has been underwritten, directly or indirectly, by windfall oil profits that won’t last forever.

A mosque in Cairo.


In her seminal work “Cities and the Wealth of Nations,” Jane Jacobs argued that “monoculture” cities last only as long as the boom that created them, whether it involved bauxite, rubber plants, or oil. To thrive in the long term, cities need adaptable, productive economies with diverse, high-quality workers and enough capitalist free-for-all so that unsuccessful businesses fail and new ones spring up. Otherwise they risk the fate of single-industry cities like New Bedford, Detroit, or the completely abandoned onetime mining city at Hashima Island in Japan.

Can Dubai and its peers successfully make that transition? Started as the kind of monocultures that Jacobs argued are doomed to fail, they are now trying to harness their money and top-down management to create a broader web of interconnected industries in the cities and their surrounds.

Dubai is the cutting edge of this experiment. With its reserves depleted, its growth comes from a diverse, post-oil economy, although it still receives significant financial support from other Emirates that are still pumping petrochemicals. Its rulers are determined to make their city a center for culture and education, building museums and institutes, sponsoring festivals and conferences, with the expectation that they can successfully promote an artistic ecosystem through the same methods that attracted new business. What happens next stands to tell us a lot about whether an artificial urban economy can be molded into one that is complex and sustainable. If it can, that may matter not just for the Middle East, but for cities everywhere.


JACOBS, A PIONEERING WRITER on cities and urban economics who died in 2006, is perhaps best known for “The Death and Life of Great American Cities,” her paean to Greenwich Village and small-scale urban planning. But in her 1984 follow-up about the economies of cities and their surrounding hinterlands, Jacobs showed a harder nose for business. To be wealthy and dynamic, she argued, cities needed to avoid both dependence on military contracts and the burden of subsidizing other, poorer territories—pitfalls that have driven the decline of many a capital city. In her book, she touted Boston and Tokyo as creative, diversified economic engines. But many of the world’s storied capital cities, like Istanbul and Paris, she wrote, were fatally bound to declining industries and poor, dependent provinces.

Today, that description perfectly encapsulates the burden carried by the Arab world’s great cities. Baghdad, Damascus, and Cairo historically hosted multiple vibrant economic sectors: finance, research, manufacturing, design, and architecture. Eventually, though, they were hollowed out. Oil money, aid, and trade eliminated local industry, and the profits of these cities were siphoned away to support the poverty-stricken rural areas around them.

As these cities fell behind, a very different new urban model was rising nearby, along the Persian Gulf. As the caricature of the Gulf states goes, nomadic tribes unchanged for millennia suddenly found themselves enriched beyond belief when oil was discovered. The nouveaux riches cities of the Gulf were born of this encounter between the Bedouin and the global oil market.

The reality is more nuanced and interesting. The small emirates along the Gulf coast had long been trade entrepôts, and Dubai was among the most active. Its residents were renowned smugglers, with connections to Persia, the Arabian peninsula, and the Horn of Africa. When oil came, the Emirates already had a flourishing economy. And because their reserves were relatively small, they moved quickly to invest the petro-profits into other sectors that could keep them wealthy when the oil and gas ran out. Dubai, Abu Dhabi, and Sharjah (all in the Emirates) pioneered this model, with neighboring Manama, Qatar, and Kuwait City following it closely.

Skeptics have decried the new Gulf cities, often vociferously, ever since the oil sheikhs announced their grand ambitions to build them in the 1970s. In his 1984 classic “Cities of Salt,” the great novelist Abdelrahman Munif chronicled the rise of the Arab monarchs in the Gulf. He explained the title to Tariq Ali in an interview: “Cities of salt means cities that offer no sustainable existence,” Munif said. “When the waters come in, the first waves will dissolve the salt and reduce these great glass cities to dust. With no means of livelihood they won’t survive.”

And yet, despite the apparent contempt of cultural elites, when civil war swept Lebanon, the Arab world’s financial center moved to the Gulf. Soon other sectors blossomed: light manufacturing, tourism, technology, eventually music and television production.

Dubai led the way. It built the infrastructure for business, and business quickly came. Over the decades, investment and workers flowed to a desert city of malls and gated communities, which had a huge airport, well-maintained streets, and clear rules of the road. Abu Dhabi, Manama, and Doha followed suit, although they took it more slowly; with continuing oil and gas revenue, they didn’t need to take the risk of growth as explosive as Dubai’s. Unlike the austere cities of Saudi Arabia, all the Gulf’s coastal trading cities had a tradition of a kind of tolerance. Other religions were welcome, and so were foreigners, so long as they didn’t question the absolute authority of the ruling family.

In the last four decades of the oil era, that model has evolved into the peculiar institution of a city-state dependent on a short-term foreign labor pool from top to bottom. The most extreme case is Dubai, where less than 5 percent of the 2 million people are citizens. Citizens form a minority in all the other Gulf cities as well. Wages for expatriates—especially workers in construction and service sectors like the airlines—are kept low, and foreign laborers are isolated from better-off city residents in labor camps. Construction workers who complain or try to unionize have been deported. White-collar residents who have criticized Emirati rulers or who have supported movements like the Muslim Brotherhood have had their contracts canceled or their residencies not renewed.

The economic crash of 2008 wiped out some of Dubai’s more excessive projects (although the signature underwater hotel finally opened this year). The real estate bubble burst; expats abandoned their fancy cars at the airport. “There was this glee that the city was over. But it was resilient,” said Yaser Elsheshtawy, a professor of architecture at the UAE University. The Gulf cities bounced back. Millions of new workers, from Asia and Europe as well as the Arab world, have migrated to the Gulf since then.

“The Dubai model might be good, it might be bad, but it deserves to be looked at with respect,” Elsheshtawy said. Egyptian by birth, Elsheshtawy has lived on three continents, and he’s grown tired of having to defend his choice to work in the Emirates. After he read dozens of ripostes to Al Qassemi’s polemic, including many that he felt smacked of cultural snobbery toward anyone who lived in the “superficial” Gulf cities, Elsheshtawy penned an eloquent defense of Dubai called “Tribes with Cities” on his blog Dubaization. He doesn’t like everything about the Gulf, but Elsheshtawy believes that Dubai and the other booming Gulf cities, “unburdened by ancient history” and blessed by a mix of cultures, can provide the world “the blueprint for our urban future.”


DUBAI AND ABU DHABI, the showcase cities of the Emirates, often seem like they’re run by a sci-fi chamber of commerce. They’ve got the world’s tallest building, the biggest new art collections in starchitect-designed museums, the busiest airports, and growing populations. Beneath that surface, though, lies a structure that worries even many supporters: Freedoms are tightly constrained, and most of the population is made up of explicitly second-class noncitizens. Other growing cities chafe under censorship or political restrictions—Beijing, Hong Kong, and Singapore spring to mind. But there’s a difference between those places, where citizen-stakeholders live out their entire lifetimes, and a city where almost everyone is fundamentally a visitor.

Even Al Qassemi, the Emirati who believes the new cities have pioneered a better economic model, has argued that the citizenship restriction will hurt Dubai and cities that follow its model. “Without naturalization, all the Arabs who move here and are creating these cities will see them only as stepping stones to greener pastures,” Al Qassemi said. “People make money and they leave.”

There’s a glaring moral problem with a city ruled by a tiny clan where most of the workers have no rights. But the last few decades suggest that citizenship and political freedom aren’t prerequisites for GDP growth. Jacobs wrote a lot about what cities need, but the only kind of freedom she wrote about was the freedom to innovate and create wealth. The new Gulf cities have carefully provided a state-of-the-art, fairly enforced body of regulations for corporations—precisely the kind of rule of law they actively deny to foreign workers.

In treating businesses more solicitously than individuals, the Gulf city model may depend on a twist that Jacobs never foresaw: They don’t care whether people stick around. In fact, these new cities assume they will be able to innovate precisely because they won’t be encumbered by citizens whose skills are no longer needed. If Dubai needs fewer construction and more service workers, or fewer film producers and more computer programmers, it simply lets its existing contracts lapse and hires the people it needs on the global market. The churn isn’t a flaw in the model; it’s part of its foundation.

That may explain why even Dubai’s defenders are not planning to stick around. Shoufani, the poet, says she cherishes the secure space to create that Dubai has given her, but she still plans to move on in a few years. So does Elsheshtawy, the architecture professor whose academic studies of urban space have helped counter the narrative of Dubai as a joyless, dystopian city interested only in the pursuit of money. He plans to retire somewhere else. It may not matter to Dubai’s fortunes, however, as long as people arrive to take their place.

The next few years will begin to tell how this experiment has turned out. Just as Jane Jacobs said, it doesn’t matter so much how a city was born. It matters how its economy operates. If Dubai and its imitators outlive the oil revenues and regional instability that helped them boom, it will be a lesson for cities everywhere in how to invent a viable urban economy—even if it leads to a kind of city that Jacobs herself might have loathed to live in.

A new brain trust for the Middle East

Posted September 28th, 2013 by Thanassis Cambanis and filed in Writing



[Originally published in The Boston Globe Ideas section.]

BEIRUT—In any American university, what the six researchers in this room are doing would be totally unremarkable: launching a new project to use the tools of social science to solve urgent problems in their home countries.

But for the young Arab Council for the Social Sciences, this work is anything but routine. The group’s fall meeting, initially planned for Cairo, was canceled at the last minute when Egyptian state security agents demanded a list of participants and research questions. The group relocated its meeting to Beirut, meaning some researchers couldn’t come because of visa problems. Still others feared traveling to Lebanon during a week when the United States was considering air strikes against neighboring Syria. Ultimately, the team met in an out-of-the-way hotel, with one participant joining via Skype.

Their first, impromptu agenda item: whether any Arab country could host their future meetings without any political or security risk.

The council’s struggle reflects, in microcosm, a much bigger problem facing the Middle East. With old regimes overthrown or tottering, for the first time in generations a spirit of optimism about change has swept through the region. The Arab uprisings cracked open the door to new ideas for how a modern Arab nation should govern itself—how it could rebalance authority and freedom, religious tradition and civil rights. But a key source of those new ideas is almost completely shut off. With few exceptions, universities and think tanks have been yoked under tight state control for decades. Military and intelligence officials closely monitor research, fearing subversion from political scientists, historians, anthropologists, and other scholars whose work might challenge official narratives and government power.

Just one year after its launch, the ACSS is hoping to fill that vacuum, giving backing and support to scholars who want to do independent, even critical thinking on the problems their societies face. It’s a tall order in a region where for generations talented scholars have routinely fled abroad, mostly to Europe and North America.

“There’s been almost a criminalization of research here,” says Seteney Shami, the Jordanian-born, American-trained anthropologist tapped to get the new council running. “We want to change the way people think about the region.”

It took nearly five years of planning, but the new council is now finishing its first year of operation. It has awarded roughly $500,000 to 50 researchers. Its goal is ambitious: to build an enduring network that will connect individual researchers who until now have mostly labored alone, under censorship, or overseas. The council aims to give those thinkers institutional punch—not just funding for projects that aren’t popular with local regimes or Western universities, but muscle to fight authorities who still maneuver to block even the most innocuous-sounding research missions. Today issues of ethnic, sectarian, and sexual identity are still taboo for governments—and they’re precisely the focus for most of the researchers who met in Beirut.

Shami and the other founders envision the council as only a first step, helping push for new openness in universities and in publishing. Ultimately, they believe, it stands to change not only how Arab countries are perceived abroad, but the way those countries are governed. Given the incredible turbulence in the Arab world today, it’s easy to see why a new flowering of social science is needed—and also why it’s a risky proposition for whoever hopes to set it in motion.


WHEN WE THINK about where ideas come from, we often picture lone thinkers toiling indefatigably until they achieve their “eureka” moments. But in fact, the ideas that change the way we organize or understand our daily world grow most readily from ecosystems that can train such scholars, test their claims, and ultimately spread and promote their new thinking. The modern West takes this system for granted; universities are perhaps the most important nodes in a network that also includes foundations, think tanks, and relatively hands-off government funding agencies.

Even societies like China, with its tradition of centralized state control, support a vigorous web of universities and institutes that produce and test ideas. In China, the government might drive the research agenda—rural educational outcomes, say, or international trade negotiations—but researchers are expected to produce rigorous results that stand up to outside scrutiny.

Not so in the Arab world. Despite a historic scholarly tradition, and a vigorous cohort of contemporary thinkers, the intellectual institutions in Arab countries are today almost universally subordinated to state control. As the dictators of the 1950s matured and grew stronger, they feared—correctly—that universities would nurture political dissent and that students were susceptible to free-thinking. (Even today, groups like the Muslim Brotherhood induct most of their leaders into politics through university student unions.) So they dispatched intelligence officers to control what professors taught, researched, and published, and to curtail student activism of a political flavor.

To the extent that think tanks, institutes, and journals were allowed to exist at all, they became either government mouthpieces or patronage sinecures. Plenty of individual scholars have continued to work in an independent vein, and many do research or publish work outside the influence of the ruling regime. For the most part, though, they do so abroad; those who speak openly in their home countries have often encountered gross repression, like the Egyptian sociologist Saad Eddin Ibrahim, an establishment thinker who was imprisoned in 2000 for taking foreign grant money. His prosecution—despite his close ties to the ruling dictator’s family—served as a reminder to other scholars not to stray too far from the state’s goals.

The obstacles to researchers who hope to confront the region’s problems run even deeper than that: in most Arab nations, even basic data on public issues like water consumption, childhood education, and women’s health are treated as state secrets. So are government budgets, and anything to do with the military, the police, and industry. Egypt still tightly guards access to land registries running as far back as the Ottoman Era; Lebanon famously hasn’t conducted a census since 1932, and refuses to release any government data about the population size or its religious composition. International agencies like the World Bank are allowed to conduct surveys and research as part of development aid projects, but only on the condition that they keep the data confidential.

As a result, great Arab capitals like Damascus, Beirut, Baghdad, and Cairo, once among the world’s great centers of learning, have suffered a systematic impoverishment of intellectual life, especially in the realms in which it is now most needed.


IN MANY WAYS Seteney Shami’s career illuminates those challenges precisely. She left her native Jordan first for the American University of Beirut and then to earn an anthropology PhD from the University of California at Berkeley. Even as a young scholar, she knew she might face problems at home simply for writing about matters of identity among her own ethnic group, the minority Circassians, so she had her dissertation removed from public circulation. In the 1990s, she returned to the Middle East and built a well-regarded graduate program in anthropology at Jordan’s Yarmouk University—one of the few rigorous graduate-level social science departments in the region. The experiment was short-lived, collapsing after only a few years when government patronage hires swamped the university faculty. Several scholars, including Shami, left.

She spent the next decade at US-funded foundations, working in Cairo for the Population Council and later in New York for the Social Science Research Council. She found that the scholars with whom she collaborated—and to whom she sometimes gave grants—relished the networks they built and the ideas that flowed when they had a chance to work with colleagues from different Arab countries, as well as in Europe and the United States. But there was a hitch: When the grant money ended, so did the network. In the West, such a change wouldn’t matter so much; the scholars would always have their home institutions to fall back on. Not so for a scholar in Jordan or Egypt, whose home institution might be far from supportive of his or her work.

“There is no institutional incentive to produce research in most Arab universities,” says Sari Hanafi, a sociologist at the American University of Beirut who is on the board of the new council. Hanafi conducted his own study of academic elites in the region, and discovered that most regional research was confined to safe descriptive projects—“production that will not question religious authority or the political system.” Essentially, it’s scholarship that doesn’t produce any new knowledge, or offer the possibility of change.

So a group of scholars including Shami and Hanafi began to conceive of something that would rectify the problem. Their vision isn’t a “think tank” in the Washington sense, but really almost a substitute for what universities do, creating a new and permanent forum to research, talk about, and solve serious social problems, outside government influence.

They secured funding from the Swedish government and later from Canada, the Carnegie Corporation, and the Ford Foundation. Among Arab countries, only Lebanon had laws that allowed a foreign international organization to operate free from government interference; even here, it took two full years for ACSS to clear all the red tape. In a way, the group’s timing was fortuitous: By the time the organization came online in 2011, with a few dozen scholars participating, the Arab uprisings were in full swing.

The first projects illustrate the flair ACSS is bringing to the sometimes stuffy world of social science. One study focuses on inequality, mobility, and development, and has brought together economists and other “hard” social scientists to explore issues like the impact of refugees from Syria’s civil war. A second, called “New Paradigms Factory,” unabashedly seeks to midwife the region’s next big ideas, asking scholars to challenge its dominant notions of sovereignty, national identity, and government power. The third major project, “Producing the Public in Arab Societies,” probes minority identity, community organizing, and alternative media sources—sensitive issues that governments steer away from, but that will be key to any new ordering of Arab civic life.


SOME AMONG the inaugural crop of researchers worry that despite its forward-looking visions, the window for an experiment like the Arab Council for the Social Sciences might already be closing. Two years ago, when Shami recruited her first team of scholars and grant recipients, a wave of optimism was washing over the Arab world. “It was a time to be brave, to test the boundaries of free expression,” she said.

Now, many of the same old restrictions have returned: Egyptian secret police, who had backed off after dictator Hosni Mubarak’s fall, resumed their open scrutiny of intellectuals after the military coup in July. Qatar, Bahrain, and the United Arab Emirates, fearing revolts from their own citizens, have also tightened their approach to academic inquiry, shutting down local research institutes and banning critical academics from attending conferences and teaching at international universities. Several of the scholars who met in Beirut as part of the “Producing the Public” working group declined to speak on the record even in general terms about the ACSS, fearing that any publicity at all might interfere with their work.

The first ACSS projects will take several years to yield results, and Shami says the long-term impact will depend, in part, on whether Arab governments continue to actively persecute intellectuals they perceive as critics. Meanwhile, Shami says, the group intends to go where it is most acutely needed—funding the edgiest and most relevant scholarship, and pushing for more open access to archives and data. To make up for the lack of regional peer-reviewed journals, it might begin publishing research itself. With the grandiose-sounding goal of transforming the entire discourse about the Arab world, Shami is aware that ACSS might fail; but she’s not interested, she said, “in biding our time and holding annual conferences.” If it proves necessary, she said, ACSS might create its own university.

Already, though, ACSS has had some visible impact. Omar Dahi, an economist at Hampshire College in Massachusetts, won a grant from the new council to study the way Syrian refugees are supporting themselves and how they are transforming the nations to which they’ve fled. He’s spending the semester in Beirut, and he expects to produce a series of articles that will bring rigor to a heated and topical policy debate. The council, he says, allows Arab scholars “to answer their own questions, and not only the questions being asked in North American and Western European academia.”

“It’s not easy, and ACSS alone is not going to be able to do it,” Dahi said. “But you have to start somewhere.”

The Secret History of Democratic Thought in the Middle East

Posted August 19th, 2013 by Thanassis Cambanis and filed in Writing


Supporters of ousted President Mohammed Morsi protested at the Republican Guard building in Nasr City, Cairo. AP PHOTO/HASSAN AMMAR

[Published in The Boston Globe Ideas.]

IS DEMOCRACY POSSIBLE in the Middle East? When observers worry about the future of the region, it’s in part because of the dispiriting political narrative that has held sway for much of the last half century.

The conventional wisdom is that secular liberalism has been all but wiped out as a political idea in the Middle East. The strains of the 20th century—Western colonial interference, wars with Israel, windfall oil profits, impoverished populations—long ago extinguished any meaningful tradition of openness in its young nations. Totalitarian ideas won the day, whether in the form of repressive Islamic rule, capricious secular dictatorships, or hereditary oligarchs. As a result, the recent flowerings of democracy are planted in such thin soil they may be hopeless.

This understanding shapes policy not only in the West, but in the Middle East itself. The American government approaches “democracy promotion” in the Middle East as if it’s introducing some exotic foreign species. Reformists in the Arab world often repeat the canard that politicized Islam is incompatible with democracy to justify savage repression of religious activists. And even after the revolts that began in 2010, a majority of the power brokers in the wider Middle East govern as if popular forces were a nuisance to be placated rather than the source of sovereignty.

An alternative strain of thinking, however, is starting to turn those long-held assumptions on their head. Historians and activists are unearthing forgotten chapters of the region’s history, and reassessing well-known figures and incidents, to find a long, deep, indigenous history of democracy, justice, and constitutionalism. They see the recent uprisings in the Arab world as part of a thread that has run through its story for more than a century—and not, as often depicted, a historical fluke.

The case is most clearly and recently laid out in a new book called “Justice Interrupted: The Struggle for Constitutional Government in the Middle East” by Elizabeth F. Thompson, a historian at the University of Virginia, who tries to provide a scholarly historical foundation to a view gaining traction among activists, politicians, and scholars.

Thompson sees the thirst for justice and reform blossoming as early as 400 years ago, when the region was in the hands of the Ottoman Empire. In the generations since, bureaucrats, intellectuals, workers, and peasants have seized on the language of empire, law, and even Islam to agitate for rights and due process. Though Thompson is an academic historian, she sees her work as not just descriptive but useful, helping Arabs and Iranians revive stories that were deliberately suppressed by political and religious leaders. “A goal of this book is to give people a toolkit to take up strands of their own history that have been dropped,” Thompson said in an interview.

Not everyone agrees with her view: Canonical Middle Eastern history, exemplified by Albert Hourani’s 1962 study “Arabic Thought in the Liberal Age,” holds that liberalism did flourish briefly, but was extinguished as a meaningful force in the early years of the Cold War. Even today Hourani’s analysis is invoked to argue that there’s no authentic democratic current to fuel contemporary Arab politics.

But Thompson’s work resonates with a host of Middle Eastern academics, as well as activists, who are advocating new forms of government and who see their efforts as consistent with local culture and history. It may offer a way out of the pessimism gripping many Arab political activists today, finding connections between apparently disparate reformist forces in the region, and political ideas that are often seen as irreconcilably opposed. Most intriguing, she finds elements of this constitutional liberalism even within fundamentalist Islamist movements that democratizers most worry about. These threads suggest a possible way forward, a way to build a constitutional, democratic consensus on indigenous if often overlooked traditions. Islamists and secular Arabs, it turns out, have found common ground in the past, even written constitutions together. The same could happen again now.


NO ONE, including Thompson, would claim that democracy and individual freedom have been the main drivers of Middle Eastern politics. Before World War I, almost the entire region lay under the dominion of absolute monarchs claiming a mandate from God—either the Ottoman Sultan or the Shah of Iran. Later, Western colonial powers divided up the region in search of cheap resources and markets for their goods.

Yet lost in this history of despots and corrupt dealers is a long stream of democratizing ideas, sometimes percolating from common citizens and sometimes from among the ruling elite. In the 19th and 20th centuries, Western countries were beginning to move away from authoritarian monarchies and toward the belief that more people deserved legal rights. During this same time period in the Middle East, a similar conversation about law, sovereignty, and democracy was taking place, encompassing everything from the role of religion in the state to the right of women to vote.

Although authoritarian governments largely won the day, Thompson argues that the story doesn’t end there: Instead, she weaves together a series of biographies to trace the persistence of more liberal notions of Middle Eastern society. She begins with an Ottoman civil servant named Mustafa Ali who, in 1599, wrote a passionate memo exhorting the Sultan to reform endemic corruption and judicial mismanagement, because injustices were causing subjects to revolt—thus making the empire less profitable.

From 1858 to 2011, a series of leaders—most of them politicians and also prolific writers—amassed substantial public followings and pushed, though usually without success, for constitutional reforms, transparent accountable governments, and the institutions key to a sustainable democracy. Thompson was surprised, she said, to find the case for liberal democracy and rights in the writings of Iranian clerics, Zionist Jews, Palestinian militants, and early Arab Islamists.

With support from the Maronite church, a group of Lebanese peasants formed a short-lived breakaway mountain republic in 1858, dedicated to egalitarian principles. The blacksmith who led the revolt, Tanyus Shahin, insisted on fair taxation and equal protection of the law. His followers took over the great estates and evicted the landlords, but their main demand was for legal equality between peasants and landowners.

An Egyptian colonel named Ahmed Urabi led a revolt against the khedive, Egypt’s Ottoman viceroy, in 1882, inaugurating a tradition of mass revolt that had its echo in Tahrir Square in 2011. Urabi recounts in his memoir that when the khedive dismissed his demands for popular sovereignty in their final confrontation, Urabi replied: “We are God’s creation and free. He did not create us as your property.” Decades later, in 1951, Akram Hourani rallied 10,000 peasants to resist Western colonialism and local corruption in Syria. Eventually, he and his followers in the Baath Party were sidelined by generals who turned the party into a military vehicle.

Some of the stories that Thompson tells are less obscure, like those of the founders of modern Turkey, the one sizable Islamic democracy to emerge from the former Ottoman empire, or of the Iraqi Communist Party, which had its heyday in the decade after World War II, and whose constitutional traditions remain an important force today even if the party itself is almost completely irrelevant.

Perhaps most strikingly, in a region known for clashes of absolutes, she finds an encouraging strain of compromise—in particular in the early 20th century, when secular nationalists negotiated with Islamists in Syria to hammer out a constitution they could both support. It was swept aside when France took over in 1923.

“The Middle East is going to see these crises in Tahrir and Taksim and Iran until it can get back to a moment of compromise, which existed a hundred years ago with Islamic liberalism, where you can have your religion and your democracy, too,” Thompson said.

Thompson said she was surprised to find support for constitutionalism and due process in the writings of Hassan El-Banna, the founder of the Muslim Brotherhood, and even Sayyid Qutb, the ideologue whose writings inspired Al Qaeda. They believed that consensual constitutions could achieve even their religious aims, without disenfranchising citizens who opposed them.

Some of the characters in this tale have largely been lost to history. Others remain hotly contested symbols in today’s politics. The name of Halide Edib, a feminist and avatar of Turkish nationalism in the early 1900s, is still invoked by the governing Islamist party as well as its secular critics. In Egypt, which enjoyed a period of boisterous liberal parliamentary politics between the two world wars, activists today are trying to revive the writings of early Islamists who believed that an accountable constitutional state, with rights for all, would be better than theocracy.


IN THOMPSON’S VIEW, this world did not simply vanish: It lives on in contemporary Arab political thought, most interestingly in Islamist politics.

It’s easy to assume that religiously driven movements are all antidemocratic—and indeed, some have proven so in practice, like the ayatollahs in Iran or the Muslim Brotherhood in Egypt. But Thompson offers a more nuanced view, showing that many of these religious movements have internalized central elements of liberal discourse. The Muslim Brothers wanted to dominate Egypt, but they attempted to do so not by fiat but through a new constitution and a free-market economy.

Princeton historian Max Weiss says his own study of the Levant backs Thompson’s central argument that constitutionalism thrives in the Middle East: For more than a century, a powerful contingent of thinkers, activists, and politicians in the region have embraced rule of law, constitutional checks and balances, and liberal economics. Even when they’ve lost the political struggles of the day, they’ve remained active, shaped institutions like courts and universities, and provided an important pole within national debates.

For those in power, “constitutional” government can often be used as a fig leaf: Nathan Brown, an expert on Islamism and Arab legal systems at The George Washington University, observes that leaders like the monarchs in the Persian Gulf have often wielded constitutions as just another means of extending their absolute rule. And they’re not alone: Egyptian judges, Syrian rebels, and Gulf sheikhs often use law and constitution to “entrench and regularize authoritarianism, not to limit it,” he says.

But among the people themselves, there is a longstanding hope for the rule of law rather than the rule of generals, or of imams. Knowing this history is important, Thompson argues, because it establishes that democracy is a local tradition, with roots among secular as well as religious Middle Easterners. Reformers, liberals, even otherwise conservative advocates for transparency and human rights are often tainted as “foreign” or “Western agents,” imposing alien ideas on Middle Eastern culture. This slur is especially potent given the West’s checkered history in the region, which more often than not involved intervention on behalf of despots rather than reformers.

Even if democracy is far from winning the race, its supporters can take courage from how many Middle Easterners have demanded it in their own vernacular. As Thompson’s book demonstrates, it’s very much a local legacy to claim.

Meet the International Revolutionary Geek Squad

Posted June 24th, 2013 by Thanassis Cambanis and filed in Writing


[Originally published in The Boston Globe Ideas section.]

BEIRUT — Alex, a Swiss bicyclist and Internet geek, thought he’d get a welcome break from his work as a computer engineer and a teacher when he moved to Damascus for a year to study Arabic. It was January 2011, a few months before the Arab uprisings spread to Syria.

But once he was there, Alex noticed with irritation that he couldn’t access Facebook and a seemingly random assortment of websites. Some Google search results were blocked, especially if they turned up pages containing forbidden terms like “Israel.”

He developed tricks to navigate the Internet freely, and sharpened his online evasion skills. If the government was so heavily monitoring and censoring Web surfing, he reasoned, it was surely spying on Internet users in other ways as well. He beefed up ways to encrypt his e-mail and Skype, and learned how to scour his own computer for remote eavesdropping software.

These skills ended up being more than just a personal hobby. When Syrians began to demonstrate against the regime of Bashar Assad, Alex found that his techniques were of urgent use to the friends he had made in the cafes of Damascus. Syrians were turning to activism, and they needed help.

“What was before a nuisance for me was now a danger to my friends,” said Alex, who didn’t want his last name published so as not to endanger any of his Syrian contacts.

Alex ended up spending two years in Beirut training Syrian antiregime activists on how to encrypt their data and protect their phones and laptops from the secret police, in what turned into a full-time job. Alex had become one of a small and secretive group of Internet security experts who work not with governments or companies but with individuals, teaching dissidents the skills they need to evade regime surveillance. Internet activists estimate there are about a hundred technical experts worldwide who work directly with dissidents.

As surveillance steps up and activists get more wired, the practical challenges for these digital security experts offer a unique glimpse of the frontline struggle between free speech and government control, or, as many of them put it, between freedom and authoritarianism. And with surveillance more than ever a concern for Americans at home, the knowledge of these security activists casts a revealing light on the peculiar role of the United States, home of both a powerful tech sector that has generated some of the most skillful evaders of surveillance and a government with an unparalleled ability to peer into our activities.

Indeed, even the people who know how to keep e-mail secret from the Syrians or Iranians say that it would be difficult to make sure the American government cannot eavesdrop on you. “It’s hard to find a service that isn’t vulnerable to the CIA or NSA,” Alex said in an interview in Beirut. “It’s easier if you’re here, or in Syria.”


ACTIVISTS IN AUTHORITARIAN states face a range of basic problems when they sit down at a computer. They need to communicate privately in an environment where the regime likely runs the local Internet service. They may want to send news about domestic problems to international audiences; they may want to mobilize their fellow citizens for a cause the regime is trying to suppress. Whatever they do, they need to keep themselves out of trouble, and also avoid endangering their collaborators by unwittingly revealing their identities to the government.

Tech experts like Alex offer them a mix of standard security protocols and tools designed specifically with lone activists in mind. The first step is “threat modeling.” Where does the danger come from and what are you trying to hide? A well-known dissident might not be worried about revealing her identity, but might want to protect the content of her phone conversations or e-mails. A relatively unknown activist might be more concerned with hiding her online identity, so that the government won’t connect her real-life identity to her blog posts.

Users new to the world of surveillance and evasion must master a new set of tools. There are proxy servers that allow access to blocked websites without tracking users’ browsing history or revealing their IP address. Security trainers teach activists how to encrypt all their data and communications. And because circumstances change, Web security advocates emphasize the importance of multiple, redundant channels—different e-mails, messaging programs and social media platforms—so that when one is compromised, there are other alternatives.

A repressive regime like Bashar Assad’s can effectively stymie dissent with crude old-fashioned ruses. On one occasion, the government arrested a rebel doctor while he was logged in to his Skype account. Agents posed as the doctor, sending all his contacts a file that supposedly contained a list of field hospitals. Instead, it installed a program called a keylogger that allowed the Syrians to monitor everything the doctor’s contacts did on their computers.

Alex warns all the activists he trains that all their encryption measures could come to naught if they are caught, like the doctor was, while their computer is running—or if they give up their encryption password under interrogation. “They can always torture you for your password, and then all your data is compromised,” he said. There’s no foolproof protection against that.

Though these security measures can go a long way, consultants also find themselves needing to balance the effort it takes with the unique urgency of some of the dissidents’ lives. In the heat of violent conflict, encryption doesn’t always take priority. “Many of them are just too busy to care, to follow all the disciplined procedures,” Alex said. “It got to the point where it felt useless to teach them how to encrypt Skype when thousands of tons of TNT were falling from the sky.”


AS ACTIVISTS have tapped online resources in their struggles, a range of security specialists have sprung up to assist them. Some, like Alex, are independent operators; many of them arose loosely around a single crisis and then expanded their efforts.

In response to Tehran’s Web censorship in 2009, a group of Iranian-Americans established an organization called Access Now to train human rights groups and other organizations on more secure communications. In the four years since, it has expanded worldwide and now sends technical specialists to work with activists in the former Soviet Union, the Middle East, and Africa. It also acts as a lobbying group, pressing for uncensored access to the Internet. “Access to an unfiltered and unsurveilled Internet is a human right,” says Katherine Maher, the group’s spokeswoman. “We should have the rights to free speech and assembly online as we have offline in the real world.”

A few years later, when the Arab uprisings began, activists again faced crucial concerns about technology and surveillance. Activists throughout the Arab world planned demonstrations online, and used social media as a major artery of communication. In Egypt, the government was so desperate to thwart the protest movement that in January 2011 it briefly cut off the entire nation’s Internet. Telecomix, a freewheeling collective that began in response to privacy concerns in Europe, was one of many groups that helped build workarounds so that Egyptians could communicate with one another and with the outside world in the early days of the uprising.

In Egypt, Alix Dunn cofounded a sort of nerd-wonk research group called The Engine Room in early 2011 to study and improve the ways that activists get tech support from the small community of available experts. “There are people who got really excited because all of a sudden IT infrastructure suddenly became part of something so political,” Dunn said. “They could be geeky and politically supportive at the same time.”

The advice is not always technical. For instance, in Egypt, Alaa Abdel Fattah, one of the country’s first bloggers and later a strategist for the 2011 uprising, championed a strategy of complete “radical openness.” He convinced other activists to assume that any meeting or communication could be monitored by the secret police. Secret planning for protests should take place person to person, off the grid; in all other matters, activists should be completely open and swamp the secret police with more information than they could process. In the early stages of Egypt’s revolution, that strategy arguably worked; activists were able to outwit the authorities, starting marches in out-of-the-way locations before police could get there.


GIVEN THE RECENT revelations about the US government’s online surveillance programs, it’s striking to note that much of the effort to improve international digital security for dissidents has been spurred by aid from the US government. The month after the Arab uprisings began, the US Department of State pledged $30 million in “Internet Freedom” grants; most of them have gone, directly or indirectly, to the sort of activist training that Alex was doing in Damascus.

In some ways, the latest American surveillance revelations haven’t changed the calculus for activists on the ground. Maher notes that almost all the State Department-funded training instructs activists around the world to assume that their communications are being intercepted. (Her organization doesn’t take any US government funding.)

“It’s broadly known that almost every third-party tool that you can take is fundamentally compromised, or could be compromised with enough time and computing power,” Maher said.

But there are new wrinkles. Some of the safest channels for dissidents have been Skype and Gmail—two services to which the US government has apparently unfettered access. It’s virtually impossible for a government like Iran’s to break the powerful encryption used by these companies. Alex, the trainer who worked with Syrians, says that a doctor in Aleppo doesn’t need to worry about the NSA listening to Skype calls, but an activist doing battle with a US corporation might.

Officially, American policy promotes a surveillance-free Internet around the world, although Washington’s actual practices have undercut the credibility of the US government on this issue. How will Washington continue to insist, for example, that Iranian activists should be able to plan protests and have political discussions online without government surveillance, when Americans cannot be sure that they are free to do the same?

For activists grappling with real-time emergencies in places like Syria or long-term repression in China, Russia, and elsewhere, the latest news doesn’t change their basic strategy—but it may make the outlook for Internet freedom darker.

“These revelations set a terrible precedent that could be used to justify pervasive surveillance elsewhere,” Maher said. “Americans can go to the courts or their legislators to try and challenge these programs, but individuals in authoritarian states won’t have these options.”

American Energy Independence: The Great Shake-up

Posted May 26th, 2013 by Thanassis Cambanis and filed in Writing


[Originally published in The Boston Globe Ideas section.]

EVER SINCE AMERICANS had to briefly ration gas in 1973, “energy independence” has been one of the long-range goals of US policy. Presidents since Richard Nixon have promised that America would someday wean itself off its reliance on foreign oil and gas, a reliance long seen as a gaping hole in America’s national security because it leaves us vulnerable to the outside world. It also handcuffs our foreign policy, entangling America in unstable petroleum-producing regions like the Middle East and West Africa.

Given the United States’ huge appetite for fuel, energy independence has always seemed more of a dream than a realistic prospect. But today, nearly four decades later, energy independence is starting to loom in sight. Sustained high oil prices have made it economically viable to exploit harder-to-reach deposits. Techniques pioneered over the last decade, with US government support, have made it possible to extract shale oil more efficiently. It helps, too, that Americans have kept reducing their petrochemical consumption, a trend driven as much by high prices as by official policy. Total oil consumption peaked at 20.7 million barrels per day in 2004. By 2010, the most recent year tracked in the CIA Factbook, consumption had fallen by nearly a tenth.

Last year, the United States imported only 40 percent of the oil it consumed, down from 60 percent in 2005. And by next year, according to the US Energy Information Administration, the United States will need to import only 30 percent of its oil. That’s been driven by an almost overnight jump in domestic oil production, which had remained static at about 5 million barrels per day for years, but is at 7 million now and will be at 8.5 million by the end of 2014. If these trends continue, the United States will be able to supply all its own energy needs by 2030 and be able to export oil by 2035. In fact, according to the government’s latest projections, the country is on track to become the world’s largest oil producer in less than a decade.

Yet as this once unimaginable prospect becomes a realistic possibility, it’s far from clear that it will solve all the problems it was supposed to. As much as boosters hope otherwise, energy independence isn’t likely to free America from its foreign policy entanglements. And at worst, say some skeptics who specialize in energy markets, it might create a whole new host of them, subjecting America to the same economic buffeting that plagues most oil exporters, and handing China even more global influence as the world’s behemoth consumer.

The prospect is prompting a profound rethinking among America’s top diplomats, and experts across a broad swath of the foreign policy world are beginning to explore the kind of global shake-up it might bring. Some are optimistic. “The shifts are likely to be significant, with profound long-term implications,” wrote Citigroup’s Edward Morse in “Energy 2020: Independence Day,” a report published this spring. “Burgeoning US energy independence brings with it an opportunity to re-define the parameters of post-Cold War foreign policy.”

As much as the shift brings opportunities, however, it is also likely to open the United States up to liabilities we have not yet had to face. The consequences may be both good and bad, enriching and destabilizing for US interests—but they will certainly have a major impact on our geopolitics, in ways that the policy world is only just beginning to understand.


WHEN RICHARD NIXON was president, America consumed about one-third of the world’s oil, importing about 8.4 million barrels per day chiefly from the Middle East. The status quo hummed along until the Arab-Israeli war of 1973. The United States sent weapons to Israel, and the Arab states retaliated with a six-month oil embargo, refusing to sell oil to America. It was the only time in history that the “oil weapon” was effectively used, and it made a permanent impression on the United States.

Over time, the American response to the embargo came to include three major initiatives that still shape energy policy today. First, the government promoted lower oil consumption by pushing coal and natural gas power plants, home insulation, and mileage standards for cars. Second, the country drilled for more of its own oil. Third, and perhaps most important from a foreign-policy standpoint, the United States promoted a unified global oil market in which any country had the practical means to buy oil from any other. That meant that even if some countries couldn’t do business with each other—say, Iran and the United States—it wouldn’t affect the overall price and availability of oil. Other countries could fill in the gap.

The dreams of energy independence crossed party lines. Though liberals and conservatives differ on the means—how much we should rely on new drilling versus energy conservation—both parties have endorsed the quest. It was one of the few issues on which Presidents Carter and Reagan agreed.

America has made steady progress over the years, to the point where the nation’s total oil consumption has actually begun to drop. As this has happened, the high cost of global energy has also made it profitable to increase domestic production of natural gas and oil. A few months ago, both the US Energy Information Administration and the International Energy Agency predicted that if current production trends continue, the United States will overtake Saudi Arabia and Russia as the world’s largest oil producer in 2017.

Taken together, our slowing appetite and booming production mean that with a suddenness that has surprised many observers, the prospect of energy independence—technically speaking, at least—looms in the windshield.

Energy independence looks different today, however, than it did in the oil-shocked 1970s. For one thing, the energy market is a linchpin of the world order, and any big shift is likely to have costs to stability. Some analysts have warned that America’s growing oil production will create a glut that lowers prices, eating up the profits of oil countries and destabilizing their regimes. (That’s in the short term, anyway; worldwide, oil demand is still rising fast.) Falling prices mean that countries that depend on oil will face sudden cash shortages. It’s easy to imagine how destabilizing that could be for a natural-resource power like Russia, for the monarchs of the Persian Gulf, or for the dictators in Central Asia. No matter how distasteful their rule, the prospect of an unruly transition, or worse still, a protracted conflict, in any of those countries could cause havoc.

In the long term, this is not necessarily a bad thing: Weakening oppressive or corrupt governments could ultimately be beneficial for the people of those countries. And a shift in the balance of power away from the Gulf monarchies of OPEC and toward the United States could have a democratizing effect. In any event, though, lower oil prices and a dynamic energy market make the current stable order unpredictable.

China’s economic rise has also changed the global energy equation. For now, China is largely without its own petroleum supplies and is replacing the United States as the largest importer. As China steps into the United States’ shoes as the world’s largest oil customer, it will gain influence in oil-producing regions as American influence wanes. It might also feel compelled to invest more heavily in an aggressive navy, fearing that the United States will no longer shoulder the responsibility of policing shipping lanes in the Persian Gulf and elsewhere—a costly security service that America pays for but which benefits the entire network of global trade.

Domestically, there’s also the “resource curse,” which afflicts countries that depend too heavily on extracted commodities like minerals or petroleum. Such industries don’t add much value to a society beyond the price the commodity fetches at market, and that price is notoriously fickle, meaning fortunes and jobs rise and fall with swings in global prices. The resource curse often implies corruption and autocracy as well. But economists are less concerned about that, since the United States already has an effective government and laws to thwart corruption, and because oil will still make up a minuscule overall share of the economy. Last year oil and gas extraction amounted to just 1.2 percent of the American gross domestic product.


THERE ARE STILL plenty of people who think that energy self-sufficiency will be an unalloyed good. Jay Hakes, who has pursued the goal as an energy official under the last three Democratic presidents, says that America will reap countless political and economic dividends. It will help the trade deficit, give American companies and workers benefits when oil prices are high, and insulate the country from supply shocks. It will also give Washington wider latitude when dealing with oil-producing countries, on which it will depend less. “There are some downsides,” he acknowledges, “but they’re outweighed by all the positives.”

One benefit that self-sufficiency won’t bring, it seems clear, is a sudden independence from the politics of the Middle East. The region produces about a third of the world’s oil, and Saudi Arabia alone has so much oil that it can raise its capacity at a moment’s notice to make up for a shortfall anywhere else in the world.

Already, America is largely independent of Middle Eastern oil as a consumer: Only about 15 percent of our supply comes from the region. But we do depend on a stable world market—even more so if we become a net exporter ourselves. So even if we don’t buy Saudi oil, we’ll still need a stable Saudi regime that can add a few million barrels a day to world flows, at a moment’s notice, to offset a disruption somewhere else.

Michael Levi, a fellow at the Council on Foreign Relations and author of the book “The Power Surge: Energy, Opportunity, and the Battle for America’s Future,” believes that the biggest risk of achieving a goal like energy independence is complacency: Without the pressures that importing oil has brought, we may have little reason to innovate our way out of fossil fuels altogether. The policies themselves have achieved a great deal of good, he points out—stabilizing the world’s energy markets, reducing consumption, and pushing us beyond “independence” toward renewable sources like wind and solar power (though today these still make up a vanishingly small portion of the US energy supply).

Levi argues that an American oil bonanza could easily remove the political incentives for long-term planning and sacrifice. “I get scared that we’ll become complacent and make foolish decisions because we believe we’ve become energy independent,” Levi says. Energy independence was a useful slogan to motivate America, but in reality, a sensible energy policy has to balance a plethora of competing concerns, from geopolitics and the environment to consumer demand and fuel’s importance to the economy.

“The real way to be energy independent,” he said, “is actually to not use oil.”

How cities reshape themselves when trust vanishes

Posted April 21st, 2013 by Thanassis Cambanis and filed in Writing


Blast barriers in Baghdad, 2008. DONOVAN WYLIE / MAGNUM PHOTOS 

[Originally published in The Boston Globe Ideas.]

BEIRUT — Everything that people love and hate about cities stems from the agglomeration of humanity packed into a tight space: the traffic, the culture, the chance encounters, the anxious bustle. Along with this proximity come certain feelings, a relative sense of security or of fear.

Over the last 13 years I have lived in a magical succession of cities: Boston, Baghdad, New York, and Beirut. They all made lovely and surprising homes—and they all were distorted to varying degrees by fear and threat.

At root, cities depend on constant leaps of faith. You cross paths every hour with people of vastly different backgrounds and trust that you’ll emerge safely. Each act of violence—a mugging, a murder, a bombing—erodes that faith. Over time, especially if there are more attacks, a city will adapt in subtle but profound and insidious ways.

The bombing last week was a shock to Boston, and a violation. As long as it’s isolated, the city will recover. But three people have died, and more than 170 have been wounded, and the scar will remain. As we think about how to respond, it’s worth also considering what happens when cities become driven by fear.


NEW YORK, WASHINGTON, and to a lesser extent the rest of America have exchanged some of the trappings of an open society for extra security since the Sept. 11 attacks. Metal detectors and guards have become fixtures at government buildings and airports, and it’s not unusual to see a SWAT team with machine guns patrolling in Manhattan. Police conduct random searches in subways, and new buildings feature barriers and setbacks that isolate them from the city’s pedestrian life.

Baghdad and Beirut, however, are reminders of the far greater changes wrought by wars and ubiquitous random violence. A city of about 7 million, low-slung Baghdad sprawls along the banks of the Tigris River. It’s the kind of place where almost everyone has a car, and it blends into expansive suburbs on its fringes. When I first arrived in 2003, the city was reeling from the shock-and-awe bombing and the US invasion. But it was the year that followed that slowly and inexorably transformed the way people lived. First, the US military closed roads and erected checkpoints. Then, militants started ambushing American troops and planting roadside bombs; the ensuing shootouts often engulfed passersby. Finally, extremists and sectarian militias began indiscriminately targeting Iraqi civilians and government personnel—in queues at public buildings, at markets, in mosques, virtually everywhere. Order crumbled.

Baghdad became a city of walls. Wealthy homeowners blocked their own streets with piles of gravel. Jersey barriers sprang up around every ministry, office, and hotel. As the conflict widened, entire neighborhoods were sealed off. People adjusted their commutes to avoid tedious checkpoints and areas with frequent car bombings. Drive times doubled or tripled. Long lines, with invasive searches, became an everyday fact of life. The geography of the city changed. Markets moved, even the old-fashioned outdoor kind where merchants sell livestock or vegetables. Entire sub-cities sprang up to serve Shia and Sunni Baghdadis who no longer could travel through each other’s areas.

Simple civic pleasures atrophied almost overnight. No one wanted to get blown up because they insisted on going out for ice cream. The famed riverfront restaurants went dormant; no more live carp hammered with a mallet and grilled before our eyes. The water-pipe joints in the parks went out of business. Most of the social spaces that defined the city shut down. Booksellers fled Mutanabbi Street, the intellectual center of the city with its antique cafes. The amusement park at the Martyrs Monument shut its gates. Hotel bar pianists emigrated. Dust settled over the playgrounds and fountains at the strip of grassy family restaurants near Baghdad University.

In Beirut, where I moved with my family earlier this year, a generation-long conflict has Balkanized the city’s population. Here, most groups no longer trust each other at all. From 1975 to 1990 the city was split by civil war, and people moved where they felt safest. A cosmopolitan city fragmented into enclaves. Christians flocked to East Beirut, spawning a dozen new commercial hubs. Shiites squatted in the village orchards south of Beirut, and within a decade had built a city-within-a-city almost a million strong—the Dahieh, or “the Suburb.” The original downtown became a demilitarized zone, its Arabesque arcades reduced to rubble, and today has been rebuilt as a sterile, Disney-like tourist and office sector. Ras Beirut, my neighborhood, deteriorated from proudly diverse (and secular) to “Sunni West Beirut,” although it still boasts pockets of stubborn coexistence. In today’s Beirut, my block, where a Druze warlord lives across the street from a church and subsidizes the parking fees of his Shia, Sunni, and Christian neighbors, is a stark exception.

Mixing takes place, but tentatively, and because of frequent outbreaks of violence over the years, Beirutis have internalized the reflexes to fight, defend, and isolate. The result is a city alight with street life, cafes, and boutiques but which can instantaneously shift to war footing. One friend ran into his bartender on a night off at a checkpoint with bandoliers of bullets strapped to his chest. Even when the city appears calm, most people have laid in supplies in case an armed flare-up forces them to stay in their homes for a week. My friend’s teenage daughter keeps a change of clothes in her schoolbag in case she can’t return to her house. Uniformed private guards are everywhere, in every park, on every promenade, at every mall.


WITHIN A FEW weeks of our move to Beirut this year, my 5-year-old son traded his old fantasy, in which he played the doctor and assigned us roles as patients and nurses, for a new one: security guard. While we were setting up for a yard party, he arranged a few plastic chairs by the door. “I’ll check people here,” he declared. He also asked me a lot of questions about the heavily armed soldiers who stand watch on our street: “Will the army shoot me if I make a mistake?”

This is not the childhood he would have in Boston, even after this week. War-molded cities are nightmare reflections of failed states, places where government has gone into free fall, police don’t or can’t do their jobs, and normal life feels out of reach. Beirut is a kind of warning: Physically it appears normal on most days. But trust is gone. The public sphere feels wobbly and impermanent.

Boston is still lucky, with assets that Beirut lost generations ago; it has functional institutions, old communities with tangled but shared histories, and unifying cultural traditions. Boston has police that can get the job done and a baseball team that ties together otherwise divided corners of the city. One lesson of the city I live in now is that circumstances can sever these lifelines faster than we expect. Our connections require continued, perhaps redoubled, care. Without trust, a city can still be a magnificent place to live. Until all at once, it isn’t.

Should America Let Syria Fight On?

Posted April 7th, 2013 by Thanassis Cambanis and filed in Writing


Syrian rebel fighters posed for a photo after several days of intense clashes with the Syrian army in Aleppo, Syria, in October. (AP: NARCISO CONTRERAS)

[Originally published in The Boston Globe Ideas.]

THE NEWS FROM SYRIA keeps getting worse. As it enters its third year, the civil war between the ruthless Assad regime and groups of mostly Sunni rebels has taken more than 70,000 lives and settled into a violent stalemate. Beyond the humanitarian costs, it threatens to engulf the entire region: Syria’s rival militias have set up camp beyond the nation’s borders, destabilizing Turkey, Lebanon, and Jordan. Refugees have made frontier areas of those countries ungovernable.

United Nations peace talks have never really gotten off the ground, and as the conflict gets worse, voices in Europe and America, from both the left and right, have begun to press urgently for some kind of intervention. So far the Obama administration has largely stayed out, trying to identify moderate rebels to back, and officially hoping for a negotiated settlement—a peace deal between Assad’s regime and its collection of enemies.

Given the importance of what’s happening in Syria, it might seem puzzling that the United States is still so much on the sidelines, waiting for a resolution that seems more and more elusive with each passing week. But it is also becoming clear that for America, there’s another way to look at what’s happening. A handful of voices in the Western foreign policy world are quietly starting to acknowledge that a long, drawn-out conflict in Syria doesn’t threaten American interests; to put it coldly, it might even serve them. Assad might be a monster and a despot, they point out, but there is a good chance that whoever replaces him will be worse for the United States. And as long as the war continues, it has some clear benefits for America: It distracts Iran, Hezbollah, and Assad’s government, traditional American antagonists in the region. In the most purely pragmatic policy calculus, they point out, the best solution to Syria’s problems, as far as US interests go, might be no solution at all.

If it’s true that the Syrian war serves American interests, that unsettling insight leads to an even more unsettling question: what to do with that knowledge. No matter how the rest of the world sees the United States, Americans like to think of themselves as moral actors, not the kind of nation that would stand by as another country destroys itself through civil war. Yet as time goes on, it’s starting to look—especially to outsiders—as if America is enabling a massacre that it could do considerably more to end.

For now, the public debate over intervention in America has a whiff of hand-wringing theatricality. We could intervene to staunch the suffering but for circumstances beyond our control: the financial crisis, worries about Assad’s successor, the lingering consequences of the Iraq war. These might explain why America doesn’t stage a costly outright invasion. But they don’t explain why it isn’t sending vastly more assistance to the rebels.

The more Machiavellian analysis of Syria’s war helps clarify the disturbing set of choices before us. It’s unlikely that America would alter the balance in Syria unless the situation worsens and protracted civil war begins to threaten, rather than quietly advance, core US interests. And if we don’t want to wait for things to get that bad, then it is time for America’s policy leaders to start talking more concretely—and more honestly—about when humanitarian concerns should trump our more naked state interests.


MANY AMERICAN observers were heartened when the Arab uprisings spread to Syria in the spring of 2011, starting with peaceful demonstrations against Bashar al-Assad’s police state. Given Assad’s long and murderous reign, a democratic revolution seemed to offer hope. But the regime immediately responded with maximum lethality, arresting protesters and torturing some to death.

Armed rebel groups began to surface around the country, harassing Assad’s military and claiming control over a belt of provincial cities. Assad has pursued a scorched earth strategy, raining shells, missiles, and bombs on any neighborhood that rises up. Rebel areas have suffered for the better part of a year under constant strafing and sniper fire, without access to water, health care, or electricity. Iran and Russia have kept the military pipeline open, and Assad has a major storehouse of chemical weapons. While some rebel groups have been accused of crimes, the regime is disproportionately responsible for the killing, which earlier this year passed the 70,000 mark by a United Nations estimate that close observers consider an undercount.

As the civil war has hardened into a bloody, damaging standoff, many have called for a military intervention, pressing for the United States to side with one of the moderate rebel factions and do whatever it takes to propel it to victory. Liberal humanitarians focus on the dead and the millions driven from their homes by the fighting, and have urged the United States to join the rebel campaign. The right wants intervention on different grounds, arguing that the regional security implications of a failed Syria are too dangerous to ignore; the country occupies a significant strategic location, and the strongest rebel coalition, the Nusra Front, is an Al Qaeda affiliate. Given all those concerns, both sides suggest that it’s only a question of when, not if, the United States gets drawn in.

“Syria’s current trajectory is toward total state failure and a humanitarian catastrophe that will overwhelm at least two of its neighbors, to say nothing of 22 million Syrians,” said Fred Hof, an ambassador who ran Obama’s Syria policy at the State Department until last year, when he quit the administration and became a leading advocate for intervention. His feelings are widely shared in the foreign policy establishment: Liberals like Princeton’s Anne-Marie Slaughter and conservatives like Fouad Ajami have made the interventionist case, as have State Department officials behind the scenes.

Intervention is always risky, and in Syria it’s riskier than elsewhere. The regime has a powerful military at its disposal and major foreign backers in Russia and Iran. An intervention could dramatically escalate the loss of life and inflame a proxy struggle into a regional conflagration.

And yet there’s a flip side to the risks: The war is also becoming a sinkhole for America’s enemies. Iran and Hezbollah, the region’s most persistent irritants to the United States and Israel, have tied up considerable resources and manpower propping up Assad’s regime and establishing new militias. Russia remains a key guarantor of the government, a stance that is costing it support throughout the rest of the Arab world. Gulf monarchies, which tend to be troublesome American allies, have spent small fortunes on the rebel side, sending weapons and establishing exile political organizations. The more the Syrian war sucks up the attention and resources of its entire neighborhood, the greater America’s relative influence in the Middle East.

If that makes Syria an unattractive target for intervention, so too do the politics and position of the combatants. For now, jihadist groups have established themselves as the most effective rebel fighters—and their distaste for Washington approaches their rage against Assad. Egos have fractured the rebellion, with new leaders emerging and falling every week, leaving no unified government-in-waiting for outsiders to support. The violent regime, meanwhile, is no friend to the West.

“I’ll come out and say it,” wrote the American historian and polemicist Daniel Pipes, in an e-mail. “Western powers should guide the conflict to stalemate by helping whichever side is losing. The danger of evil forces lessens when they make war on each other.”

Pipes is a polarizing figure, best known for his broadsides against Islamists and his critique of US policy toward the Middle East, which he usually says is naive. But in this case he’s voicing a sentiment that several diplomats, policy makers, and foreign policy thinkers have expressed to me in private. Some are career diplomats who follow the Syrian war closely. None wants to see the carnage continue, but one said to me with resignation: “For now, the war is helping America, so there’s no incentive to change policy.”

Analysts who follow the conflict up close almost universally want more involvement because they are maddened by the human toll—but many of them see national interests clearly standing in the way. “Russia gets to feel like it’s standing up to America, and America watches its enemies suffer,” one complained. “They don’t care that the Syrian state is hollowing itself out in ways that will come back to haunt everyone.”


IS IT EVER ACCEPTABLE to encourage a war to continue? In the policy world it’s seen as the grittiest kind of realpolitik, a throwback to the imperial age when competing powers often encouraged distant wars to weaken rivals, or to keep colonized nations compliant. During the Cold War the United States fanned proxy wars from Vietnam to Afghanistan to Angola to Nicaragua but invoked the higher principle of stopping the spread of communism, rather than admitting it was simply trying to wear out the Soviet Union.

In Syria it’s impossible to pretend that prolonging the civil war serves a higher goal, and nobody, not even Pipes, wants the United States to occupy the position of abetting a human-rights catastrophe. But the tradeoffs illustrate why Syria has become such a murky problem to solve. Even in an intervention that is humanitarian rather than primarily self-interested, a country needs to weigh the costs and risks of trying to help against the benefits it can realistically expect to bring—and it’s a difficult decision to get involved when those potential costs include threats to its own political interests.

So just what would be bad enough to induce the United States to intervene? An especially egregious massacre—a present-day Srebrenica or Rwanda—could fan such outrage that the White House changes its position. So too could a large-scale violation of the Chemical Weapons Convention—signed by most states in the world, but not Syria. But far more likely is that the war simmers on, ever deadlier, until one side scores a military victory big enough to convince the outside powers to pick a winner. The White House hopes that with time, rebels more to its liking will gain influence and perhaps eclipse the alarming jihadists. That could take years. Many observers fear that Assad will fall and open the way to a five- or ten-year civil war between his successor and a well-armed coalition of Islamist militias, turning Syria into an Afghanistan on the Euphrates. The only thing that seems likely is that whatever comes next will be tragic for the people of Syria.

Because this chilly if practical logic is largely unspoken, the current hands-off policy continues to bewilder many American onlookers. It would be easier to navigate the conversation about intervention if the White House, and the policy community, admitted what observers are starting to describe as the benefits of the war. Only then can we move forward to the real moral and political calculations at stake: for example, whether giving Iran a black eye is worth having a hand in the tally of Syria’s dead and displaced.

For those up close, it’s looking unhappily like a trip to a bygone era. Walid Jumblatt, the Lebanese Druze warlord, spent much of the last two years trying fruitlessly to persuade Washington and Moscow to midwife a political solution. Now he’s given up. Atop the pile of books on his coffee table sits “The Great Game,” a tale of how superpowers coldly schemed for centuries over Central Asia, heedless of the consequences for the region’s citizens. When he looks at Syria he sees a new incarnation of the same contest, where Russia and America both seek what they want at the expense of Syrians caught in the conflict.

“It’s cynical,” he said in a recent interview. “Now we are headed for a long civil war.”

Egypt’s Free-Speech Backlash

Posted February 9th, 2013 by Thanassis Cambanis and filed in Writing


A street poster from Cairo that reads, “My God, my freedom, O my country.” Photo: NEMO.

[The Internationalist column in The Boston Globe Ideas.]

CAIRO — Every night, Egypt’s current comedic sensation, a doctor hailed as his country’s Jon Stewart, lambastes the nation’s president on TV, mocking his authoritarian dictates and airing montages that reveal apparent lies. On talk shows, opposition politicians hold forth for hours, excoriating government policy and new Islamist president Mohammed Morsi. Protesters use the earthiest of language to compare their political leaders to donkeys, clowns, and worse. Meanwhile, the president’s supporters in the Muslim Brotherhood respond in kind on their new satellite television station and in mass counter-rallies.

Before Egypt’s uprising two years ago, this kind of open debate about the president would have been unthinkable. For nearly three decades, former president Hosni Mubarak exerted near total control over the public sphere. In the twilight of his term, he imprisoned a famous newspaper editor who dared to publish speculation about the ailing president’s declining health. No one else touched the story again.

To Western observers, the freewheeling back-and-forth in Egypt right now might sound like the flowering of a young open society, one of the revolution’s few unalloyed triumphs. But amid the explosion of debate, something less wholesome has begun to arise as well. Though speech is far more open, it now carries a new and different kind of risk, one more unpredictable and sudden. Islamist officials and citizens have begun going after individuals for crimes such as blasphemy and insulting the president, and vaguer charges like sedition and serving foreign interests. The elected Islamist ruling party, the Muslim Brotherhood, pushed a new constitution through Egypt’s constituent assembly in December that expanded the number of possible free speech offenses—including insults to “all prophets.”

Worryingly, a recent report showed that President Morsi—a Brotherhood member, and Egypt’s first-ever genuinely elected, civilian leader—has invoked the law against insulting the presidency far more frequently than any of the dictators who preceded him, and has even directed a full-time prosecutor to summon journalists and others suspected of that crime.

“The repression used to be more limited and strategic,” says Heba Morayef, a researcher for Human Rights Watch in Egypt who has tracked the spate of new laws and prosecutions. “Now, the scary thing is that it’s all over the place.”

The Muslim Brotherhood, as it rises to power, is playing host to conflicting ideas. It wants the United States to view it as a tolerant modern movement that doesn’t arbitrarily silence critics, but at the same time it needs to show its political base of socially conservative constituents in rural Egypt that it won’t tolerate irreligious speech at home. And it wants to argue that despite its religious pedigree, it is behaving within the constraints of the law.

For the time being, Egypt’s proliferating free expression still outstrips government efforts to shut it down. But as the new open society engenders pushback, what’s happening here is in many ways a test case for Islamist rule over a secular state. What’s at stake is whether Islamists—who are vying for elected power in countries around the Muslim world—really only respect the rules until they have enough clout to ignore them.


The text on this Cairo street poster reads, “As they breathe, they lie.” Photo: NEMO

EGYPTIANS ARE RENOWNED throughout the Arab world for jokes and wordplay, as likely to fall from the mouth of a sweet potato peddler as a society journalist. Much of daily life takes place in the crowded public social spaces where people shop, drink hand-pressed sugarcane juice, loiter with friends, or picnic with their families. But under the stifling police state built by Mubarak, that vitality was undercut by fear of the undercover police and informants who lurked everywhere, declaring themselves at sheesha joints or cafes when the conversation veered toward politics.

As a result, a prudent self-censorship ruled the day. State security officials had desks at all the major newspapers, but top editors usually saved them the trouble, restraining their own reporters in advance. In 2005, when one publisher took the bold step of publishing a judge’s letter critical of the regime, he confiscated the cellphones of all his editors and sequestered them in a conference room so they couldn’t tip off authorities before the paper reached the streets.

It wasn’t technically illegal to be a dissident in Egypt; that the paper could be published at all was testament to the fact that some tolerance existed. Egypt’s system was less draconian and violent than the police states in Syria and Iraq, where dissidents were routinely assassinated and tortured. But the limits of public speech were well understood, and Egyptians who cared to criticize the state carefully stayed on the accepted side of the line. Activists would speak out about electoral fraud by the ministry of the interior or against corruption by businesspeople, for example, but would carefully refrain from criticizing the military or Mubarak’s family. Political life as we understand it barely existed.

Egypt’s uprising marked an abrupt break in this long cultural balancing act. For the first time, millions of Egyptians expressed themselves freely and in public, openly defying the intelligence minions and the guns of the police. It was shocking when people in the streets called directly for the fall of the regime. Within weeks, previously unimaginable acts had become commonplace. Mubarak’s effigy hung in Tahrir Square. Military generals were mocked as corrupt, sadistic toadies in cartoons and banners. Establishment figures called for trials of former officials and limits on renegade security officials.

In the two years since, free speech has spread with dizzying speed—on buses, during marches, around grocery stalls, everywhere that people congregate. Today there are fewer sacred cows, although even at the peak of revolutionary fervor few Egyptians were willing to risk publicly impugning the military, which was imprisoning thousands without any due process. (An elected member of parliament faced charges when he compared the interim military dictator to a donkey.)

Mohammed Morsi was inaugurated in June, after a tight election that pitted him against a former Mubarak crony. Morsi campaigned on a promise to excise the old regime’s ways from the state, and on a grandiose Islamist platform called “The Renaissance.” His regime has fared poorly in its efforts to take control of the police and judiciary. Nor has it made much progress on its sweeping but impractical proposals to end poverty and save the Egyptian economy. It has proven easier to talk about Islamic social issues: allegations of blasphemy by Christians and atheist bloggers; alcohol consumption and the sexual norms of secular Egyptians; and the idea, widely held among Brotherhood supporters, that a godless cabal of old-regime supporters is secretly plotting to seize power.

Before it won the presidency, the Muslim Brotherhood emphasized it had been fairly elected; the party was Islamist, it said, but from the pragmatic, democratic end of the spectrum. But in recent months, there’s been more than a whiff of Big Brother about the Brotherhood. Supposed volunteers attacked demonstrators outside Morsi’s presidential palace—and then were videotaped turning over their victims to Brotherhood operatives. Allegations of torture, illegal detention, and murder by state agents pile up uninvestigated.

As revolutionaries and other critical Egyptians have turned their ire from the old regime to the new, the Brotherhood also has begun targeting political speech. The new constitution, authored by the Brotherhood and forced through Egypt’s constituent assembly in an overnight session over the objections of the secular opposition and even some mainstream religious clerics, criminalized blasphemy and expanded older statutes against insults to leaders, state institutions like the courts, and religious figures. Popular journalists have been threatened with arrest, while less famous individuals, including children improbably accused of desecrating a Koran, have been thrown into detention. Morsi’s presidential advisers regularly contact human rights activists and journalists to challenge their reports, a level of attention and pressure previously unknown here.

In addition to the old legal tools to limit free expression, which are now more heavily used by the Islamists than they were by Mubarak, the new constitution has added criminal penalties for insulting all religions and empowers courts to shut down media outlets that don’t “respect the sanctity of the private lives of citizens and the requirements of national security.”

The Egyptian government began an investigation of TV comedian Bassem Youssef but dropped its charges after a public outcry.

Egyptian human rights monitors have tracked dozens of such cases, including three that were filed by the president’s own legal team. Gamal Eid at the Arab Network for Human Rights Information charted 40 cases that prosecuted political critics for what amounted to dissenting speech in the first 200 days of Morsi’s regime. That’s more, he claims, than during Mubarak’s entire reign, and more charges of insulting the president than had been filed since 1909, when the law was first written.


IT’S A WELL-KNOWN PRECEPT in politics that times of transition are the most unstable, and that the fight to establish civil liberties carries risks. The current speech crackdown may just be an expected symptom of the shift from an effective authoritarian state to competitive politics. Mubarak, of course, had less need to prosecute a population that mostly kept quiet.

It could also be a sign of desperation on the part of the Brotherhood, as it struggles to rule without buy-in from the police and state bureaucracy. Or it could, more alarmingly, mark a transition to a genuine new era of censorship in the most populous Arab country, this time driven as much by the Islamist cultural agenda as by the quest to keep a grip on power.

It is that last prospect that makes the path Egypt takes so important. By dint of its size and cultural heft, the country remains a major influence across the Arab world, and both in Egypt and elsewhere, the Muslim Brotherhood is at the front lines of political Islam—trying to balance the cultural conservatism of its rank-and-file supporters with the openness the world expects from democratic society.

There are signs that the Brotherhood wants to at least make gestures toward Western norms, though it remains hard to gauge exactly how open an Egypt its members would like to see. At one point the government began an investigation of Bassem Youssef, the Jon Stewart-like TV comedian, but abruptly dropped its charges in January after a public outcry.

During the wave of bad publicity around the investigation, one of President Morsi’s advisers issued a statement claiming that the state would never interfere in free speech—so long as citizens and the press worked to raise their “level of credibility.”

“Human dignity has been a core demand of the revolution and should not be undermined under the guise of ‘free speech,’” presidential adviser Esam El-Haddad said in a statement that placed ominous boundaries on the very idea of free speech that it purported to advance. “Rather, with freedom of speech comes responsibility to fellow citizens.”

What scares many people is how they define “responsibility.” A widely watched video clip portrays a Salafi cleric lecturing his followers about how Egypt’s new constitution will allow pious Muslims to limit Christian freedoms and silence secular critics (the cleric, Sheikh Yasser Borhami, is from a more fundamentalist current, separate from but allied with the Brotherhood). When critics look at the Brotherhood’s current spate of investigations and threatened prosecutions, they see the political manifestation of the same exclusionary impulse: the polarizing notion that the Islamists’ actions are blessed by God and, by implication, that to criticize them is sacrilege.

Modern Islamism hasn’t reckoned with this implicit conflict yet, even internally. Officially, one current of the Brotherhood’s ideology prioritizes social activism over politics, and eschews coercion in religious matters. But another, perhaps more popular strain in Brotherhood thinking agitates for a religious revolution in people’s daily lives, and that strain appears to be driving the behavior of the Brothers suddenly in charge of the nation. Their fervor is colliding squarely with the secular responsibility of running a state like Egypt, which for all its shortcomings has real institutions, laws, and a civil society that expects modern freedoms and protections. The first stage of Egypt’s transition from military dictatorship has ended, but the great clash between religious and secular politics is just beginning to unfold.


What really drives civil wars?

Posted January 15th, 2013 by Thanassis Cambanis and filed in Writing

Fotini Christia with a Syrian girl in a camp for Internally Displaced Persons (IDPs) in the village of Atmeh, Syria.


[Originally published in The Boston Globe.]

WHAT IS a civil war, really?

At one level the answer is obvious: an internal fight for control of a nation. But in the bloody conflicts that split modern states, our policy makers often understand something deeper to be at work. The vengeful slaughter that has ripped apart Bosnia, Rwanda, Syria, and Yemen is most often seen as the armed eruption of ancient and complex hatreds. Afghanistan is embroiled in a nearly impenetrable melee between Pashtuns and smaller ethnic groups, according to this thinking; Iraq is split by a long-suppressed Sunni-Shia feud. The coalitions fighting these wars are seen as motivated by the deepest sort of identity politics, ideologies concerned with group survival and the essence of who we are.

This view has long shaped America’s engagement with countries enmeshed in civil war. It is also wrong, argues Fotini Christia, an up-and-coming political scientist at MIT.

In a new book, “Alliance Formation in Civil Wars,” Christia marshals in-depth studies of the recent wars in Afghanistan, Iraq, and Bosnia, along with empirical data from 53 civil conflicts, to show that in one civil war after another, the factions behave less like enraged siblings and more like clinically rational actors, switching sides and making deals in pursuit of power. They might use compelling stories about religion or ethnicity to justify their decisions, but their real motives aren’t all that different from armies squaring off in any other kind of conflict.

“The idea that today’s enemy can be the next day’s friend was very compelling to me,” Christia said in an interview. “We should not be surprised to see groups switching sides, based on how the balance of power on the ground evolves.”

How we understand civil wars matters. Most civil wars drag on until they’re resolved by a foreign power, which in this era almost always includes the United States. If Christia is right and we’re mistaken about what motivates the groups fighting in these internecine free-for-alls, we’re likely to misjudge our inevitable interventions—waiting too long, or guessing wrong about what to do.


CIVIL WARS ALWAYS have loomed large in the collective consciousness. Americans still debate theirs so vociferously that a blockbuster film about Abraham Lincoln feels topical 150 years after his death. Eastern Europe saw several years of ferocious killing in the round of civil wars that followed World War II.

Such wars have been understood as fights over differences that can’t be resolved any other way: fundamental questions of ideology, identity, creed. A disputed border can be redrawn; not so an ethnic grudge. In the last two decades, identity has become the preferred explanation for persistent conflicts around the world, from Chechnya to Armenia and Azerbaijan to cleavages between Muslims and Christians in Nigeria.

This thinking allows for a simple understanding, and conveniently limits the prospect for a solution. Any identity-based cleavage—Jew vs. Muslim, Bosnian vs. Serb, Catholic vs. Orthodox—is so profoundly personal as to be immutable. The conventional wisdom is best exemplified by a seminal 1996 paper by political scientist Chaim Kaufmann, “Possible and Impossible Solutions to Ethnic Civil Wars,” which argues that bitterly opposed populations will only stop fighting when separated from each other, preferably by a major natural barrier like a river or mountain range.

During the 1990s, this sort of ethnic determinism drove American policy toward Bosnia and Rwanda. It was popularized by Robert Kaplan’s book “Balkan Ghosts,” which was read in the Clinton White House and presented the wars in the former Yugoslavia as just the latest chapter in an insoluble, four-century ethnic feud. Like Kaufmann, Kaplan suggested that the grievances in civil wars could only be managed, never reconciled.

After 9/11, policy makers in Washington continued to view civil wars through this prism, talking about tribes and sects and ethnic groups rather than minority rights, systems of government, and resource-sharing. That view was so dominant that President Bush’s team insisted on designing Iraq’s first post-Saddam governing council with seats designated by sect and ethnicity, against the advice of Iraqis and foreign experts. It became a self-fulfilling prophecy as Iraq’s ethnic civil war peaked in 2006; things settled down only after death squads had cleansed most of Iraq’s mixed neighborhoods, turning the country into a patchwork of ethnically homogenous enclaves. Similarly, this thinking has shaped US policy in Afghanistan, where the military even sent anthropologists to help its troops understand the local culture that was considered the driving factor in the conflict.


Christia grew up in the northern Greek city of Salonica in the 1990s, with the Bosnian war raging just over the border. “It was in our neighborhood and we discussed it vividly every night over dinner,” she says. The question of ethnicity seized her imagination: Were different peoples doomed to conflict by incompatible identities? Or were the decision-makers in civil wars working on a different calculus from their emotional followers? As a graduate student at Harvard, Christia flew to Afghanistan and tried to turn a dispassionate political scientist’s eye to the question of why warlords behave the way they do.

Christia spent years studying these warlords, the factional leaders in a civil war that broke out in the late 1970s. As a graduate student and later as a professor, she returned to Afghanistan to interview some of the nastiest war criminals in the country. She concluded that culture and identity, while important for their adherents, did not seem to factor into the motives of the warlords themselves, and specifically not in their choices of wartime allies. Despite the powerful rhetoric about ethnic alliances forged in blood, warlords repeatedly flipped and switched sides. They used the same language—about tribe, religion, or ethnicity—whether they were fighting yesterday’s foe or joining him.

If ethnicity, religion, and other markers of identity didn’t matter to warlords, Christia asked, what did? It turns out the answer was simple: power. After studying the cases of Afghanistan, Bosnia, and Iraq in intricate detail, Christia built a database of 53 conflicts to test whether her theory applied more widely. She ran regression analyses and showed that it did: Warlords adjusted their loyalties opportunistically, always angling for the best slice of the future government. It’s not quite as simple as siding with the presumed winner, she says: It’s picking the weakest likely winner, and therefore the one most likely to share power with an ally.

In this model of warlord behavior, the many factions in a civil war are less like Cain and Abel and more like the mafia families in “The Godfather” trilogy. Loyalties follow business interests, and business interests change; meanwhile, the talk about family and blood keeps the foot soldiers motivated. In Bosnia, one Muslim warlord joined forces with the Serbs after the Serbs’ horrific massacre of Muslims at Srebrenica, and justified his switch by saying that the central government in Sarajevo was run by fanatics while he represented the true, moderate Islam. In case after case of intractable civil wars—Afghanistan, Lebanon, Iraq, the former Yugoslavia—Christia found similar patterns of fluid alliances.

“The elites make the decision, and then sell it to the people who follow them with whatever narrative sticks,” Christia said. “We’re both Christians? Or we’re both minorities? Or we’re both anti-communist? Whatever sticks.”


CHRISTIA’S WORK has been received with great interest, though not all her academic colleagues agree with her conclusions. Critics say identity is more important in civil wars than she gives it credit for, and we ignore it at our peril. Roger Petersen, an expert on ethnic war and Eastern Europe who is a colleague of Christia’s at MIT and supervised her dissertation, argues that in some conflicts, identity—ethnic, religious, or ideological—is truly the most important factor. Leaders might make a pact with the devil to survive, but once a conflict heads to its conclusion, irreconcilable conflicts often end with a fight to the death. Communists and nationalists fought for total victory in Eastern Europe’s civil wars, with no regard to their fleeting coalitions of opportunity against foreign occupiers during World War II. More recently, Bosnia’s war only ended after the country had split into ethnically cleansed cantons.

Christia acknowledges that her theory needs further testing to see if it applies in every case. She is currently studying how identity politics play out at the most local level in present-day Syria and Yemen.

If it holds up, though, Christia’s research has direct bearing on how we ought to view the conflict today in a nation like Syria. The teetering dictatorship is the stronghold of the minority Alawite sect in a Sunni-majority nation. Its leader, Bashar Assad, has rallied his constituents on sectarian grounds, saying his regime offers the only protection for Syria’s minorities against an increasingly Sunni uprising. But Syria’s rebellion comprises dozens of armed factions, and Christia suggests that these factions, which run the gamut of ethnic and sectarian communities, will be swayed more by the prospect of power in a post-Assad Syria than by ethnic loyalty. That would mean the United States could win the loyalty of different fighting factions by ignoring who they are—Sunni, Kurd, secular, Armenian, Alawite—and by focusing instead on their willingness to side with America or international forces in exchange for guns, money, or promises of future political power.

For America, civil wars elsewhere in the world might seem like somebody else’s problem. But in reality we’re very likely to end up playing a role: Most civil wars don’t end without foreign intervention, and America is the lone global superpower, with huge sway at the United Nations. Christia suggests that Washington would do well to acknowledge early on that it will end up intervening in some form in any civil war that threatens a strategic interest. That doesn’t necessarily mean boots on the ground, but it means active funding of factions and shaping of the alliances that are doing the fighting. In a war like Syria’s, that means the United States has wasted precious time on the sidelines.

Despite her sustained look at the worst of human conflict, Christia says she considers herself an optimist: People spend most of their history peacefully coexisting with different groups, and only a tiny portion of the time fighting. And once civil wars do break out, the empirical evidence shows that hatreds aren’t eternal. “If identities mattered so much,” she says, “you wouldn’t see so much shifting around.”

What failed negotiations teach us

Posted December 9th, 2012 by Thanassis Cambanis and filed in Writing


[Originally published in The Boston Globe Ideas.]

ROCKETS AND MORTARS have stopped flying over the border between Gaza and Israel, a temporary lull in one of the most intractable, hot-and-cold wars of our time. The hostilities of late November ended after negotiators for Hamas and Israel—who refused to talk face-to-face, preferring to send messages via Egyptian diplomats—agreed to a rudimentary cease-fire. Their tenuous accord has no enforcement mechanism and doesn’t even nod to discussing the festering problems that underlie the most recent crisis. Both sides say they expect another conflict; experience suggests it’s just a question of when.

Generations of negotiators have cut their teeth trying to forge a peace agreement between Israel and the Palestinians, and their failures are as varied as they are numerous: Camp David, Madrid, the Oslo Accords, Wye River, Taba, the Road Map. For diplomats and deal-makers around the world—even those with no particular stake in Middle East peace—Israel and Palestine have become the ultimate test of international negotiations.

For Guy Olivier Faure, a French sociologist who has dedicated his career to figuring out how to solve intractable international problems, they’re something else as well: an almost unparalleled trove of insights into how negotiations can go wrong.

For more than 20 years, Faure has studied not only what makes negotiations around the world succeed, but how they break down. From Israel and the Palestinians to the Biological Weapons Convention protocol to the ongoing talks about Iran’s nuclear program, it’s far more common for negotiations to fail than to work out. And it’s from these failures, Faure says, that we can harvest a more pragmatic idea of what we should be doing instead. “In order to not endlessly repeat the same mistakes, it is essential to understand their causes,” he says.

In a recent book, “Unfinished Business: Why International Negotiations Fail,” Faure commissioned case studies and analysis from more than a dozen academics and actual negotiators, which he then used to make a systematic survey of the causes of failure. Some of the conclusions they reach are as simple as they are surprising. The most important is that the seemingly boring matter of process is much more likely to cause a negotiation to fail than the difficulty of the problem itself. Failed negotiations can sometimes be the precursors to a later success. There are also times when negotiations make a problem worse, especially when a conflict is not yet “ripe” for settlement. And finally, despite their commitment as a group to coming up with something akin to an international negotiator’s handbook, Faure and his collaborators argue that sometimes negotiations are simply the wrong tool in the first place.


TODAY’S INTERNATIONAL order turns on successful negotiation. When we think about what’s right in the world, we’re often thinking about the results of agreements like the START treaties, which ended the nuclear arms race between the United States and the Soviet Union; the Geneva Conventions, which govern the conduct of war; or even the General Agreement on Tariffs and Trade, drafted in 1948, which still underpins globalized free trade.

But in negotiations over the most vexing international problems—a hostage situation, a war between a central government and terrorist insurgents, a new multinational agreement—such successes are few and far between. Failure is the norm. Understandably, experts tend to focus on the wins. From US presidents to obscure third-party diplomats, negotiators pore over rare historical successes for tips rather than face the copious and dreary overall record.

Faure wants to change that focus. As an expert he straddles two worlds: He studies diplomacy academically as a sociologist at the Sorbonne, in Paris, and has also trained actual negotiators for decades, at the European Union, the World Trade Organization, and UN agencies. Over his career, he has produced 15 books spanning all the different theories behind negotiation, and ultimately concluded that negotiations that failed, or simply sputtered out inconclusively, were the most interesting. Each failure had multiple causes, but it was possible to compile a comprehensive list, and from that, consistent patterns.

“Unfinished Business” takes a look at what happened during a number of high-profile failures, and examines the underlying conditions of each set of talks: trust, cultural differences, psychology, the role of intermediaries, and outsiders who can derail negotiations or overload them with extraneous demands.

One of Faure’s insights concerns the mindset of negotiators—a factor negotiators themselves often believe is irrelevant, but which Faure and his colleagues believe can often determine the outcome. Incompatible values on the two sides of the table, he says, are much harder to bridge than practical differences, like an argument over a boundary or the mechanics of a cease-fire. As Faure says, “A quantity can be split, but not a value.” This is what Faure saw at work when the Palestinians and Israelis embarked on a rushed negotiation at the Egyptian seaside resort of Taba during Bill Clinton’s final month in office. The two sides had already reached an impasse at a lengthier negotiation in 2000 at Camp David. With the end of his presidency looming and Israeli elections coming up, Clinton summoned them back to the table for a no-nonsense session he hoped would bring speedy closure to disputes over borders, Jerusalem, and refugees. The Palestinians, however, felt that the two sides simply didn’t share the same view of justice and weren’t truly aiming at the same goal of two sovereign states—and so didn’t feel driven to make a deal. That mismatch of long-term beliefs, Faure says, doomed the talks.

There are other warning signs that emerge as patterns in failed talks. Time and again, parties embark on tough negotiations already convinced they will fail—a defeatism that becomes a self-fulfilling prophecy. In interviewing professional negotiators, Faure and his colleagues found that they often don’t pay that much attention to the practical aspects of how to run a negotiation—a surprising lapse.

Faure and his team have found that a well-planned process is one of the best predictors of success, and that many negotiations are terrible at it. When the European Union and the United States talked to Iran about its nuclear program, various European countries kept adding extraneous issues to the talks, for instance linking Iran’s behavior with nukes to existing trade agreements. The additions made the negotiations unwieldy, and provoked crises over matters peripheral to the actual subject. In the case of the mediation over Cyprus, the Greek and Turkish sides didn’t bother coming up with any tangible proposed solution to negotiate over, instead talking vaguely about a Swiss model. Negotiations failed in part because neither side knew what that would mean for Cyprus.

Ultimately, Faure argues, mistrust and inflexibility tangle up negotiators more than any other factor. Negotiators often end up demonizing the other side, and as a result might embark on a process that by its structure encourages failure. For instance, Israeli and Palestinian reliance on mediators to ferry messages—even between delegations in the same resort—maximizes misunderstandings and minimizes the possibility that either side will sense a genuine opening.


WHAT EMERGES FROM Faure’s work, overall, is that the outlook for negotiations is usually pretty bleak—certainly bleaker than Faure himself prefers to highlight. In some cases, he suggests that diplomats should put off an outright negotiation until they’ve dealt with gaps in trust and cultural communication, or until the conflict feels “ripe” for solution to the parties involved. There’s no point, he suggests, in embarking on a negotiation if all the stakeholders are convinced it’s a waste of time—indeed, a failed negotiation can sometimes exacerbate a problem.

The most promising scenarios occur when both sides are suffering under the status quo, which creates what social scientists call a “mutually hurting stalemate,” with soldiers or civilians dying on both sides, and a “mutually enticing opportunity” if there’s a peace agreement or a prisoner swap. In that case, a decent deal will give both sides a chance to genuinely improve their lot.

Unfortunately for the many whose hopes are riding on negotiations, the truly challenging international problems of our age don’t always come with a strong incentive to compromise. In military conflicts, there is little incentive to resolve matters when a conflict is lopsided in one side’s favor (Shia versus Sunni in Iraq, Israel versus Hamas, the Taliban versus the United States in Afghanistan). The same holds in broader international agreements: They’re complicated and intractable largely because the states involved are—no matter what they say—quite comfortable with the status quo. Think about climate change: The biggest gas-guzzlers and polluters, the ones whose assent matters the most for a carbon-reduction treaty, are often the last states that will pay the price for rising oceans. Meanwhile, the poorer nations whose populations are most at the mercy of sea levels or changing weather have little clout. Just as it’s easy for a relatively secure Israel to stand pat on the Palestinian question, there are few immediate consequences for the United States and China if they sit out climate talks.

It’s not all bleak news. Even in cases where negotiations appear hamstrung—like climate change and Palestine—there are, Faure points out, plenty of other reasons to continue negotiating. Negotiations are a form of diplomacy, dialogue, and recognition, and even in failure can serve some other interests of the parties involved. But—as the impressive historical record of failed international agreements shows—it’s naive to think that they will always yield a solution.

The Crises We’re Ignoring

Posted November 13th, 2012 by Thanassis Cambanis and filed in Writing

[Originally published in The Boston Globe.]

During the presidential campaign, two issues often seemed like the only foreign policy topics in the entire world: the Middle East and China. Those are unquestionably important: The wider Middle East contains most of the world’s oil and, currently, much of its conflict; and China is the world’s manufacturing base and America’s primary lender. But a host of other issues will demand Washington’s sustained attention over the next four years, and they command nowhere near the same share of Americans’ attention.

You could call them the icebergs, largely hidden challenges that lie in wait for the second Obama administration. Like all of us, when it comes to priorities, the people in Washington assume that the thing that comes to mind first must be the most important. The recent crises or tensions with Afghanistan, Benghazi, and China make these feel like the whole story. But in fact they are really just a few chapters, and the ones we’re ignoring completely may actually have the most surprises in store.

If the administration wants to stay ahead of the game, here’s what it will need to spend more of its time and energy dealing with in the coming four years.


The eurozone. This is the least sexy, most important foreign policy issue facing America. The nations linked by the euro have started to split apart, with economies staying fairly strong in the north while others, including major economies like Italy and Spain, weakened to the point that they could go bankrupt. To save the euro, the continent’s stronger players might have to spend and borrow to untold levels to bail out its weaker ones. Or it could let them fail, and suffer a chain of collapses that will throw the entire continent, and possibly the world, into another, even longer recession. The debt crisis in Europe could make the American financial crisis of 2008 seem a minor contretemps by comparison.


Europe’s recovery needs to be managed, and that requires global cooperation and money.

Washington and China, along with the International Monetary Fund and the World Bank, will have to be closely involved, and that won’t happen without American leadership. Though the European crisis has already been a front-burner problem for two years, in the United States it barely cracks the public agenda except as a rhetorical bludgeon: “That guy wants to turn America into Greece!” But Europe’s importance to the global economy, and to America, is staggering: It’s the world’s largest economic bloc, worth $17 trillion, and it’s the US’s largest trading partner. If Europe goes down, we all go down.



Climate change. No politician likes to talk about climate change. It’s depressing news. It’s become highly partisan in this country, and it has no obvious solution even for those who understand the threat. It requires discussion of all kinds of hugely complex, dull-sounding science. When we do talk about it as a political issue, it’s largely as a domestic one: saving energy, dealing with the increasing fury and frequency of storms like Hurricane Sandy, investing in new infrastructure.

In fact, climate change is a massive foreign policy issue as well. On the preventive side, any emissions reduction requires cooperation across borders—between small numbers of powerful nations, like America and China, along with massive worldwide accords like the failed Kyoto Protocol. The responses will often need to be global as well. Rising oceans and temperatures have no regard for national boundaries, and most of the world’s population lives near soon-to-be-vulnerable coastlines. Entire cities might have to move, or be rebuilt, often across borders. Sandy could cost the American Northeast close to $100 billion when all is said and done (current damage estimates already top $50 billion). Imagine the price of climate-proofing the cities where most of the world lives—Mumbai, Shanghai, Lagos, Alexandria, and so on. Climate change, if unaddressed, could well become an American security issue, propelling unrest and failed states that will spur threats against the US.


Pakistan. Like our tendency to obsess over shark attacks rather than, say, the more significant risk of getting hit by a car, we often find our foreign policy elite preoccupied with rare, dramatic potential threats rather than actual banal ones. You’ll keep hearing about Iran, which might one day have a bomb and which emits noxious rhetoric while supporting well-documented militant groups like Hezbollah. What we really need to hear more and do more about, however, is a regional power that already has nukes (90 to 120 warheads), that is reportedly planning for battlefield bombs that are easier to misplace or steal, and that sponsors rogue terrorist groups that have been regularly killing people in Afghanistan and India for years.

That country is Pakistan. Power there is split among an unstable cast of characters: a dictatorial military, super-empowered Islamic fundamentalists, and a corrupt civilian elite. A significant portion of its huge population has been radicalized, and can easily flit across borders with Iran, Afghanistan, and India. Pakistan isn’t a potential problem; it’s a huge actual problem, a driver of war in Afghanistan, a sponsor of killers of Americans, and perennially, the only actor in the world that actively poses the threat of nuclear war. (The hot war between India and Pakistan in Kargil in 1999 was the first active conflict between two nuclear powers. It’s not talked about much, but remains a genuine nightmare scenario.) Pakistan is also a huge recipient of American aid. We need to find leverage and work to contain, restrain, and stabilize Pakistan.


Transnational crime and drugs. When it comes to violence in the world, foreign-policy thinkers tend to think first about wars, militaries, and diplomacy. But to save money and lives, it would be smarter to think about drugs. In much of the world, the resources spent and lives lost to criminal syndicates in the drug war rival the costs of traditional conflict. Narco-states in the Andes and, increasingly, Central America, make life miserable for their own inhabitants. Criminal off-the-book profits symbiotically feed international crime and terrorism. And in every region of the world, drugs provide the economic engine and financing for militias and terrorist groups; they fuel innumerable security problems, such as human trafficking, illicit weapons sales, piracy, and smuggling. Ultimately, wherever the drug business flourishes, it tends to corrode state authority, leaving vast ungoverned swaths of territory and promoting political violence and weak policing.

The United States pays a lot of attention to this problem in Afghanistan and Mexico, but it’s a drain on resources in corners of the globe that get less attention, from Southeast Asia to Africa. Washington needs to approach the international illegal drug trade like the globalized, multifaceted problem that it is, requiring international law enforcement cooperation but also smart economic solutions to change the market, including legalization.


Mexico. It feels almost painfully obvious, but it’s been a long time since a US president has prioritized our next-door neighbor. Our economies are inextricably linked. America’s supposed problem with illegal immigration is actually the organic development of a fluid shared labor market across the US-Mexico border. Meanwhile, the distant war in Afghanistan eats up an enormous amount of resources while another conflict rages on next door: Mexico’s increasingly violent drug war. Since 2006, it has claimed 50,000 lives, and the violence regularly spills over the border. Washington has collaborated piecemeal with Mexico’s government, but this is a regional conflict, involving criminal syndicates indifferent to jurisdiction. The United States needs to persuade Mexico to pursue a less violent, more sustainable strategy to counter the drug gangs, and then partner with the government there wholeheartedly.


The dangerous Internet. Cyber security might sound like a boondoggle for defense contractors looking for more money to spend on a ginned-up threat. Yet in the last year we’ve seen the real-world consequences of cyber attacks on Iran’s nuclear program, apparently orchestrated by the United States and Israel, and an effective cyber response apparently by Iran that hobbled Saudi Arabia’s oil industry. Harvard’s Joseph Nye points out that cyber espionage and crime already pose serious transnational threats, and recent developments show how war and terrorism will spill into our online networks, potentially threatening everything from our power supply to our personal data.


The US budget. Elementary economics usually begins with the discussion of guns vs. butter: You can’t pay for everything given limited resources, so do you eat or defend yourself? For generations, America has had the luxury of not really having to choose: The economy has mostly boomed since World War II, meaning we never had to cut anything fundamentally important. But America now faces a contracting global economy and a world in which it increasingly has to share resources with other rising powers. This is unfamiliar, and unhappy, territory: America’s next defense and foreign affairs budgets will probably be the first since the Second World War to require serious downsizing at a time when there are actual credible threats to the United States.


The Americans who reelected President Obama didn’t care that much about his foreign policy, according to polls. And, perhaps fittingly, Obama dealt with the rest of world during his first term with competence and caution rather than with flair and executive drive. His impressive focus on Al Qaeda hasn’t been mirrored so far in the rest of his national security policy, made by a team better known for its meetings than for setting clear priorities.

In the wake of a decisive reelection, Obama will have the political latitude to shape a more creative and forward-thinking foreign policy in his second term. If he does, he’ll have to work around both deeply divided legislators and a constrained budget: We simply can’t pay for everything, from land wars to cyber threats to sea walls to protected American industries. The priorities the next administration chooses—and its ability to pass any budget—will dramatically shape the kind of foreign influence America wields over the next four years.

The Carter Doctrine: A Middle East strategy past its prime

Posted October 12th, 2012 by Thanassis Cambanis and filed in Writing

[From The Boston Globe Ideas section.]

Cops say they figure out a suspect’s intentions by watching his hands, not by listening to what comes out of his mouth. The same goes for American foreign policy. Whatever Washington may be saying about its global priorities, America’s hands tend to be occupied in the Middle East, site of all America’s major wars since Vietnam and the target of most of its foreign aid and diplomatic energy.

How to handle the Middle East has become a major point in the presidential campaign, with President Obama arguing for flexibility, patience, and a long menu of options, and challenger Mitt Romney promising a tougher, more consistent approach backed by open-ended military force.

Lurking behind the debate over tactics and approach, however, is a challenge rarely mentioned. The broad strategy that underlies American policy in the region, the Carter Doctrine, is now more than 30 years old, and in dire need of an overhaul. Issued in 1980 and expanded by presidents from both parties, the Carter Doctrine now drives American engagement in a Middle East that looks far different from the region for which it was invented.

President Jimmy Carter confronted another time of great turmoil in the region. The US-supported Shah had fallen in Iran, the Soviets had invaded Afghanistan, and anti-Americanism was flaring, with US embassies attacked and burned. His new doctrine declared a fundamental shift. Because of the importance of oil, security in the Persian Gulf would henceforth be considered a fundamental American interest. The United States committed itself to using any means, including military force, to prevent other powers from establishing hegemony over the Gulf. In the same way that the Truman Doctrine and NATO bound America’s security to Europe’s after World War II, the Carter Doctrine elevated a crowded and contested Middle Eastern shipping lane to nearly the same status as American territory.

The consequences have been profound. Every conflict in the Gulf since (and there has been a constant supply) has involved the United States. Our Navy patrols its waters, in constant tension with Iran; our need for bases there has persuaded us to support otherwise noxious leaders. The Carter Doctrine has driven the US fixation on stability among Arab regimes and Washington’s micromanagement of Israel’s relations with its neighbors. The entire world enjoys the same oil prices when they’re low and stable, but the United States carries almost all of the increasingly unsustainable cost of securing the Gulf.
As difficult as it can be to imagine a fresh approach to such a complex web of alliances and conflicts, the next administration will enjoy a tool that Carter lacked: the insights gained from three decades of sustained, intimate, and often frustrating direct involvement in the region. Hundreds of thousands of American combat troops have done tours in the Middle East, diplomats and politicians have deeply involved themselves in US policy there, and Washington has spent billions of dollars in the process.

In 2012, we look back on a recent level of American engagement with the Middle East never seen before. Even the failures have been failures from which we can learn. The decade that began with the US invasion of Afghanistan and ended with a civil war in Syria holds some transformative lessons, ones that could point the next president toward a new strategy far better suited to what the modern Middle East actually looks like—and to America’s own values.



President Carter issued his new doctrine in what would turn out to be his final State of the Union speech in January 1980. America had been shaken by the oil shocks of the 1970s, in which the Arab-dominated OPEC asserted its control, and also by the fall of the tyrannical Mohammad Reza Pahlavi, Shah of Iran, who had been a stalwart security partner to the United States and Israel.

Nearly everyone in America and most Western economies shared Carter’s immediate goal of protecting the free flow of oil. What was significant was the path he chose to accomplish it. Carter asserted that the United States would take direct charge of security in this turbulent part of the world, rather than take the more indirect, diplomatic approach of balancing regional powers against each other and intervening through proxies and allies. It was the doctrine of a micromanager looking to prevent the next crisis.

Carter’s focus on oil unquestionably made sense, and the doctrine proved effective in the short term. Despite more war and instability in the Middle East, America was insulated from oil shocks and able to begin a long period of economic growth, in part predicated on cheap petrochemicals. But in declaring the Gulf region an American priority, it effectively tied us to a single patch of real estate, a shallow waterway the same size as Oregon, even when it was tangential, or at times inimical, to our greater goal of energy security. The result has been an ever-increasing American investment in the security architecture of the Persian Gulf, from putting US flags on foreign tankers during the Iran-Iraq war in the 1980s, to assembling a huge network of bases after Operation Desert Storm in 1991, to the outright regime-building effort of the Iraq War.

In theory, however, none of this is necessary. America doesn’t really need to worry about who controls the Gulf, so long as there’s no threat to the oil supply. What it does need is to maintain relations in the region that are friendly, or friendly enough, and able to survive democratic changes in regime—and to prevent any other power from monopolizing the region.

The Carter Doctrine, and the policies that have grown up to enforce it, are based on a set of assumptions about American power that might never have been wholly accurate. They assume America has relatively little persuasive influence in the region, but a great deal of effective police power: the ability to control major events like regional wars by supporting one side or even intervening directly, and to prevent or trigger regime change.

Our more recent experience in the Middle East has taught us the opposite lesson. It has become painfully clear over the last 10 years that America has little ability to control transformative events or to order governments around. Over the past decade, when America has made demands, governments have resolutely not listened. Israel kept building settlements. Saudi Arabia kept funding jihadis and religious extremists. Despots in Egypt, Syria, Tunisia, and Libya resisted any meaningful reform. Even in Iraq, where America physically toppled one regime and installed another, a costly occupation wasn’t enough to create the Iraqi government that Washington wanted. The long-term outcome was frustratingly beyond America’s control.

When it comes to requests, however, especially those linked to enticements, the recent past has more encouraging lessons. Analysts often focus on the failings of George W. Bush’s “freedom agenda” period in the Middle East; democracy didn’t break out, but the evidence shows that no matter how reluctantly, regional leaders felt compelled to respond to sustained diplomatic requests, in public and private, to open up political systems. It wasn’t just the threat of a big stick: Egypt and Israel weren’t afraid of an Iraq-style American invasion, yet they acceded to diplomatic pressure from the secretary of state to liberalize their political spheres. Egypt loosened its control over the opposition in 2005 and 2006 votes, while Israel let Hamas run in (and win) the 2006 Palestinian Authority elections. Even prickly Gulf potentates gave dollops of power to elected parliaments. It wasn’t all that America asked, but it was significant.

Paradoxically, by treating the Persian Gulf as an extension of American territory, Washington has reduced itself from global superpower to another neighborhood power, one that can be ignored, or rebuffed, or hectored from across the border. The more we are committed to the Carter Doctrine approach, which makes the military our central tool and physical control of the Gulf waters our top priority, the less we are able to shape events.

The past decade, meanwhile, suggests that soft power affords us some potent levers. The first is money. None of the Middle Eastern countries have sustainable economies; most don’t even have functional ones. The oil states are cash-rich but by no means self-sufficient. They’re dependent on outside expertise to make their countries work, and on foreign markets to sell their oil. Even Israel, which has a real and diverse economy, depends on America’s largesse to undergird its military. That economic power gives America lots of cards to play.

The second is defense. The majority of the Arab world, plus Israel, depends on the American military to provide security. In some cases the protection is literal, as in Bahrain, Qatar, and Kuwait, where US installations project power; elsewhere, as in Saudi Arabia, Egypt, and Jordan, it’s indirect but crucial. (American contractors, for instance, maintain Saudi Arabia’s air force.) America’s military commitments in the Middle East aren’t something it can take or leave as it suits; it’s a marriage, not a dalliance. A savvier diplomatic approach would remind beneficiaries that they can’t take it for granted, and that they need to respond to the nation that provides it.


The Carter Doctrine clearly hasn’t worked out as intended; America is more entangled than ever before, while its stated aims—a secure and stable Persian Gulf, free from any outside control but our own—seem increasingly out of reach. A growing, bipartisan tide of policy intellectuals has grappled with the question of what should replace it, especially given our recent experience.

One response has been to seek a more morally consistent strategy, one that seeks to encourage a better-governed Middle East. This idea has percolated on the left and the right. Alumni of Bush’s neoconservative foreign-policy brain trust, including Elliott Abrams, have argued that a consistent pro-democratic agenda would better serve US interests, creating a more stable region that is less prone to disruptions in the oil supply. Voices on the left have made a similar argument since the Arab uprisings; they include humanitarian interventionists like Anne-Marie Slaughter at Princeton, who argue for stronger American intervention in support of Syria’s rebels. Liberal fans of development and political freedoms have called for a “prosperity agenda,” arguing that societies with civil liberties and equitably distributed economic growth are not only better for their own citizens but make better American allies.

Then there’s a school that says the failures of the last decade prove that America should keep out of the Middle East almost entirely. Things turn out just as badly when we intervene, these critics argue, and it costs us more; oil will reach markets no matter how messy the region gets. This school includes small-footprint realists like Stephen Walt at Harvard and pugilistic anti-imperial conservatives like Andrew Bacevich at Boston University. (Bacevich argues that the more the US intervenes with military power to create stability in the oil-producing Middle East, the more instability it produces.)

While the realists think we should disentangle from the region because the US can exert strategic power from afar, others say we should pull back for moral reasons as well. That’s the argument made over the last year by Toby Craig Jones, a political scientist at Rutgers University who says that the US Navy should dissolve its Fifth Fleet base so it can cut ties with the troublesome and oppressive regime in Bahrain. America’s military might guarantees that no power—not Iran, not Iraq, not the Russians—can sweep in and take control of the world’s oil supply. Therefore, the argument goes, there’s no need for America to attend to every turn of the screw in the region.

What’s clear, from any of these perspectives, is that the Carter Doctrine is a blunt tool from a different time. It’s now possible, even preferable, to craft a policy more in keeping with the modern Middle East, and also more in line with American values. It might sound obvious to say that Washington should be pushing for a liberalized, economically self-sufficient, stable, but democratic Middle East, and that there are better tools than military power to reach those aims. In fact, that would mark a radical change for the nation—and it’s a course that the next president may well find within his power to plot.

A World of Messy Borders? Get Used to It

Posted September 6th, 2012 by Thanassis Cambanis and filed in Writing

[Originally published in The Boston Globe Ideas section.]

Concern for the sovereignty of nations runs like a drumbeat through almost every debate on foreign policy. Are corporations exerting too much influence on sovereign governments? Is a larger power pulling the strings of a smaller one; are international bankers putting too much outside pressure on some nation’s treasury? Is a humanitarian crisis severe enough to warrant breaching a border?

The idea of the world as a perfect patchwork of self-ruled nations is so essential to our understanding of how the world works that we’re rarely aware of it. When we worry about wars, or trade disputes, or multinational companies throwing their weight around, we’re worried in part because we see these as disruptions of an otherwise neat and stable system of sovereign states.
To experts, this is called the “Westphalian” system, and it has a date of birth: 1648, when a series of treaties collectively known as the Peace of Westphalia transformed an unruly war-torn Europe into a network of cleanly delineated nations. Since that time, the basic notion of Westphalian sovereignty has become an organizing principle for scholars and statesmen, policy makers and generals. And as the global map changed in the 20th century from a system of colonies and territories to a world of mutually recognized nations, it is the Westphalian model that prevailed.
But now, in a provocative paper, a young scholar has suggested that it might also be a chimera—that such a cut-and-dried international system has never really existed, and that the normal order of the world looks more like a shifting network of influences that operate across and within borders.

In a paper published in the International Studies Quarterly, Sebastian Schmidt, a University of Chicago doctoral candidate in political science, argues that today’s idea of sovereign statehood arose as a convenient fiction after World War II. Under the pressures of that time, he says, scholars looking to build a functional international system following two global wars began to ignore the muddy and complex historical realities of statehood—and instead adopted the Westphalian ideal as a kind of useful shorthand for thinking about the world’s proper order.

The primary target of Schmidt’s work is scholars, who he hopes will acknowledge that modern policy making relies on an oversimplified version of the Westphalian story. But his argument also offers a helpful way to think about the world now. In his conception, much of what worries observers today—globalization, intervention, power plays—is built into the way the world works, and always has been.

“These challenges we face have been around before, in other forms,” Schmidt says. “I want to take a little bit of the bogeyman out of globalization.”
The more consistent version of the world order, which Schmidt describes as a fluid “society of states,” is one in which governments have always jockeyed for power with private interests, outside powers, or meddling clerics; in which borders and lines of influence are much fuzzier than we might like to think.

It is at once messier and more enduring than the static ideal that has driven our understanding of states for centuries. Schmidt’s argument suggests that there is less cause for alarm than we often think in threats to sovereignty, and also that the past may be a richer source than we realize for useful experiments to resolve the problems of today.
The primacy of sovereign nations, in the long view, is a recent development of history. In Europe, before the Peace of Westphalia, the continent’s city-states competed with larger kingdoms and the Holy Roman Empire in a perpetual violent struggle for territory and resources. It was hard to distinguish among different kinds of authority. In some places, the pope held sway; in others, a monarch; in still others, a family or group of families whose power stemmed less from their territory than from the wealth they created through commerce or industry.
In this welter of influence, 1648 undeniably marked a watershed. The Westphalian Peace took shape during four years of negotiations and congresses, culminating in a series of peace treaties that gave states more authority at the expense of the Vatican. It marked the maturation of a method of state-to-state negotiation that already existed in inchoate form then, and which today continues to be the basis of diplomacy.

But, Schmidt points out, that moment did not mark as clear-cut a change as the history books often have it. The Holy Roman Empire, whose influence was rooted in faith rather than territory, persisted as a power in Europe for 150 years afterward. Religious and ethnic strife continued, and Europe hosted a long parade of wars right into the contemporary era. Economically speaking, Schmidt argues, an international gold standard bound the world’s treasuries much more tightly than they are connected today, while in the colonial era joint-stock ventures like the British East India Company had influence that dwarfed that of their descendants such as the contemporary oil giant BP.
Schmidt traces the intellectual history of Westphalia among political philosophers, and argues that until the 20th century, scholars and policy makers retained a much more accurate view of its historical context and ambiguous legacy. Some argued that Westphalia had created the first international order; some that it pioneered a fledgling notion of sovereignty. Some went so far as to claim it was a precursor of the League of Nations. But all of them saw Westphalia as a murky transition point along a continuum, a historical moment as complex and inconclusive as the Treaty of Versailles in 1918.
After World War II, however, the idea of inviolable sovereignty took on new importance because of the imperative to stabilize a deeply shaken international system. In crafting a new world order, it was to the Westphalian ideal that leaders and their advisers turned. A new, almost purist view of Westphalia undergirded the design of the United Nations, the Bretton Woods institutions, and the norms of non-intervention that kept Cold War conflicts from mushrooming.

The world had good reasons to embrace such an ideal. A half century of wanton intervention and blood-letting created a desire for stability; now, only under narrowly defined conditions spelled out in the United Nations charter could nations intervene in the affairs of others—and they could only do so with the blessing of the UN Security Council. The goal was practical: to end a horrific era of wars. The means was abstract: the adoption of an ideal form of national sovereignty.

Even the promoters of this new, ahistorical view of Westphalia noted that it vastly simplified the real history of state-to-state relations. Richard Falk, a giant in the fields of international law and international relations, argued in a seminal 1969 paper about the emerging 20th-century international legal order that it was more “convenient” to refer to the concepts of Westphalia than to its actual history.

During the Cold War and the subsequent surge of “globalization,” Schmidt argues, “Westphalia” and “sovereignty” devolved into lazy placeholders for thinkers struggling to make sense of an economically interconnected world straddled by expansive American and Soviet militaries. And today they represent a simplified view that misleads us into seeing old dynamics—like porous borders, free trade pacts, humanitarian interventions, and failing states—as new bogeymen.

“Especially with globalization, ‘Westphalia’ just got used as a contrast to what people saw as new trends,” Schmidt says.
In part, Schmidt’s argument offers historical comfort. If problems such as international monetary crises and clerical incursions into politics have been around for half a millennium (or more), and we’ve survived—even prospered—then we’ll probably survive today’s threats as well.

It carries some risk of indifference, of course—of deciding we shouldn’t worry when we see China accumulating US debt, or Iranian clerics trying to pull the strings in Iraq. But his insight also carries the promise that we can mine the past for solutions to today’s problems. If we’re concerned with managing today’s international financial system, for example, we can look at how 19th-century economies weathered fluctuations caused by the international gold standard. If we’re trying to figure out how to manage failing states and the violence they spawn, we can look at the late stages of the Ottoman Empire’s collapse. If we want to think realistically about ways to intervene in Syria, or reasons not to, we can look at France’s misadventures in 1860, when it sent its military to defend the Maronite community in Lebanon, then an Ottoman province.
Though Schmidt’s research runs against the grain of the dominant thinking among policy makers and international relations scholars, who often treat sovereignty and the state system as nearly sacrosanct, it echoes the work of other scholars who believe the world needs to be understood in more dynamic terms. The Stanford University political scientist Stephen Krasner, for example, has explored the idea of “shared sovereignty” to promote better governance in poorly ruled or failing states: Arguing that sovereignty can be treated almost as a commodity, he suggests that nations can, and should, be convinced to share sovereignty over some sector when needed, as Europe did with its security when it joined a US-led NATO.

If we grow comfortable with a more realistic image of the world as a “society of states,” rather than the idealized version in which every nation is a separate castle, we will be more adaptable in a multilayered, globalized world. Just as private influence-peddling and transnational insurgencies are less new threats than old phenomena, so contemporary progressive ideas like “shared sovereignty,” “the responsibility to protect,” and the International Criminal Court are simply new incarnations of time-honored practices.

All these things might not be the breakdown in the proper political order of the world that they seem, in other words: They might actually be an integral part of that very order.

Everybody’s an Islamist Now

Posted August 7th, 2012 by Thanassis Cambanis and filed in Writing

The case that a political term has outlived its usefulness

[Originally published in The Boston Globe Ideas section.]

To watch the Arab world’s political transformation over the past year has been, in part, to track the inexorable rise of Islamism. Islamist groups—that is, parties favoring a more religious society—are dominating elections. Secular politicians and thinkers in the Arab world complain about the “Islamicization” of public life; scholars study the sociology of Islamist movements, while theologians pick apart the ideological dimensions of Islamism. This March, the US Institute for Peace published a collection of essays surveying the recent changes in the Arab world, entitled “The Islamists Are Coming: Who They Really Are.”

From all this, you might assume that “Islamism” is the most important term to understand in world politics right now. In fact, the Islamist ascendancy is making it increasingly meaningless.

In Tunisia, Libya, and Egypt, the most important factions are led overwhelmingly by religious politicians—all of them “Islamist” in the conventional sense, and many in sharp disagreement with one another over the most basic practical questions of how to govern. Explicitly secular groups are an exception, and where they have any traction at all they represent a fragmented minority. As electoral democracy makes its impact felt on the Arab world for the first time in history, it is becoming clear that Islamist parties are charting the region’s future course.

As they do, “Islamist” is quickly becoming a term as broadly applicable—and as useless—as “Judeo-Christian” in American and European politics. If important distinctions are emerging within Islamism, that suggests that the lifespan of “Islamist” as a useful term is almost at an end—that we’ve reached the moment when it’s time to craft a new language to talk about Arab politics, one that looks beyond “Islamist” to the meaningful differences among groups that would once have been lumped together under that banner.

Some thinkers already are looking for new terms that offer a more sophisticated way to talk about the changes set in motion by the Arab Spring. At stake is more than a label; it’s a better understanding of the political order emerging not just in the Middle East, but around the world.

THE TERM “ISLAMIST” came into common use in the 1980s to describe all those forces pushing societies in the Islamic world to be more religious. It was deployed by outsiders (and often by political rivals) to describe the revival of faith that flowered after the Arab world’s defeat in the 1967 war with Israel and subsequent reflective inward turn. Islamist preachers called for a renewal of piety and religious study; Islamist social service groups filled the gaps left by inept governments, organizing health care, education, and food rations for the poor. In the political realm, “Islamist” applied to both Egypt’s Muslim Brotherhood, which disavowed violence in its pursuit of a wealthier and more powerful Islamic middle class, and radical underground cells that were precursors to Al Qaeda.

What they had in common was that they saw a more religious leadership, and more explicitly Islamic society, as the antidote to the oppressive rule of secular strongmen such as Hafez al-Assad, Hosni Mubarak, and Saddam Hussein.

Over the years, the term “Islamist” continued to be a useful catchall to describe the range of groups that embraced religion as a source of political authority. So long as the Islamist camp was out of power, the one-size-fits-all nature of the term seemed of secondary importance.

But in today’s ferment, such a broad term is no longer so useful. Elections have shown that broad electoral majorities support Islamism in one flavor or another. The most critical matters in the Arab world—such as the design of new constitutional orders in Egypt, Tunisia, and Libya—are now being hashed out among groups with competing interpretations of political Islam. In Egypt, the non-Islamic political forces are so shy about their desire to separate mosque from government that many eschew the term “secular,” requesting instead a “civil” state.

In Tunisia’s elections last fall, the Islamist Ennahda Party—an offshoot of the Muslim Brotherhood—swept to victory, but is having trouble dealing with its more doctrinaire Islamist allies to the right. In Libya, virtually every politician is a socially conservative Muslim. The country’s recent elections were won by a party whose leaders believe in Islamic law as a main reference point for legislation and support polygamy as prescribed by Islamic sharia law, but who also believe in a secular state—unlike their more Islamist rivals, who would like a direct application of sharia in drafting a new constitutional framework.

In Egypt, the two best-organized political groups since the fall of Mubarak have been the Muslim Brotherhood and the Salafi Noor Party—both “Islamist” in the broad sense, but dramatically different in nearly all practical respects. The Brotherhood has been around for 84 years, with a bourgeois leadership that supports liberal economics and preaches a gospel of success and education. The rival Salafi Noor Party, on the other hand, includes leaders who support a Saudi-style extremist view of Islam that holds that the religious should live, as much as possible, a pre-modern lifestyle, and that non-Muslims should live under a special Islamic dispensation for minorities. A third Islamist wing in Egypt includes the jihadists—the organization that assassinated President Anwar Sadat in 1981, which has officially renounced violence and has surfaced as a political party. (Its main agenda item is to advocate the release of “the blind sheikh,” Omar Abdel-Rahman, imprisoned in the United States as the mastermind of the 1993 World Trade Center bombing.)

“ISLAMIST” MIGHT BE an accurate label for all these parties, but as a way to understand the real distinctions among them it’s becoming more a hindrance than a help. A useful new terminology will need to capture the fracture lines and substantive differences among Islamic ideologies.

In Egypt, for example, both the Muslim Brotherhood and the Salafis believe in the ultimate goal of a perfect society with full implementation of Islamic sharia. Yet most Brothers say that’s an abstract and unattainable aim, and in practice are willing to ignore many provisions of Islamic law—like those that would limit modern finance, or those that would outright ban alcohol—in the interest of prosperity and societal peace. The Salafis, by contrast, would shut down Egypt’s liquor industry and mixed-gender beaches, regardless of the consequences for tourism or the country’s Christian minority.

There’s a cleavage between Islamists who still believe in a secular definition of citizenship that doesn’t distinguish between Muslims and non-Muslims, and those who believe that citizenship should be defined by Islamic law, which in effect privileges Muslims. (Under Saudi Arabia’s strict brand of Islamist government, the practice of Christianity and Shiite Islam is actually illegal.) And there’s the matter of who would interpret religious law: Is it a personal matter, with each Muslim free to choose which cleric’s rulings to follow? Or should citizens be legally required to defer to doctrinaire Salafi clerics?

Many thinkers are trying to craft a new language for the emerging distinctions within Islamism. Issandr El Amrani, who edits The Arabist blog and has just started a new column for the news site Al-Monitor about Islamists in power, suggests we use the names of the organizations themselves to distinguish the competing trends: Ikhwani Islamists for the establishment Muslim Brothers and organizations that share its traditions and philosophy; Salafi Islamists for Salafis, whose name means “the predecessors” and refers to following in the path of the Prophet Mohammed’s original companions; and Wasati Islamists for the pluralistic democrats that broke away from the Brotherhood to form centrist parties in Egypt.

Gilles Kepel, the French political scientist who helped popularize the term “Islamist” in his writings on the Islamic revival in the 1980s, grew dissatisfied with its limits the more he learned about the diversity within the Islamist space. By the 1990s, he shifted to the more academic term “re-Islamification movements.” Today he suggests that it’s more helpful to look at the Islamist spectrum as coalescing around competing poles of “jihad,” those who seek to forcibly change the system and condemn those who don’t share those views, and “legalism,” those who would use instruments of sharia law to shift it gradually. But he’s still frustrated with the terminology’s inability to capture politics as they evolve. “I’ve tried to remain open-eyed,” he said.

It’s also helpful to look at what Islamists call themselves, but that offers only a partial guide, since many Islamists consider religion so integral to their thinking that it doesn’t merit a name. Others might, for domestic political reasons, seek to downplay their religious aims. For example, Turkey’s ruling party, a coterie of veteran Islamists who adapted and subordinated their religious principles to their embrace of neoliberal economics, describes itself as a party of “values,” rather than of Islam. In Libya, the new government will be led by the personally conservative technocrat Mahmoud Jibril; though his party could be considered “Islamist” in the traditional sense, it’s often identified as secular in Western press reports, to distinguish it from its more religious rivals. Jibril himself prefers “moderate Islamic.”

The efforts to come up with a new language to talk about Islamic politics are just beginning, and are sure to evolve as competing movements sharpen their ideologies, and as the lofty rhetoric of religion meets the hard road of governing. The importance of moving beyond “Islamism” will only grow as these changes make themselves felt: What we call the “Islamic world” includes about a quarter of the world’s population, stretching from Muslim-majority nations in the Arab world, along with Turkey, Pakistan, and Indonesia, to sizable communities from China to the United States. For Islam, the current political moment could be likened to the aftermath of 1848 in Europe, when liberal democracy coalesced as an alternative to absolute monarchy. Only after that, once virtually every political movement was a “liberal” one, did it become important to distinguish between socialists and capitalists, libertarians and statists—the distinctions that have seemed essential ever since.

The economic toll of Islamic law

Posted July 1st, 2012 by Thanassis Cambanis and filed in Writing

[Originally published in The Boston Globe.]

Right now, the Islamic world is in the midst of a grand experiment. After decades facing an unappetizing choice among secular dictatorship, monarchy, and Iranian-style theocracy, nations across the region are grappling with how to build genuinely modern governments and societies that take into account the Islamist principles shared by a majority of voters.

As they do, a shadow hangs over their prospects. Islamic nations in the Middle East on the whole have underperformed their counterparts in the West. Asian nations that were poorer than the Arab world at the beginning of the Cold War have overtaken the Middle East. And promising experiments with democracy have been few and far between.

The question of why is a contentious one. Has the Islamic world been held back by its treatment at the hands of history? Or could the roots of the problem lie in its shared religion—in the Koran, and Islamic belief itself?

A provocative new answer is emerging from the work of Timur Kuran, a Turkish-American economist at Duke University and one of the most influential thinkers about how, exactly, Islam shapes societies. In a growing body of work, Kuran argues that the blame for the Islamic world’s economic stagnation and democracy deficit lies with a distinct set of institutions that Islamic law created over centuries. The way traditional Islamic law handled finance, inheritance, and incorporation, he argues, held back both economic and political development. These practices aren’t inherent in the religion—they emerged long after the establishment of Islam, and have partly receded from use in the modern era. But they left a profound legacy in many societies where Islam held sway.

Islamic partnerships and inheritance law limited the ability of merchants to pool capital and build competitive enterprises with long life spans. Islam’s emphasis on fairness and a division of assets among children had the unfortunate effect of preventing large-scale businesses from taking root. Meanwhile, the primary vehicle for organizing institutions—the Islamic trust—placed severe limits on the development of civic institutions such as universities, guilds, and charities. Over time, the result was a stagnant economy and an enfeebled civil society with no way to challenge the established political order.

Kuran’s work is part of a current in modern economics that explores the precise ways institutions shape societies. His 2010 book, “The Long Divergence: How Islamic Law Held Back the Middle East,” pulled together nearly a decade of research on economic development of the Islamic world over the past thousand years. Since then he has been focusing more specifically on its political implications. His work has also catalyzed a flurry of research by economists, political scientists, and other scholars.

Kuran’s critics think he unfairly impugns religious law. Pakistani scholar Arshad Zaman argues that Kuran misunderstands the very nature of Islamic law and business practice, which elevate worthwhile economic goals such as income equality and social justice above growth. Others argue that the harm suffered at the hands of Western colonial powers is far more important in explaining why the Muslim world is struggling today.

Kuran himself sees his work as coming from a sympathetic perspective: He wants to combat the argument that Islam is incompatible with modernity and liberty, a notion he decries as “one of the most virulent ideas of our time.” He worries about anti-Islamic sentiment from outside the religion, as well as the rigid and defensive posture of some orthodox Islamists. (He pointedly avoids discussing his own faith. “I write as a scholar,” he says.)

Thanks in part to this careful navigation, Kuran’s scholarship gives economists, and perhaps political leaders in the Middle East, a way to talk about the Islamic world’s problems without resorting to crude stereotypes or heightened “clash of civilizations” rhetoric. In Kuran’s analysis, Islam itself is neither the problem nor the solution; indeed, most of the rigid practices have long been supplemented by or in some cases abandoned for more Western models.

However, his work does carry stark implications for countries such as Egypt, Libya, and Tunisia, whose emerging political futures are likely to be shaped by Islamist majorities or pluralities. A democratic renaissance could paradoxically lead to more stagnation if it imposes calcified institutions of Islamic yesteryear on modern society. If Kuran is right, the nations of the Arab Spring face a conundrum: The institutions most in keeping with their societies’ religious principles could be the ones most likely to hold them back.




While Europe suffered centuries of decline and intellectual darkness, the early Islamic world bubbled with vitality. Competing schools of Islamic jurisprudence produced texts still consulted as references today, while merchants and caliphs left copious written records for future scholars to study. In the course of his work, Kuran was able to comb through business records and commercial ledgers spanning more than a millennium.

What he found, he says, was that two legal traditions pervasive in the Islamic world became especially limiting: the laws governing the accumulation of capital, and those governing how institutions were organized. The growth of capital was limited by laws of inheritance and Islamic partnership, which required that large fortunes and enterprises be split up with each passing generation. The waqf, or Islamic trust, had even greater ramifications, because it determined the structure of most social relationships and had wide-ranging consequences for civil society.

Under Islamic law, the trust—rather than the corporation—is the most common legal unit of organization for entities outside the government. Until modern times, cities, hospitals, schools, parks, and charities were all set up and governed by the immutable deed of an Islamic trust. Under its terms, the founder of the trust donates the land or other capital that funds it in perpetuity, and sets its rules in the deed. They can never be altered or amended. The waqf was developed by Islamic scholars in the centuries after the religion was established, drawing on Koranic principles barring usury and demanding justice in business. (It is not an institution stipulated by the Koran itself.) Much like a trust in the West, a waqf is not “governed” so much as executed. It is also limited in what it can do. A waqf is prohibited from engaging in politics, which means it cannot form coalitions, pool its resources with other organizations, or oppose the state.

Drawing on voluminous study of the mechanisms of money, power, and law going back to the 7th century founding of Islam, Kuran draws a picture of nations whose rulers wielded central and often highly authoritarian power, and faced little challenge from either business owners or a waqf-bound civil society.

Over time, he argues, this structure led to a radically different social system than the one that arose in the West. There, the rise of the corporation created a vehicle for prosperity and a civilian counterweight to state power—an institution that could adapt and grow, survive from one generation to the next, and pay benefits to its shareholding owners, who are thus motivated to steer it toward expansion and influence. Nonprofit corporations enjoy similar flexibility and freedom of action, though they don’t have shareholders.

“In the West, you had universities, unions, churches, organized as corporations that were free to make coalitions, engage in politics, advocate for more freedoms, and they became a civil society,” Kuran said in an interview. “Democracy is a system of checks and balances. It can’t develop if a population is passive.”

In modern times, Islamic nations have adopted Western institutions like corporations and banks to manage their affairs. Municipalities and private enterprises are now more commonly incorporated rather than set up as trusts. But the trust remains pervasive, especially in the realm of social services: Hospitals, schools, and aid societies are still almost always trusts rather than corporations—a factor that helps explain their passivity as a counterweight to state power.

In focusing on the specific legal institutions of Islamic civic life, Kuran’s thesis directly targets those “apologists” who blame the economic and political problems of the Middle East solely on colonialism and other outside forces. He also takes aim at essentialists who hold Islam as a religion responsible for the problems of Islamic countries. In fact, he argues, Islamic states that have embraced modernization programs and gone through the sometimes painful process of adopting new institutions, as Turkey and Indonesia have, have had great success in developing both democracy and economic prosperity widely shared among citizens.




While some critics, like Arshad Zaman, attack Kuran from an Islamic perspective, others share his approach but dispute his findings. Maya Shatzmiller, a historian at Western University in Canada, believes that the causes of economic growth are particular to the circumstances of each individual region, and that by focusing on notional qualities common to the entire Islamic world, Kuran’s work generically indicts Islam without offering real insight into the economic problems of individual Middle Eastern states.

Kuran believes the evidence of a gap between the Islamic world and the West is undeniable and merits serious examination of what those countries have in common. And it’s patronizing, he writes, to suggest that the Islamic world will be offended by a vigorous debate on the subject.

Kuran’s approach has influenced other social scientists to use similar tools in the hope of offering more precise and useful answers. Eric Chaney, a Harvard economist, uses the mathematical modeling of econometrics to pinpoint the historical factors that correlate with lagging democratization and development in the Islamic world. Jared Rubin, an economist at Chapman University in California, studies the effect of technologies like the printing press on economic disparities. Jan Luiten van Zanden, a renowned Dutch historian, has begun a deep comparative study of the organization of cities in the Islamic world and the West.

For the new architects of Islamic politics, Kuran’s work offers a clear blueprint, though perhaps a difficult one to follow. It suggests that states heavily reliant on Islamic law may need to reformulate their approach, extending Western-style rules to organize their nonstate entities: banks, companies, nonprofits, political parties, religious societies. Over time, this will seed a more empowered civil society and ultimately pull greater numbers of citizens into the fabric of political life.

It’s a challenge, however. Authoritarian states are unlikely to promote reforms that will weaken their control. And the resurgence of Islamist politics has created a new wave of support for a more doctrinaire application of Islamic law and traditions.

Another barrier to reform is the slow pace of cultural change. Once modern institutions are in place, Kuran warns, it takes a long time for their use to become widespread and for people to trust them. Simply put, for an institution to grow powerful and influential, whether it’s a bank or a political party, it needs to build support from a large, trusting public of strangers. Much of the Middle East still operates on smaller units, in which customers or citizens expect to know who’s running the company or institution that serves them. For example, Kuran points out that despite the prevalence of banks, only one in five families in the Arab world actually has a bank account. (By comparison, three in five Turkish families have bank accounts, and nearly every US family does.)

Most broadly, change requires a shift in the constraints on civil society. In recent decades, Middle Eastern regimes have systematically destroyed any opposition and kept rigid control over the media, official religious groups, and any body that might develop a political identity, from university faculty to labor unions. The most effective dissent survived deep within mosques, where even the most repressive police states hesitated to go.

In the long run, to end the cycle of autocracy and violence, the Islamic world will need space for civil society to grow outside the constraints of the state and the mosque. Only then will citizens grow accustomed to making decisions that have traditionally been made on their behalf. And breaking the old habits, on the street and in election booths, will likely take time.

“The state itself,” Kuran says, “cannot change the way people relate to each other.”

The Amazing Expanding Pentagon

Posted May 25th, 2012 by Thanassis Cambanis and filed in Writing

[Published in The Boston Globe Ideas section.]

When President Obama and Mitt Romney cross swords on defense policy, it can sound like a schoolyard fight: Who loves the military more? Who is tougher? Who would lead a more muscular America?

This is the way we expect candidates to talk about defense: in terms of power, force, even national pride. But when it comes to the role the Department of Defense actually plays for the nation, that framing increasingly misses the point. Over the past decade, the Pentagon has become far more complex than the conversation about it would suggest. What “military” means has changed sharply as the Pentagon has acquired an immense range of new expertise. What began as the world’s most lethal strike force has grown into something much more wide-ranging and influential.

Today, the Pentagon is the chief agent of nearly all American foreign policy, and a major player in domestic policy as well. Its planning staff is charting approaches not only toward China but toward Latin America, Africa, and much of the Middle East. It’s in part a development agency, and in part a diplomatic one, providing America’s main avenue of contact with Africa and with pivotal oil-producing regimes. It has convened battalions of agriculture specialists, development experts, and economic analysts that dwarf the resources at the disposal of USAID or the State Department. It’s responsible for protecting America’s computer networks. In May of this year, the Pentagon announced it was creating its own new clandestine intelligence service. And the Pentagon has emerged as a surprisingly progressive voice in energy policy, openly acknowledging climate change and funding research into renewable energy sources.

The huge expansion of the Pentagon’s mission has, not surprisingly, rung plenty of alarm bells. In the policy sphere, critics worry about the militarization of American foreign policy, and the fact that much of the world—especially the most volatile and unstable parts—now encounters America almost exclusively in the form of armed troops. Hawkish critics worry that the Pentagon’s ballooning responsibilities are a distraction from its main job of providing a focused and prepared fighting force. But this new reality will be with us for a while, and in the short term it creates an opportunity for the next president. Super-empowered and quickly deployable, the Pentagon has become a one-stop shop for any policy objective, no matter how far removed from traditional warfare.

That means the next administration will have ample room to shape the priorities, and even perhaps to reimagine the mission of a Pentagon that plays a leading role in areas from language research to fighting the drug trade. And it means that voters will need to consider the full breadth of its capabilities when they hear candidates talk about “defense.”

In campaigning so far, neither candidate has seriously engaged with the real challenges of steering the most diverse and powerful entity under his control. For both Obama and Romney, the most central question about foreign policy—and even some of their domestic priorities—may be how creatively and effectively they can use the Pentagon to further their aims.


The current balance of power in Washington runs counter to most of American history. Traditionally, the United States has related to the world chiefly through diplomats: A civilian president set the policy, civilian envoys worked to implement it, and gunboats stepped in only when diplomacy failed. Indeed, until World War II, the Department of State outranked Defense in size as well as influence. That began to change in the 1940s, first with the huge mobilization of World War II and then the Cold War. Funding and power began to accumulate permanently in the Pentagon.

In the decade since 9/11, the Pentagon has undergone another transformation. The military was asked to fight two complex wars in Afghanistan and Iraq, while also engaging in a sprawling operation dubbed the Global War on Terror. In practice, this meant soldiers and other troops were asked to design nation-building operations on the fly; produce the kind of pro-democracy propaganda that decades earlier was the province of the Voice of America; and do police, intelligence, and development work in conflict zones that had long bedeviled experts in far more stable locales. In Iraq, the Pentagon was essentially expected to provide the full gamut of services normally offered by a national government. Army commanders in provincial outposts dispensed cash grants to business start-ups, supervised building renovations, managed police forces, and built electricity plants.

As American involvement in those wars winds down, we are left with a Department of Defense that has become Washington’s default tool for getting things done in the world. Unlike diplomats, who serve abroad for limited stints and who can refuse to work in dangerous places, military personnel have to go where ordered, and stay as long as the government needs them. They haven’t always succeeded, leaving any number of failed governance projects in their wake. But it’s understandable why the White House has turned more and more often to warriors. The military is undeniably good at taking action: A lieutenant colonel can spend a hundred thousand dollars on a day’s notice to dig a well or refurbish a mayor’s office or rebuild a village market. In contrast, civilian USAID specialists operating under the agency’s rules would take months, or even years, to put out bids and hire a local subcontractor to do the same job.

“The president who comes into office and thinks about what he wants to do, when he looks around for capabilities he tends to see someone in uniform,” says Gordon Adams, an American University political scientist and expert in the defense budget. “The uniformed military are really the only global operational capacity the president has.”

And that capacity stretches into some surprising domains. The Pentagon maintains an international rule of law office staffed with do-gooder lawyers. It has trained and deployed agriculture battalions. Its regional commands, as well as its war-fighting generals in Afghanistan and Iraq, have tapped hundreds of economists, anthropologists, and other field experts as unconventional military assets. Its special operators conduct the kind of clandestine operations once reserved for the CIA, but also do a lot of in-the-field political advising for local leaders in unstable countries. The US Cyber Command runs a kind of geek tech shop in charge of protecting America’s computer networks. The world’s most high-tech navy runs counter-piracy missions off the coast of Somalia, essentially serving as a taxpayer-funded security force for private shipping companies. Much of drug policy is executed by the military, which is in charge of intercepting drug shipments and has been the key player in drug-supplying countries like Colombia.

With little fanfare, the Pentagon—currently the greatest single consumer of fossil fuels in all of America, accounting for 1 percent of all use—has begun promoting fuel efficiency and alternative energy sources through its Office of Operational Energy Plans and Programs. Using its gargantuan research and development budget, and its market-making purchasing power, the Defense Department has demanded more efficient motors and batteries. Its approach amounts to a major official policy shift and huge national investment in green energy, sidestepping the ideological debate that would likely hamstring any comparable effort in Congress.

This huge expansion of what the Department of Defense does is not the same thing as a runaway military, though there are critics who see it that way. At the height of the Cold War, the United States dedicated far more of its budget to defense—around 60 percent, compared to 20 percent now. It is more a matter of vast “mission creep.” Inevitably, it is to the Pentagon that the government will turn when it faces urgent, unexpected needs: Hurricane Andrew, the 2004 tsunami in Asia, propaganda in the Islamic world. Men and women in camouflage uniforms can be found helping domestic law enforcement pursue cattle rustlers in North Dakota using loaned military drones, or working with Afghan farmers to increase crop yields.

Paul Eaton, a major general in the US Army who retired in 2006 and now advises a Washington think tank called the National Security Network, describes a meeting he attended in Kampala, Uganda, this May, convened by the American general in charge of the Africa Command, or AfriCom. The top commanders of 35 militaries on the continent gather every other year, hash out policy matters, and forge personal ties.

“It was as much diplomacy and politics as anything else,” Eaton said. “Nobody could give me an example of the State Department doing anything like that.”


What SHOULD a president do about this metamorphosed Pentagon? Or more practically, what should be done with it? A question to watch for in the coming presidential debates is whether either candidate is willing to discuss reorienting the Pentagon toward its core mission of armed defense, shedding its new capacities in the interest of keeping it focused or saving money. Neither candidate has suggested so far that he will.

Pentagon cutbacks are politically difficult. No president likes to argue against national defense, and Pentagon spending by design sprawls across congressional districts, creating a built-in bipartisan lobby against cuts. But a president who tried to return the Pentagon to a more strictly military mission could expect at least some support from the Department of Defense itself. Many career officers view the extra missions with dismay, fearing that the Pentagon will get worse at fighting wars as it spends more and more time patrolling cyberspace, organizing diplomatic retreats, and deploying agricultural battalions to train farmers in war zones. Eaton, whose three children all serve in the armed forces, has been a vocal critic of the new military, and thinks the best thing the next administration could do is defund and shut down all the niche capacities that have sprung up since 9/11.

The past two US defense secretaries, including George W. Bush appointee Robert Gates, have also expressed concern about the department’s expansion. In a 2008 speech, while still in office, Gates ripped into the “creeping militarization” of foreign policy, expressing concern that the Pentagon was like an “800-pound gorilla” taking over the intelligence community, foreign aid, and diplomacy in conflict zones. Both Gates and his successor, Leon Panetta, have vociferously advocated for a bigger, better-funded State Department more capable of deploying around the world, conducting diplomacy in hot zones, and dispensing emergency relief and development aid.

It’s hard to imagine the Pentagon shedding capacity anytime soon. As the wars in Iraq and Afghanistan subside, the Defense Department appears likely to keep most of its enormous budget. During a period when most branches of government will be struggling to survive budget cutting, the Pentagon will more than ever have the global reach and the policy planning muscle to set the agenda and execute foreign policy.

So in the next several months, we should be on the lookout for specific ideas from candidates about what to do with this excess power—at the least, an acknowledgment that it exists. Domestically, the Pentagon has the opportunity to shape university research priorities; it influences White House policy planning anytime a crisis erupts in a new place. Abroad, the military can do considerable good by using its money and expertise to improve quality of life, burnishing America’s reputation as a font of positive development rather than just counter-terrorism and counter-insurgency.

But while it lasts, the breadth of the current military presents grave challenges, not least for a democratic country that in principle, if not always in policy, opposes military dictatorships around the world. Even Pentagon insiders worry about this dissonance. Whatever good our deployments can do, it will be harder to promote civilian ideals so long as our foreign policy wears a uniform.

Stop Being Scared

Posted April 25th, 2012 by Thanassis Cambanis and filed in Writing

[Originally published in The Boston Globe.]

President Obama and his presumptive challenger Mitt Romney agree on at least one important matter: the world these days is a terrifying place. Romney talks about the “bewildering” array of threats; Obama about the perils of nuclear weapons in the wrong hands. They differ only on the details.

A bipartisan emphasis on threats from outside has always been a hallmark of American foreign-policy thinking, but it has grown more widespread and more heightened in the decade since 9/11. General Martin E. Dempsey, chairman of the Joint Chiefs of Staff, captured the spirit when he spoke in front of Congress recently: “It’s the most dangerous period in my military career, 38 years,” he said. “I wake up every morning waiting for that cyberattack or waiting for that terrorist attack or waiting for that nuclear proliferation.”

The unpredictability and extremism of America’s enemies today, the thinking goes, make them even more threatening than our old conventional foes. The old enemies were distant armies and rival ideologies; today, it’s a theocracy with missiles, or a lone wolf trying to detonate a suitcase nuke in an American city.

But what if the entire political and foreign policy elite is wrong? What if America is safer than it ever has been before, and by focusing on imagined and exaggerated dangers it is misplacing its priorities?

That’s the bombshell argument put forth by a pair of policy thinkers in the influential journal Foreign Affairs. In an essay entitled “Clear and Present Safety: The United States Is More Secure Than Washington Thinks,” authors Micah Zenko and Michael A. Cohen argue that American policy leaders have fallen into a nearly universal error. Across the ideological spectrum, our leaders and experts genuinely believe that the world has gotten increasingly dangerous for America, while all available evidence suggests exactly the opposite: we’re safer than we’ve ever been.



“The United States faces no serious threats, no great-power rival, and no near-term competition for the role of global hegemon,” Cohen says. “Yet this reality of the 21st century is simply not reflected in US foreign policy debates or national security strategy.”

It might seem that the extra caution couldn’t hurt. But Zenko and Cohen argue that excessive worry about security leads America to focus money and attention on the wrong things, sometimes exacerbating the problems it seeks to prevent. If Americans and their leaders recognized just how safe they are, they would spend less on the military and more on the slow nagging problems that undermine our economy and security in less dramatic ways: creeping threats like refugee flows, climate change, and pandemics. More important, the United States would avoid applying military solutions to non-military problems, which they argue has made containable problems like terrorism worse. In effect, they argue, the United States should keep a pared-down military in reserve for traditional military rivals. The bulk of America’s security efforts could then be spent on remedies like policing and development work — more appropriate responses to the terrorism and global crime syndicates that understandably drive our fears.

Why should Americans feel so secure right now? Zenko and Cohen write that a calm appraisal of global trends belies the danger consensus. There are fewer violent conflicts than at almost any point in history, and a greater number of democracies. None of the states that compete with America come close to matching its economic and military might. Life expectancy is up, and so is prosperity. As vulnerable as the nation felt in the wake of 9/11, American soil is still remarkably insulated from attack.

Nonetheless, more than two-thirds of the members of the Council on Foreign Relations — as good a cross-section of the foreign-policy brain trust as there is — said in a 2009 Pew Survey that the world today was as dangerous, or even more so, than during the Cold War. Other surveys of experts and opinion-makers showed the same thing: the overwhelming majority of experts believe the world is becoming more dangerous. Zenko and Cohen claim, essentially, that the entire foreign policy elite has fallen prey to a long-term error in thinking.

“More people have died in America since 9/11 crushed by furniture than from terrorism,” Zenko says in an interview. “But that’s not an interesting story to tell. People have a cognitive bias toward threats they can perceive.”

The paper’s authors are, in effect, skewering their own peers. Zenko is a Council on Foreign Relations political scientist with a PhD from Brandeis. Cohen (a colleague of mine at The Century Foundation, and a previous contributor to Ideas) worked as a speechwriter in the Clinton administration and has been a mainstay in the think-tank world for a decade.

Zenko and Cohen point out that there are plenty of good-faith reasons that experts tend to overestimate our national risk. A raft of psychological research from the last two decades shows that human nature is biased to exaggerate the threat of rare events like terrorist attacks and underestimate the threat from common ones like heart attacks. And security policy in general is extremely risk-averse: we expect our military and intelligence community to tolerate no failures. A cabinet secretary who pledged to reduce terrorist attacks to just a few per year would not last long in the job: the only acceptable goal is zero.

Electoral politics, of course, is the greatest driver of what Zenko and Cohen call “domestic threat-mongering.” In an endless contest for votes, Republicans do well by claiming to be tough in a scary world. Democrats adopt the same rhetoric in order to shield themselves from political attacks. Both sides see an advantage in a politically risk-averse strategy. If a minor threat today turns into a sizeable one tomorrow, better to have sounded the alarm early than to have appeared naïve or feckless.

But Zenko and Cohen make the politically uncomfortable argument that it’s wrong to govern based on the prospect of unlikely but extreme events. Instead of marshalling our resources for the 1 percent risk of a nuclear jihadist, as Vice President Dick Cheney argued we should, we should really set our security policy based on the 99 percent of the time when things go America’s way.

They point to the number of wars between major states and the number of people killed in wars every year, both of which have been steadily declining for decades. They also point to the historical, systematic growth of the global economy and spread of financial and trade links, which have undergirded an unprecedented period of peace among rival great powers and within the West.

Their thinking follows in the footsteps of a small but persistent group of contrarian security scholars, who have noted the post-9/11 spike in America’s already long history of threat exaggeration. Best known among them is Ohio State University political scientist John Mueller, who has argued that American alarmism about terrorism can cause more harm to our well-being and national economy than terrorism itself. In that vein, Zenko and Cohen claim that America’s over-militarization prompts avoidable wars and has in fact created far more problems than it solves, from terrorist blowback to huge drains on the Treasury.

The implications, as they see it, are clear: spend less money on the military; spend more on the boring, international initiatives that actually make America safe and powerful. Zenko’s favorite example is loose nukes, which pose hardly any threat today but were a real cause for concern as the Soviet Union collapsed in the early 1990s, leaving poorly secured weapons across an entire hemisphere.

“There was a solution: limit nuclear stockpiles and secure them,” Zenko says. “We took common-sense steps, none of which involved the US military. We send contractors to these facilities in Russia, and they say ‘The fence doesn’t work, the cameras don’t work, the guards are drunk.’ It’s cheap, and it works. This is what keeps us safe.”

Not everyone agrees. Robert Kagan, the most influential proponent of robust American power, argues that America is safe today precisely because it throws its military might around. President Obama said he relied for his most recent State of the Union address on Kagan’s newest book, “The World America Made.” Mackenzie Eaglen, a defense expert at the American Enterprise Institute, says America benefits even when it appears to overreact. According to this thinking, even if the Pentagon designs the military for improbable threats and deploys at the drop of a hat, it is performing a service by keeping the world stable and deterring would-be rivals. Like most establishment defense thinkers, Eaglen believes American dominance could easily and quickly come to an end without this kind of power projection.

“Power abhors a vacuum,” she says. “If we don’t fill it, others will, and we won’t like what that looks like.”

Other critics of Zenko and Cohen’s argument, like defense policy writer Carl Prine, say the comforting data about declining wars and violent deaths is misleading. Today’s circumstances, they argue, don’t preclude something drastic happening next — say, if China, or even a rising power like Brazil, veers onto an unpredictably bellicose path and clashes violently with American interests. Simply put, safe today doesn’t mean safe tomorrow.

Even if Zenko and Cohen are right, however, and the big-military crowd is wrong, it is nearly impossible to imagine a spirit of “threat deflation” taking hold in American politics. The already alarmist expert community that shapes US government thinking was further electrified by 9/11. Anyone in either party who argues for a leaner military, or pruning back intelligence infrastructure, risks being portrayed as inviting another attack on the homeland. The result is an unshakeable institutional inertia.

To get a sense of what they’re up against, listen to James Clapper, the director of national intelligence, presenting the “Worldwide Threat Assessment” to Congress, as his office does every year. The latest version, in January this year, reads like a catalogue of nightmares, describing macabre possibilities ranging from Hezbollah sleeper cells attacking inside America to a cyberattack that could turn our own infrastructure into murderous drones. Are these science fiction visions, or simply the wise man’s anticipation of the next war? It’s impossible to know, but one thing is striking: there’s no attempt in the intelligence czar’s report to rank the threats or assess their real likelihood. He simply and clinically presents every possible, terrifying thing that could happen, and signs off. It’s truly frightening reading.

This is what Zenko dismissively deems the “threat smorgasbord.” It makes clear how high the stakes are in planning for war and terrorist attacks, and how much emotional power the issue has. The reasonable thing to do might simply never be politically palatable. The current debate about Iran’s nuclear intentions is a perfect example, says Eaglen, who debated Cohen about his thesis on Capitol Hill in April.

“I can say Iran’s a threat, Michael can say it’s not,” she says. “But if he gets it wrong, we’re in trouble.”

Thanassis Cambanis, a fellow at The Century Foundation, is the author of “A Privilege to Die: Inside Hezbollah’s Legions and Their Endless War Against Israel” and blogs at thanassiscambanis.com. He is an Ideas columnist.