[Originally published in The Boston Globe Ideas section.]
EVER SINCE AMERICANS had to briefly ration gas in 1973, “energy independence” has been one of the long-range goals of US policy. Presidents since Richard Nixon have promised that America would someday wean itself off its reliance on foreign oil and gas, a dependence long seen as a gaping hole in America’s national security. It also handcuffs our foreign policy, entangling America in unstable petroleum-producing regions like the Middle East and West Africa.
Given the United States’ huge appetite for fuel, energy independence has always seemed more of a dream than a realistic prospect. But today, nearly four decades later, energy independence is starting to come into sight. Sustained high oil prices have made it economically viable to exploit harder-to-reach deposits. Techniques pioneered over the last decade, with US government support, have made it possible to extract shale oil more efficiently. It helps, too, that Americans have kept reducing their petroleum consumption, a trend driven as much by high prices as by official policy. Total oil consumption peaked at 20.7 million barrels per day in 2004. By 2010, the most recent year tracked in the CIA Factbook, consumption had fallen by nearly a tenth.
Last year, the United States imported only 40 percent of the oil it consumed, down from 60 percent in 2005. And by next year, according to the US Energy Information Administration, the United States will need to import only 30 percent of its oil. That’s been driven by an almost overnight jump in domestic oil production, which had remained static at about 5 million barrels per day for years, but is at 7 million now and will be at 8.5 million by the end of 2014. If these trends continue, the United States will be able to supply all its own energy needs by 2030 and be able to export oil by 2035. In fact, according to the government’s latest projections, the country is on track to become the world’s largest oil producer in less than a decade.
Yet as this once unimaginable prospect becomes a realistic possibility, it’s far from clear that it will solve all the problems it was supposed to. As much as boosters hope otherwise, energy independence isn’t likely to free America from its foreign policy entanglements. And at worst, say some skeptics who specialize in energy markets, it might create a whole host of new ones, subjecting America to the same economic buffeting that plagues most oil exporters, and handing China even more global influence as the world’s behemoth consumer.
As much as the shift brings opportunities, however, it is also likely to open the United States up to liabilities we have not yet had to face. The consequences may be both good and bad, enriching and destabilizing for US interests—but they will certainly have a major impact on our geopolitics, in ways that the policy world is only just beginning to understand.
WHEN RICHARD NIXON was president, America consumed about one-third of the world’s oil, importing about 8.4 million barrels per day chiefly from the Middle East. The status quo hummed along until the Arab-Israeli war of 1973. The United States sent weapons to Israel, and the Arab states retaliated with a six-month oil embargo, refusing to sell oil to America. It was the only time in history that the “oil weapon” was effectively used, and it made a permanent impression on the United States.
Over time, the American response to the embargo came to include three major initiatives that still shape energy policy today. First, the government promoted lower oil consumption by pushing coal and natural gas power plants, home insulation, and mileage standards for cars. Second, the country drilled for more of its own oil. Third, and perhaps most important from a foreign-policy standpoint, the United States promoted a unified global oil market in which any country had the practical means to buy oil from any other. That meant that even if some countries couldn’t do business with each other—say, Iran and the United States—it wouldn’t affect the overall price and availability of oil. Other countries could fill in the gap.
The dreams of energy independence crossed party lines. Though liberals and conservatives differ on the means—how much we should rely on new drilling versus energy conservation—both parties have endorsed the quest. It was one of the few issues on which Presidents Carter and Reagan agreed.
America has made steady progress over the years, to the point where the nation’s total oil consumption has actually begun to drop. As this has happened, the high cost of global energy has also made it profitable to increase domestic production of natural gas and oil. A few months ago, both the US Energy Information Administration and the International Energy Agency predicted that if current production trends continue, the United States will overtake Saudi Arabia and Russia as the world’s largest oil producer in 2017.
Taken together, our slowing appetite and booming production mean that with a suddenness that has surprised many observers, the prospect of energy independence—technically speaking, at least—looms in the windshield.
Energy independence looks different today, however, than it did in the oil-shocked 1970s. For one thing, the energy market is a linchpin of the world order, and any big shift is likely to have costs to stability. Some analysts have warned that America’s growing oil production will create a glut that lowers prices, eating up the profits of oil countries and destabilizing their regimes. (That’s in the short term, anyway; worldwide, oil demand is still rising fast.) Falling prices mean that countries that depend on oil will face sudden cash shortages. It’s easy to imagine how destabilizing that could be for a natural-resource power like Russia, for the monarchs of the Persian Gulf, or for the dictators in Central Asia. No matter how distasteful their rule, the prospect of an unruly transition, or worse still, a protracted conflict, in any of those countries could cause havoc.
In the long term, this is not necessarily a bad thing: Weakening oppressive or corrupt governments could ultimately be beneficial for the people of those countries. And a shift in the balance of power away from the Gulf monarchies of OPEC and toward the United States could have a democratizing effect. In any event, though, lower oil prices and a dynamic energy market make the current stable order unpredictable.
China’s economic rise has also changed the global energy equation. For now, China is largely without its own petroleum supplies and is replacing the United States as the largest importer. As China steps into the United States’ shoes as the world’s largest oil customer, it will gain influence in oil-producing regions as American influence wanes. It might also feel compelled to invest more heavily in an aggressive navy, fearing that the United States will no longer shoulder the responsibility of policing shipping lanes in the Persian Gulf and elsewhere—a costly security service that America pays for but which benefits the entire network of global trade.
Domestically, there’s also the “resource curse,” which afflicts countries that depend too heavily on extracted commodities like minerals or petroleum. Such industries don’t add much value to a society beyond the price the commodity fetches at market, and that price is notoriously fickle, meaning fortunes and jobs rise and fall with swings in global prices. The resource curse often implies corruption and autocracy as well. But economists are less concerned about that, since the United States already has an effective government and laws to thwart corruption, and because oil will still make up a minuscule overall share of the economy. Last year oil and gas extraction amounted to just 1.2 percent of the American gross domestic product.
THERE ARE STILL plenty of people who think that energy self-sufficiency will be an unalloyed good. Jay Hakes, who has pursued the goal as an energy official under the last three Democratic presidents, says that America will reap countless political and economic dividends. It will help the trade deficit, give American companies and workers benefits when oil prices are high, and insulate the country from supply shocks. It will also give Washington wider latitude when dealing with oil-producing countries, on which it will depend less. “There are some downsides,” he acknowledges, “but they’re outweighed by all the positives.”
One benefit that self-sufficiency won’t bring, it seems clear, is a sudden independence from the politics of the Middle East. The region produces about half the world’s oil, and Saudi Arabia alone has so much oil that it can raise its capacity at a moment’s notice to make up for a shortfall anywhere else in the world.
Already, America is largely independent of Middle Eastern oil as a consumer: Only about 15 percent of our supply comes from the region. But we do depend on a stable world market—even more so if we become a net exporter ourselves. So even if we don’t buy Saudi oil, we’ll still need a stable Saudi regime that can quickly add a few million barrels a day to world flows to offset a disruption somewhere else.
Michael Levi, a fellow at the Council on Foreign Relations and author of the book “The Power Surge: Energy, Opportunity, and the Battle for America’s Future,” believes that the biggest risk of achieving a goal like energy independence is complacency: Without the pressures that importing oil has brought, we may have little reason to innovate our way out of fossil fuels altogether. The policies themselves have achieved a great deal of good, he points out—stabilizing the world’s energy markets, reducing consumption, and pushing us beyond “independence” toward renewable sources like wind and solar power (though today these still make up a vanishingly small portion of the US energy supply).
Levi argues that an American oil bonanza could easily remove the political incentives for long-term planning and sacrifice. “I get scared that we’ll become complacent and make foolish decisions because we believe we’ve become energy independent,” Levi says. Energy independence was a useful slogan to motivate America, but in reality, a sensible energy policy has to balance a plethora of competing concerns, from geopolitics and the environment to consumer demand and fuel’s importance to the economy.
“The real way to be energy independent,” he said, “is actually to not use oil.”
Blast barriers in Baghdad, 2008. DONOVAN WYLIE / MAGNUM PHOTOS
[Originally published in The Boston Globe Ideas.]
BEIRUT — Everything that people love and hate about cities stems from the agglomeration of humanity packed into a tight space: the traffic, the culture, the chance encounters, the anxious bustle. Along with this proximity come certain feelings, a relative sense of security or of fear.
Over the last 13 years I have lived in a magical succession of cities: Boston, Baghdad, New York, and Beirut. They all made lovely and surprising homes—and they all were distorted to varying degrees by fear and threat.
At root, cities depend on constant leaps of faith. You cross paths every hour with people of vastly different backgrounds and trust that you’ll emerge safely. Each act of violence—a mugging, a murder, a bombing—erodes that faith. Over time, especially if there are more attacks, a city will adapt in subtle but profound and insidious ways.
The bombing last week was a shock to Boston, and a violation. As long as it’s isolated, the city will recover. But three people have died, and more than 170 have been wounded, and the scar will remain. As we think about how to respond, it’s worth also considering what happens when cities become driven by fear.
NEW YORK, WASHINGTON, and to a lesser extent the rest of America have exchanged some of the trappings of an open society for extra security since the Sept. 11 attacks. Metal detectors and guards have become fixtures at government buildings and airports, and it’s not unusual to see a SWAT team with machine guns patrolling in Manhattan. Police conduct random searches in subways, and new buildings feature barriers and setbacks that isolate them from the city’s pedestrian life.
Baghdad and Beirut, however, are reminders of the far greater changes wrought by wars and ubiquitous random violence. A city of about 7 million, low-slung Baghdad sprawls along the banks of the Tigris River. It’s the kind of place where almost everyone has a car, and it blends into expansive suburbs on its fringes. When I first arrived in 2003, the city was reeling from the shock-and-awe bombing and the US invasion. But it was the year that followed that slowly and inexorably transformed the way people lived. First, the US military closed roads and erected checkpoints. Then, militants started ambushing American troops and planting roadside bombs; the ensuing shootouts often engulfed passersby. Finally, extremists and sectarian militias began indiscriminately targeting Iraqi civilians and government personnel—in queues at public buildings, at markets, in mosques, virtually everywhere. Order crumbled.
Baghdad became a city of walls. Wealthy homeowners blocked their own streets with piles of gravel. Jersey barriers sprang up around every ministry, office, and hotel. As the conflict widened, entire neighborhoods were sealed off. People adjusted their commutes to avoid tedious checkpoints and areas with frequent car bombings. Drive times doubled or tripled. Long lines, with invasive searches, became an everyday fact of life. The geography of the city changed. Markets moved, even the old-fashioned outdoor kind where merchants sell livestock or vegetables. Entire sub-cities sprang up to serve Shia and Sunni Baghdadis who no longer could travel through each other’s areas.
Simple civic pleasures atrophied almost overnight. No one wanted to get blown up because they insisted on going out for ice cream. The famed riverfront restaurants went dormant; no more live carp hammered with a mallet and grilled before our eyes. The water-pipe joints in the parks went out of business. Most of the social spaces that defined the city shut down. Booksellers fled Mutanabbi Street, the intellectual center of the city with its antique cafes. The amusement park at the Martyrs Monument shut its gates. Hotel bar pianists emigrated. Dust settled over the playgrounds and fountains at the strip of grassy family restaurants near Baghdad University.
In Beirut, where I moved with my family earlier this year, a generation-long conflict has Balkanized the city’s population. Here, most groups no longer trust each other at all. From 1975 to 1991 the city was split by civil war, and people moved where they felt safest. A cosmopolitan city fragmented into enclaves. Christians flocked to East Beirut, spawning a dozen new commercial hubs. Shiites squatted in the village orchards south of Beirut, and within a decade had built a city-within-a-city almost a million strong—the Dahieh, or “the Suburb.” The original downtown became a demilitarized zone, its Arabesque arcades reduced to rubble, and today has been rebuilt as a sterile, Disney-like tourist and office sector. Ras Beirut, my neighborhood, deteriorated from proudly diverse (and secular) to “Sunni West Beirut,” although it still boasts pockets of stubborn coexistence. In today’s Beirut, my block, where a Druze warlord lives across the street from a church and subsidizes the parking fees of his Shia, Sunni, and Christian neighbors, is a stark exception.
Mixing takes place, but tentatively, and because of frequent outbreaks of violence over the years, Beirutis have internalized the reflexes to fight, defend, and isolate. The result is a city alight with street life, cafes, and boutiques but which can instantaneously shift to war footing. One friend ran into his bartender on a night off at a checkpoint with bandoliers of bullets strapped to his chest. Even when the city appears calm, most people have laid in supplies in case an armed flare-up forces them to stay in their homes for a week. My friend’s teenage daughter keeps a change of clothes in her schoolbag in case she can’t return to her house. Uniformed private guards are everywhere, in every park, on every promenade, at every mall.
WITHIN A FEW weeks of our move to Beirut this year, my 5-year-old son traded his old fantasy, in which he played the doctor and assigned us roles as patients and nurses, for a new one: security guard. While we were setting up for a yard party, he arranged a few plastic chairs by the door. “I’ll check people here,” he declared. He also asked me a lot of questions about the heavily armed soldiers who stand watch on our street: “Will the army shoot me if I make a mistake?”
This is not the childhood he would have in Boston, even after this week. War-molded cities are nightmare reflections of failed states, places where government has gone into free fall, police don’t or can’t do their jobs, and normal life feels out of reach. Beirut is a kind of warning: Physically it appears normal on most days. But trust is gone. The public sphere feels wobbly and impermanent.
Boston is still lucky, with assets that Beirut lost generations ago: functional institutions, old communities with tangled but shared histories, and unifying cultural traditions. Boston has police that can get the job done and a baseball team that ties together otherwise divided corners of the city. One lesson of the city I live in now is that circumstances can sever these lifelines faster than we expect. Our connections require continued, perhaps redoubled, care. Without trust, a city can still be a magnificent place to live. Until all at once, it isn’t.
Syrian rebel fighters posed for a photo after several days of intense clashes with the Syrian army in Aleppo, Syria, in October. (AP: NARCISO CONTRERAS)
[Originally published in The Boston Globe Ideas.]
THE NEWS FROM SYRIA keeps getting worse. As it enters its third year, the civil war between the ruthless Assad regime and groups of mostly Sunni rebels has taken nearly 100,000 lives and settled into a violent, deadly stalemate. Beyond the humanitarian costs, it threatens to engulf the entire region: Syria’s rival militias have set up camp beyond the nation’s borders, destabilizing Turkey, Lebanon, and Jordan. Refugees have made frontier areas of those countries ungovernable.
United Nations peace talks have never really gotten off the ground, and as the conflict gets worse, voices in Europe and America, from both the left and right, have begun to press urgently for some kind of intervention. So far the Obama administration has largely stayed out, trying to identify moderate rebels to back, and officially hoping for a negotiated settlement—a peace deal between Assad’s regime and its collection of enemies.
Given the importance of what’s happening in Syria, it might seem puzzling that the United States is still so much on the sidelines, waiting for a resolution that seems more and more elusive with each passing week. But it is also becoming clear that for America, there’s another way to look at what’s happening. A handful of voices in the Western foreign policy world are quietly starting to acknowledge that a long, drawn-out conflict in Syria doesn’t threaten American interests; to put it coldly, it might even serve them. Assad might be a monster and a despot, they point out, but there is a good chance that whoever replaces him will be worse for the United States. And as long as the war continues, it has some clear benefits for America: It distracts Iran, Hezbollah, and Assad’s government, traditional American antagonists in the region. In the most purely pragmatic policy calculus, they point out, the best solution to Syria’s problems, as far as US interests go, might be no solution at all.
If it’s true that the Syrian war serves American interests, that unsettling insight leads to an even more unsettling question: what to do with that knowledge. No matter how the rest of the world sees the United States, Americans like to think of themselves as moral actors, not the kind of nation that would stand by as another country destroys itself through civil war. Yet as time goes on, it’s starting to look—especially to outsiders—as if America is enabling a massacre that it could do considerably more to end.
For now, the public debate over intervention in America has a whiff of hand-wringing theatricality. We could intervene to stanch the suffering but for circumstances beyond our control: the financial crisis, worries about Assad’s successor, the lingering consequences of the Iraq war. These might explain why America doesn’t stage a costly outright invasion. But they don’t explain why it isn’t sending vastly more assistance to the rebels.
The more Machiavellian analysis of Syria’s war helps clarify the disturbing set of choices before us. It’s unlikely that America would alter the balance in Syria unless the situation worsens and protracted civil war begins to threaten, rather than quietly advance, core US interests. And if we don’t want to wait for things to get that bad, then it is time for America’s policy leaders to start talking more concretely—and more honestly—about when humanitarian concerns should trump our more naked state interests.
MANY AMERICAN observers were heartened when the Arab uprisings spread to Syria in the spring of 2011, starting with peaceful demonstrations against Bashar al-Assad’s police state. Given Assad’s long and murderous reign, a democratic revolution seemed to offer hope. But the regime immediately responded with maximum lethality, arresting protesters and torturing some to death.
Armed rebel groups began to surface around the country, harassing Assad’s military and claiming control over a belt of provincial cities. Assad has pursued a scorched earth strategy, raining shells, missiles, and bombs on any neighborhood that rises up. Rebel areas have suffered for the better part of a year under constant strafing and sniper fire, without access to water, health care, or electricity. Iran and Russia have kept the military pipeline open, and Assad has a major storehouse of chemical weapons. While some rebel groups have been accused of crimes, the regime is disproportionately responsible for the killing, which earlier this year passed the 70,000 mark by a United Nations estimate that close observers consider an undercount.
As the civil war has hardened into a bloody, damaging standoff, many have called for a military intervention, pressing for the United States to side with one of the moderate rebel factions and do whatever it takes to propel it to victory. Liberal humanitarians focus on the dead and the millions driven from their homes by the fighting, and have urged the United States to join the rebel campaign. The right wants intervention on different grounds, arguing that the regional security implications of a failed Syria are too dangerous to ignore; the country occupies a significant strategic location, and the strongest rebel coalition, the Nusra Front, is an Al Qaeda affiliate. Given all those concerns, both sides suggest that it’s only a question of when, not if, the United States gets drawn in.
“Syria’s current trajectory is toward total state failure and a humanitarian catastrophe that will overwhelm at least two of its neighbors, to say nothing of 22 million Syrians,” said Fred Hof, an ambassador who ran Obama’s Syria policy at the State Department until last year, when he quit the administration and became a leading advocate for intervention. His feelings are widely shared in the foreign policy establishment: Liberals like Princeton’s Anne-Marie Slaughter and conservatives like Fouad Ajami have made the interventionist case, as have State Department officials behind the scenes.
Intervention is always risky, and in Syria it’s riskier than elsewhere. The regime has a powerful military at its disposal and major foreign backers in Russia and Iran. An intervention could dramatically escalate the loss of life and inflame a proxy struggle into a regional conflagration.
And yet there’s a flip side to the risks: The war is also becoming a sinkhole for America’s enemies. Iran and Hezbollah, the region’s most persistent irritants to the United States and Israel, have tied up considerable resources and manpower propping up Assad’s regime and establishing new militias. Russia remains a key guarantor of the government, a stance that is costing it support throughout the rest of the Arab world. Gulf monarchies, which tend to be troublesome American allies, have invested small fortunes in the rebel side, sending weapons and establishing exile political organizations. The more the Syrian war sucks up the attention and resources of its entire neighborhood, the greater America’s relative influence in the Middle East.
If that makes Syria an unattractive target for intervention, so too do the politics and position of the combatants. For now, jihadist groups have established themselves as the most effective rebel fighters—and their distaste for Washington approaches their rage against Assad. Egos have fractured the rebellion, with new leaders emerging and falling every week, leaving no unified government-in-waiting for outsiders to support. The violent regime, meanwhile, is no friend to the West.
“I’ll come out and say it,” wrote the American historian and polemicist Daniel Pipes, in an e-mail. “Western powers should guide the conflict to stalemate by helping whichever side is losing. The danger of evil forces lessens when they make war on each other.”
Pipes is a polarizing figure, best known for his broadsides against Islamists and his critique of US policy toward the Middle East, which he usually says is naive. But in this case he’s voicing a sentiment that several diplomats, policy makers, and foreign policy thinkers have expressed to me in private. Some are career diplomats who follow the Syrian war closely. None wants to see the carnage continue, but one said to me with resignation: “For now, the war is helping America, so there’s no incentive to change policy.”
Analysts who follow the conflict up close almost universally want more involvement because they are maddened by the human toll—but many of them see national interests clearly standing in the way. “Russia gets to feel like it’s standing up to America, and America watches its enemies suffer,” one complained. “They don’t care that the Syrian state is hollowing itself out in ways that will come back to haunt everyone.”
IS IT EVER ACCEPTABLE to encourage a war to continue? In the policy world it’s seen as the grittiest kind of realpolitik, a throwback to the imperial age when competing powers often encouraged distant wars to weaken rivals, or to keep colonized nations compliant. During the Cold War the United States fanned proxy wars from Vietnam to Afghanistan to Angola to Nicaragua but invoked the higher principle of stopping the spread of communism, rather than admitting it was simply trying to wear out the Soviet Union.
In Syria it’s impossible to pretend that the prolonging of the civil war is serving a higher goal, and nobody, not even Pipes, wants the United States to occupy the position of abetting a human-rights catastrophe. But the tradeoffs illustrate why Syria has become such a murky problem to solve. Even in an intervention that is humanitarian rather than primarily self-interested, a country needs to weigh the costs and risks of trying to help against the benefit it might realistically expect to bring—and it’s a difficult decision to get involved when those potential costs include threats to our own political interests.
So just what would be bad enough to induce the United States to intervene? An especially egregious massacre—a present-day Srebrenica or Rwanda—could fan such outrage that the White House changes its position. So too could a large-scale violation of the Chemical Weapons Convention—signed by most states in the world, but not Syria. But far more likely is that the war simmers on, ever deadlier, until one side scores a military victory big enough to convince the outside powers to pick a winner. The White House hopes that with time, rebels more to its liking will gain influence and perhaps eclipse the alarming jihadists. That could take years. Many observers fear that Assad will fall and open the way to a five- or ten-year civil war between his successor and a well-armed coalition of Islamist militias, turning Syria into an Afghanistan on the Euphrates. The only thing that seems likely is that whatever comes next will be tragic for the people of Syria.
Because this chilly if practical logic is largely unspoken, the current hands-off policy continues to bewilder many American onlookers. It would be easier to navigate the conversation about intervention if the White House, and the policy community, admitted what observers are starting to describe as the benefits of the war. Only then can we move forward to the real moral and political calculations at stake: for example, whether giving Iran a black eye is worth having a hand in the tally of Syria’s dead and displaced.
For those up close, it’s looking unhappily like a trip to a bygone era. Walid Jumblatt, the Lebanese Druze warlord, spent much of the last two years trying fruitlessly to persuade Washington and Moscow to midwife a political solution. Now he’s given up. Atop the pile of books on his coffee table sits “The Great Game,” a tale of how superpowers coldly schemed for centuries over Central Asia, heedless of the consequences for the region’s citizens. When he looks at Syria he sees a new incarnation of the same contest, where Russia and America both seek what they want at the expense of Syrians caught in the conflict.
“It’s cynical,” he said in a recent interview. “Now we are headed for a long civil war.”
Kholoud and Nidal, from Kholoud’s Facebook page.
[Originally published in The Boston Globe Ideas.]
BEIRUT — Kholoud Sukkarieh and Nidal Darwish weren’t interested in making legal history when they got engaged last year. She’s a Sunni Muslim and he is Shia, and like many interfaith couples in this part of the world who don’t want to convert, they planned to get married outside their home country. Then, at a photography workshop, Sukkarieh met a lawyer with a cause and an intriguing proposal: Would she and her fiancé be interested in using their wedding to do something radical?
Last November, they tied the knot before a willing notary, becoming the first couple in the history of Lebanon to marry in a nonreligious ceremony. In January they embarked on what promises to be a long, fraught challenge to the legal status quo, using an obscure provision of old colonial law to demand that the Lebanese government officially recognize their marriage.
What sounds like a simple thing in the West—a civil marriage with a judge or a notary presiding, and no religious contract—is a near-impossibility in the Middle East. Their marriage, and the controversy it has triggered here, shines a light on a crucial but unappreciated way in which this region differs from much of the rest of the world. Lebanon, like almost all the Arab states, most Islamic countries, and Israel, simply doesn’t have civil laws for matters of personal status.
The effects of this difference percolate deeply through society. In the West and Asia, marriage and family law have evolved to reflect broader changes in society, even when religious authorities don’t agree. Civil codes have adapted to the rise of divorce, developed custody standards that are fairer to women, and are now starting to recognize the rights of same-sex couples and their children.
In the Middle East, on the other hand, the laws governing marriage, inheritance, and sometimes even citizenship move only when religious authorities allow, which means they’ve remained almost entirely static during the last century. They stifle identity and cause countless problems for families whose members might include different faiths or different nationalities; for couples that want to divorce; for women, who often have far fewer rights than men under religious codes; and for people who want to equitably share their inheritance with their daughters.
Sukkarieh and Darwish, and the civil-law advocates who support them, see their marriage only as a beginning. By forcing the state of Lebanon to recognize their union, they hope to catalyze a broader movement to grant people a legally recognized personal life outside the umbrella of religion—and establish a clear civil sphere outside the influence of clerics. In doing so, they are flying straight into the teeth of a countervailing trend: the rise of political Islam.
“We want political life to be 100 percent secular,” Sukkarieh explained one recent evening in the garden at Fitness Zone, the gym where her husband works as a receptionist. “I’m not afraid to question a sheikh. It’s time for the politicians to stop being afraid of clerics.”
MIDDLE EASTERN MARRIAGE and family law is, ironically, the legacy of what was once a very progressive idea. Over the centuries that the Ottoman Empire ruled the region, it presided over an astonishing patchwork of minorities, including sizable populations of Jews and Christians. The Ottoman Sultans, cosmopolitan Muslims based in Istanbul, took the open-minded approach of granting Jews, Christians, and other minority sects the right to govern their own personal affairs. Separate religious courts governed the private lives of different religious communities.
Ottoman rule crumbled in the 19th century, replaced by European colonial powers, and finally by independent states in the 20th century. As these emerged, new ideas jostled to replace the old balance of cleric and state. Islamist philosophers argued for a more rigid system governed entirely by Islam, while secular nationalists argued for new modern civil governments. Most of the Arab states ultimately moved toward more religious legal codes, though some, like Iraq and Syria, were resolutely secular for a time. Lebanon inherited some secular legal precepts from the French mandate period, which ended in 1943. But most of the former empire fell back on the Ottoman notion that, where personal life is concerned, each religion should govern its own.
For at least two generations, the laws of religious sects have controlled personal status matters almost everywhere. (One exception was Tunisia, where a secular dictator in 1956 implemented a progressive secular code that gave women equal rights.) Even in states that have some civil rules, Islamic law trumps them for Muslims. In Christian communities, clerics hold sway over family life with such strict and antiquated codes that many of the dwindling number of Christians in the Arab world have converted to Islam, which has comparatively simple procedures for divorce. Israel offers more of the same, with the most conservative ultra-Orthodox sect in charge of sanctioning marriages—and even of certifying who counts as a Jew. This has created some strange scenarios: One of the few ways that Israelis and Lebanese cross paths with each other, as citizens of two nations that have technically been at war nonstop since 1948, is in Cyprus, where package tour operators fly couples from both countries who must leave home to hold a civil marriage ceremony.
The result is a patchwork of laws where neighbors may have entirely different marriage rights and interfaith couples face myriad complications. For many communities it blocks any attempt to secure equal rights for women, and curtails the rights of people who consider themselves secular or nonbelievers. Ultimately, it creates a vast zone of activity that’s solely the province of clerics and off limits to the state.
There have been occasional bursts of reform, including a brief period during the 1950s and 1960s when secular regimes in Egypt and Syria appeared to have momentum on their side. In the years before the recent Arab uprisings, civil law advocates made some headway. Egypt reformed divorce laws to fix some of the most imbalanced practices, including custody rulings that almost universally favored fathers over mothers. Algeria gave women substantial rights to pass citizenship to their children. Saudi Arabia reluctantly began to consider giving children born to a Saudi mother and a foreign father—previously treated as complete foreigners—a chance to win legal residency in the kingdom.
Today, though, even those small steps are imperiled, as Islamists all over the region press for political power. In Egypt they are seeking to roll back the limited rights to divorce that women earned in the last decade. Some have also asked to lower the age of marriage from its current minimum of 18 and to limit the rights of religious minorities and women to run for high political office.
Though such extreme propositions appear unlikely to become law right away, Islamists have successfully shifted the political dialogue in most of the region. In Iraq, a new constitution written after the US invasion raised the status of Islam in the legal code, and sectarian officials have gained vast new powers over both the personal and the political. Libya’s new leader in his first speech legalized polygamy, which had been banned by Khadafy. Tunisia’s progressive personal status code, which gave women equal rights and cut clerics out of personal status matters, is under attack.
Perhaps surprisingly, given the history, when civil-law advocates look for a model legal regime they turn to Turkey. What was once the heart of the Ottoman Empire is now the only country to have completely scrapped its religious rules for a civil code that applies equally to all citizens—although there, too, Islamists are gradually chipping away at the state’s secular bedrock.
LEBANON, WHERE SUKKARIEH and Darwish live, has been an especially fertile ground for secular challenges to clerical control. This small country is notorious for the entrenched factionalism of its 18 sects—Muslims, Christians, Druze, and other smaller offshoots—whose animosity was inflamed during the civil war from 1975 to 1990. But it also has a deep tradition of interfaith cooperation, a powerful secular civil society built on the region’s best universities, and the Arab world’s most thriving independent nonprofit sector.
Even so, it remains bound by the region’s assumptions about the role of religion in civic life. In Lebanon, candidates run for office based on their sect. Public sector jobs are handed out by sect. Voting is regulated by sect. And naturally, marriage, divorce, and inheritance are controlled by religious codes.
Today, a small but vociferous group of activists is pushing against this. They include lawyers from prominent political dynasties, feminists, activists for political reform, frustrated youth, and working-class believers like Sukkarieh. Parallel movements exist all over the Middle East among urban elites and occasionally in rural villages.
Like Sukkarieh and Darwish, these advocates see civil marriage as a proxy for something bigger: a truly secular state.
“Only a secular regime guarantees freedom, even freedom of religion,” says Lina Abou-Habib, who runs a Beirut-based NGO that pushes for women’s rights around the region. One of its many campaigns aims to force Arab governments to allow women to pass citizenship to their children—a right currently reserved, in most places, for men alone. Abou-Habib says the hegemony of clerics has led to absurdity. She’s a Greek Orthodox Christian married to a Sunni Muslim; under the prevailing legal code, since she hasn’t converted, if she were to die now her daughter wouldn’t even be able to inherit her car.
For Sukkarieh and Darwish, getting their marriage registered with the state has become a consuming second career. So far the government has withheld legal recognition of their marriage contract, but courts have recognized it—which, in Lebanon’s complicated bureaucracy, is looking like a victory for the couple. And the larger argument appears just to be beginning. Lebanon’s president came out on Twitter in favor of civil marriage; the billionaire prime minister opposed it. At the end of January, the top Sunni cleric threatened to excommunicate anyone who supported civil marriage, but the richest and most powerful Sunni politician said he supported civil marriage and opposed the mufti.
Meanwhile, other couples say they intend to follow their lead as soon as the issue is resolved. A Lebanese journalist has tentatively scheduled a group civil wedding for more couples in April, assuming the pathbreaking marriage successfully wins formal recognition this month.
It’s been 70 years since some Lebanese started talking about civil marriage, and no couple has gotten as far as Sukkarieh and Darwish. At a time when the future of the Arab state is being vigorously contested by empowered citizens around the region, secular activists have been buoyed by the unexpected landmark in Lebanon. It’s premature to read too much into it; after all, Islamists are sweeping to power in elections all over, and it could take decades for their own internal schisms and inconsistencies to create openings for a strong secular alternative. But the civil marriage faction hopes to build that, one family at a time. “We didn’t expect it,” Sukkarieh said, “but we’ve rejuvenated civil society.”
A street poster from Cairo that reads, “My God, my freedom, O my country.” Photo: NEMO.
CAIRO — Every night, Egypt’s current comedic sensation, a doctor hailed as his country’s Jon Stewart, lambastes the nation’s president on TV, mocking his authoritarian dictates and airing montages that reveal apparent lies. On talk shows, opposition politicians hold forth for hours, excoriating government policy and new Islamist president Mohammed Morsi. Protesters use the earthiest of language to compare their political leaders to donkeys, clowns, and worse. Meanwhile, the president’s supporters in the Muslim Brotherhood respond in kind on their new satellite television station and in mass counter-rallies.
Before Egypt’s uprising two years ago, this kind of open debate about the president would have been unthinkable. For nearly three decades, former president Hosni Mubarak exerted near total control over the public sphere. In the twilight of his term, he imprisoned a famous newspaper editor who dared to publish speculation about the ailing president’s declining health. No one else touched the story again.
To Western observers, the freewheeling back-and-forth in Egypt right now might sound like the flowering of a young open society, one of the revolution’s few unalloyed triumphs. But amid the explosion of debate, something less wholesome has begun to arise as well. Though speech is far more open, it now carries a new and different kind of risk, one more unpredictable and sudden. Islamist officials and citizens have begun going after individuals for crimes such as blasphemy and insulting the president, and vaguer charges like sedition and serving foreign interests. The elected Islamist ruling party, the Muslim Brotherhood, pushed a new constitution through Egypt’s constituent assembly in December that expanded the number of possible free speech offenses—including insults to “all prophets.”
Worryingly, a recent report showed that President Morsi—a Brotherhood member, and Egypt’s first-ever genuinely elected, civilian leader—has invoked the law against insulting the presidency far more frequently than any of the dictators who preceded him, and has even directed a full-time prosecutor to summon journalists and others suspected of that crime.
The Muslim Brotherhood, as it rises to power, is playing host to conflicting ideas. It wants the United States to view it as a tolerant modern movement that doesn’t arbitrarily silence critics, but at the same time it needs to show its political base of socially conservative constituents in rural Egypt that it won’t tolerate irreligious speech at home. And it wants to argue that despite its religious pedigree, it is behaving within the constraints of the law.
For the time being, Egypt’s proliferating free expression still outstrips government efforts to shut it down. But as the new open society engenders pushback, what’s happening here is in many ways a test case for Islamist rule over a secular state. What’s at stake is whether Islamists—who are vying for elected power in countries around the Muslim world—really only respect the rules until they have enough clout to ignore them.
The text on this Cairo street poster reads, “As they breathe, they lie.” Photo: NEMO
EGYPTIANS ARE RENOWNED throughout the Arab world for jokes and wordplay, as likely to fall from the mouth of a sweet potato peddler as a society journalist. Much of daily life takes place in the crowded public social spaces where people shop, drink hand-pressed sugarcane juice, loiter with friends, or picnic with their families. But under the stifling police state built by Mubarak, that vitality was undercut by fear of the undercover police and informants who lurked everywhere, declaring themselves at sheesha joints or cafes when the conversation veered toward politics.
As a result, a prudent self-censorship ruled the day. State security officials had desks at all the major newspapers, but top editors usually saved them the trouble, restraining their own reporters in advance. In 2005, when one publisher took the bold step of publishing a judge’s letter critical of the regime, he confiscated the cellphones of all his editors and sequestered them in a conference room so they couldn’t tip off authorities before the paper reached the streets.
It wasn’t technically illegal to be a dissident in Egypt; that the paper could be published at all was testament to the fact that some tolerance existed. Egypt’s system was less draconian and violent than the police states in Syria and Iraq, where dissidents were routinely assassinated and tortured. But the limits of public speech were well understood, and Egyptians who cared to criticize the state carefully stayed on the accepted side of the line. Activists would speak out about electoral fraud by the ministry of the interior or against corruption by businesspeople, for example, but would carefully refrain from criticizing the military or Mubarak’s family. Political life as we understand it barely existed.
Egypt’s uprising marked an abrupt break in this long cultural balancing act. For the first time, millions of Egyptians expressed themselves freely and in public, openly defying the intelligence minions and the guns of the police. It was shocking when people in the streets called directly for the fall of the regime. Within weeks, previously unimaginable acts had become commonplace. Mubarak’s effigy hung in Tahrir Square. Military generals were mocked as corrupt, sadistic toadies in cartoons and banners. Establishment figures called for trials of former officials and limits on renegade security officials.
In the two years since, free speech has spread with dizzying speed—on buses, during marches, around grocery stalls, everywhere that people congregate. Today there are fewer sacred cows, although even at the peak of revolutionary fervor few Egyptians were willing to risk publicly impugning the military, which was imprisoning thousands without any due process. (An elected member of parliament faced charges when he compared the interim military dictator to a donkey.)
Mohammed Morsi was inaugurated in June, after a tight election that pitted him against a former Mubarak crony. Morsi campaigned on a promise to excise the old regime’s ways from the state, and on a grandiose Islamist platform called “The Renaissance.” His regime has fared poorly in its efforts to take control of the police and judiciary. Nor has it made much progress on its sweeping but impractical proposals to end poverty and save the Egyptian economy. It has proven easier to talk about Islamic social issues: allegations of blasphemy by Christians and atheist bloggers; alcohol consumption and the sexual norms of secular Egyptians; and the idea, widely held among Brotherhood supporters, that a godless cabal of old-regime supporters is secretly plotting to seize power.
Before it won the presidency, the Muslim Brotherhood emphasized it had been fairly elected; the party was Islamist, it said, but from the pragmatic, democratic end of the spectrum. But in recent months, there’s been more than a whiff of Big Brother about the Brotherhood. Supposed volunteers attacked demonstrators outside Morsi’s presidential palace—and then were videotaped turning over their victims to Brotherhood operatives. Allegations of torture, illegal detention, and murder by state agents pile up uninvestigated.
As revolutionaries and other critical Egyptians have turned their ire from the old regime to the new, the Brotherhood also has begun targeting political speech. The new constitution, authored by the Brotherhood and forced through Egypt’s constituent assembly in an overnight session over the objections of the secular opposition and even some mainstream religious clerics, criminalized blasphemy and expanded older statutes against insults to leaders, state institutions like the courts, and religious figures. Popular journalists have been threatened with arrest, while less famous individuals, including children improbably accused of desecrating a Koran, have been thrown into detention. Morsi’s presidential advisers regularly contact human rights activists and journalists to challenge their reports, a level of attention and pressure previously unknown here.
In addition to the old legal tools to limit free expression, which are now more heavily used by the Islamists than they were by Mubarak, the new constitution has added criminal penalties for insulting all religions and empowers courts to shut down media outlets that don’t “respect the sanctity of the private lives of citizens and the requirements of national security.”
The Egyptian government began an investigation of TV comedian Baseem Yousef but dropped its charges after a public outcry.
Egyptian human rights monitors have tracked dozens of such cases, including three that were filed by the president’s own legal team. Gamal Eid at the Arab Network for Human Rights Information charted 40 cases that prosecuted political critics for what amounted to dissenting speech in the first 200 days of Morsi’s regime. That’s more, he claims, than during Mubarak’s entire reign, and more charges of insulting the president than had been filed since 1909, when the law was first written.
IT’S A WELL-KNOWN PRECEPT in politics that times of transition are the most unstable, and that the fight to establish civil liberties carries risks. The current speech crackdown may just be an expected symptom of the shift from an effective authoritarian state to competitive politics. Mubarak, of course, had less need to prosecute a population that mostly kept quiet.
It could also be a sign of desperation on the part of the Brotherhood, as it struggles to rule without buy-in from the police and state bureaucracy. Or it could, more alarmingly, mark a transition to a genuine new era of censorship in the most populous Arab country, this time driven as much by the Islamist cultural agenda as by the quest to keep a grip on power.
It is that last prospect that makes the path Egypt takes so important. By dint of its size and cultural heft, the country remains a major influence across the Arab world, and both in Egypt and elsewhere, the Muslim Brotherhood is at the front lines of political Islam—trying to balance the cultural conservatism of its rank-and-file supporters with the openness the world expects from democratic society.
There are signs that the Brotherhood wants to at least make gestures toward Western norms, though it remains hard to gauge exactly how open an Egypt its members would like to see. At one point the government began an investigation of Baseem Yousef, the Jon Stewart-like TV comedian, but abruptly dropped its charges in January after a public outcry.
During the wave of bad publicity around the investigation, one of President Morsi’s advisers issued a statement claiming that the state would never interfere in free speech—so long as citizens and the press worked to raise their “level of credibility.”
“Human dignity has been a core demand of the revolution and should not be undermined under the guise of ‘free speech,’” presidential adviser Esam El-Haddad said in a statement that placed ominous boundaries on the very idea of free speech that it purported to advance. “Rather, with freedom of speech comes responsibility to fellow citizens.”
What scares many people is how they define “responsibility.” A widely watched video clip portrays a Salafi cleric lecturing his followers about how Egypt’s new constitution will allow pious Muslims to limit Christian freedoms and silence secular critics (the cleric, Sheikh Yasser Borhami, is from a more fundamentalist current, separate from but allied with the Brotherhood). When critics look at the Brotherhood’s current spate of investigations and threatened prosecutions, they see the political manifestation of the same exclusionary impulse: the polarizing notion that the Islamists’ actions are blessed by God and, by implication, that to criticize them is sacrilege.
Modern Islamism hasn’t reckoned with this implicit conflict yet, even internally. Officially, one current of the Brotherhood’s ideology prioritizes social activism over politics, and eschews coercion in religious matters. But another, perhaps more popular strain in Brotherhood thinking agitates for a religious revolution in people’s daily lives, and that strain appears to be driving the behavior of the Brothers suddenly in charge of the nation. Their fervor is colliding squarely with the secular responsibility of running a state like Egypt, which for all its shortcomings has real institutions, laws, and a civil society that expects modern freedoms and protections. The first stage of Egypt’s transition from military dictatorship has ended, but the great clash between religious and secular politics is just beginning to unfold.
Fred Kaplan’s Insurgents on David Petraeus
The American occupation of Iraq in its early years was a swamp of incompetence and self-delusion. The tales of hubris and reality-denial have already passed into folklore. Recent college graduates were tasked with rigging up a Western-style government. Some renegade military units blasted away at what they called “anti-Iraq Forces,” spurring an inchoate insurgency. Early on, Washington hailed the mess as a glorious “mission accomplished.” Meanwhile, a “forgotten war” simmered to the east in Afghanistan. By the low standards of the time, common sense passed for great wisdom. Any American military officer willing to criticize his own tactics and question the viability of the mission brought a welcome breath of fresh air.
Most alarming was the atmosphere of intellectual dishonesty that swirled through the highest levels of America’s war on terror. The Pentagon banned American officers from using the word “insurgency” to describe the nationalist Iraqis who were killing them. The White House decided that if it refused to plan for an occupation, somehow the United States would slide off the hook for running Iraq. Ideas mattered, and many of the most egregious foul-ups of the era stemmed from abstract theories mindlessly applied to the real world.
There is no one better equipped to tell the story of those ideas — and their often hair-raising consequences — than Fred Kaplan, a rare combination of defense intellectual and pugnacious reporter. Kaplan writes Slate’s War Stories column, a must-read in security circles. He brings genuine expertise to his fine storytelling, with a doctorate from M.I.T., a government career in defense policy in the 1970s and three decades as a journalist. Kaplan knows the military world inside and out; better still, he has historical perspective. With “The Insurgents: David Petraeus and the Plot to Change the American Way of War,” he has written an authoritative, gripping and somewhat terrifying account of how the American military approached two major wars in the combustible Islamic world. He tells how it was grudgingly forced to adapt; how it then overreached; and how it now appears determined to discard as much as possible of what it learned and revert to its old ways.
Fotini Christia with a Syrian girl in a camp for internally displaced persons in the Syrian village of Atmeh.
[Originally published in The Boston Globe.]
WHAT IS a civil war, really?
At one level the answer is obvious: an internal fight for control of a nation. But in the bloody conflicts that split modern states, our policy makers often understand something deeper to be at work. The vengeful slaughter that has ripped apart Bosnia, Rwanda, Syria, and Yemen is most often seen as the armed eruption of ancient and complex hatreds. Afghanistan is embroiled in a nearly impenetrable melee between Pashtuns and smaller ethnic groups, according to this thinking; Iraq is split by a long-suppressed Sunni-Shia feud. The coalitions fighting these wars are seen as motivated by the deepest sort of identity politics, ideologies concerned with group survival and the essence of who we are.
This view has long shaped America’s engagement with countries enmeshed in civil war. It is also wrong, argues Fotini Christia, an up-and-coming political scientist at MIT.
In a new book, “Alliance Formation in Civil Wars,” Christia marshals in-depth studies of the recent wars in Afghanistan, Iraq, and Bosnia, along with empirical data from 53 civil conflicts, to show that in one civil war after another, the factions behave less like enraged siblings and more like clinically rational actors, switching sides and making deals in pursuit of power. They might use compelling stories about religion or ethnicity to justify their decisions, but their real motives aren’t all that different from armies squaring off in any other kind of conflict.
How we understand civil wars matters. Most civil wars drag on until they’re resolved by a foreign power, which in this era almost always includes the United States. If she’s right and we’re mistaken about what motivates the groups fighting in these internecine free-for-alls, then we’re likely to misjudge our inevitable interventions, waiting too long or guessing wrong about what to do.
CIVIL WARS ALWAYS have loomed large in the collective consciousness. Americans still debate theirs so vociferously that a blockbuster film about Abraham Lincoln feels topical 150 years after his death. Eastern Europe saw several years of ferocious killing in the round of civil wars that followed World War II.
Such wars have been understood as fights over differences that can’t be resolved any other way: fundamental questions of ideology, identity, creed. A disputed border can be redrawn; not so an ethnic grudge. In the last two decades, identity has become the preferred explanation for persistent conflicts around the world, from Chechnya to Armenia and Azerbaijan to cleavages between Muslims and Christians in Nigeria.
This thinking allows for a simple understanding, and conveniently limits the prospect for a solution. Any identity-based cleavage—Jew vs. Muslim, Bosnian vs. Serb, Catholic vs. Orthodox—is so profoundly personal as to be immutable. The conventional wisdom is best exemplified by a seminal 1996 paper by political scientist Chaim Kaufmann, “Possible and Impossible Solutions to Ethnic Civil Wars,” which argues that bitterly opposed populations will only stop fighting when separated from each other, preferably by a major natural barrier like a river or mountain range.
During the 1990s, this sort of ethnic determinism drove American policy toward Bosnia and Rwanda. It was popularized by Robert Kaplan’s book “Balkan Ghosts,” which was read in the Clinton White House and presented the wars in the former Yugoslavia as just the latest chapter in an insoluble, four-century ethnic feud. Like Kaufmann, Kaplan suggested that the grievances in civil wars could only be managed, never reconciled.
After 9/11, policy makers in Washington continued to view civil wars through this prism, talking about tribes and sects and ethnic groups rather than minority rights, systems of government, and resource-sharing. That view was so dominant that President Bush’s team insisted on designing Iraq’s first post-Saddam governing council with seats designated by sect and ethnicity, against the advice of Iraqis and foreign experts. It became a self-fulfilling prophecy as Iraq’s ethnic civil war peaked in 2006; things settled down only after death squads had cleansed most of Iraq’s mixed neighborhoods, turning the country into a patchwork of ethnically homogenous enclaves. Similarly, this thinking has shaped US policy in Afghanistan, where the military even sent anthropologists to help its troops understand the local culture that was considered the driving factor in the conflict.
Christia grew up in the northern Greek city of Salonica in the 1990s, with the Bosnian war raging just over the border. “It was in our neighborhood and we discussed it vividly every night over dinner,” she says. The question of ethnicity seized her imagination: Were different peoples doomed to conflict by incompatible identities? Or were the decision-makers in civil wars working on a different calculus from their emotional followers? As a graduate student at Harvard, Christia flew to Afghanistan and tried to turn a dispassionate political scientist’s eye to the question of why warlords behave the way they do.
Christia spent years studying these warlords, the factional leaders in a civil war that broke out in the late 1970s. As a graduate student and later as a professor, she returned to Afghanistan to interview some of the nastiest war criminals in the country. She concluded that culture and identity, while important for their adherents, did not seem to factor into the motives of the warlords themselves, and specifically not in their choices of wartime allies. Despite the powerful rhetoric about ethnic alliances forged in blood, warlords repeatedly flipped and switched sides. They used the same language—about tribe, religion, or ethnicity—whether they were fighting yesterday’s foe or joining him.
If ethnicity, religion, and other markers of identity didn’t matter to warlords, Christia asked, what did? It turns out the answer was simple: power. After studying the cases of Afghanistan, Bosnia, and Iraq in intricate detail, Christia built a database of 53 conflicts to test whether her theory applied more widely. She ran regression analyses and showed that it did: Warlords adjusted their loyalties opportunistically, always angling for the best slice of the future government. It’s not quite as simple as siding with the presumed winner, she says: It’s picking the weakest likely winner, and therefore the one most likely to share power with an ally.
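The decision rule Christia describes—side with the weakest partner whose coalition can still win, to maximize your own share of postwar power—can be sketched as a toy calculation. All faction names, strength numbers, and the winning threshold below are hypothetical, invented purely for illustration; this is not her actual model or data.

```python
# Toy illustration of the "weakest likely winner" alliance rule.
# A coalition "wins" here if its combined strength clears a fixed
# threshold; among winning coalitions, a warlord prefers the weakest
# partner, because a weak partner must share more postwar power.

def best_ally(my_strength, others, threshold):
    """Return the name of the weakest partner whose combined strength
    with us still clears the winning threshold, or None if no
    coalition can win."""
    viable = [(name, s) for name, s in others.items()
              if my_strength + s >= threshold]
    if not viable:
        return None
    # Among coalitions strong enough to win, pick the weakest partner.
    return min(viable, key=lambda pair: pair[1])[0]

factions = {"A": 50, "B": 30, "C": 20}
# A warlord with strength 25 who needs a combined 50 to prevail
# passes over the strongest faction A and allies with B instead.
print(best_ally(25, factions, threshold=50))  # prints "B"
```

Note that the rule is not "join the strongest": allying with A would also win, but A would have little reason to share the spoils.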
In this model of warlord behavior, the many factions in a civil war are less like Cain and Abel and more like the mafia families in “The Godfather” trilogy. Loyalties follow business interests, and business interests change; meanwhile, the talk about family and blood keeps the foot soldiers motivated. In Bosnia, one Muslim warlord joined forces with the Serbs after the Serbs’ horrific massacre of Muslims at Srebrenica, and justified his switch by saying that the central government in Sarajevo was run by fanatics while he represented the true, moderate Islam. In case after case of intractable civil wars—Afghanistan, Lebanon, Iraq, the former Yugoslavia—Christia found similar patterns of fluid alliances.
“The elites make the decision, and then sell it to the people who follow them with whatever narrative sticks,” Christia said. “We’re both Christians? Or we’re both minorities? Or we’re both anti-communist? Whatever sticks.”
CHRISTIA’S WORK has been received with great interest, though not all her academic colleagues agree with her conclusions. Critics say identity is more important in civil wars than she gives it credit for, and we ignore it at our peril. Roger Petersen, an expert on ethnic war and Eastern Europe who is a colleague of Christia’s at MIT and supervised her dissertation, argues that in some conflicts, identity—ethnic, religious, or ideological—is truly the most important factor. Leaders might make a pact with the devil to survive, but once a conflict heads to its conclusion, irreconcilable conflicts often end with a fight to the death. Communists and nationalists fought for total victory in Eastern Europe’s civil wars, with no regard to their fleeting coalitions of opportunity against foreign occupiers during World War II. More recently, Bosnia’s war only ended after the country had split into ethnically cleansed cantons.
Christia acknowledges that her theory needs further testing to see if it applies in every case. She is currently studying how identity politics play out at the most local level in present-day Syria and Yemen.
If it holds up, though, Christia’s research has direct bearing on how we ought to view the conflict today in a nation like Syria. The teetering dictatorship is the stronghold of the minority Alawite sect in a Sunni-majority nation. And leader Bashar Assad has rallied his constituents on sectarian grounds, saying his regime offers the only protection for Syria’s minorities against an increasingly Sunni uprising. But Syria’s rebellion comprises dozens of armed factions, and Christia suggests that these militants, who run the gamut of ethnic and sectarian communities, will be swayed more by the prospect of power in a post-Assad Syria than by ethnic loyalty. That would mean the United States could win the loyalty of different fighting factions by ignoring who they are—Sunni, Kurd, secular, Armenian, Alawite—and by focusing instead on their willingness to side with America or international forces in exchange for guns, money, or promises of future political power.
For America, civil wars elsewhere in the world might seem like somebody else’s problem. But in reality we’re very likely to end up playing a role: Most civil wars don’t end without foreign intervention, and America is the lone global superpower, with huge sway at the United Nations. Christia suggests that Washington would do well to acknowledge early on that it will end up intervening in some form in any civil war that threatens a strategic interest. That doesn’t necessarily mean boots on the ground, but it means active funding of factions and shaping of the alliances that are doing the fighting. In a war like Syria’s, that means the United States has wasted precious time on the sidelines.
Despite her sustained look at the worst of human conflict, Christia says she considers herself an optimist: People spend most of their history peacefully coexisting with different groups, and only a tiny portion of the time fighting. And once civil wars do break out, the empirical evidence shows that hatreds aren’t eternal. “If identities mattered so much,” she says, “you wouldn’t see so much shifting around.”
[Originally published in The Boston Globe Ideas.]
ROCKETS AND MORTARS have stopped flying over the border between Gaza and Israel, a temporary lull in one of the most intractable, hot-and-cold wars of our time. The hostilities of late November ended after negotiators for Hamas and Israel—who refused to talk face-to-face, preferring to send messages via Egyptian diplomats—agreed to a rudimentary cease-fire. Their tenuous accord has no enforcement mechanism and doesn’t even nod to discussing the festering problems that underlie the most recent crisis. Both sides say they expect another conflict; experience suggests it’s just a question of when.
Generations of negotiators have cut their teeth trying to forge a peace agreement between Israel and the Palestinians, and their failures are as varied as they are numerous: Camp David, Madrid, the Oslo Accords, Wye River, Taba, the Road Map. For diplomats and deal-makers around the world—even those with no particular stake in Middle East peace—Israel and Palestine have become the ultimate test of international negotiations.
For Guy Olivier Faure, a French sociologist who has dedicated his career to figuring out how to solve intractable international problems, they’re something else as well: an almost unparalleled trove of insights into how negotiations can go wrong.
For more than 20 years, Faure has studied not only what makes negotiations around the world succeed, but how they break down. From Israel and the Palestinians to the Biological Weapons Convention protocol to the ongoing talks about Iran’s nuclear program, it’s far more common for negotiations to fail than to work out. And it’s from these failures, Faure says, that we can harvest a more pragmatic idea of what we should be doing instead. “In order to not endlessly repeat the same mistakes, it is essential to understand their causes,” he says.
TODAY’S INTERNATIONAL order turns on successful negotiation. When we think about what’s right in the world, we’re often thinking about the results of agreements like the START treaties, which ended the nuclear arms race between the United States and the Soviet Union; the Geneva Conventions, which govern the conduct of war; or even the General Agreement on Tariffs and Trade, drafted in 1948, which still underpins globalized free trade.
But in negotiations over the most vexing international problems—a hostage situation, a war between a central government and terrorist insurgents, a new multinational agreement—such successes are few and far between. Failure is the norm. Understandably, experts tend to focus on the wins. From US presidents to obscure third-party diplomats, negotiators pore over rare historical successes for tips rather than face the copious and dreary overall record.
Faure wants to change that focus. As an expert he straddles two worlds: He studies diplomacy academically as a sociologist at the Sorbonne, in Paris, and has also trained actual negotiators for decades, at the European Union, the World Trade Organization, and UN agencies. Over his career, he has produced 15 books spanning all the different theories behind negotiation, and ultimately concluded that negotiations that failed, or simply sputtered out inconclusively, were the most interesting. Each failure had multiple causes, but it was possible to compile a comprehensive list, and from that, consistent patterns.
“Unfinished Business” takes a look at what happened during a number of high-profile failures, and examines the underlying conditions of each set of talks: trust, cultural differences, psychology, the role of intermediaries, and outsiders who can derail negotiations or overload them with extraneous demands.
One of Faure’s insights concerns the mindset of negotiators—a factor negotiators themselves often believe is irrelevant, but which Faure and his colleagues believe can often determine the outcome. Incompatible values on the two sides of the table, he says, are much harder to bridge than practical differences, like an argument over a boundary or the mechanics of a cease-fire. As Faure says, “A quantity can be split, but not a value.” This is what Faure saw at work when the Palestinians and Israelis embarked on a rushed negotiation at the Egyptian seaside resort of Taba during Bill Clinton’s final month in office. The two sides had already reached an impasse at a lengthier negotiation in 2000 at Camp David. With the end of his presidency looming and Israeli elections coming up, Clinton summoned them back to the table for a no-nonsense session he hoped would bring speedy closure to disputes over borders, Jerusalem, and refugees. The Palestinians, however, felt that the two sides simply didn’t share the same view of justice and weren’t truly aiming at the same goal of two sovereign states—and so didn’t feel driven to make a deal. That mismatch of long-term beliefs, Faure says, doomed the talks.
There are other warning signs that emerge as patterns in failed talks. Time and again, parties embark on tough negotiations already convinced they will fail—a defeatism that becomes a self-fulfilling prophecy. In interviewing professional negotiators, Faure and his colleagues found that they often don’t pay that much attention to the practical aspects of how to run a negotiation—a surprising lapse.
Faure and his team have found that a well-planned process is one of the best predictors of success, and that many negotiations are terrible at it. When the European Union and the United States talked to Iran about its nuclear program, various European countries kept adding extraneous issues to the talks, for instance linking Iran’s behavior with nukes to existing trade agreements. The additions made the negotiations unwieldy, and provoked crises over matters peripheral to the actual subject. In the case of the mediation over Cyprus, the Greek and Turkish sides didn’t bother coming up with any tangible proposed solution to negotiate over, instead talking vaguely about a Swiss model. Negotiations failed in part because neither side knew what that would mean for Cyprus.
Ultimately, Faure argues, mistrust and inflexibility tangle up negotiators more than any other factor. Negotiators often end up demonizing the other side, and as a result might embark on a process that by its structure encourages failure. For instance, Israeli and Palestinian reliance on mediators to ferry messages—even between delegations in the same resort—maximizes misunderstandings and minimizes the possibility that either side will sense a genuine opening.
WHAT EMERGES FROM Faure’s work, overall, is that the outlook for negotiations is usually pretty bleak—certainly bleaker than Faure himself prefers to highlight. In some cases, he suggests that diplomats should put off an outright negotiation until they’ve dealt with gaps in trust and cultural communication, or until the conflict feels “ripe” for solution to the parties involved. There’s no point, he suggests, in embarking on a negotiation if all the stakeholders are convinced it’s a waste of time—indeed, a failed negotiation can sometimes exacerbate a problem.
The most promising scenarios occur when both sides are suffering under the status quo, which creates what social scientists call a “mutually hurting stalemate,” with soldiers or civilians dying on both sides, and a “mutually enticing opportunity” if there’s a peace agreement or a prisoner swap. In that case, a decent deal will give both sides a chance to genuinely improve their lot.
Unfortunately for the many whose hopes are riding on negotiations, the truly challenging international problems of our age don’t always come with a strong incentive to compromise. In military conflicts, there is little incentive to resolve matters when a conflict is lopsided in one side’s favor (Shia versus Sunni in Iraq, Israel versus Hamas, the Taliban versus the United States in Afghanistan). The same holds in broader international agreements: They’re complicated and intractable largely because the states involved are—no matter what they say—quite comfortable with the status quo. Think about climate change: The biggest gas-guzzlers and polluters, the ones whose assent matters the most for a carbon-reduction treaty, are often the last states that will pay the price for rising oceans. Meanwhile, the poorer nations whose populations are most at the mercy of sea levels or changing weather have little clout. Just as it’s easy for a relatively secure Israel to stand pat on the Palestinian question, there are few immediate consequences for the United States and China if they sit out climate talks.
It’s not all bleak news. Even in cases where negotiations appear hamstrung—like climate change and Palestine—there are, Faure points out, plenty of other reasons to continue negotiating. Negotiations are a form of diplomacy, dialogue, and recognition, and even in failure can serve some other interests of the parties involved. But—as the impressive historical record of failed international agreements shows—it’s naive to think that they will always yield a solution.
Cops say they figure out a suspect’s intentions by watching his hands, not by listening to what comes out of his mouth. The same goes for American foreign policy. Whatever Washington may be saying about its global priorities, America’s hands tend to be occupied in the Middle East, site of all America’s major wars since Vietnam and the target of most of its foreign aid and diplomatic energy.
How to handle the Middle East has become a major point in the presidential campaign, with President Obama arguing for flexibility, patience, and a long menu of options, and challenger Mitt Romney promising a tougher, more consistent approach backed by open-ended military force.
Lurking behind the debate over tactics and approach, however, is a challenge rarely mentioned. The broad strategy that underlies American policy in the region, the Carter Doctrine, is now more than 30 years old, and in dire need of an overhaul. Issued in 1980 and expanded by presidents from both parties, the Carter Doctrine now drives American engagement in a Middle East that looks far different from the region for which it was invented.
President Jimmy Carter confronted another time of great turmoil in the region. The US-supported Shah had fallen in Iran, the Soviets had invaded Afghanistan, and anti-Americanism was flaring, with US embassies attacked and burned. His new doctrine declared a fundamental shift. Because of the importance of oil, security in the Persian Gulf would henceforth be considered a fundamental American interest. The United States committed itself to using any means, including military force, to prevent other powers from establishing hegemony over the Gulf. In the same way that the Truman Doctrine and NATO bound America’s security to Europe’s after World War II, the Carter Doctrine elevated a crowded and contested Middle Eastern shipping lane to nearly the same status as American territory.
In 2012, we look back on a level of American engagement with the Middle East never seen before. Even the failures have been failures from which we can learn. The decade that began with the US invasion of Afghanistan and ended with a civil war in Syria holds some transformative lessons, ones that could point the next president toward a new strategy far better suited to what the modern Middle East actually looks like—and to America’s own values.
President Carter issued his new doctrine in what would turn out to be his final State of the Union speech in January 1980. America had been shaken by the oil shocks of the 1970s, in which the Arab-dominated OPEC asserted its control, and also by the fall of the tyrannical Mohammad Reza Pahlavi, Shah of Iran, who had been a stalwart security partner to the United States and Israel.
Nearly everyone in America and most Western economies shared Carter’s immediate goal of protecting the free flow of oil. What was significant was the path he chose to accomplish it. Carter asserted that the United States would take direct charge of security in this turbulent part of the world, rather than take the more indirect, diplomatic approach of balancing regional powers against each other and intervening through proxies and allies. It was the doctrine of a micromanager looking to prevent the next crisis.
Carter’s focus on oil unquestionably made sense, and the doctrine proved effective in the short term. Despite more war and instability in the Middle East, America was insulated from oil shocks and able to begin a long period of economic growth, in part predicated on cheap petrochemicals. But in declaring the Gulf region an American priority, it effectively tied us to a single patch of real estate, a shallow waterway the same size as Oregon, even when it was tangential, or at times inimical, to our greater goal of energy security. The result has been an ever-increasing American investment in the security architecture of the Persian Gulf, from putting US flags on foreign tankers during the Iran-Iraq war in the 1980s, to assembling a huge network of bases after Operation Desert Storm in 1991, to the outright regime-building effort of the Iraq War.
In theory, however, none of this is necessary. America doesn’t really need to worry about who controls the Gulf, so long as there’s no threat to the oil supply. What it does need is to maintain relations in the region that are friendly, or friendly enough, and able to survive democratic changes in regime—and to prevent any other power from monopolizing the region.
The Carter Doctrine, and the policies that have grown up to enforce it, are based on a set of assumptions about American power that might never have been wholly accurate. They assume America has relatively little persuasive influence in the region, but a great deal of effective police power: the ability to control major events like regional wars by supporting one side or even intervening directly, and to prevent or trigger regime change.
Our more recent experience in the Middle East has taught us the opposite lesson. It has become painfully clear over the last 10 years that America has little ability to control transformative events or to order governments around. Over the past decade, when America has made demands, governments have resolutely not listened. Israel kept building settlements. Saudi Arabia kept funding jihadis and religious extremists. Despots in Egypt, Syria, Tunisia, and Libya resisted any meaningful reform. Even in Iraq, where America physically toppled one regime and installed another, a costly occupation wasn’t enough to create the Iraqi government that Washington wanted. The long-term outcome was frustratingly beyond America’s control.
When it comes to requests, however, especially those linked to enticements, the recent past has more encouraging lessons. Analysts often focus on the failings of George W. Bush’s “freedom agenda” period in the Middle East; democracy didn’t break out, but the evidence shows that no matter how reluctantly, regional leaders felt compelled to respond to sustained diplomatic requests, in public and private, to open up political systems. It wasn’t just the threat of a big stick: Egypt and Israel weren’t afraid of an Iraq-style American invasion, yet they acceded to diplomatic pressure from the secretary of state to liberalize their political spheres. Egypt loosened its control over the opposition in 2005 and 2006 votes, while Israel let Hamas run in (and win) the 2006 Palestinian Authority elections. Even prickly Gulf potentates gave dollops of power to elected parliaments. It wasn’t all that America asked, but it was significant.
Paradoxically, by treating the Persian Gulf as an extension of American territory, Washington has reduced itself from global superpower to another neighborhood power, one that can be ignored, or rebuffed, or hectored from across the border. The more we are committed to the Carter Doctrine approach, which makes the military our central tool and physical control of the Gulf waters our top priority, the less we are able to shape events.
The past decade, meanwhile, suggests that soft power affords us some potent levers. The first is money. None of the Middle Eastern countries have sustainable economies; most don’t even have functional ones. The oil states are cash-rich but by no means self-sufficient. They’re dependent on outside expertise to make their countries work, and on foreign markets to sell their oil. Even Israel, which has a real and diverse economy, depends on America’s largesse to undergird its military. That economic power gives America lots of cards to play.
The second is defense. The majority of the Arab world, plus Israel, depends on the American military to provide security. In some cases the protection is literal, as in Bahrain, Qatar, and Kuwait, where US installations project power; elsewhere, as in Saudi Arabia, Egypt, and Jordan, it’s indirect but crucial. (American contractors, for instance, maintain Saudi Arabia’s air force.) America’s military commitments in the Middle East aren’t something it can take or leave as it suits; they amount to a marriage, not a dalliance. A savvier diplomatic approach would remind beneficiaries that they can’t take this protection for granted, and that they need to respond to the nation that provides it.
The Carter Doctrine clearly hasn’t worked out as intended; America is more entangled than ever before, while its stated aims—a secure and stable Persian Gulf, free from any outside control but our own—seem increasingly out of reach. A growing, bipartisan tide of policy intellectuals has grappled with the question of what should replace it, especially given our recent experience.
One response has been to seek a more morally consistent strategy, one that seeks to encourage a better-governed Middle East. This idea has percolated on the left and the right. Alumni of Bush’s neoconservative foreign-policy brain trust, including Elliott Abrams, have argued that a consistent pro-democratic agenda would better serve US interests, creating a more stable region that is less prone to disruptions in the oil supply. Voices on the left have made a similar argument since the Arab uprisings; they include humanitarian interventionists like Anne-Marie Slaughter at Princeton, who argue for stronger American intervention in support of Syria’s rebels. Liberal fans of development and political freedoms have called for a “prosperity agenda,” arguing that societies with civil liberties and equitably distributed economic growth are not only better for their own citizens but make better American allies.
Then there’s a school that says the failures of the last decade prove that America should keep out of the Middle East almost entirely. Things turn out just as badly when we intervene, these critics argue, and it costs us more; oil will reach markets no matter how messy the region gets. This school includes small-footprint realists like Stephen Walt at Harvard and pugilistic anti-imperial conservatives like Andrew Bacevich at Boston University. (Bacevich argues that the more the US intervenes with military power to create stability in the oil-producing Middle East, the more instability it produces.)
While the realists think we should disentangle from the region because the US can exert strategic power from afar, others say we should pull back for moral reasons as well. That’s the argument made over the last year by Toby Craig Jones, a political scientist at Rutgers University who says that the US Navy should dissolve its Fifth Fleet base so it can cut ties with the troublesome and oppressive regime in Bahrain. America’s military might guarantees that no power—not Iran, not Iraq, not the Russians—can sweep in and take control of the world’s oil supply. Therefore, the argument goes, there’s no need for America to attend to every turn of the screw in the region.
What’s clear, from any of these perspectives, is that the Carter Doctrine is a blunt tool from a different time. It’s now possible, even preferable, to craft a policy more in keeping with the modern Middle East, and also more in line with American values. It might sound obvious to say that Washington should be pushing for a liberalized, economically self-sufficient, stable, but democratic Middle East, and that there are better tools than military power to reach those aims. In fact, that would mark a radical change for the nation—and it’s a course that the next president may well find within his power to plot.
[Originally published in The Boston Globe Ideas section.]
[Originally published in The Boston Globe.]
Right now, the Islamic world is in the midst of a grand experiment. After decades facing an unappetizing choice among secular dictatorship, monarchy, and Iranian-style theocracy, nations across the region are grappling with how to build genuinely modern governments and societies that take into account the Islamist principles shared by a majority of voters.
As they do, a shadow hangs over their prospects. Islamic nations in the Middle East on the whole have underperformed their counterparts in the West. Asian nations that were poorer than the Arab world at the beginning of the Cold War have overtaken the Middle East. And promising experiments with democracy have been few and far between.
The question of why is a contentious one. Has the Islamic world been held back by its treatment at the hands of history? Or could the roots of the problem lie in its shared religion—in the Koran, and Islamic belief itself?
A provocative new answer is emerging from the work of Timur Kuran, a Turkish-American economist at Duke University and one of the most influential thinkers about how, exactly, Islam shapes societies. In a growing body of work, Kuran argues that the blame for the Islamic world’s economic stagnation and democracy deficit lies with a distinct set of institutions that Islamic law created over centuries. The way traditional Islamic law handled finance, inheritance, and incorporation, he argues, held back both economic and political development. These practices aren’t inherent in the religion—they emerged long after the establishment of Islam, and have partly receded from use in the modern era. But they left a profound legacy in many societies where Islam held sway.
Kuran’s critics think he unfairly impugns religious law. Pakistani scholar Arshad Zaman argues that Kuran misunderstands the very nature of Islamic law and business practice, which elevate worthwhile economic goals such as income equality and social justice above growth. Others argue that the harm suffered at the hands of Western colonial powers is far more important in explaining why the Muslim world is struggling today.
Kuran himself sees his work as coming from a sympathetic perspective: He wants to combat the argument that Islam is incompatible with modernity and liberty, a notion he decries as “one of the most virulent ideas of our time.” He worries about anti-Islamic sentiment from outside the religion, as well as the rigid and defensive posture of some orthodox Islamists. (He pointedly avoids discussing his own faith. “I write as a scholar,” he says.)
Thanks in part to this careful navigation, Kuran’s scholarship gives economists, and perhaps political leaders in the Middle East, a way to talk about the Islamic world’s problems without resorting to crude stereotypes or heightened “clash of civilizations” rhetoric. In Kuran’s analysis, Islam itself is neither the problem nor the solution; indeed, most of the rigid practices have long been supplemented by or in some cases abandoned for more Western models.
However, his work does carry stark implications for countries such as Egypt, Libya, and Tunisia, whose emerging political futures are likely to be shaped by Islamist majorities or pluralities. A democratic renaissance could paradoxically lead to more stagnation if it imposes calcified institutions of Islamic yesteryear on modern society. If Kuran is right, the nations of the Arab Spring face a conundrum: The institutions most in keeping with a society’s religious principles could be the ones most likely to hold it back.
While Europe suffered centuries of decline and intellectual darkness, the early Islamic world bubbled with vitality. Competing schools of Islamic jurisprudence produced texts still consulted as references today, while merchants and caliphs left copious written records for future scholars to study. In the course of his work, Kuran was able to comb through business records and commercial ledgers spanning more than a millennium.
What he found, he says, was that two legal traditions pervasive in the Islamic world became especially limiting: the laws governing the accumulation of capital, and those governing how institutions were organized. The growth of capital was limited by laws of inheritance and Islamic partnership, which required that large fortunes and enterprises be split up with each passing generation. The waqf, or Islamic trust, had even greater ramifications, because it determined the structure of most social relationships and had wide-ranging consequences for civil society.
Under Islamic law, the trust—rather than the corporation—is the most common legal unit of organization for entities outside the government. Until modern times, cities, hospitals, schools, parks, and charities were all set up and governed by the immutable deed of an Islamic trust. Under its terms, the founder of the trust donates the land or other capital that funds it in perpetuity, and sets its rules in the deed. They can never be altered or amended. The waqf was developed by Islamic scholars in the centuries after the religion was established, drawing on Koranic principles barring usury and demanding justice in business. (It is not an institution stipulated by the Koran itself.) Much like a trust in the West, a waqf is not “governed” so much as executed. It is also limited in what it can do. A waqf is prohibited from engaging in politics, which means it cannot form coalitions, pool its resources with other organizations, or oppose the state.
Drawing on voluminous study of the mechanisms of money, power, and law going back to the 7th century founding of Islam, Kuran draws a picture of nations whose rulers wielded central and often highly authoritarian power, and faced little challenge from either business owners or a waqf-bound civil society.
Over time, he argues, this structure led to a radically different social system than the one that arose in the West. There, the rise of the corporation created a vehicle for prosperity and a civilian counterweight to state power—an institution that could adapt and grow, survive from one generation to the next, and pay benefits to its shareholding owners, who are thus motivated to steer it toward expansion and influence. Nonprofit corporations enjoy similar flexibility and freedom of action, though they don’t have shareholders.
“In the West, you had universities, unions, churches, organized as corporations that were free to make coalitions, engage in politics, advocate for more freedoms, and they became a civil society,” Kuran said in an interview. “Democracy is a system of checks and balances. It can’t develop if a population is passive.”
In modern times, Islamic nations have adopted Western institutions like corporations and banks to manage their affairs. Municipalities and private enterprise are now more commonly incorporated rather than set up as trusts. But the trust remains pervasive, especially in the realm of social services: Hospitals, schools, and aid societies are still almost always trusts rather than corporations—a factor that correlates with their quiescence in balancing state power.
In focusing on the specific legal institutions of Islamic civic life, Kuran’s thesis directly targets those “apologists” who blame the economic and political problems of the Middle East solely on colonialism and other outside forces. He also takes aim at essentialists who hold Islam as a religion responsible for the problems of Islamic countries. In fact, he argues, Islamic states that have embraced modernization programs and gone through the sometimes painful process of adopting new institutions, as Turkey and Indonesia have, have had great success in developing both democracy and economic prosperity widely shared among citizens.
While some critics, like Arshad Zaman, attack Kuran from an Islamic perspective, others share his approach but dispute his findings. Maya Shatzmiller, a historian at Western University in Canada, believes that the real causes of economic growth are particular to the circumstances of each individual region, and that by focusing on notional qualities common to the entire Islamic world, Kuran’s work generically indicts Islam without offering real insight into the economic problems of individual Middle Eastern states.
Kuran believes the evidence of a gap between the Islamic world and the West is undeniable and merits serious examination of what those countries have in common. And it’s patronizing, he writes, to suggest that the Islamic world will be offended by a vigorous debate on the subject.
Kuran’s approach has influenced other social scientists to use similar tools in the hope of offering more precise and useful answers. Eric Chaney, a Harvard economist, uses the mathematical modeling of econometrics to pinpoint the historical factors that correlate with lagging democratization and development in the Islamic world. Jared Rubin, an economist at Chapman University in California, studies the effect of technologies like the printing press on economic disparities. Jan Luiten van Zanden, a renowned Dutch historian, has begun a deep comparative study of the organization of cities in the Islamic world and the West.
For the new architects of Islamic politics, Kuran’s work offers a clear blueprint, though perhaps a difficult one to follow. It suggests that states heavily reliant on Islamic law may need to reformulate their approach, extending Western-style rules to organize their nonstate entities: banks, companies, nonprofits, political parties, religious societies. Over time, this will seed a more empowered civic society and ultimately pull greater numbers of citizens into the fabric of political life.
It’s a challenge, however. Authoritarian states are unlikely to promote reforms that will weaken their control. And the resurgence of Islamist politics has created a new wave of support for a more doctrinaire application of Islamic law and traditions.
Another barrier to reform is the slow pace of cultural change. Once modern institutions are in place, Kuran warns, it takes a long time for their use to become widespread and for people to trust them. Simply put, for an institution to grow powerful and influential, whether it’s a bank or a political party, it needs to build support from a large, trusting public of strangers. Much of the Middle East still operates on smaller units, in which customers or citizens expect to know who’s running the company or institution that serves them. For example, Kuran points out that despite the prevalence of banks, only one in five families in the Arab world actually has a bank account. (By comparison, three in five Turkish families have bank accounts, and nearly every US family does.)
Most broadly, change requires a shift in the constraints on civil society. In recent decades, Middle Eastern regimes have systematically destroyed any opposition and kept rigid control over the media, official religious groups, and any body that might develop a political identity, from university faculty to labor unions. The most effective dissent survived deep within mosques, where even the most repressive police states hesitated to go.
In the long run, to end the cycle of autocracy and violence, the Islamic world will need space for civil society to grow outside the constraints of the state and the mosque. Only then will citizens grow accustomed to making decisions that have traditionally been made on their behalf. And breaking the old habits, on the street and in election booths, will likely take time.
“The state itself,” Kuran says, “cannot change the way people relate to each other.”
[Published in The Boston Globe Ideas section.]
When President Obama and Mitt Romney cross swords on defense policy, it can sound like a schoolyard fight: Who loves the military more? Who is tougher? Who would lead a more muscular America?
This is the way we expect candidates to talk about defense: in terms of power, force, even national pride. But when it comes to the role the Department of Defense actually plays for the nation, that framing increasingly misses the point. Over the past decade, the Pentagon has become far more complex than the conversation about it would suggest. What “military” means has changed sharply as the Pentagon has acquired an immense range of new expertise. What began as the world’s most lethal strike force has grown into something much more wide-ranging and influential.
Today, the Pentagon is the chief agent of nearly all American foreign policy, and a major player in domestic policy as well. Its planning staff is charting approaches not only toward China but toward Latin America, Africa, and much of the Middle East. It’s in part a development agency, and in part a diplomatic one, providing America’s main avenue of contact with Africa and with pivotal oil-producing regimes. It has convened battalions of agriculture specialists, development experts, and economic analysts that dwarf the resources at the disposal of USAID or the State Department. It’s responsible for protecting America’s computer networks. In May of this year, the Pentagon announced it was creating its own new clandestine intelligence service. And the Pentagon has emerged as a surprisingly progressive voice in energy policy, openly acknowledging climate change and funding research into renewable energy sources.
The huge expansion of the Pentagon’s mission has, not surprisingly, rung plenty of alarm bells. In the policy sphere, critics worry about the militarization of American foreign policy, and the fact that much of the world—especially the most volatile and unstable parts—now encounters America almost exclusively in the form of armed troops. Hawkish critics worry that the Pentagon’s ballooning responsibilities are a distraction from its main job of providing a focused and prepared fighting force. But this new reality will be with us for a while, and in the short term it creates an opportunity for the next president. Super-empowered and quickly deployable, the Pentagon has become a one-stop shop for any policy objective, no matter how far removed from traditional warfare.
That means the next administration will have ample room to shape the priorities, and even perhaps to reimagine the mission of a Pentagon that plays a leading role in areas from language research to fighting the drug trade. And it means that voters will need to consider the full breadth of its capabilities when they hear candidates talk about “defense.”
In campaigning so far, neither candidate has seriously engaged with the real challenges of steering the most diverse and powerful entity under his control. For both Obama and Romney, the central question about foreign policy—and even some of their domestic priorities—may be how creatively and effectively they can use the Pentagon to further their aims.
The current balance of power in Washington runs counter to most of American history. Traditionally, the United States has related to the world chiefly through diplomats: A civilian president set the policy, civilian envoys worked to implement it, and gunboats stepped in only when diplomacy failed. Indeed, until World War II, the Department of State outranked Defense in size as well as influence. That began to change in the 1940s, first with the huge mobilization of World War II and then the Cold War. Funding and power began to accumulate permanently in the Pentagon.
In the decade since 9/11, the Pentagon has undergone another transformation. The military was asked to fight two complex wars in Afghanistan and Iraq, while also engaging in a sprawling operation dubbed the Global War on Terror. In practice, this meant soldiers and other troops were asked to design nation-building operations on the fly; produce the kind of pro-democracy propaganda that decades earlier was the province of the Voice of America; and do police, intelligence, and development work in conflict zones that had long bedeviled experts in far more stable locales. In Iraq, the Pentagon was essentially expected to provide the full gamut of services normally offered by a national government. Army commanders in provincial outposts dispensed cash grants to business start-ups, supervised building renovations, managed police forces, and built electricity plants.
As American involvement in those wars winds down, we are left with a Department of Defense that has become Washington’s default tool for getting things done in the world. Unlike diplomats, who serve abroad for limited stints and who can refuse to work in dangerous places, military personnel have to go where ordered, and stay as long as the government needs them. They haven’t always succeeded, leaving any number of failed governance projects in their wake. But it’s understandable why the White House has turned more and more often to warriors. The military is undeniably good at taking action: A lieutenant colonel can spend a hundred thousand dollars on a day’s notice to dig a well or refurbish a mayor’s office or rebuild a village market. In contrast, civilian USAID specialists operating under the agency’s rules would take months, or even years, to put out bids and hire a local subcontractor to do the same job.
“The president who comes into office and thinks about what he wants to do, when he looks around for capabilities he tends to see someone in uniform,” says Gordon Adams, an American University political scientist and expert in the defense budget. “The uniformed military are really the only global operational capacity the president has.”
And that capacity stretches into some surprising domains. The Pentagon maintains an international rule of law office staffed with do-gooder lawyers. It has trained and deployed agriculture battalions. Its regional commands, as well as its war-fighting generals in Afghanistan and Iraq, have tapped hundreds of economists, anthropologists, and other field experts as unconventional military assets. Its special operators conduct the kind of clandestine operations once reserved for the CIA, but also do a lot of in-the-field political advising for local leaders in unstable countries. The US Cyber Command runs a kind of geek tech shop in charge of protecting America’s computer networks. The world’s most high-tech navy runs counter-piracy missions off the coast of Somalia, essentially serving as a taxpayer-funded security force for private shipping companies. Much of drug policy is executed by the military, which is in charge of intercepting drug shipments and has been the key player in drug-supplying countries like Colombia.
With little fanfare, the Pentagon—currently the greatest single consumer of fossil fuels in all of America, accounting for 1 percent of all use—has begun promoting fuel efficiency and alternate energy sources through its Office of Operation Energy Plans and Programs. Using its gargantuan research and development budget, and its market-making purchasing power, the Defense Department has demanded more efficient motors and batteries. Its approach amounts to a major official policy shift and huge national investment in green energy, sidestepping the ideological debate that would likely hamstring any comparable effort in Congress.
This huge expansion of what the Department of Defense does is not the same thing as a runaway military, though there are critics who see it that way. At the height of the Cold War, the United States dedicated far more of its budget to defense—around 60 percent, compared to 20 percent now. It is more a matter of vast “mission creep.” Inevitably, it is to the Pentagon that the government will turn when it faces urgent, unexpected needs: Hurricane Andrew, the 2004 tsunami in Asia, propaganda in the Islamic world. Men and women in camouflage uniforms can be found helping domestic law enforcement pursue cattle rustlers in North Dakota using loaned military drones, or working with Afghan farmers to increase crop yields.
Paul Eaton, a major general in the US Army who retired in 2006 and now advises a Washington think tank called the National Security Network, describes a meeting he attended in Kampala, Uganda, this May, convened by the American general in charge of the Africa Command, or AfriCom. The top commanders of 35 militaries on the continent gather every other year, hash out policy matters, and forge personal ties.
“It was as much diplomacy and politics as anything else,” Eaton said. “Nobody could give me an example of the State Department doing anything like that.”
What SHOULD a president do about this metamorphosed Pentagon? Or more practically, what should be done with it? A question to watch for in the coming presidential debates is whether either candidate is willing to discuss reorienting the Pentagon toward its core mission of armed defense, shedding its new capacities in the interest of keeping it focused or saving money. Neither candidate has suggested so far that he will.
Pentagon cutbacks are politically difficult. No president likes to argue against national defense, and Pentagon spending by design sprawls across congressional districts, creating a built-in bipartisan lobby against cuts. But a president who tried to return the Pentagon to a more strictly military mission could expect at least some support from the Department of Defense itself. Many career officers view the extra missions with dismay, fearing that the Pentagon will get worse at fighting wars as it spends more and more time patrolling cyberspace, organizing diplomatic retreats, and deploying agricultural battalions to train farmers in war zones. Eaton, whose three children all serve in the armed forces, has been a vocal critic of the new military, and thinks the best thing the next administration could do is defund and shut down all the niche capacities that have sprung up since 9/11.
The past two US defense secretaries, including George W. Bush appointee Robert Gates, have also expressed concern about the department’s expansion. In a 2008 speech, while still in office, Gates ripped into the “creeping militarization” of foreign policy, expressing concern that the Pentagon was like an “800-pound gorilla” taking over the intelligence community, foreign aid, and diplomacy in conflict zones. Both Gates and his successor, Leon Panetta, have vociferously advocated for a bigger, better-funded State Department more capable of deploying around the world, conducting diplomacy in hot zones, and dispensing emergency relief and development aid.
It’s hard to imagine the Pentagon shedding capacity anytime soon. As the wars in Iraq and Afghanistan subside, the Defense Department appears likely to keep most of its enormous budget. During a period when most branches of government will be struggling to survive budget cutting, the Pentagon will more than ever have the global reach and the policy planning muscle to set the agenda and execute foreign policy.
So in the next several months, we should be on the lookout for specific ideas from candidates about what to do with this excess power—at the least, an acknowledgment that it exists. Domestically, the Pentagon has the opportunity to shape university research priorities; it influences White House policy planning anytime a crisis erupts in a new place. Abroad, the military can do considerable good by using its money and expertise to improve quality of life, burnishing America’s reputation as a font of positive development rather than just counter-terrorism and counter-insurgency.
But while it lasts, the breadth of the current military presents grave challenges, not least for a democratic country that in principle, if not always in policy, opposes military dictatorships around the world. Even Pentagon insiders worry about this dissonance. Whatever good our deployments can do, it will be harder to promote civilian ideals so long as our foreign policy wears a uniform.
For the second year in a row, I’ve supervised a team of graduate students at Columbia’s School of International and Public Affairs in a comparative study of Islamist political movements and provincial governance. They’ll be presenting their findings this week at The Century Foundation. You can download a full copy of their report here (PDF).
Religious political movements have been rising in popularity and power across the Islamic world for decades, amassing an ample record in local government. The Arab uprisings are only the most recent manifestation of this long-term trend. Yet there is little empirical study of the behavior of Islamist political parties, with prevailing assumptions never subjected to scrutiny. Conventional wisdom holds that ideology matters more to Islamist parties than to secular ones, and that once in power religious hardliners will moderate. Our study aims to clinically assess the performance of Islamist groups based on socio-economic data. It is difficult to compare Islamist parties across different time periods and national contexts, but we have looked for patterns and causal connections rather than hard-and-fast rules. We compared the ideologies and stated governing platforms of the parties we studied to their political behavior and other outcomes measurable by data. Some general trends emerged:
Islamists invoke religion selectively. The level of Islamist rhetoric varied widely among the parties we studied. Those with an overtly religious discourse used it to gain political support and distinguish themselves from their secular counterparts, but applied religion only to some spheres of governance – usually gender equality and education, rather than issues like the economy or health.
Politics trumps ideology. Islamists respond to pressure from their constituents, displaying flexibility even on central points of doctrine if their political viability is at stake.
Context is controlling. Local structural concerns like economic crises or regime change trump ideology, pragmatically shaping the governing party’s agenda regardless of its stated ideology. The transitional narrative is particularly important in studying the rise of Islamists; many of these groups rose to power after decades of state suppression and underground activism.
Even when an Islamist party rises to power on a wave of religious rhetoric, we found across a variety of national narratives that ideology can be molded or subsumed by public opinion, local conditions, and pragmatic political constraints. We also found that Islamism is most useful as a predictor of political behavior on matters of social policy.
[Originally published in The Boston Globe.]
President Obama and his presumptive challenger Mitt Romney agree on at least one important matter: the world these days is a terrifying place. Romney talks about the “bewildering” array of threats; Obama about the perils of nuclear weapons in the wrong hands. They differ only on the details.
A bipartisan emphasis on threats from outside has always been a hallmark of American foreign-policy thinking, but it has grown more widespread and more heightened in the decade since 9/11. General Martin E. Dempsey, chairman of the Joint Chiefs of Staff, captured the spirit when he spoke in front of Congress recently: “It’s the most dangerous period in my military career, 38 years,” he said. “I wake up every morning waiting for that cyberattack or waiting for that terrorist attack or waiting for that nuclear proliferation.”
The unpredictability and extremism of America’s enemies today, the thinking goes, make them even more threatening than our old conventional foes. The old enemy was a distant army or a rival ideology; today, it’s a theocracy with missiles, or a lone wolf trying to detonate a suitcase nuke in an American city.
But what if the entire political and foreign policy elite is wrong? What if America is safer than it ever has been before, and by focusing on imagined and exaggerated dangers it is misplacing its priorities?
That’s the bombshell argument put forth by a pair of policy thinkers in the influential journal Foreign Affairs. In an essay entitled “Clear and Present Safety: The United States Is More Secure Than Washington Thinks,” authors Micah Zenko and Michael A. Cohen argue that American policy leaders have fallen into a nearly universal error. Across the ideological board, our leaders and experts genuinely believe that the world has gotten increasingly dangerous for America, while all available evidence suggests exactly the opposite: we’re safer than we’ve ever been.
“The United States faces no serious threats, no great-power rival, and no near-term competition for the role of global hegemon,” Cohen says. “Yet this reality of the 21st century is simply not reflected in US foreign policy debates or national security strategy.”
It might seem that the extra caution couldn’t hurt. But Zenko and Cohen argue that excessive worry about security leads America to focus money and attention on the wrong things, sometimes exacerbating the problems it seeks to prevent. If Americans and their leaders recognized just how safe they are, they would spend less on the military and more on the slow nagging problems that undermine our economy and security in less dramatic ways: creeping threats like refugee flows, climate change, and pandemics. More important, the United States would avoid applying military solutions to non-military problems, which they argue has made containable problems like terrorism worse. In effect, they argue, the United States should keep a pared-down military in reserve for traditional military rivals. The bulk of America’s security efforts could then be spent on remedies like policing and development work — more appropriate responses to the terrorism and global crime syndicates that understandably drive our fears.
Why should Americans feel so secure right now? Zenko and Cohen write that a calm appraisal of global trends belies the danger consensus. There are fewer violent conflicts than at almost any point in history, and a greater number of democracies. None of the states that compete with America come close to matching its economic and military might. Life expectancy is up, and so is prosperity. As vulnerable as the nation felt in the wake of 9/11, American soil is still remarkably insulated from attack.
Nonetheless, more than two-thirds of the members of the Council on Foreign Relations — as good a cross-section of the foreign-policy brain trust as there is — said in a 2009 Pew survey that the world today was as dangerous, or even more so, than during the Cold War. Other surveys of experts and opinion-makers showed the same thing: the overwhelming majority of experts believe the world is becoming more dangerous. Zenko and Cohen claim, essentially, that the entire foreign policy elite has fallen prey to a long-term error in thinking.
“More people have died in America since 9/11 crushed by furniture than from terrorism,” Zenko says in an interview. “But that’s not an interesting story to tell. People have a cognitive bias toward threats they can perceive.”
The paper’s authors are, in effect, skewering their own peers. Zenko is a Council on Foreign Relations political scientist with a PhD from Brandeis. Cohen (a colleague of mine at The Century Foundation, and a previous contributor to Ideas) worked as a speechwriter in the Clinton administration and has been a mainstay in the think tank world for a decade.
Zenko and Cohen point out that there are plenty of good-faith reasons that experts tend to overestimate our national risk. A raft of psychological research from the last two decades shows that human nature is biased to exaggerate the threat of rare events like terrorist attacks and to underestimate the threat from common ones like heart attacks. And security policy in general is extremely risk-averse: we expect our military and intelligence community to tolerate no failures. A cabinet secretary who pledged to reduce terrorist attacks to just a few per year would not last long in the job: the only acceptable goal is zero.
Electoral politics, of course, is the greatest driver of what Zenko and Cohen call “domestic threat-mongering.” In an endless contest for votes, Republicans do well by claiming to be tough in a scary world. Democrats adopt the same rhetoric in order to shield themselves from political attacks. Both sides see an advantage in a politically risk-averse strategy. If a minor threat today turns into a sizeable one tomorrow, better to have sounded the alarm early than to have appeared naïve or feckless.
But Zenko and Cohen make the politically uncomfortable argument that it’s wrong to govern based on the prospect of unlikely but extreme events. Instead of marshalling our resources for the 1 percent risk of a nuclear jihadist, as Vice President Dick Cheney argued we should, we should really set our security policy based on the 99 percent of the time when things go America’s way.
They point to the number of wars between major states and the number of people killed in wars every year, both of which have been steadily declining for decades. They also point to the historical, systematic growth of the global economy and spread of financial and trade links, which have undergirded an unprecedented period of peace among rival great powers and within the West.
Their thinking follows in the footsteps of a small but persistent group of contrarian security scholars, who have noted the post-9/11 spike in America’s already long history of threat exaggeration. Best known among them is Ohio State University political scientist John Mueller, who has argued that American alarmism about terrorism can cause more harm to our well-being and national economy than terrorism itself. In that vein, Zenko and Cohen claim that America’s over-militarization prompts avoidable wars and has in fact created far more problems than it solves, from terrorist blowback to huge drains on the Treasury.
The implications, as they see it, are clear: spend less money on the military; spend more on the boring, international initiatives that actually make America safe and powerful. Zenko’s favorite example is loose nukes, which pose hardly any threat today but were a real cause for concern as the Soviet Union collapsed in the early 1990s, leaving poorly secured weapons across an entire hemisphere.
“There was a solution: limit nuclear stockpiles and secure them,” Zenko says. “We took common-sense steps, none of which involved the US military. We send contractors to these facilities in Russia, and they say ‘The fence doesn’t work, the cameras don’t work, the guards are drunk.’ It’s cheap, and it works. This is what keeps us safe.”
Not everyone agrees. Robert Kagan, the most influential proponent of robust American power, argues that America is safe today precisely because it throws its military might around. President Obama said he relied for his most recent state of the union address on Kagan’s newest book, “The World America Made.” Mackenzie Eaglen, a defense expert at the American Enterprise Institute, says America benefits even when it appears to overreact. According to this thinking, even if the Pentagon designs the military for improbable threats and deploys at the drop of a hat, it is performing a service by keeping the world stable and deterring would-be rivals. Like most establishment defense thinkers, Eaglen believes American dominance could easily and quickly come to an end without this kind of power projection.
“Power abhors a vacuum,” she says. “If we don’t fill it, others will, and we won’t like what that looks like.”
Other critics of Zenko and Cohen’s argument, like defense policy writer Carl Prine, say the comforting data about declining wars and violent deaths is misleading. Today’s circumstances, they argue, don’t preclude something drastic happening next — say, if China, or even a rising power like Brazil, veers into an unpredictably bellicose path and clashes violently with American interests. Simply put, safe today doesn’t mean safe tomorrow.
Even if Zenko and Cohen are right, however, and the big-military crowd is wrong, it is nearly impossible to imagine a spirit of “threat deflation” taking hold in American politics. The already alarmist expert community that shapes US government thinking was further electrified by 9/11. Anyone in either party who argues for a leaner military, or pruning back intelligence infrastructure, risks being portrayed as inviting another attack on the homeland. The result is an unshakeable institutional inertia.
To get a sense of what they’re up against, listen to James Clapper, the director of national intelligence, presenting the “Worldwide Threat Assessment” to Congress, as his office does every year. The latest version, in January this year, reads like a catalogue of nightmares, describing macabre possibilities ranging from Hezbollah sleeper cells attacking inside America to a cyberattack that could turn our own infrastructure into murderous drones. Are these science fiction visions, or simply the wise man’s anticipation of the next war? It’s impossible to know, but one thing is striking: there’s no attempt in the intelligence czar’s report to rank the threats or assess their real likelihood. He simply and clinically presents every possible, terrifying thing that could happen, and signs off. It’s truly frightening reading.
This is what Zenko dismissively deems the “threat smorgasbord.” It makes clear how high the stakes are in planning for war and terrorist attacks, and how much emotional power the issue has. The reasonable thing to do might simply never be politically palatable. The current debate about Iran’s nuclear intentions is a perfect example, says Eaglen, who debated Cohen about his thesis on Capitol Hill in April.
“I can say Iran’s a threat, Michael can say it’s not,” she says. “But if he gets it wrong, we’re in trouble.”
Thanassis Cambanis, a fellow at The Century Foundation, is the author of “A Privilege to Die: Inside Hezbollah’s Legions and Their Endless War Against Israel” and blogs at thanassiscambanis.com. He is an Ideas columnist.
[Originally published in The Boston Globe, subscribers only.]
CAIRO — It might have seemed naïve to an outsider, but one of the great hopes among the revolutionaries who humiliated Egyptian dictator Hosni Mubarak more than a year ago was that the country’s strongman regime would finally yield to a democratic variety of voices. Both Islamic revolutionaries and secular liberals spoke up for modern ideas of pluralism, tolerance, and minority rights. Tahrir Square was supposed to turn the page on a half-century or more of one-party rule, and open the door to something new not just for Egypt, but for the Arab world: a genuine diversity of opinion about how a nation should govern itself.
“We disagree about many things, but that is the point,” one of the protest organizers, Moaz Abdelkareem, said in February 2011, the week that Mubarak quit. “We come from many backgrounds. We can work together to dismantle the old regime and make a new Egypt.”
A year later, it is still far from clear what that new Egypt will look like. The country awaits a new constitution, and although a competitively elected parliament sat in January for the first time in contemporary Egyptian history, it is still subordinate to a secretive military regime. Like all transitions, the struggle against Egyptian authoritarianism has been messy and complex. But for those who hoped that Egypt would emerge as a beacon of tolerant, or at least diverse, politics in the Arab world, there has been one big disappointment: It’s safe to say that one early casualty of the struggle has been that spirit of pluralism.
“I do not trust the military. I do not trust the Muslim Brothers,” Abdelkareem says in an interview a year later. In the past year, he helped establish the Egyptian Current, a small liberal party that wants to bring direct democracy to Egyptian government. Despite his inclusive principles, he’s urged the dismissal from public life of major political constituencies with whom he disagrees: former regime supporters, many Islamists, old-line liberals, and supporters of the military. He shrugs: “It’s necessary, if we’re going to change.”
He’s not the only one who has left the ideal of pluralism behind. A survey of the major players in Egyptian politics yields a list of people and groups who have worked harder to shut down their opponents than to engage them. The Muslim Brotherhood — the Islamist party that is the oldest opposition group in Egypt, and the one with by far the most popular support — has run roughshod over its rivals, hoarding almost every significant procedural power in the legislature and cutting a series of exclusive deals with the ruling generals. Secular liberals, for their part, have suggested that an outright coup by secular officers would be better than a plural democracy that ended up empowering bearded fundamentalists who disagree with them.
When pressed, most will still say they want Egypt to be the birthplace of a new kind of Arab political culture — one in which differences are respected, minorities have rights, and dissent is protected. However, their behavior suggests that Egypt might have trouble escaping the more repressive patterns of its past.
IN A COUNTRY that had long barred any meaningful politics at all, Tahrir’s leaderless revolution begat a brief but golden moment of political pluralism. Activists across the spectrum agreed to disagree — this, it was widely believed, was the very practice that would lead Egypt from dictatorship to democracy. During the first month of maneuvering after Mubarak resigned in February 2011, Muslim Brotherhood leaders vowed to restrain their quest for political power; socialists and liberals emphasized due process and fair elections. The revolution took special pride in its unity, inclusiveness, and plethora of leaders: It included representatives of every part of society, and aspired to exclude nobody.
“Our first priority is to start rebuilding Egypt, cooperating with all groups of people: Muslims and Christians, men and women, all the political parties,” the Muslim Brotherhood’s most powerful leader, Khairat Al-Shater, told me in an interview last March, a year ago. “The first thing is to start political life in the right, democratic way.”
Within a month, however, that commitment had begun to fray. Jostling factions were quick to question the motives and patriotism of their rivals, as might be expected from political movements trying to position themselves in an unfolding power struggle. More surprising, and more dangerous, has been the tendency of important groups to seek the silencing or outright disenfranchisement of competitors.
The military sponsored a constitutional referendum in March 2011 that supposedly laid out a path to transfer power to an elected, civilian government, but which depended on provisions poorly understood by Egyptian voters. The Islamists sided with the military, helping the referendum win 77 percent of the votes, and leaving secular liberal parties feeling tricked and overpowered. The real winner turned out to be the ruling generals, who took the win as an endorsement of their primacy over all political factions. The military promptly began rewriting the rules of the transition process.
With the army now the country’s uncontested power, some leading liberal political parties entered negotiations with the generals over the summer to secure one of their primary goals — a secular state — in a most illiberal manner: a deal with the army that would preempt any future constitution written by a democratically selected assembly.
The Muslim Brotherhood responded by branding the liberals traitors and scrapping its conciliatory rhetoric. The Islamists, with huge popular support, abandoned their initial promise of political restraint and instead moved to contest all seats and seek a dominant position in the post-Mubarak order. The Brotherhood now holds 46 percent of the seats in parliament, and with the ultra-Islamist Salafists holding another 24 percent, the Brotherhood effectively controls enough of the body to shut down debate. Within the Brotherhood, Khairat Al-Shater has led a ruthless purge of members who sought internal transparency and democracy — and is now considered a front-runner to be Egypt’s next prime minister.
The army generals in charge, meanwhile, have been using state media to demonize secular democracy activists and street protesters as paid foreign agents, bent on destroying Egyptian society in the service of Israel, the United States, and other bogeymen.
Since the parliament opened deliberations in January, the rupture has been on full and sordid display. The military has sought legal censure against a liberal member of parliament, Zyad Elelaimy, because he criticized army rule. State prosecutors have gone after liberals and Islamists who have voiced controversial political positions. Islamists and military supporters have also filed lawsuits against liberal politicians and human rights activists, while the military-appointed government has mounted a legal and public relations campaign against civil society groups.
THE CENTRAL QUESTION for Egypt’s future is whether these increasingly intolerant tactics mean that the country’s next leaders will govern just as repressively as its last. Scholars of political transition caution that for states shedding authoritarian regimes, it can take years or decades to assess the outcome. Still, there are some hallmarks of successful transitions that Egypt appears to lack. States do better if they have an existing tradition of political dissent or pluralism to fall back on, or strong state institutions independent of the political leadership. Egypt has neither.
“Transitions are always messy, and the Egyptian one is particularly messy,” said Habib Nassar, a lawyer who directs the Middle East and North Africa program at the International Center for Transitional Justice in New York. “To be honest, I’m not sure I see any prospects of improvement for the short term. You are transitioning from dictatorship to majority rule in a country that never experienced real democracy before.”
Outside the parliament’s early sessions in January, liberal demonstrators chanted that the Muslim Brotherhood members were illegitimate “traitors.” In response, paramilitary-style Brotherhood supporters formed a human cordon that kept protesters from getting close enough to the parliament building to even be heard by their representatives.
At a rally for the revolution’s one-year anniversary in Tahrir Square, Muslim Brotherhood leaders preached and made speeches from a high stage to celebrate their triumph. It was unclear whether they were referring to the revolution, or to their party’s dominance at the polls. It was too much for the secular activists. “This is a revolution, not a party,” some chanted. “Leave, traitors, the square is ours not yours,” sang others.
Hundreds of burly brothers linked arms, while a leader with a microphone seemed to taunt the crowd. “You can’t make us leave,” he said. “We are the real revolution.” In response, outraged members of the secular audience tore apart the Brotherhood’s sound system and pelted the stage with water bottles, corn cobs, rocks, even shards of glass.
The Arab world is watching closely to see what happens in the next several months, when Egypt will write a new constitution and elect a president in a truly competitive ballot, and the military will cede power, at least formally. Even in the worst-case scenario, it’s worth remembering that Egypt’s next government will be radically more representative than Mubarak’s Pharaonic police state.
Sadly, though, political discourse over the last year has devolved into something that looks more like a brawl than a negotiation. If it continues, the constitution drafting process could end up more ugly than inspiring. The shape of the new order will emerge from a struggle among the Islamists, the secular liberals, and the military — all of whom, it now appears, remain hostage to the culture of the regime they worked so hard to overthrow.
[Read the original piece in The Boston Globe (subscription required).]
After decades of nuclear brinkmanship, Americans felt profound relief when the Cold War ended. The Soviet Union’s collapse, which began in 1989, transformed the world almost overnight from a battleground between two global giants — a bipolar world, in scholarly parlance — to a unipolar world, in which the United States outstripped all other powers.
In foreign policy circles, it was taken for granted that this dominance was good for America. Experts merely differed over how long the “unipolar moment” could last, or how big a peace dividend America could expect. Some even argued that the end of the arms race between Moscow and Washington had eliminated the threat of world war.
Now, however, with a few decades of experience to study, a young international relations theorist at Yale University has proposed a provocative new view: American dominance has destabilized the world in new ways, and the United States is no better off in the wake of the Cold War. In fact, he says, a world with a single superpower and a crowded second tier of distant competitors encourages, rather than discourages, violent conflict — not just among the also-rans, but even involving the single great power itself.
In a paper that appeared in the most recent issue of the influential journal International Security, political scientist Nuno P. Monteiro lays out his case. America, he points out, has been at war for 13 of the 22 years since the end of the Cold War, about double the proportion of time it spent at war during the previous two centuries. “I’m trying to debunk the idea that a world with one great power is better,” he said in an interview. “If you don’t have one problem, you have another.”
Sure, Monteiro says, the risk of apocalyptic war has decreased, since there’s no military equal to America’s that could engage it in mutually assured destruction. But, he argues, the lethal, expensive wars in the Persian Gulf, the Balkans, and Afghanistan have proved a major drain on the country.
Even worse, Monteiro claims, America’s position as a dominant power, unbalanced by any other alpha states, actually exacerbates dangerous tensions rather than relieving them. Prickly states that Monteiro calls “recalcitrant minor powers” (think Iran, North Korea, and Pakistan), whose interests or regime types clash with the lone superpower, will have an incentive to provoke a conflict. Even if they are likely to lose, the fight may be worth it, since concession will mean defeat as well. This is the logic by which North Korea and Pakistan both acquired nuclear weapons, even during the era of American global dominance, and by which Iraq and Afghanistan preferred to fight rather than surrender to invading Americans.
Of course, few Americans long for the old days of an arms race, possible nuclear war, and the threat of Soviet troops and missiles pointed at America and its allies. Fans of unipolarity in the foreign policy world think that the advantages of being the sole superpower far outweigh the drawbacks — a few regional conflicts and insurgencies are a fair price to pay for eliminating the threat of global war.
But Monteiro says that critics exaggerate the distinctions between the wars of today and yesteryear, and many top thinkers in the world of security policy are finding his argument persuasive. If he’s right, it means that the most optimistic version of the post-Cold War era — a “pax Americana” in which the surviving superpower can genuinely enjoy its ascendancy — was always illusory. In the short term, a dominant United States should expect an endless slate of violent challenges from weak powers. And in the longer term, it means that Washington shouldn’t worry too much about rising powers like China or Russia or the European Union; America might even be better off with a rival powerful enough to provide a balance. You could call it the curse of plenty: Too much power attracts countless challenges, whereas a world in which power is split among several superstates might just offer a paradoxical stability.
From the 1700s until the end of World War II in 1945, an array of superpowers competed for global influence in a multipolar world, including imperial Germany and Japan, Russia, Great Britain, and after a time, the United States. The world was an unstable place, prone to wars minor and major.
The Cold War era was far more stable, with only two pretenders to global power. It was, however, an age of anxiety. The threat of nuclear Armageddon hung over the world. Showdowns in Berlin and Cuba brought America and the Soviet Union to the brink, and the threat of nuclear escalation hung over every other superpower crisis. Generations of Americans and Soviets grew up practicing survival drills; for them, the nightmare scenario of thermonuclear winter was frighteningly plausible.
It was also an age of violent regional conflicts. Conflagrations in Asia, Africa, and Latin America spiraled into drawn out, lethal wars, with the superpowers investing in local proxies (think of Angola and Nicaragua as well as Korea and Vietnam). On the one hand, superpower involvement often made local conflicts far deadlier and longer than they would have been otherwise. On the other, the balance between the United States and the USSR reduced the likelihood of world war and kept the fighting below the nuclear threshold. By tacit understanding, the two powers had an interest in keeping such conflicts contained.
When the Soviet Union began its collapse in 1989, the United States was the last man standing, wielding a level of global dominance that had been unknown before in modern history. Policy makers and thinkers almost universally agreed that dominance would be a good thing, at least for America: It removed the threat of superpower war, and lesser powers would presumably choose to concede to American desires rather than provoke a regional war they were bound to lose.
That is what the 1991 Gulf War was about: establishing the new rules of a unipolar world. Saddam Hussein invaded Kuwait, Monteiro believes, because he miscalculated what the United States was willing to accept. After meeting Saddam with overwhelming force, America expected that the rest of the world would capitulate to its demands with much less fuss.
Monteiro compared the conflicts of the multipolar 18th century to those of the Cold War and current unipolar moment. What he found is that the unipolar world isn’t necessarily better than what preceded it, either for the United States or for the rest of the world. It might even be worse. “Uncertainty increases in unipolarity,” Monteiro says. “If another great power were around, we wouldn’t be able to get involved in all these wars.”
In the unipolar period, a growing class of minor powers has provoked the United States, willing to engage in brinkmanship up to and including violent conflict. Look no further than Iran’s recent threats to close the Strait of Hormuz to oil shipping and to strike the American Navy. Naturally, Iran wouldn’t be able to win such a showdown. But Iran knows well that the United States wants to avoid the significant costs of a war, and might back down in a confrontation, thereby rewarding Iran’s aggressive gambits. And if (or once) Iran crosses the nuclear threshold, it will have an even greater capacity to deter the United States. During the Cold War, on the other hand, regional powers tended to rely on their patron’s nuclear umbrella rather than seeking nukes of their own, and would have had no incentive to defy the United States by developing them.
Absent a rival superpower to check its reach, the United States has felt unrestrained, and at times even obligated, to intervene as a global police officer or arbiter of international norms against crimes such as genocide. Time and again in the post-Cold War age, minor countries that were supposed to meekly fall in line with American imperatives instead defied them, drawing America into conflicts in the Balkans, Somalia, Haiti, Iraq, and Afghanistan. This wasn’t what was supposed to happen: The world was supposed to be much safer for a unipolar superpower, not more costly and hazardous.
Not everyone agrees that the United States would benefit from having a major rival. The best-known academic authority on American unipolarity, Dartmouth College political scientist William C. Wohlforth, argues that it’s still far better to be alone at the top. Overall, Wohlforth says, America spends less of its budget on defense than during the Cold War, and fewer Americans are killed in the conflicts in which it does engage. “Those who wish to have a peer competitor back are mistaken,” he said. “They forget the huge interventions of the Cold War.”
Between 1945 and 1989, Wohlforth says, proxy wars between America and the Soviet Union killed hundreds of thousands of people, against the backdrop of a very real and terrifying threat of nuclear annihilation. Today, he says, the world is still dangerous, but it’s much less deadly and frightening than it was in the time of the nuclear arms race.
For his part, Monteiro agrees that the Cold War was nasty and scary; he just wants to debunk the notion that what came next was any better. According to Monteiro, bipolarity and unipolarity pose different kinds of dangers, but are equally problematic.
Accepting this theory would have stark implications. For one thing, it would mean that America shouldn’t be afraid of emerging superpowers. In a unipolar world, America faces the temptation to overextend itself, perhaps wasting its resources in conflicts like Iraq and Afghanistan rather than protecting its core national interests; having another superpower to contend with could change that focus. “It’s not a bad thing to have a peer competitor,” Monteiro says.
His analysis suggests that the one way the United States can reduce its involvement in costly conflict is to shift its strategy from what he calls “global dominance” to “disengagement,” a posture in which America would keep out of regional conflicts and let minor powers fight each other. The one thing that might restrain a problem state like Iran or North Korea, according to Monteiro, is an assurance that the United States won’t seek to overthrow its regime.
The debate has direct bearing on a question that increasingly preoccupies policy makers in Washington: How should the United States respond to China’s rising power? Conventional wisdom, with its aversion to a bipolar or multipolar world, would counsel the United States to do all it can to forestall or even thwart China’s ascension to superpower status. By Monteiro’s logic, however, America should consider China’s rise a potential boon, something to be managed with an eye toward minimizing friction. “We might not like it, but in many ways China’s rise is not very dangerous,” says Charles Glaser, an influential political scientist at The George Washington University who also writes about unipolarity, and who shares much of Monteiro’s skepticism about its benefits.
Monteiro’s most radical insight concerns what he considers the dim likelihood of stability in a world where the United States is dominant. “Regardless of US strategy, conflict will abound,” Monteiro writes. What does lie in America’s control, he argues, is the choice between laboring to extend that dominance or merely maintaining an edge while disengaging from most conflicts. If America continues to involve itself in the constant ebb and flow of regional wars, as it has during two decades as unipolar giant and global police officer, it will exhaust its treasury and military resources, and feel threatened when rising powers like China approach parity. If, on the other hand, it chooses to disengage, it can husband its military might for the more important task of facing the new superpowers that inevitably will emerge.
Wanted: A grand strategy
In search of a cohesive foreign policy plan for America
By Thanassis Cambanis
NOVEMBER 13, 2011
GREG KLEE/GLOBE STAFF PHOTO ILLUSTRATION
President Obama campaigned on the promise of change, and ended up with a world far more full of it than he and his advisers expected. In the three years since he was elected, the foreign-policy landscape has shifted dramatically, not least because the financial crisis has spread worldwide and the Arab world has risen up against its autocratic rulers.
When the unexpected occurs in the realm of foreign policy — like this year’s Arab revolts, or a hypothetical military crisis in Asia — the nation’s leaders don’t always have ready-made plans on the shelf, and don’t have the luxury of time to start crafting a policy from scratch. What they can rely on, however, is what’s known as a grand strategy. A term from academia, “grand strategy” describes the real-world framework of basic aims, ideals, and priorities that govern a nation’s approach to the rest of the world. In short, a grand strategy lays out the national interest, in a leader’s eyes, and says what a state will be willing to do to advance it. A grand strategy doesn’t prescribe detailed solutions to every problem, but it gives a powerful nation a blueprint for how to act, and brings a measure of order to the rest of the world by making its expectations more clear.
In the absence of a clear road map for how, and why, America should be engaging with the world, a number of big-picture thinkers have recently rushed into the gap, filling the pages of the most prominent journals in the field and putting grand strategy at the center of the conversation at influential institutions from the National Defense University to America’s top policy schools.
What are the competing visions for an American grand strategy for the 21st century? All the proposed new strategies try to deal with a world in which America still holds the preponderance of power, but can no longer dominate the entire globe. The two-way Cold War contest is over, but it will be some time before another pretender to power — whether China, Russia, India, or the European Union — becomes a meaningful rival. One simple version has America stepping back from the world, husbanding its resources by projecting power at an imperial remove rather than through attempts to micromanage the affairs of far-flung foreign nations. Another has it acting more like a multinational corporation, delegating authority to allies most of the time, but involving itself deeply and decisively wherever its interests are threatened. And some unapologetic America-firsters argue the United States can do better by openly trying to dominate the world, rather than by negotiating with it.
Whatever grand strategy emerges to guide 21st century America, the answer is likely to grow out of America’s history, rather than markedly depart from it. All successful American grand strategies — manifest destiny, Wilsonian idealism and self-determination, Cold War-era containment — were driven in part by a sense of American exceptionalism, the notion that America “stands tall” and acts as a beacon in the world. But they also included a dose of Machiavelli, nakedly seeking to contain security threats against America, while using international allies to further American interests.
One influential strain of thinking about grand strategy comes from the realm of small-l liberal realism. Liberal realists are unsentimental in their desire to see America maximize its power, but also restrained about how much America should get involved. They want America to reap more and spend less, in both financial and military terms. Their ideas tend to dominate at policy schools, where much of the applied thinking about foreign affairs comes from, and seem to be getting a thorough hearing within Hillary Rodham Clinton’s State Department.
As a potential grand strategy, the front-runner emerging from these realist thinkers is off-shore balancing. Essentially, it advocates keeping America strong by keeping the rest of the world off balance. Military intervention, in this line of thinking, should always be a last resort rather than a first move; America’s military should lurk over the horizon, more powerful if it’s on the minds of its rivals rather than their territory. This grand strategy prescribes a “divide and conquer” approach, advocating that Washington use its diplomatic and commercial power to balance rising powers against one another, so that none can dominate a single region and proceed to threaten America. A prominent group of theorists has embraced this idea, including John Mearsheimer at the University of Chicago and Stephen Walt at Harvard University’s Kennedy School.
“Our first recourse should be to have local allies uphold the balance of power, out of their own self-interest,” Walt wrote in the most recent edition of The National Interest. “Rather than letting them free ride on us, we should free ride on them as much as we can, intervening with ground and air forces only when a single power threatens to dominate some critical region.”
A slightly different version could be called “internationalist tinkering”: It argues that America should focus on rebuilding its international relationships and coalitions, while reserving America’s right on occasion to act decisively and alone. Writing this summer in Foreign Affairs, another Boston-area political scientist, Daniel W. Drezner of the Fletcher School of Law and Diplomacy at Tufts University, argued that America can quite effectively maintain its international power by relying on cooperation and gentle persuasion most of the time, and force when challenged by other countries. This approach, Drezner believes, will have the result of “reassuring allies and signaling resolve to rivals.” An example is America’s investment in closer relations with Asian states, putting China on notice and forcing it to adjust its expansionist foreign policy. Drezner argues that although Obama might not have articulated it well, he is already driven by this approach, which Drezner calls “counterpunching.”
Princeton University’s G. John Ikenberry makes a similar argument in his latest book, “Liberal Leviathan.” America, he argues, can and should police the world order so long as it abides by the same rules as other nations. As a sort of first among equals, America holds the balance of power but cannot overtly dominate the world. It’s a position that demands considerable care to maintain, and requires investment in alliances, institutions like the UN, and international regimes like the ones that govern trade and certain types of crime.
Another line of thinking comes from scholars and writers who are more pessimistic about America’s prospects; they see an empire at the beginning of a long period of decline, and believe America needs to dramatically curtail its international engagements and get its own house in order first. It could be called the school of empire in eclipse, and its proponents have been labeled declinists. In effect, they argue that America’s economy and domestic infrastructure are collapsing, and the nation’s global influence will follow, unless America concentrates its resources on rebuilding at home. Many of the most influential thinkers in this strain aren’t against a robust foreign policy, but they argue that it’s impractical to worry too much about the rest of the world until we address the problems at home. The late historian Tony Judt was one of the leading exponents of this view, and the writer George Packer has taken up many of the same themes. A folk version of this thinking drives much of Thomas Friedman’s writing, and underpins his most recent book, “That Used to Be Us: How America Fell Behind in the World It Invented and How We Can Come Back,” which he wrote with Michael Mandelbaum, an American foreign policy expert at Johns Hopkins University.
On the opposite end of the spectrum, neo-imperialism holds that America’s grand strategy need only be more brash and more demanding. (Neo-imperialist thinkers are often called “global dominators” as well.) The recent mistake, in this view, is that America asks too little of the world, and thereby invites frustrating challenges. Mitt Romney’s claim that he “won’t apologize for America” reflects this view, which has its intellectual underpinnings in the writings of conservatives such as Robert Kagan, Niall Ferguson, Robert Kaplan, and Charles Krauthammer. Ferguson, a British historian, has made the provocative suggestion that America should pick up where the British Empire left off, directly managing the entire world’s affairs. In this view, even long and expensive entanglements like the ones in Iraq and Afghanistan aren’t major setbacks: Militarism, these expansionists argue, has propelled American economic growth and international influence for centuries, and has a long future as long as we aren’t shy about using the force we have.
A grand strategy has to match means with goals; it can’t merely assert American power, but needs to account for America’s own depth of resources. The precarious state of America’s economy and the wear and tear on its military suggest that any successful new strategy would allow for a modest period of retrenchment, one in which America continues to fancy itself the world’s leader but adopts the tone of a hard-boiled CEO rather than a field marshal — Jack Welch rather than George Patton.
Bush’s strategy foundered for those reasons; he boldly and clearly asserted that America would secure itself through preemptive war and the spread of democracy, but found that he simply didn’t have the resources to deliver–and that the world didn’t respond as he hoped to our prodding and aspirations.
America’s limits have grown apparent as it has discovered a surprising shortage of leverage, even over close allies. The liberal grand strategists are feeding into a process already underway in the Obama administration to systematize foreign policy into a coherent framework, something more akin to a grand strategy than the jumble of policies that has marked America’s foreign policy for the last three years. The conservatives, meanwhile, are looking for intellectual toeholds among the Republican presidential contenders.
Earlier this year a top Obama aide seemed to belittle the very idea of a grand strategy as a simplistic “bumper sticker,” something that reduced the world’s complexity to a slogan. But, in a sense, that’s exactly the point of having one. To be truly helpful in time of crisis, a grand strategy must be based on incredibly thorough and detailed thinking about how America will rank its competing interests, and what tools it might use to project power in the rest of the world. But it also demands simplicity: a principle, even a simple sentence, reflecting our values as well as our interests, based on right as well as might, and as clear to America’s enemies as it is to the American electorate.
WBUR’s Here & Now did a segment today on the success of Islamists in Tunisia’s elections, and the stated intent of Libya’s transitional rulers to base their new constitution on sharia (just like Iraq did during the American occupation!). Robin Young asked good questions, and was interested in probing the real debate within the Islamist political spectrum, to get past the all-too-frequent binary of scary Islamists versus nice secularists.
The leader of Libya’s transitional council says Sharia Law will be the basis for the country’s new democracy; a moderate Islamist party is claiming victory in Tunisia’s elections; and, the Muslim Brotherhood is poised to do well whenever Egypt votes. Should the West be concerned or is this just the natural course of moving toward democracy in these Muslim countries?
My sense is that as political space opens up in Egypt, Syria, and other Arab countries, there will be an increased fracture between a “liberal” pole, made up of religious parties that advocate secular and civic governance, and a “conservative” pole, led by fundamentalists interested in theocratic governance. The cleavage won’t precisely map traditional differences between liberals and conservatives, and all parties to the debate will define themselves as religious Islamists. “Islamism” will cease to be a meaningful distinguishing label. Already, it’s hard to rely on the term when it applies equally to parties as different as Turkey’s AKP party, Tunisia’s Ennahda, Egypt’s Muslim Brotherhood and Noor Party, Salafists in the Gulf, Hamas, Hezbollah, and so on.
My latest column in The Boston Globe.
Fundamentalism’s rise just shows that the secular state has won, says political scientist Olivier Roy
If you read the news, it can feel impossible to escape the growing influence of religious fundamentalism. Christian evangelicals set the talking points of the Republican primaries, and America’s leaders speak about God with an insistence that might have made the authors of the Constitution blush. Religious parties have become steady players in the politics of secular Europe. Meanwhile, Islamists from Morocco to Malaysia campaign to eliminate all barriers between the mosque and the state. With religion so ubiquitous in politics, it sometimes appears that the world may be crossing the threshold from the age of secularism into one of religious states.
But what if all this religious activism and faith-based politics herald not a new dominance but a passing into irrelevance? That’s the counterintuitive assertion of Olivier Roy, a French political scientist and influential authority on Islam, who has now turned his eye on the growth of fundamentalism across all religions.
As Roy argued in the book “Holy Ignorance,” published late last year, old-line religions with organized power structures are in a global decline, overtaken by fundamentalist sects such as Islamic Salafis and Christian Pentecostals, which make a charismatic appeal to the individual. But in fact, Roy argues, this trend suggests that religion has effectively given up the struggle for power in the public square. Despite the extremist rhetoric of televangelists in the American Midwest or the Persian Gulf, these growing religions are really just a reaction to the triumph of secularism. They demand a total personal commitment from their adherents: abstinence, sobriety, frequent prayer. But, though their leaders may talk of making America an officially Christian nation or of establishing pure Islamic states in places like Egypt, they carry none of the real institutional or military weight of established religions in history.
“The religious movements are here, they have an agenda,” Roy says. “I never deny that. I say that political ideologies based on religion don’t work. You can’t manage a country in the name of a religion.” In other words, as he interprets it, the world is not in the grip of a clash of civilizations; religious movements are just growing more strident because they are detached from the culture of nations. An age of vocal religious fundamentalists, contrary to our expectations, may also be an age of safely secular states.
If Roy’s hypothesis proves correct, it stands to change international politics by making us much less concerned about religious extremism. It could give secular forces new power to deal with fundamentalist organizations like the Muslim Brotherhood; since their religious agendas are not likely to dominate any nation’s political life, they could be dealt with on a purely material level. At home, American policy makers could rest assured that extreme fundamentalist positions on gay rights or teaching evolution in schools would inevitably be leavened by the political realities of a pluralistic, secular society. For those who treasure secular governance and separation of church and state, Roy’s hypothesis – if the 21st century bears it out – would mean there is far less in the world to fear.
In “Holy Ignorance,” Roy presents what reads like an alternate history of the last half century, narrating some familiar facts but drawing counterintuitive conclusions. Fundamentalist religion has been on the rise worldwide since the 1950s. In the Islamic world the fastest growing sect is the Salafis, who advocate a return to 7th-century piety. Among Christians, charismatic sects like the Mormons, Pentecostals, and Jehovah’s Witnesses are the fastest growing, while ultra-Orthodox Jews are gaining within the Jewish community. Almost nowhere, with the exception of Ayatollah Khomeini’s revolution in Iran, has the fundamentalist revival taken the form of a single religious group, unified behind one leader, trying to take over a country. Instead, ascendant fundamentalists are a fractious group. Moreover, their growth comes at a time when pluralistic, secular politics predominate, and religious institutions have less power than ever before. In a borderless, globalized world, the only religions that grow are exactly those that sever themselves from a specific national or cultural context, and appeal to universal humanity. Such religions might sound louder or more shrill, but they’re also by definition less significant to the political and cultural life of nations.
Roy’s argument has grown out of decades studying the interaction between religion, politics and culture. As a young man, Roy writes, he was jarred by the arrival of a born-again Christian in his religious youth group. It was his first encounter with charismatic religion. After he established himself as an authority on the Islamic world, Roy began to argue by the 1990s that Islamic fundamentalists would not manage to win over their own societies, despite their violent heyday; his 1994 book, “The Failure of Political Islam,” was criticized by some as premature, but now most scholars agree with its conclusions.
From today’s vantage point, religious politicians and commentators seem like a force to be reckoned with, but it’s instructive to compare them to the mainline religious institutions that used to wield power. During the Middle Ages, institutional religions possessed tremendous real-world power. The Catholic Church decided which leaders had legitimacy and triggered massive wars. With the Industrial Revolution, Christian missionary activity inextricably intertwined with European colonization. The pope’s edicts set the parameters of European social policy well into the 20th century; even as recently as 1960, the Vatican held such sway that American voters feared John F. Kennedy would take orders from the pope. Meanwhile, in the Islamic world, until colonialism eroded the authority of the Ottoman Empire in the 19th century, the supposedly all-powerful caliphs and their governors could be overthrown when religious scholars withdrew their support.
Gradually, however, secularism took root. After the French Revolution, the number of European leaders claiming a divine right to rule dwindled. Even the caliphate in Istanbul began to look like a secular empire with vestigial religious trappings. Modern nation states were more concerned with holding territory and controlling populations than with matters of faith. Roy argues that even when they invoked religion to gain legitimacy, modern states were engaged in an inherently political quest, in which religion was marginalized.
It was the loss of direct religious power that prompted the rise of what Roy calls “pure” religions: sects that appeal to the universal individual. Old-line religions talked explicitly about power and considered themselves more powerful than states. As they declined, Roy says, the public began to move to groups that were more charismatic and focused on faith in the private sphere – Mormons, Pentecostals, Tablighis, Salafis, ultra-Orthodox Jews. We call these sects “fundamentalist” because they claim to return to the founding texts of their faith, stripping out the practices that had accreted in the centuries since the founding of their respective religions.
This return to basics has visceral appeal to those seeking a faith. But millions of born-again evangelicals, seeking salvation from a mosaic of political backgrounds, are unlikely to develop the clout the Vatican once had. Look closely at the surge of Salafists around the Islamic world, and you’ll find extreme fragmentation, with rivalries between the followers of competing sheikhs often taking priority over concern with unbelievers. The religions that are actually growing and winning converts are resolutely not political monoliths in the making.
In Roy’s view, the pure new religions employ simplistic ideas that sound universal and appeal to new converts, but whose vigor can be hard to sustain from generation to generation. These charismatic, fundamentalist religions require constant renewals of faith, Roy says, and rely on the zeal of new converts for their growth and dynamism. Fundamentalist sects can afford to take extreme positions on belief precisely because they have none of the constraints of governance and authority.
Not everyone is convinced by Roy’s interpretation. Even if there are hardly any real theocracies left in the world, religious hard-liners still influence political life directly in countless places. One need look no further than the role of born-again Christian activists in the Republican Party, or the growing enrollment of Salafist political parties in Egypt.
Karen Barkey, a Columbia University sociologist, argues that Roy has overlooked the resilience of fundamentalism. Yes, secular culture has largely taken the reins of power, but it’s a two-way process; religion has fought back, and tailors its message and approach to local constraints. Roy is too optimistic when he claims that fundamentalism always wears itself out, Barkey says. Religion is asserting itself within evolving cultures, including democratic ones, and will continue to have great influence. “Some forms of religious fundamentalism may well be disappearing into the ether of abstraction,” Barkey wrote in a barbed essay contesting Roy’s view in Foreign Affairs, “but in most cases, religion, culture, and politics are still meeting on the ground.”
For those watching that clash, too, it can be hard to credit Roy’s interpretation. In phenomena like the rise of the evangelical right in American politics and Islamic fundamentalists in the Arab world, Roy sees a sign of defeated religion that has been pushed from the political playing field. But others argue that religion crucially shapes politics in ways that scholars, most of whom are secular, don’t well understand. Grandees of political science recently convened an initiative to systematically increase their field’s study of religion, and published a volume through Columbia University Press this year called “Religion and International Relations Theory.”
Still, many scholars agree with Roy that the importance of religious extremists has been overstated. Alan Wolfe, director of the Boisi Center for Religion and American Public Life at Boston College, says that Roy provides a convincing riposte to those like Jerry Falwell, who called for religion to directly shape legislation. Wolfe shares Roy’s view that religious influence on politics will continue to wane.
The implications of Roy’s theory are stark. If he is correct, American policy makers can step away from the entire framework of “the clash of civilizations” and “the rise of Islam,” and finally begin to deal with political actors at face value. Iran’s ayatollahs, for example, are better understood as extreme nationalists than as religious fanatics, Roy has written. While religion matters to Hindu nationalists in India, Shas Party members in Israel, and followers of Hezbollah in Lebanon, it’s their specific nationalist aims that have made them politically powerful players. In political terms, their faith is essentially irrelevant.
Most of all, Roy’s theory offers the possibility of no longer seeing our conflicts as religious wars. In 2003, Lieutenant General William Boykin scandalized secular Americans when he described battling a Muslim fighter and said, “I knew that my God was bigger than his. I knew that my God was a real God, and his was an idol.” If Roy is correct, such rhetoric is simply beside the point; as our century wears on, extreme religious belief will more and more be just a fringe phenomenon, unworthy of the attention of governments.
EVEN BEFORE WE knew who had committed the attacks of Sept. 11, 2001, it was clear that the event would be transformative, startling Americans into reconsidering how they understood matters from religion and culture to war and civil liberties.
For the nation’s foreign policy brain trust, it announced one thing in particular: A new species of power finally had come of age. The force behind the 9/11 attacks wasn’t an enemy nation but a small band of resourceful zealots scattered around the world. Al Qaeda was what experts call, for lack of a more elegant term, a “non-state actor” – one of a new species of powers that operate outside the umbrella of traditional governments. These powers run the gamut from terrorist cells to Google, from globe-spanning charities to oil conglomerates. As much as anything, the 9/11 strikes illustrated the profound influence that non-state actors could have on world affairs.
You might think that the sudden demonstration of the radical power of a non-state actor would have triggered an equally bold reaction at the highest levels of policy thinking: a coherent shift in grand strategy, in America’s thinking about how it should contend with the wider world, on the scale of the one that developed after World War II.
Surprisingly, though, if there is one thing that 9/11 didn’t change, it was this. Instead of a flurry of new thinking at the highest echelons of the foreign policy establishment, the major decisions of the past two administrations have been generated from the same tool kit of foreign policy ideas that have dominated the world for decades. Washington’s strategic debates – between neoconservatives and liberals, between interventionists and realists – are essentially struggles among ideas and strategies held over from the era when nation-states were the only significant actors on the world stage. As ideas, none of them were designed to deal effectively with a world in which states are grappling with powerful entities that operate beyond their control.
“Great power relations, war, diplomacy, we know how to do that,” says Stephen Krasner, a Stanford University international relations specialist who ran the policy planning department in George W. Bush’s State Department from 2005 to 2007. “We don’t know how to deal with these tricky non-state questions.”
A look at the major foreign crises of the last two decades, the ones that have demanded the most attention, money, and lives from America, reveals that virtually all of them were provoked not by some hostile scheming government in Moscow or Beijing, but by an entity that would have been beneath the radar of a classic global strategist. In the 1990s, warlords in Somalia and Bosnia ripped apart their regions while bedeviling old-line powers like the United States, Britain, and NATO. On the eve of the millennium, the United States struggled to grapple with the fallout from a failed hedge fund, Long-Term Capital Management, and Argentina’s economic collapse – financial crises whose political repercussions outstripped those of many wars. The Sept. 11 attacks only made obvious a change that had already been apparent to many.
Ten years later, we are still waiting for the intellectual response to that change. And already we can see the costs of its absence. The United States responded to the 9/11 attacks with strategies that made sense in the old days of state-versus-state conflict, deploying the world’s most powerful armed forces and spending more than $1 trillion trying to subdue bands of simply trained and lightly armed men in Afghanistan and Iraq. Many strategists see a rising China as the next major threat in a traditional sense, but even China’s challenge to American interests is largely taking place outside traditional state channels: in contests for oil concessions, and in struggles for high-tech revenue, cultural influence, and intellectual property. The cost of getting these types of struggles wrong can, in the long term, be even greater than a runaway war.