[Originally published in The Boston Globe Ideas section.]
EVER SINCE AMERICANS had to briefly ration gas in 1973, “energy independence” has been one of the long-range goals of US policy. Presidents since Richard Nixon have promised that America would someday wean itself off its reliance on foreign oil and gas, a reliance that leaves us vulnerable to the outside world and has long been seen as a gaping hole in America’s national security. It also handcuffs our foreign policy, entangling America in unstable petroleum-producing regions like the Middle East and West Africa.
Given the United States’ huge appetite for fuel, energy independence has always seemed more of a dream than a realistic prospect. But today, nearly four decades later, energy independence is starting to loom in sight. Sustained high oil prices have made it economically viable to exploit harder-to-reach deposits. Techniques pioneered over the last decade, with US government support, have made it possible to extract shale oil more efficiently. It helps, too, that Americans have kept reducing their petroleum consumption, a trend driven as much by high prices as by official policy. Total oil consumption peaked at 20.7 million barrels per day in 2004. By 2010, the most recent year tracked in the CIA Factbook, consumption had fallen by nearly a tenth.
Last year, the United States imported only 40 percent of the oil it consumed, down from 60 percent in 2005. And by next year, according to the US Energy Information Administration, the United States will need to import only 30 percent of its oil. That’s been driven by an almost overnight jump in domestic oil production, which had remained static at about 5 million barrels per day for years, but is at 7 million now and will be at 8.5 million by the end of 2014. If these trends continue, the United States will be able to supply all its own energy needs by 2030 and be able to export oil by 2035. In fact, according to the government’s latest projections, the country is on track to become the world’s largest oil producer in less than a decade.
Yet as this once unimaginable prospect becomes a realistic possibility, it’s far from clear that it will solve all the problems it was supposed to. As much as boosters hope otherwise, energy independence isn’t likely to free America from its foreign policy entanglements. And at worst, say some skeptics who specialize in energy markets, it might create a whole host of new ones, subjecting America to the same economic buffeting that plagues most oil exporters, and handing China even more global influence as the world’s behemoth consumer.
As much as the shift brings opportunities, however, it is also likely to open the United States up to liabilities we have not yet had to face. The consequences may be both good and bad, enriching and destabilizing for US interests—but they will certainly have a major impact on our geopolitics, in ways that the policy world is only just beginning to understand.
WHEN RICHARD NIXON was president, America consumed about one-third of the world’s oil, importing about 8.4 million barrels per day, chiefly from the Middle East. The status quo hummed along until the Arab-Israeli war of 1973. The United States sent weapons to Israel, and the Arab states retaliated with a six-month oil embargo, refusing to sell oil to America. It was the only time in history that the “oil weapon” was effectively used, and it made a permanent impression on the United States.
Over time, the American response to the embargo came to include three major initiatives that still shape energy policy today. First, the government promoted lower oil consumption by pushing coal and natural gas power plants, home insulation, and mileage standards for cars. Second, the country drilled for more of its own oil. Third, and perhaps most important from a foreign-policy standpoint, the United States promoted a unified global oil market in which any country had the practical means to buy oil from any other. That meant that even if some countries couldn’t do business with each other—say, Iran and the United States—it wouldn’t affect the overall price and availability of oil. Other countries could fill in the gap.
The dreams of energy independence crossed party lines. Though liberals and conservatives differ on the means—how much we should rely on new drilling versus energy conservation—both parties have endorsed the quest. It was one of the few issues on which Presidents Carter and Reagan agreed.
America has made steady progress over the years, to the point where the nation’s total oil consumption has actually begun to drop. As this has happened, the high cost of global energy has also made it profitable to increase domestic production of natural gas and oil. A few months ago, both the US Energy Information Administration and the International Energy Agency predicted that if current production trends continue, the United States will overtake Saudi Arabia and Russia as the world’s largest oil producer in 2017.
Taken together, our slowing appetite and booming production mean that with a suddenness that has surprised many observers, the prospect of energy independence—technically speaking, at least—looms in the windshield.
Energy independence looks different today, however, than it did in the oil-shocked 1970s. For one thing, the energy market is a linchpin of the world order, and any big shift is likely to have costs to stability. Some analysts have warned that America’s growing oil production will create a glut that lowers prices, eating up the profits of oil countries and destabilizing their regimes. (That’s in the short term, anyway; worldwide, oil demand is still rising fast.) Falling prices mean that countries that depend on oil will face sudden cash shortages. It’s easy to imagine how destabilizing that could be for a natural-resource power like Russia, for the monarchs of the Persian Gulf, or for the dictators in Central Asia. No matter how distasteful their rule, the prospect of an unruly transition, or worse still, a protracted conflict, in any of those countries could cause havoc.
In the long term, this is not necessarily a bad thing: Weakening oppressive or corrupt governments could ultimately be beneficial for the people of those countries. And a shift in the balance of power away from the Gulf monarchies of OPEC and toward the United States could have a democratizing effect. In any event, though, lower oil prices and a dynamic energy market make the current stable order unpredictable.
China’s economic rise has also changed the global energy equation. For now, China is largely without its own petroleum supplies and is replacing the United States as the largest importer. As China steps into the United States’ shoes as the world’s largest oil customer, it will gain influence in oil-producing regions as American influence wanes. It might also feel compelled to invest more heavily in an aggressive navy, fearing that the United States will no longer shoulder the responsibility of policing shipping lanes in the Persian Gulf and elsewhere—a costly security service that America pays for but which benefits the entire network of global trade.
Domestically, there’s also the “resource curse,” which afflicts countries that depend too heavily on extracted commodities like minerals or petroleum. Such industries don’t add much value to a society beyond the price the commodity fetches at market, and that price is notoriously fickle, meaning fortunes and jobs rise and fall with swings in global prices. The resource curse often implies corruption and autocracy as well. But economists are less concerned about that, since the United States already has an effective government and laws to thwart corruption, and because oil will still make up a minuscule overall share of the economy. Last year oil and gas extraction amounted to just 1.2 percent of the American gross domestic product.
THERE ARE STILL plenty of people who think that energy self-sufficiency will be an unalloyed good. Jay Hakes, who has pursued the goal as an energy official under the last three Democratic presidents, says that America will reap countless political and economic dividends. It will help the trade deficit, give American companies and workers benefits when oil prices are high, and insulate the country from supply shocks. It will also give Washington wider latitude when dealing with oil-producing countries, on which it will depend less. “There are some downsides,” he acknowledges, “but they’re outweighed by all the positives.”
One benefit that self-sufficiency won’t bring, it seems clear, is a sudden independence from the politics of the Middle East. The region produces about half the world’s oil, and Saudi Arabia alone has so much oil that it can raise its capacity at a moment’s notice to make up for a shortfall anywhere else in the world.
Already, America is largely independent of Middle Eastern oil as a consumer: Only about 15 percent of our supply comes from the region. But we do depend on a stable world market—even more so if we become a net exporter ourselves. So even if we don’t buy Saudi oil, we’ll still need a stable Saudi regime that can add a few million barrels a day to world flows, at a moment’s notice, to offset a disruption somewhere else.
Michael Levi, a fellow at the Council on Foreign Relations and author of the book “The Power Surge: Energy, Opportunity, and the Battle for America’s Future,” believes that the biggest risk of achieving a goal like energy independence is complacency: Without the pressures that importing oil has brought, we may have little reason to innovate our way out of fossil fuels altogether. The policies themselves have achieved a great deal of good, he points out—stabilizing the world’s energy markets, reducing consumption, and pushing us beyond “independence” toward renewable sources like wind and solar power (though today these still make up a vanishingly small portion of the US energy supply).
Levi argues that an American oil bonanza could easily remove the political incentives for long-term planning and sacrifice. “I get scared that we’ll become complacent and make foolish decisions because we believe we’ve become energy independent,” Levi says. Energy independence was a useful slogan to motivate America, but in reality, a sensible energy policy has to balance a plethora of competing concerns, from geopolitics and the environment to consumer demand and fuel’s importance to the economy.
“The real way to be energy independent,” he said, “is actually to not use oil.”
Blast barriers in Baghdad, 2008. DONOVAN WYLIE / MAGNUM PHOTOS
[Originally published in The Boston Globe Ideas.]
BEIRUT — Everything that people love and hate about cities stems from the agglomeration of humanity packed into a tight space: the traffic, the culture, the chance encounters, the anxious bustle. Along with this proximity come certain feelings, a relative sense of security or of fear.
Over the last 13 years I have lived in a magical succession of cities: Boston, Baghdad, New York, and Beirut. They all made lovely and surprising homes—and they all were distorted to varying degrees by fear and threat.
At root, cities depend on constant leaps of faith. You cross paths every hour with people of vastly different backgrounds and trust that you’ll emerge safely. Each act of violence—a mugging, a murder, a bombing—erodes that faith. Over time, especially if there are more attacks, a city will adapt in subtle but profound and insidious ways.
The bombing last week was a shock to Boston, and a violation. As long as it remains an isolated event, the city will recover. But three people have died, and more than 170 have been wounded, and the scar will remain. As we think about how to respond, it’s worth also considering what happens when cities become driven by fear.
NEW YORK, WASHINGTON, and to a lesser extent the rest of America have exchanged some of the trappings of an open society for extra security since the Sept. 11 attacks. Metal detectors and guards have become fixtures at government buildings and airports, and it’s not unusual to see a SWAT team with machine guns patrolling in Manhattan. Police conduct random searches in subways, and new buildings feature barriers and setbacks that isolate them from the city’s pedestrian life.
Baghdad and Beirut, however, are reminders of the far greater changes wrought by wars and ubiquitous random violence. A city of about 7 million, low-slung Baghdad sprawls along the banks of the Tigris River. It’s the kind of place where almost everyone has a car, and it blends into expansive suburbs on its fringes. When I first arrived in 2003, the city was reeling from the shock-and-awe bombing and the US invasion. But it was the year that followed that slowly and inexorably transformed the way people lived. First, the US military closed roads and erected checkpoints. Then, militants started ambushing American troops and planting roadside bombs; the ensuing shootouts often engulfed passersby. Finally, extremists and sectarian militias began indiscriminately targeting Iraqi civilians and government personnel—in queues at public buildings, at markets, in mosques, virtually everywhere. Order crumbled.
Baghdad became a city of walls. Wealthy homeowners blocked their own streets with piles of gravel. Jersey barriers sprang up around every ministry, office, and hotel. As the conflict widened, entire neighborhoods were sealed off. People adjusted their commutes to avoid tedious checkpoints and areas with frequent car bombings. Drive times doubled or tripled. Long lines, with invasive searches, became an everyday fact of life. The geography of the city changed. Markets moved, even the old-fashioned outdoor kind where merchants sell livestock or vegetables. Entire sub-cities sprang up to serve Shia and Sunni Baghdadis who no longer could travel through each other’s areas.
Simple civic pleasures atrophied almost overnight. No one wanted to get blown up because they insisted on going out for ice cream. The famed riverfront restaurants went dormant; no more live carp hammered with a mallet and grilled before our eyes. The water-pipe joints in the parks went out of business. Most of the social spaces that defined the city shut down. Booksellers fled Mutanabbi Street, the intellectual center of the city with its antique cafes. The amusement park at the Martyrs Monument shut its gates. Hotel bar pianists emigrated. Dust settled over the playgrounds and fountains at the strip of grassy family restaurants near Baghdad University.
In Beirut, where I moved with my family earlier this year, a generation-long conflict has Balkanized the city’s population. Here, most groups no longer trust each other at all. From 1975 to 1990 the city was split by civil war, and people moved where they felt safest. A cosmopolitan city fragmented into enclaves. Christians flocked to East Beirut, spawning a dozen new commercial hubs. Shiites squatted in the village orchards south of Beirut, and within a decade had built a city-within-a-city almost a million strong—the Dahieh, or “the Suburb.” The original downtown became a demilitarized zone, its Arabesque arcades reduced to rubble, and today has been rebuilt as a sterile, Disney-like tourist and office sector. Ras Beirut, my neighborhood, deteriorated from proudly diverse (and secular) to “Sunni West Beirut,” although it still boasts pockets of stubborn coexistence. In today’s Beirut, my block, where a Druze warlord lives across the street from a church and subsidizes the parking fees of his Shia, Sunni, and Christian neighbors, is a stark exception.
Mixing takes place, but tentatively, and because of frequent outbreaks of violence over the years, Beirutis have internalized the reflexes to fight, defend, and isolate. The result is a city alight with street life, cafes, and boutiques but which can instantaneously shift to war footing. One friend ran into his bartender on a night off at a checkpoint with bandoliers of bullets strapped to his chest. Even when the city appears calm, most people have laid in supplies in case an armed flare-up forces them to stay in their homes for a week. My friend’s teenage daughter keeps a change of clothes in her schoolbag in case she can’t return to her house. Uniformed private guards are everywhere, in every park, on every promenade, at every mall.
WITHIN A FEW weeks of our move to Beirut this year, my 5-year-old son traded his old fantasy, in which he played the doctor and assigned us roles as patients and nurses, for a new one: security guard. While we were setting up for a yard party, he arranged a few plastic chairs by the door. “I’ll check people here,” he declared. He also asked me a lot of questions about the heavily armed soldiers who stand watch on our street: “Will the army shoot me if I make a mistake?”
This is not the childhood he would have in Boston, even after this week. War-molded cities are nightmare reflections of failed states, places where government has gone into free fall, police don’t or can’t do their jobs, and normal life feels out of reach. Beirut is a kind of warning: Physically it appears normal on most days. But trust is gone. The public sphere feels wobbly and impermanent.
Boston is still lucky, with assets that Beirut lost generations ago; it has functional institutions, old communities with tangled but shared histories, and unifying cultural traditions. Boston has a police force that can get the job done and a baseball team that ties together otherwise divided corners of the city. One lesson of the city I live in now is that circumstances can sever these lifelines faster than we expect. Our connections require continued, perhaps redoubled, care. Without trust, a city can still be a magnificent place to live. Until all at once, it isn’t.
Syrian rebel fighters posed for a photo after several days of intense clashes with the Syrian army in Aleppo, Syria, in October. (AP: NARCISO CONTRERAS)
[Originally published in The Boston Globe Ideas.]
THE NEWS FROM SYRIA keeps getting worse. As it enters its third year, the civil war between the ruthless Assad regime and groups of mostly Sunni rebels has taken nearly 100,000 lives and settled into a violent, deadly stalemate. Beyond the humanitarian costs, it threatens to engulf the entire region: Syria’s rival militias have set up camp beyond the nation’s borders, destabilizing Turkey, Lebanon, and Jordan. Refugees have made frontier areas of those countries ungovernable.
United Nations peace talks have never really gotten off the ground, and as the conflict gets worse, voices in Europe and America, from both the left and right, have begun to press urgently for some kind of intervention. So far the Obama administration has largely stayed out, trying to identify moderate rebels to back, and officially hoping for a negotiated settlement—a peace deal between Assad’s regime and its collection of enemies.
Given the importance of what’s happening in Syria, it might seem puzzling that the United States is still so much on the sidelines, waiting for a resolution that seems more and more elusive with each passing week. But it is also becoming clear that for America, there’s another way to look at what’s happening. A handful of voices in the Western foreign policy world are quietly starting to acknowledge that a long, drawn-out conflict in Syria doesn’t threaten American interests; to put it coldly, it might even serve them. Assad might be a monster and a despot, they point out, but there is a good chance that whoever replaces him will be worse for the United States. And as long as the war continues, it has some clear benefits for America: It distracts Iran, Hezbollah, and Assad’s government, traditional American antagonists in the region. In the most purely pragmatic policy calculus, they point out, the best solution to Syria’s problems, as far as US interests go, might be no solution at all.
If it’s true that the Syrian war serves American interests, that unsettling insight leads to an even more unsettling question: what to do with that knowledge. No matter how the rest of the world sees the United States, Americans like to think of themselves as moral actors, not the kind of nation that would stand by as another country destroys itself through civil war. Yet as time goes on, it’s starting to look—especially to outsiders—as if America is enabling a massacre that it could do considerably more to end.
For now, the public debate over intervention in America has a whiff of hand-wringing theatricality. We could intervene to stanch the suffering but for circumstances beyond our control: the financial crisis, worries about Assad’s successor, the lingering consequences of the Iraq war. These might explain why America doesn’t stage a costly outright invasion. But they don’t explain why it isn’t sending vastly more assistance to the rebels.
The more Machiavellian analysis of Syria’s war helps clarify the disturbing set of choices before us. It’s unlikely that America would alter the balance in Syria unless the situation worsens and protracted civil war begins to threaten, rather than quietly advance, core US interests. And if we don’t want to wait for things to get that bad, then it is time for America’s policy leaders to start talking more concretely—and more honestly—about when humanitarian concerns should trump our more naked state interests.
MANY AMERICAN observers were heartened when the Arab uprisings spread to Syria in the spring of 2011, starting with peaceful demonstrations against Bashar al-Assad’s police state. Given Assad’s long and murderous reign, a democratic revolution seemed to offer hope. But the regime immediately responded with maximum lethality, arresting protesters and torturing some to death.
Armed rebel groups began to surface around the country, harassing Assad’s military and claiming control over a belt of provincial cities. Assad has pursued a scorched earth strategy, raining shells, missiles, and bombs on any neighborhood that rises up. Rebel areas have suffered for the better part of a year under constant strafing and sniper fire, without access to water, health care, or electricity. Iran and Russia have kept the military pipeline open, and Assad has a major storehouse of chemical weapons. While some rebel groups have been accused of crimes, the regime is disproportionately responsible for the killing, which earlier this year passed the 70,000 mark by a United Nations estimate that close observers consider an undercount.
As the civil war has hardened into a bloody, damaging standoff, many have called for a military intervention, pressing for the United States to side with one of the moderate rebel factions and do whatever it takes to propel it to victory. Liberal humanitarians focus on the dead and the millions driven from their homes by the fighting, and have urged the United States to join the rebel campaign. The right wants intervention on different grounds, arguing that the regional security implications of a failed Syria are too dangerous to ignore; the country occupies a significant strategic location, and the strongest rebel coalition, the Nusra Front, is an Al Qaeda affiliate. Given all those concerns, both sides suggest that it’s only a question of when, not if, the United States gets drawn in.
“Syria’s current trajectory is toward total state failure and a humanitarian catastrophe that will overwhelm at least two of its neighbors, to say nothing of 22 million Syrians,” said Fred Hof, an ambassador who ran Obama’s Syria policy at the State Department until last year, when he quit the administration and became a leading advocate for intervention. His feelings are widely shared in the foreign policy establishment: Liberals like Princeton’s Anne-Marie Slaughter and conservatives like Fouad Ajami have made the interventionist case, as have State Department officials behind the scenes.
Intervention is always risky, and in Syria it’s riskier than elsewhere. The regime has a powerful military at its disposal and major foreign backers in Russia and Iran. An intervention could dramatically escalate the loss of life and inflame a proxy struggle into a regional conflagration.
And yet there’s a flip side to the risks: The war is also becoming a sinkhole for America’s enemies. Iran and Hezbollah, the region’s most persistent irritants to the United States and Israel, have tied up considerable resources and manpower propping up Assad’s regime and establishing new militias. Russia remains a key guarantor of the government, a stance that is costing it support throughout the rest of the Arab world. Gulf monarchies, which tend to be troublesome American allies, have invested small fortunes on the rebel side, sending weapons and establishing exile political organizations. The more the Syrian war sucks up the attention and resources of its entire neighborhood, the greater America’s relative influence in the Middle East.
If that makes Syria an unattractive target for intervention, so too do the politics and position of the combatants. For now, jihadist groups have established themselves as the most effective rebel fighters—and their distaste for Washington approaches their rage against Assad. Egos have fractured the rebellion, with new leaders emerging and falling every week, leaving no unified government-in-waiting for outsiders to support. The violent regime, meanwhile, is no friend to the West.
“I’ll come out and say it,” wrote the American historian and polemicist Daniel Pipes, in an e-mail. “Western powers should guide the conflict to stalemate by helping whichever side is losing. The danger of evil forces lessens when they make war on each other.”
Pipes is a polarizing figure, best known for his broadsides against Islamists and his critique of US policy toward the Middle East, which he usually says is naive. But in this case he’s voicing a sentiment that several diplomats, policy makers, and foreign policy thinkers have expressed to me in private. Some are career diplomats who follow the Syrian war closely. None wants to see the carnage continue, but one said to me with resignation: “For now, the war is helping America, so there’s no incentive to change policy.”
Analysts who follow the conflict up close almost universally want more involvement because they are maddened by the human toll—but many of them see national interests clearly standing in the way. “Russia gets to feel like it’s standing up to America, and America watches its enemies suffer,” one complained. “They don’t care that the Syrian state is hollowing itself out in ways that will come back to haunt everyone.”
IS IT EVER ACCEPTABLE to encourage a war to continue? In the policy world it’s seen as the grittiest kind of realpolitik, a throwback to the imperial age when competing powers often encouraged distant wars to weaken rivals, or to keep colonized nations compliant. During the Cold War the United States fanned proxy wars from Vietnam to Afghanistan to Angola to Nicaragua but invoked the higher principle of stopping the spread of communism, rather than admitting it was simply trying to wear out the Soviet Union.
In Syria it’s impossible to pretend that the prolonging of the civil war is serving a higher goal, and nobody, not even Pipes, wants the United States to occupy the position of abetting a human-rights catastrophe. But the tradeoffs illustrate why Syria has become such a murky problem to solve. Even in an intervention that is humanitarian rather than primarily self-interested, a country needs to weigh the costs and risks of trying to help against the benefit we might realistically expect to bring—and it’s a difficult decision to get involved when those potential costs include threats to our own political interests.
So just what would be bad enough to induce the United States to intervene? An especially egregious massacre—a present-day Srebrenica or Rwanda—could fan such outrage that the White House changes its position. So could a large-scale violation of the Chemical Weapons Convention—signed by most states in the world, but not Syria. But far more likely is that the war simmers on, ever deadlier, until one side scores a military victory big enough to convince the outside powers to pick a winner. The White House hopes that with time, rebels more to its liking will gain influence and perhaps eclipse the alarming jihadists. That could take years. Many observers fear that Assad will fall and open the way to a five- or ten-year civil war between his successor and a well-armed coalition of Islamist militias, turning Syria into an Afghanistan on the Euphrates. The only thing that seems likely is that whatever comes next will be tragic for the people of Syria.
Because this chilly if practical logic is largely unspoken, the current hands-off policy continues to bewilder many American onlookers. It would be easier to navigate the conversation about intervention if the White House, and the policy community, admitted what observers are starting to describe as the benefits of the war. Only then can we move forward to the real moral and political calculations at stake: for example, whether giving Iran a black eye is worth having a hand in the tally of Syria’s dead and displaced.
For those up close, it’s looking unhappily like a trip to a bygone era. Walid Jumblatt, the Lebanese Druze warlord, spent much of the last two years trying fruitlessly to persuade Washington and Moscow to midwife a political solution. Now he’s given up. Atop the pile of books on his coffee table sits “The Great Game,” a tale of how superpowers coldly schemed for centuries over Central Asia, heedless of the consequences for the region’s citizens. When he looks at Syria he sees a new incarnation of the same contest, where Russia and America both seek what they want at the expense of Syrians caught in the conflict.
“It’s cynical,” he said in a recent interview. “Now we are headed for a long civil war.”
A street poster from Cairo that reads, “My God, my freedom, O my country.” Photo: NEMO.
CAIRO — Every night, Egypt’s current comedic sensation, a doctor hailed as his country’s Jon Stewart, lambastes the nation’s president on TV, mocking his authoritarian dictates and airing montages that reveal apparent lies. On talk shows, opposition politicians hold forth for hours, excoriating government policy and new Islamist president Mohammed Morsi. Protesters use the earthiest of language to compare their political leaders to donkeys, clowns, and worse. Meanwhile, the president’s supporters in the Muslim Brotherhood respond in kind on their new satellite television station and in mass counter-rallies.
Before Egypt’s uprising two years ago, this kind of open debate about the president would have been unthinkable. For nearly three decades, former president Hosni Mubarak exerted near total control over the public sphere. In the twilight of his term, he imprisoned a famous newspaper editor who dared to publish speculation about the ailing president’s declining health. No one else touched the story again.
To Western observers, the freewheeling back-and-forth in Egypt right now might sound like the flowering of a young open society, one of the revolution’s few unalloyed triumphs. But amid the explosion of debate, something less wholesome has begun to arise as well. Though speech is far more open, it now carries a new and different kind of risk, one more unpredictable and sudden. Islamist officials and citizens have begun going after individuals for crimes such as blasphemy and insulting the president, and vaguer charges like sedition and serving foreign interests. The elected Islamist ruling party, the Muslim Brotherhood, pushed a new constitution through Egypt’s constituent assembly in December that expanded the number of possible free speech offenses—including insults to “all prophets.”
Worryingly, a recent report showed that President Morsi—a Brotherhood member, and Egypt’s first-ever genuinely elected, civilian leader—has invoked the law against insulting the presidency far more frequently than any of the dictators who preceded him, and has even directed a full-time prosecutor to summon journalists and others suspected of that crime.
The Muslim Brotherhood, as it rises to power, is playing host to conflicting ideas. It wants the United States to view it as a tolerant modern movement that doesn’t arbitrarily silence critics, but at the same time it needs to show its political base of socially conservative constituents in rural Egypt that it won’t tolerate irreligious speech at home. And it wants to argue that despite its religious pedigree, it is behaving within the constraints of the law.
For the time being, Egypt’s proliferating free expression still outstrips government efforts to shut it down. But as the new open society engenders pushback, what’s happening here is in many ways a test case for Islamist rule over a secular state. What’s at stake is whether Islamists—who are vying for elected power in countries around the Muslim world—really only respect the rules until they have enough clout to ignore them.
The text on this Cairo street poster reads, “As they breathe, they lie.” Photo: NEMO
EGYPTIANS ARE RENOWNED throughout the Arab world for jokes and wordplay, as likely to fall from the mouth of a sweet potato peddler as a society journalist. Much of daily life takes place in the crowded public social spaces where people shop, drink hand-pressed sugarcane juice, loiter with friends, or picnic with their families. But under the stifling police state built by Mubarak, that vitality was undercut by fear of the undercover police and informants who lurked everywhere, declaring themselves at sheesha joints or cafes when the conversation veered toward politics.
As a result, a prudent self-censorship ruled the day. State security officials had desks at all the major newspapers, but top editors usually saved them the trouble, restraining their own reporters in advance. In 2005, when one publisher took the bold step of publishing a judge’s letter critical of the regime, he confiscated the cellphones of all his editors and sequestered them in a conference room so they couldn’t tip off authorities before the paper reached the streets.
It wasn’t technically illegal to be a dissident in Egypt; that the paper could be published at all was testament to the fact that some tolerance existed. Egypt’s system was less draconian and violent than the police states in Syria and Iraq, where dissidents were routinely assassinated and tortured. But the limits of public speech were well understood, and Egyptians who cared to criticize the state carefully stayed on the accepted side of the line. Activists would speak out about electoral fraud by the ministry of the interior or against corruption by businesspeople, for example, but would carefully refrain from criticizing the military or Mubarak’s family. Political life as we understand it barely existed.
Egypt’s uprising marked an abrupt break in this long cultural balancing act. For the first time, millions of Egyptians expressed themselves freely and in public, openly defying the intelligence minions and the guns of the police. It was shocking when people in the streets called directly for the fall of the regime. Within weeks, previously unimaginable acts had become commonplace. Mubarak’s effigy hung in Tahrir Square. Military generals were mocked as corrupt, sadistic toadies in cartoons and banners. Establishment figures called for trials of former officials and limits on renegade security officials.
In the two years since, free speech has spread with dizzying speed—on buses, during marches, around grocery stalls, everywhere that people congregate. Today there are fewer sacred cows, although even at the peak of revolutionary fervor few Egyptians were willing to risk publicly impugning the military, which was imprisoning thousands without any due process. (An elected member of parliament faced charges when he compared the interim military dictator to a donkey.)
Mohammed Morsi was inaugurated in June, after a tight election that pitted him against a former Mubarak crony. Morsi campaigned on a promise to excise the old regime’s ways from the state, and on a grandiose Islamist platform called “The Renaissance.” His regime has fared poorly in its efforts to take control of the police and judiciary. Nor has it made much progress on its sweeping but impractical proposals to end poverty and save the Egyptian economy. It has proven easier to talk about Islamic social issues: allegations of blasphemy by Christians and atheist bloggers; alcohol consumption and the sexual norms of secular Egyptians; and the idea, widely held among Brotherhood supporters, that a godless cabal of old-regime supporters is secretly plotting to seize power.
Before it won the presidency, the Muslim Brotherhood emphasized it had been fairly elected; the party was Islamist, it said, but from the pragmatic, democratic end of the spectrum. But in recent months, there’s been more than a whiff of Big Brother about the Brotherhood. Supposed volunteers attacked demonstrators outside Morsi’s presidential palace—and then were videotaped turning over their victims to Brotherhood operatives. Allegations of torture, illegal detention, and murder by state agents pile up uninvestigated.
As revolutionaries and other critical Egyptians have turned their ire from the old regime to the new, the Brotherhood also has begun targeting political speech. The new constitution, authored by the Brotherhood and forced through Egypt’s constituent assembly in an overnight session over the objections of the secular opposition and even some mainstream religious clerics, criminalized blasphemy and expanded older statutes against insults to leaders, state institutions like the courts, and religious figures. Popular journalists have been threatened with arrest, while less famous individuals, including children improbably accused of desecrating a Koran, have been thrown into detention. Morsi’s presidential advisers regularly contact human rights activists and journalists to challenge their reports, a level of attention and pressure previously unknown here.
In addition to the old legal tools to limit free expression, which are now more heavily used by the Islamists than they were by Mubarak, the new constitution has added criminal penalties for insulting all religions and empowers courts to shut down media outlets that don’t “respect the sanctity of the private lives of citizens and the requirements of national security.”
The Egyptian government began an investigation of TV comedian Bassem Youssef but dropped its charges after a public outcry.
Egyptian human rights monitors have tracked dozens of such cases, including three that were filed by the president’s own legal team. Gamal Eid at the Arab Network for Human Rights Information charted 40 cases that prosecuted political critics for what amounted to dissenting speech in the first 200 days of Morsi’s regime. That’s more, he claims, than during Mubarak’s entire reign, and more charges of insulting the president than had been filed in all the years since 1909, when the law was first written.
IT’S A WELL-KNOWN PRECEPT in politics that times of transition are the most unstable, and that the fight to establish civil liberties carries risks. The current speech crackdown may just be an expected symptom of the shift from an effective authoritarian state to competitive politics. Mubarak, of course, had less need to prosecute a population that mostly kept quiet.
It could also be a sign of desperation on the part of the Brotherhood, as it struggles to rule without buy-in from the police and state bureaucracy. Or it could, more alarmingly, mark a transition to a genuine new era of censorship in the most populous Arab country, this time driven as much by the Islamist cultural agenda as by the quest to keep a grip on power.
It is that last prospect that makes the path Egypt takes so important. By dint of its size and cultural heft, the country remains a major influence across the Arab world, and both in Egypt and elsewhere, the Muslim Brotherhood is at the front lines of political Islam—trying to balance the cultural conservatism of its rank-and-file supporters with the openness the world expects from democratic society.
There are signs that the Brotherhood wants to at least make gestures toward Western norms, though it remains hard to gauge exactly how open an Egypt its members would like to see. At one point the government began an investigation of Bassem Youssef, the Jon Stewart-like TV comedian, but abruptly dropped its charges in January after a public outcry.
During the wave of bad publicity around the investigation, one of President Morsi’s advisers issued a statement claiming that the state would never interfere in free speech—so long as citizens and the press worked to raise their “level of credibility.”
“Human dignity has been a core demand of the revolution and should not be undermined under the guise of ‘free speech,’” presidential adviser Esam El-Haddad said in a statement that placed ominous boundaries on the very idea of free speech that it purported to advance. “Rather, with freedom of speech comes responsibility to fellow citizens.”
What scares many people is how the Islamists define “responsibility.” A widely watched video clip portrays a Salafi cleric lecturing his followers about how Egypt’s new constitution will allow pious Muslims to limit Christian freedoms and silence secular critics (the cleric, Sheikh Yasser Borhami, is from a more fundamentalist current, separate from but allied with the Brotherhood). When critics look at the Brotherhood’s current spate of investigations and threatened prosecutions, they see the political manifestation of the same exclusionary impulse: the polarizing notion that the Islamists’ actions are blessed by God and, by implication, that to criticize them is sacrilege.
Modern Islamism hasn’t reckoned with this implicit conflict yet, even internally. Officially, one current of the Brotherhood’s ideology prioritizes social activism over politics, and eschews coercion in religious matters. But another, perhaps more popular strain in Brotherhood thinking agitates for a religious revolution in people’s daily lives, and that strain appears to be driving the behavior of the Brothers suddenly in charge of the nation. Their fervor is colliding squarely with the secular responsibility of running a state like Egypt, which for all its shortcomings has real institutions, laws, and a civil society that expects modern freedoms and protections. The first stage of Egypt’s transition from military dictatorship has ended, but the great clash between religious and secular politics is just beginning to unfold.
Fotini Christia with a Syrian girl in a camp for internally displaced persons in the village of Atmeh, Syria.
[Originally published in The Boston Globe.]
WHAT IS a civil war, really?
At one level the answer is obvious: an internal fight for control of a nation. But in the bloody conflicts that split modern states, our policy makers often understand something deeper to be at work. The vengeful slaughter that has ripped apart Bosnia, Rwanda, Syria, and Yemen is most often seen as the armed eruption of ancient and complex hatreds. Afghanistan is embroiled in a nearly impenetrable melee between Pashtuns and smaller ethnic groups, according to this thinking; Iraq is split by a long-suppressed Sunni-Shia feud. The coalitions fighting these wars are seen as motivated by the deepest sort of identity politics, ideologies concerned with group survival and the essence of who we are.
This view has long shaped America’s engagement with countries enmeshed in civil war. It is also wrong, argues Fotini Christia, an up-and-coming political scientist at MIT.
In a new book, “Alliance Formation in Civil Wars,” Christia marshals in-depth studies of the recent wars in Afghanistan, Iraq, and Bosnia, along with empirical data from 53 civil conflicts, to show that in one civil war after another, the factions behave less like enraged siblings and more like clinically rational actors, switching sides and making deals in pursuit of power. They might use compelling stories about religion or ethnicity to justify their decisions, but their real motives aren’t all that different from armies squaring off in any other kind of conflict.
How we understand civil wars matters. Most civil wars drag on until they’re resolved by a foreign power, which in this era almost always includes the United States. If she’s right, and we’re mistaken about what motivates the groups fighting in these internecine free-for-alls, then we’re likely to misjudge our inevitable interventions—waiting too long, or guessing wrong about what to do.
CIVIL WARS ALWAYS have loomed large in the collective consciousness. Americans still debate theirs so vociferously that a blockbuster film about Abraham Lincoln feels topical 150 years after his death. Eastern Europe saw several years of ferocious killing in the round of civil wars that followed World War II.
Such wars have been understood as fights over differences that can’t be resolved any other way: fundamental questions of ideology, identity, creed. A disputed border can be redrawn; not so an ethnic grudge. In the last two decades, identity has become the preferred explanation for persistent conflicts around the world, from Chechnya to Armenia and Azerbaijan to cleavages between Muslims and Christians in Nigeria.
This thinking allows for a simple understanding, and conveniently limits the prospect for a solution. Any identity-based cleavage—Jew vs. Muslim, Bosnian vs. Serb, Catholic vs. Orthodox—is so profoundly personal as to be immutable. The conventional wisdom is best exemplified by a seminal 1996 paper by political scientist Chaim Kaufmann, “Possible and Impossible Solutions to Ethnic Civil Wars,” which argues that bitterly opposed populations will only stop fighting when separated from each other, preferably by a major natural barrier like a river or mountain range.
During the 1990s, this sort of ethnic determinism drove American policy toward Bosnia and Rwanda. It was popularized by Robert Kaplan’s book “Balkan Ghosts,” which was read in the Clinton White House and presented the wars in the former Yugoslavia as just the latest chapter in an insoluble, four-century ethnic feud. Like Kaufmann, Kaplan suggested that the grievances in civil wars could only be managed, never reconciled.
After 9/11, policy makers in Washington continued to view civil wars through this prism, talking about tribes and sects and ethnic groups rather than minority rights, systems of government, and resource-sharing. That view was so dominant that President Bush’s team insisted on designing Iraq’s first post-Saddam governing council with seats designated by sect and ethnicity, against the advice of Iraqis and foreign experts. It became a self-fulfilling prophecy as Iraq’s ethnic civil war peaked in 2006; things settled down only after death squads had cleansed most of Iraq’s mixed neighborhoods, turning the country into a patchwork of ethnically homogenous enclaves. Similarly, this thinking has shaped US policy in Afghanistan, where the military even sent anthropologists to help its troops understand the local culture that was considered the driving factor in the conflict.
Christia grew up in the northern Greek city of Salonica in the 1990s, with the Bosnian war raging just over the border. “It was in our neighborhood and we discussed it vividly every night over dinner,” she says. The question of ethnicity seized her imagination: Were different peoples doomed to conflict by incompatible identities? Or were the decision-makers in civil wars working on a different calculus from their emotional followers? As a graduate student at Harvard, Christia flew to Afghanistan and tried to turn a dispassionate political scientist’s eye to the question of why warlords behave the way they do.
Christia spent years studying these warlords, the factional leaders in a civil war that broke out in the late 1970s. As a graduate student and later as a professor, she returned to Afghanistan to interview some of the nastiest war criminals in the country. She concluded that culture and identity, while important for their adherents, did not seem to factor into the motives of the warlords themselves, and specifically not in their choices of wartime allies. Despite the powerful rhetoric about ethnic alliances forged in blood, warlords repeatedly flipped and switched sides. They used the same language—about tribe, religion, or ethnicity—whether they were fighting yesterday’s foe or joining him.
If ethnicity, religion, and other markers of identity didn’t matter to warlords, Christia asked, what did? It turns out the answer was simple: power. After studying the cases of Afghanistan, Bosnia, and Iraq in intricate detail, Christia built a database of 53 conflicts to test whether her theory applied more widely. She ran regression analyses and showed that it did: Warlords adjusted their loyalties opportunistically, always angling for the best slice of the future government. It’s not quite as simple as siding with the presumed winner, she says: It’s picking the weakest likely winner, and therefore the one most likely to share power with an ally.
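Christia’s “weakest likely winner” idea can be restated as a simple rule of thumb. The sketch below is purely illustrative, not drawn from her data or her statistical model: the faction names, power scores, and victory threshold are invented. The logic is that a faction allies with the weakest partner that still puts the coalition over the winning line, because a barely-winning partner has the most reason to share the spoils.

```python
# Toy illustration of the "weakest likely winner" alliance logic:
# join the coalition that clears the victory threshold by the
# smallest margin, since a barely-winning partner must share
# more of the postwar government.

def best_ally(my_power, factions, threshold):
    """Return the name of the weakest faction whose combined power
    with ours still reaches the victory threshold, or None."""
    viable = [(name, power) for name, power in factions.items()
              if my_power + power >= threshold]
    if not viable:
        return None
    # Weakest viable partner = smallest winning coalition.
    return min(viable, key=lambda f: f[1])[0]

# Hypothetical factions and power scores (invented for illustration).
factions = {"A": 50, "B": 35, "C": 20}
print(best_ally(10, factions, threshold=45))  # picks "B": wins, but barely
```

Under this toy rule, a small faction passes over the strongest possible patron (“A”) in favor of the weakest coalition that can still win, which is the pattern of opportunistic side-switching Christia documents.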
In this model of warlord behavior, the many factions in a civil war are less like Cain and Abel and more like the mafia families in “The Godfather” trilogy. Loyalties follow business interests, and business interests change; meanwhile, the talk about family and blood keeps the foot soldiers motivated. In Bosnia, one Muslim warlord joined forces with the Serbs after the Serbs’ horrific massacre of Muslims at Srebrenica, and justified his switch by saying that the central government in Sarajevo was run by fanatics while he represented the true, moderate Islam. In case after case of intractable civil wars—Afghanistan, Lebanon, Iraq, the former Yugoslavia—Christia found similar patterns of fluid alliances.
“The elites make the decision, and then sell it to the people who follow them with whatever narrative sticks,” Christia said. “We’re both Christians? Or we’re both minorities? Or we’re both anti-communist? Whatever sticks.”
CHRISTIA’S WORK has been received with great interest, though not all her academic colleagues agree with her conclusions. Critics say identity is more important in civil wars than she gives it credit for, and we ignore it at our peril. Roger Petersen, an expert on ethnic war and Eastern Europe who is a colleague of Christia’s at MIT and supervised her dissertation, argues that in some conflicts, identity—ethnic, religious, or ideological—is truly the most important factor. Leaders might make a pact with the devil to survive, but once a conflict heads to its conclusion, irreconcilable conflicts often end with a fight to the death. Communists and nationalists fought for total victory in Eastern Europe’s civil wars, with no regard to their fleeting coalitions of opportunity against foreign occupiers during World War II. More recently, Bosnia’s war only ended after the country had split into ethnically cleansed cantons.
Christia acknowledges that her theory needs further testing to see if it applies in every case. She is currently studying how identity politics play out at the most local level in present-day Syria and Yemen.
If it holds up, though, Christia’s research has direct bearing on how we ought to view the conflict today in a nation like Syria. The teetering dictatorship is the stronghold of the minority Alawite sect in a Sunni-majority nation. Its leader, Bashar Assad, has rallied his constituents on sectarian grounds, saying his regime offers the only protection for Syria’s minorities against an increasingly Sunni uprising. But Syria’s rebellion comprises dozens of armed factions, and Christia suggests that these factions, which run the gamut of ethnic and sectarian communities, will be swayed more by the prospect of power in a post-Assad Syria than by ethnic loyalty. That would mean the United States could win the loyalty of different fighting factions by ignoring who they are—Sunni, Kurd, secular, Armenian, Alawite—and by focusing instead on their willingness to side with America or international forces in exchange for guns, money, or promises of future political power.
For America, civil wars elsewhere in the world might seem like somebody else’s problem. But in reality we’re very likely to end up playing a role: Most civil wars don’t end without foreign intervention, and America is the lone global superpower, with huge sway at the United Nations. Christia suggests that Washington would do well to acknowledge early on that it will end up intervening in some form in any civil war that threatens a strategic interest. That doesn’t necessarily mean boots on the ground, but it means active funding of factions and shaping of the alliances that are doing the fighting. In a war like Syria’s, that means the United States has wasted precious time on the sidelines.
Despite her sustained look at the worst of human conflict, Christia says she considers herself an optimist: People spend most of their history peacefully coexisting with different groups, and only a tiny portion of the time fighting. And once civil wars do break out, the empirical evidence shows that hatreds aren’t eternal. “If identities mattered so much,” she says, “you wouldn’t see so much shifting around.”
[Originally published in The Boston Globe Ideas.]
ROCKETS AND MORTARS have stopped flying over the border between Gaza and Israel, a temporary lull in one of the most intractable, hot-and-cold wars of our time. The hostilities of late November ended after negotiators for Hamas and Israel—who refused to talk face-to-face, preferring to send messages via Egyptian diplomats—agreed to a rudimentary cease-fire. Their tenuous accord has no enforcement mechanism and doesn’t even nod to discussing the festering problems that underlie the most recent crisis. Both sides say they expect another conflict; experience suggests it’s just a question of when.
Generations of negotiators have cut their teeth trying to forge a peace agreement between Israel and the Palestinians, and their failures are as varied as they are numerous: Camp David, Madrid, the Oslo Accords, Wye River, Taba, the Road Map. For diplomats and deal-makers around the world—even those with no particular stake in Middle East peace—Israel and Palestine have become the ultimate test of international negotiations.
For Guy Olivier Faure, a French sociologist who has dedicated his career to figuring out how to solve intractable international problems, they’re something else as well: an almost unparalleled trove of insights into how negotiations can go wrong.
For more than 20 years, Faure has studied not only what makes negotiations around the world succeed, but how they break down. From Israel and the Palestinians to the Biological Weapons Convention protocol to the ongoing talks about Iran’s nuclear program, it’s far more common for negotiations to fail than to work out. And it’s from these failures, Faure says, that we can harvest a more pragmatic idea of what we should be doing instead. “In order to not endlessly repeat the same mistakes, it is essential to understand their causes,” he says.
TODAY’S INTERNATIONAL order turns on successful negotiation. When we think about what’s right in the world, we’re often thinking about the results of agreements like the START treaties, which ended the nuclear arms race between the United States and the Soviet Union; the Geneva Conventions, which govern the conduct of war; or even the General Agreement on Tariffs and Trade, drafted in 1948, which still underpins globalized free trade.
But in negotiations over the most vexing international problems—a hostage situation, a war between a central government and terrorist insurgents, a new multinational agreement—such successes are few and far between. Failure is the norm. Understandably, experts tend to focus on the wins. From US presidents to obscure third-party diplomats, negotiators pore over rare historical successes for tips rather than face the copious and dreary overall record.
Faure wants to change that focus. As an expert he straddles two worlds: He studies diplomacy academically as a sociologist at the Sorbonne, in Paris, and has also trained actual negotiators for decades, at the European Union, the World Trade Organization, and UN agencies. Over his career, he has produced 15 books spanning all the different theories behind negotiation, and ultimately concluded that negotiations that failed, or simply sputtered out inconclusively, were the most interesting. Each failure had multiple causes, but it was possible to compile a comprehensive list, and from that, consistent patterns.
Faure’s book “Unfinished Business” takes a look at what happened during a number of high-profile failures, and examines the underlying conditions of each set of talks: trust, cultural differences, psychology, the role of intermediaries, and outsiders who can derail negotiations or overload them with extraneous demands.
One of Faure’s insights concerns the mindset of negotiators—a factor negotiators themselves often believe is irrelevant, but which Faure and his colleagues believe can often determine the outcome. Incompatible values on the two sides of the table, he says, are much harder to bridge than practical differences, like an argument over a boundary or the mechanics of a cease-fire. As Faure says, “A quantity can be split, but not a value.” This is what Faure saw at work when the Palestinians and Israelis embarked on a rushed negotiation at the Egyptian seaside resort of Taba during Bill Clinton’s final month in office. The two sides had already reached an impasse at a lengthier negotiation in 2000 at Camp David. With the end of his presidency looming and Israeli elections coming up, Clinton summoned them back to the table for a no-nonsense session he hoped would bring speedy closure to disputes over borders, Jerusalem, and refugees. The Palestinians, however, felt that the two sides simply didn’t share the same view of justice and weren’t truly aiming at the same goal of two sovereign states—and so didn’t feel driven to make a deal. That mismatch of long-term beliefs, Faure says, doomed the talks.
There are other warning signs that emerge as patterns in failed talks. Time and again, parties embark on tough negotiations already convinced they will fail—a defeatism that becomes a self-fulfilling prophecy. In interviewing professional negotiators, Faure and his colleagues found that they often don’t pay that much attention to the practical aspects of how to run a negotiation—a surprising lapse.
Faure and his team have found that a well-planned process is one of the best predictors of success, and that many negotiating teams are terrible at it. When the European Union and the United States talked to Iran about its nuclear program, various European countries kept adding extraneous issues to the talks, for instance linking Iran’s behavior with nukes to existing trade agreements. The additions made the negotiations unwieldy, and provoked crises over matters peripheral to the actual subject. In the case of the mediation over Cyprus, the Greek and Turkish sides didn’t bother coming up with any tangible proposed solution to negotiate over, instead talking vaguely about a Swiss model. Negotiations failed in part because neither side knew what that would mean for Cyprus.
Ultimately, Faure argues, mistrust and inflexibility tangle up negotiators more than any other factor. Negotiators often end up demonizing the other side, and as a result might embark on a process that by its structure encourages failure. For instance, Israeli and Palestinian reliance on mediators to ferry messages—even between delegations in the same resort—maximizes misunderstandings and minimizes the possibility that either side will sense a genuine opening.
WHAT EMERGES FROM Faure’s work, overall, is that the outlook for negotiations is usually pretty bleak—certainly bleaker than Faure himself prefers to highlight. In some cases, he suggests that diplomats should put off an outright negotiation until they’ve dealt with gaps in trust and cultural communication, or until the conflict feels “ripe” for solution to the parties involved. There’s no point, he suggests, in embarking on a negotiation if all the stakeholders are convinced it’s a waste of time—indeed, a failed negotiation can sometimes exacerbate a problem.
The most promising scenarios occur when both sides are suffering under the status quo, which creates what social scientists call a “mutually hurting stalemate,” with soldiers or civilians dying on both sides, and a “mutually enticing opportunity” if there’s a peace agreement or a prisoner swap. In that case, a decent deal will give both sides a chance to genuinely improve their lot.
Unfortunately for the many whose hopes are riding on negotiations, the truly challenging international problems of our age don’t always come with a strong incentive to compromise. In military conflicts, there is little incentive to resolve matters when a conflict is lopsided in one side’s favor (Shia versus Sunni in Iraq, Israel versus Hamas, the Taliban versus the United States in Afghanistan). The same holds in broader international agreements: They’re complicated and intractable largely because the states involved are—no matter what they say—quite comfortable with the status quo. Think about climate change: The biggest gas-guzzlers and polluters, the ones whose assent matters the most for a carbon-reduction treaty, are often the last states that will pay the price for rising oceans. Meanwhile, the poorer nations whose populations are most at the mercy of sea levels or changing weather have little clout. Just as it’s easy for a relatively secure Israel to stand pat on the Palestinian question, there are few immediate consequences for the United States and China if they sit out climate talks.
It’s not all bleak news. Even in cases where negotiations appear hamstrung—like climate change and Palestine—there are, Faure points out, plenty of other reasons to continue negotiating. Negotiations are a form of diplomacy, dialogue, and recognition, and even in failure can serve some other interests of the parties involved. But—as the impressive historical record of failed international agreements shows—it’s naive to think that they will always yield a solution.
[Originally published in The Boston Globe.]
During the presidential campaign, two issues often seemed like the only foreign policy topics in the entire world: the Middle East and China. Those are unquestionably important: The wider Middle East contains most of the world’s oil and, currently, much of its conflict; and China is the world’s manufacturing base and America’s primary lender. But there are a host of other issues that are going to demand Washington’s sustained attention over the next four years, and don’t occupy anywhere near the same amount of Americans’ attention.
You could call them the icebergs, largely hidden challenges that lie in wait for the second Obama administration. Like all of us, when it comes to priorities, the people in Washington assume that the thing that comes to mind first must be the most important. The recent crises or tensions with Afghanistan, Benghazi, and China make these feel like the whole story. But in fact they are really just a few chapters, and the ones we’re ignoring completely may actually have the most surprises in store.
If the administration wants to stay ahead of the game, here’s what it will need to spend more of its time and energy dealing with in the coming four years.
Europe’s recovery needs to be managed, and that requires global cooperation and money.
Washington and China, along with the International Monetary Fund and the World Bank, will have to be closely involved, and that won’t happen without American leadership. Though the European crisis has already been a front-burner problem for two years, in the United States it barely cracks the public agenda except as a rhetorical bludgeon: “That guy wants to turn America into Greece!” But Europe’s importance to the global economy, and to America, is staggering: It’s the world’s largest economic bloc, worth $17 trillion, and it’s the US’s largest trading partner. If Europe goes down, we all go down.
Climate change. No politician likes to talk about climate change. It’s depressing news. It’s become highly partisan in this country, and it has no obvious solution even for those who understand the threat. It requires discussion of all kinds of hugely complex, dull-sounding science. When we do talk about it as a political issue, it’s largely as a domestic one: saving energy, dealing with the increasing fury and frequency of storms like Hurricane Sandy, investing in new infrastructure.
In fact, climate change is a massive foreign policy issue as well. On the preventive side, any emissions reduction requires cooperation across borders—between small numbers of powerful nations, like America and China, along with massive worldwide accords like the failed Kyoto Protocol. The responses will often need to be global as well. Rising oceans and temperatures have no regard for national boundaries, and most of the world’s population lives near soon-to-be-vulnerable coastlines. Entire cities might have to move, or be rebuilt, often across borders. Sandy could cost the American Northeast close to $100 billion when all is said and done (current damage estimates already top $50 billion). Imagine the price of climate-proofing the cities where most of the world lives—Mumbai, Shanghai, Lagos, Alexandria, and so on. Climate change, if unaddressed, could well become an American security issue, propelling unrest and failed states that will spur threats against the US.
Pakistan. Like our tendency to obsess over shark attacks rather than, say, the more significant risk of getting hit by a car, we often find our foreign policy elite preoccupied with rare, dramatic potential threats rather than banal, actual ones. You’ll keep hearing about Iran, which might one day have a bomb and which emits noxious rhetoric while supporting well-documented militant groups like Hezbollah. What we really need to hear more and do more about, however, is a regional power that already has nukes (90 to 120 warheads), that is reportedly planning for battlefield bombs that are easier to misplace or steal, and that sponsors rogue terrorist groups that have been regularly killing people in Afghanistan and India for years.
That country is Pakistan. Power there is split among an unstable cast of characters: a dictatorial military, super-empowered Islamic fundamentalists, and a corrupt civilian elite. A significant portion of its huge population has been radicalized, and can easily flit across borders with Iran, Afghanistan, and India. Pakistan isn’t a potential problem; it’s a huge actual problem, a driver of war in Afghanistan, a sponsor of killers of Americans, and perennially, the only actor in the world that actively poses the threat of nuclear war. (The hot war between India and Pakistan in Kargil in 1999 was the first active conflict between two nuclear powers. It’s not talked about much, but remains a genuine nightmare scenario.) Pakistan is also a huge recipient of American aid. We need to find leverage and work to contain, restrain, and stabilize Pakistan.
Transnational crime and drugs. When it comes to violence in the world, foreign-policy thinkers tend to think first about wars, militaries, and diplomacy. But to save money and lives, it would be smarter to think about drugs. In much of the world, the resources spent and lives lost to criminal syndicates in the drug war rival the costs of traditional conflict. Narco-states in the Andes and, increasingly, Central America, make life miserable for their own inhabitants. Criminal off-the-books profits symbiotically feed international crime and terrorism. And in every region of the world, drugs provide the economic engine and financing for militias and terrorist groups; they fuel innumerable security problems, such as human trafficking, illicit weapons sales, piracy, and smuggling. Ultimately, wherever the drug business flourishes, it tends to corrode state authority, leaving vast ungoverned swaths of territory and promoting political violence and weak policing.
The United States pays a lot of attention to this problem in Afghanistan and Mexico, but it’s a drain on resources in corners of the globe that get less attention, from Southeast Asia to Africa. Washington needs to approach the international illegal drug trade like the globalized, multifaceted problem that it is, requiring international law enforcement cooperation but also smart economic solutions to change the market, including legalization.
Mexico. It feels almost painfully obvious, but it’s been a long time since a US president has prioritized our next-door neighbor. Our economies are inextricably linked. America’s supposed problem with illegal immigration is actually the organic development of a fluid shared labor market across the US-Mexico border. Meanwhile, the distant war in Afghanistan eats up an enormous amount of resources while another conflict races on next door: Mexico’s increasingly violent drug war. Since 2006, it has claimed 50,000 lives, and the violence regularly spills over the border. Washington has collaborated piecemeal with Mexico’s government, but this is a regional conflict, involving criminal syndicates indifferent to jurisdiction. The United States needs to persuade Mexico to pursue a less violent, more sustainable strategy to counter the drug gangs, and then partner with the government there wholeheartedly.
The dangerous Internet. Cyber security might sound like a boondoggle for defense contractors looking for more money to spend on a ginned-up threat. Yet in the last year we’ve seen the real-world consequences of cyber attacks on Iran’s nuclear program, apparently orchestrated by the United States and Israel, and an effective cyber response apparently by Iran that hobbled Saudi Arabia’s oil industry. Harvard’s Joseph Nye points out that cyber espionage and crime already pose serious transnational threats, and recent developments show how war and terrorism will spill into our online networks, potentially threatening everything from our power supply to our personal data.
The US budget. Elementary economics usually begins with the discussion of guns vs. butter: You can’t pay for everything given limited resources, so do you eat or defend yourself? For generations, America has had the luxury of not really having to choose: The economy has mostly boomed since World War II, meaning we never had to cut anything fundamentally important. But America now faces a contracting global economy and a world in which it increasingly has to share resources with other rising powers. This is unfamiliar, and unhappy, territory: America’s next defense and foreign affairs budgets will probably be the first since the Second World War to require serious downsizing at a time when there are actual credible threats to the United States.
The Americans who reelected President Obama didn’t care that much about his foreign policy, according to polls. And, perhaps fittingly, Obama dealt with the rest of world during his first term with competence and caution rather than with flair and executive drive. His impressive focus on Al Qaeda hasn’t been mirrored so far in the rest of his national security policy, made by a team better known for its meetings than for setting clear priorities.
In the wake of a decisive reelection, Obama will have the political latitude to shape a more creative and forward-thinking foreign policy in his second term. If he does, he’ll have to work around both deeply divided legislators and a constrained budget: We simply can’t pay for everything, from land wars to cyber threats to sea walls to protected American industries. The priorities the next administration chooses—and its ability to pass any budget—will dramatically shape the kind of foreign influence America wields over the next four years.
Cops say they figure out a suspect’s intentions by watching his hands, not by listening to what comes out of his mouth. The same goes for American foreign policy. Whatever Washington may be saying about its global priorities, America’s hands tend to be occupied in the Middle East, site of all America’s major wars since Vietnam and the target of most of its foreign aid and diplomatic energy.
How to handle the Middle East has become a major point in the presidential campaign, with President Obama arguing for flexibility, patience, and a long menu of options, and challenger Mitt Romney promising a tougher, more consistent approach backed by open-ended military force.
Lurking behind the debate over tactics and approach, however, is a challenge rarely mentioned. The broad strategy that underlies American policy in the region, the Carter Doctrine, is now more than 30 years old, and in dire need of an overhaul. Issued in 1980 and expanded by presidents from both parties, the Carter Doctrine now drives American engagement in a Middle East that looks far different from the region for which it was invented.
President Jimmy Carter confronted another time of great turmoil in the region. The US-supported Shah had fallen in Iran, the Soviets had invaded Afghanistan, and anti-Americanism was flaring, with US embassies attacked and burned. His new doctrine declared a fundamental shift. Because of the importance of oil, security in the Persian Gulf would henceforth be considered a fundamental American interest. The United States committed itself to using any means, including military force, to prevent other powers from establishing hegemony over the Gulf. In the same way that the Truman Doctrine and NATO bound America’s security to Europe’s after World War II, the Carter Doctrine elevated a crowded and contested Middle Eastern shipping lane to nearly the same status as American territory.
In 2012, we look back on a decade of American engagement with the Middle East at a level never seen before. Even the failures have been failures from which we can learn. The decade that began with the US invasion of Afghanistan and ended with a civil war in Syria holds some transformative lessons, ones that could point the next president toward a new strategy far better suited to what the modern Middle East actually looks like—and to America’s own values.
President Carter issued his new doctrine in what would turn out to be his final State of the Union speech in January 1980. America had been shaken by the oil shocks of the 1970s, in which the Arab-dominated OPEC asserted its control, and also by the fall of the tyrannical Mohammad Reza Pahlavi, Shah of Iran, who had been a stalwart security partner to the United States and Israel.
Nearly everyone in America and most Western economies shared Carter’s immediate goal of protecting the free flow of oil. What was significant was the path he chose to accomplish it. Carter asserted that the United States would take direct charge of security in this turbulent part of the world, rather than take the more indirect, diplomatic approach of balancing regional powers against each other and intervening through proxies and allies. It was the doctrine of a micromanager looking to prevent the next crisis.
Carter’s focus on oil unquestionably made sense, and the doctrine proved effective in the short term. Despite more war and instability in the Middle East, America was insulated from oil shocks and able to begin a long period of economic growth, in part predicated on cheap petrochemicals. But in declaring the Gulf region an American priority, the doctrine effectively tied us to a single patch of real estate, a shallow waterway the same size as Oregon, even when it was tangential, or at times inimical, to our greater goal of energy security. The result has been an ever-increasing American investment in the security architecture of the Persian Gulf, from putting US flags on foreign tankers during the Iran-Iraq war in the 1980s, to assembling a huge network of bases after Operation Desert Storm in 1991, to the outright regime-building effort of the Iraq War.
In theory, however, none of this is necessary. America doesn’t really need to worry about who controls the Gulf, so long as there’s no threat to the oil supply. What it does need is to maintain relations in the region that are friendly, or friendly enough, and able to survive democratic changes in regime—and to prevent any other power from monopolizing the region.
The Carter Doctrine, and the policies that have grown up to enforce it, are based on a set of assumptions about American power that might never have been wholly accurate. They assume America has relatively little persuasive influence in the region, but a great deal of effective police power: the ability to control major events like regional wars by supporting one side or even intervening directly, and to prevent or trigger regime change.
Our more recent experience in the Middle East has taught us the opposite lesson. It has become painfully clear over the last 10 years that America has little ability to control transformative events or to order governments around. Over the past decade, when America has made demands, governments have resolutely not listened. Israel kept building settlements. Saudi Arabia kept funding jihadis and religious extremists. Despots in Egypt, Syria, Tunisia, and Libya resisted any meaningful reform. Even in Iraq, where America physically toppled one regime and installed another, a costly occupation wasn’t enough to create the Iraqi government that Washington wanted. The long-term outcome was frustratingly beyond America’s control.
When it comes to requests, however, especially those linked to enticements, the recent past has more encouraging lessons. Analysts often focus on the failings of George W. Bush’s “freedom agenda” period in the Middle East; democracy didn’t break out, but the evidence shows that no matter how reluctantly, regional leaders felt compelled to respond to sustained diplomatic requests, in public and private, to open up political systems. It wasn’t just the threat of a big stick: Egypt and Israel weren’t afraid of an Iraq-style American invasion, yet they acceded to diplomatic pressure from the secretary of state to liberalize their political spheres. Egypt loosened its control over the opposition in 2005 and 2006 votes, while Israel let Hamas run in (and win) the 2006 Palestinian Authority elections. Even prickly Gulf potentates gave dollops of power to elected parliaments. It wasn’t all that America asked, but it was significant.
Paradoxically, by treating the Persian Gulf as an extension of American territory, Washington has reduced itself from global superpower to another neighborhood power, one that can be ignored, or rebuffed, or hectored from across the border. The more we are committed to the Carter Doctrine approach, which makes the military our central tool and physical control of the Gulf waters our top priority, the less we are able to shape events.
The past decade, meanwhile, suggests that soft power affords us some potent levers. The first is money. None of the Middle Eastern countries have sustainable economies; most don’t even have functional ones. The oil states are cash-rich but by no means self-sufficient. They’re dependent on outside expertise to make their countries work, and on foreign markets to sell their oil. Even Israel, which has a real and diverse economy, depends on America’s largesse to undergird its military. That economic power gives America lots of cards to play.
The second is defense. The majority of the Arab world, plus Israel, depends on the American military to provide security. In some cases the protection is literal, as in Bahrain, Qatar, and Kuwait, where US installations project power; elsewhere, as in Saudi Arabia, Egypt, and Jordan, it’s indirect but crucial. (American contractors, for instance, maintain Saudi Arabia’s air force.) America’s military commitments in the Middle East aren’t something it can take or leave as it suits; it’s a marriage, not a dalliance. A savvier diplomatic approach would remind beneficiaries that they can’t take it for granted, and that they need to respond to the nation that provides it.
The Carter Doctrine clearly hasn’t worked out as intended; America is more entangled than ever before, while its stated aims—a secure and stable Persian Gulf, free from any outside control but our own—seem increasingly out of reach. A growing, bipartisan tide of policy intellectuals has grappled with the question of what should replace it, especially given our recent experience.
One response has been to seek a more morally consistent strategy, one that seeks to encourage a better-governed Middle East. This idea has percolated on the left and the right. Alumni of Bush’s neoconservative foreign-policy brain trust, including Elliott Abrams, have argued that a consistent pro-democratic agenda would better serve US interests, creating a more stable region that is less prone to disruptions in the oil supply. Voices on the left have made a similar argument since the Arab uprisings; they include humanitarian interventionists like Anne-Marie Slaughter at Princeton, who argue for stronger American intervention in support of Syria’s rebels. Liberal fans of development and political freedoms have called for a “prosperity agenda,” arguing that societies with civil liberties and equitably distributed economic growth are not only better for their own citizens but make better American allies.
Then there’s a school that says the failures of the last decade prove that America should keep out of the Middle East almost entirely. Things turn out just as badly when we intervene, these critics argue, and it costs us more; oil will reach markets no matter how messy the region gets. This school includes small-footprint realists like Stephen Walt at Harvard and pugilistic anti-imperial conservatives like Andrew Bacevich at Boston University. (Bacevich argues that the more the US intervenes with military power to create stability in the oil-producing Middle East, the more instability it produces.)
While the realists think we should disentangle from the region because the US can exert strategic power from afar, others say we should pull back for moral reasons as well. That’s the argument made over the last year by Toby Craig Jones, a political scientist at Rutgers University who says that the US Navy should dissolve its Fifth Fleet base so it can cut ties with the troublesome and oppressive regime in Bahrain. America’s military might guarantees that no power—not Iran, not Iraq, not the Russians—can sweep in and take control of the world’s oil supply. Therefore, the argument goes, there’s no need for America to attend to every turn of the screw in the region.
What’s clear, from any of these perspectives, is that the Carter Doctrine is a blunt tool from a different time. It’s now possible, even preferable, to craft a policy more in keeping with the modern Middle East, and also more in line with American values. It might sound obvious to say that Washington should be pushing for a liberalized, economically self-sufficient, stable, but democratic Middle East, and that there are better tools than military power to reach those aims. In fact, that would mark a radical change for the nation—and it’s a course that the next president may well find within his power to plot.
[Originally published in The Boston Globe Ideas section.]
The case that a political term has outlived its usefulness
To watch the Arab world’s political transformation over the past year has been, in part, to track the inexorable rise of Islamism. Islamist groups—that is, parties favoring a more religious society—are dominating elections. Secular politicians and thinkers in the Arab world complain about the “Islamicization” of public life; scholars study the sociology of Islamist movements, while theologians pick apart the ideological dimensions of Islamism. This March, the US Institute for Peace published a collection of essays surveying the recent changes in the Arab world, entitled “The Islamists Are Coming: Who They Really Are.”
From all this, you might assume that “Islamism” is the most important term to understand in world politics right now. In fact, the Islamist ascendancy is making it increasingly meaningless.
In Tunisia, Libya, and Egypt, the most important factions are led overwhelmingly by religious politicians—all of them “Islamist” in the conventional sense, and many in sharp disagreement with one another over the most basic practical questions of how to govern. Explicitly secular groups are an exception, and where they have any traction at all they represent a fragmented minority. As electoral democracy makes its impact felt on the Arab world for the first time in history, it is becoming clear that it is the Islamist parties that are charting the future course of the Arab world.
As they do, “Islamist” is quickly becoming a term as broadly applicable—and as useless—as “Judeo-Christian” in American and European politics. If important distinctions are emerging within Islamism, that suggests that the lifespan of “Islamist” as a useful term is almost at an end—that we’ve reached the moment when it’s time to craft a new language to talk about Arab politics, one that looks beyond “Islamist” to the meaningful differences among groups that would once have been lumped together under that banner.
Some thinkers already are looking for new terms that offer a more sophisticated way to talk about the changes set in motion by the Arab Spring. At stake is more than a label; it’s a better understanding of the political order emerging not just in the Middle East, but around the world.
THE TERM “ISLAMIST” came into common use in the 1980s to describe all those forces pushing societies in the Islamic world to be more religious. It was deployed by outsiders (and often by political rivals) to describe the revival of faith that flowered after the Arab world’s defeat in the 1967 war with Israel and subsequent reflective inward turn. Islamist preachers called for a renewal of piety and religious study; Islamist social service groups filled the gaps left by inept governments, organizing health care, education, and food rations for the poor. In the political realm, “Islamist” applied to both Egypt’s Muslim Brotherhood, which disavowed violence in its pursuit of a wealthier and more powerful Islamic middle class, and radical underground cells that were precursors to Al Qaeda.
What they had in common was that they saw a more religious leadership, and more explicitly Islamic society, as the antidote to the oppressive rule of secular strongmen such as Hafez al-Assad, Hosni Mubarak, and Saddam Hussein.
Over the years, the term “Islamist” continued to be a useful catchall to describe the range of groups that embraced religion as a source of political authority. So long as the Islamist camp was out of power, the one-size-fits-all nature of the term seemed of secondary importance.
But in today’s ferment, such a broad term is no longer so useful. Elections have shown that broad electoral majorities support Islamism in one flavor or another. The most critical matters in the Arab world—such as the design of new constitutional orders in Egypt, Tunisia, and Libya—are now being hashed out among groups with competing interpretations of political Islam. In Egypt, the non-Islamist political forces are so shy about their desire to separate mosque from government that many eschew the term “secular,” requesting instead a “civil” state.
In Tunisia’s elections last fall, the Islamist Ennahda Party—an offshoot of the Muslim Brotherhood—swept to victory, but is having trouble dealing with its more doctrinaire Islamist allies to the right. In Libya, virtually every politician is a socially conservative Muslim. The country’s recent elections were won by a party whose leaders believe in Islamic law as a main reference point for legislation and support polygamy as prescribed by Islamic sharia law, but who also believe in a secular state—unlike their more Islamist rivals, who would like a direct application of sharia in drafting a new constitutional framework.
In Egypt, the two best-organized political groups since the fall of Mubarak have been the Muslim Brotherhood and the Salafi Noor Party—both “Islamist” in the broad sense, but dramatically different in nearly all practical respects. The Brotherhood has been around for 84 years, with a bourgeois leadership that supports liberal economics and preaches a gospel of success and education. The rival Salafi Noor Party, on the other hand, includes leaders who support a Saudi-style extremist view of Islam that holds the religious should live as much as possible in a pre-modern lifestyle, and that non-Muslims should live under a special Islamic dispensation for minorities. A third Islamist wing in Egypt includes the jihadists—the organization that assassinated President Anwar Sadat in 1981, which has officially renounced violence and has surfaced as a political party. (Its main agenda item is to advocate the release of “the blind sheikh,” Omar Abdel-Rahman, imprisoned in the United States as the mastermind of the 1993 World Trade Center bombing.)
“ISLAMIST” MIGHT BE an accurate label for all these parties, but as a way to understand the real distinctions among them it’s becoming more a hindrance than a help. A useful new terminology will need to capture the fracture lines and substantive differences among Islamic ideologies.
In Egypt, for example, both the Muslim Brotherhood and the Salafis believe in the ultimate goal of a perfect society with full implementation of Islamic sharia. Yet most Brothers say that’s an abstract and unattainable aim, and in practice are willing to ignore many provisions of Islamic law—like those that would limit modern finance, or those that would outright ban alcohol—in the interest of prosperity and societal peace. The Salafis, by contrast, would shut down Egypt’s liquor industry and mixed-gender beaches, regardless of the consequences for tourism or the country’s Christian minority.
There’s a cleavage between Islamists who still believe in a secular definition of citizenship that doesn’t distinguish between Muslims and non-Muslims, and those who believe that citizenship should be defined by Islamic law, which in effect privileges Muslims. (Under Saudi Arabia’s strict brand of Islamist government, the practice of Christianity and Shiite Islam is actually illegal.) And there’s the matter of who would interpret religious law: Is it a personal matter, with each Muslim free to choose which cleric’s rulings to follow? Or should citizens be legally required to defer to doctrinaire Salafi clerics?
Many thinkers are trying to craft a new language for the emerging distinctions within Islamism. Issandr El Amrani, who edits The Arabist blog and has just started a new column for the news site Al-Monitor about Islamists in power, suggests we use the names of the organizations themselves to distinguish the competing trends: Ikhwani Islamists for the establishment Muslim Brothers and organizations that share its traditions and philosophy; Salafi Islamists for Salafis, whose name means “the predecessors” and refers to following in the path of the Prophet Mohammed’s original companions; and Wasati Islamists for the pluralistic democrats that broke away from the Brotherhood to form centrist parties in Egypt.
Gilles Kepel, the French political scientist who helped popularize the term “Islamist” in his writings on the Islamic revival in the 1980s, grew dissatisfied with its limits the more he learned about the diversity within the Islamist space. By the 1990s, he shifted to the more academic term “re-Islamification movements.” Today he suggests that it’s more helpful to look at the Islamist spectrum as coalescing around competing poles of “jihad,” those who seek to forcibly change the system and condemn those who don’t share those views, and “legalism,” those who would use instruments of sharia law to gradually shift it. But he’s still frustrated with the terminology’s limited ability to capture politics as they evolve. “I’ve tried to remain open-eyed,” he said.
It’s also helpful to look at what Islamists call themselves, but that only offers a perfunctory guide, since many Islamists consider religion so integral to their thinking that it doesn’t merit a name. Others might seek for domestic political reasons to downplay their religious aims. For example, Turkey’s ruling party, a coterie of veteran Islamists who adapted and subordinated their religious principles to their embrace of neoliberal economics, describes itself as a party of “values,” rather than of Islam. In Libya, the new government will be led by the personally conservative technocrat Mahmoud Jibril; though his party could be considered “Islamist” in the traditional sense, it’s often identified as secular in Western press reports, to distinguish it from its more religious rivals. Jibril himself prefers “moderate Islamic.”
The efforts to come up with a new language to talk about Islamic politics are just beginning, and are sure to evolve as competing movements sharpen their ideologies, and as the lofty rhetoric of religion meets the hard road of governing. The importance of moving beyond “Islamism” will only grow as these changes make themselves felt: What we call the “Islamic world” includes about a quarter of the world’s population, stretching from Muslim-majority nations in the Arab world, along with Turkey, Pakistan, and Indonesia, to sizable communities from China to the United States. For Islam, the current political moment could be likened to the aftermath of 1848 in Europe, when liberal democracy coalesced as an alternative to absolute monarchy. Only after that, once virtually every political movement was a “liberal” one, did it become important to distinguish between socialists and capitalists, libertarians and statists—the distinctions that have seemed essential ever since.
[Originally published in The Boston Globe.]
Right now, the Islamic world is in the midst of a grand experiment. After decades facing an unappetizing choice among secular dictatorship, monarchy, and Iranian-style theocracy, nations across the region are grappling with how to build genuinely modern governments and societies that take into account the Islamist principles shared by a majority of voters.
As they do, a shadow hangs over their prospects. Islamic nations in the Middle East on the whole have underperformed their counterparts in the West. Asian nations that were poorer than the Arab world at the beginning of the Cold War have overtaken the Middle East. And promising experiments with democracy have been few and far between.
The question of why is a contentious one. Has the Islamic world been held back by its treatment at the hands of history? Or could the roots of the problem lie in its shared religion—in the Koran, and Islamic belief itself?
A provocative new answer is emerging from the work of Timur Kuran, a Turkish-American economist at Duke University and one of the most influential thinkers about how, exactly, Islam shapes societies. In a growing body of work, Kuran argues that the blame for the Islamic world’s economic stagnation and democracy deficit lies with a distinct set of institutions that Islamic law created over centuries. The way traditional Islamic law handled finance, inheritance, and incorporation, he argues, held back both economic and political development. These practices aren’t inherent in the religion—they emerged long after the establishment of Islam, and have partly receded from use in the modern era. But they left a profound legacy in many societies where Islam held sway.
Kuran’s critics think he unfairly impugns religious law. Pakistani scholar Arshad Zaman argues that Kuran misunderstands the very nature of Islamic law and business practice, which elevate worthwhile economic goals such as income equality and social justice above growth. Others argue that the legacy of harm inflicted by Western colonial powers is far more important in explaining why the Muslim world is struggling today.
Kuran himself sees his work as coming from a sympathetic perspective: He wants to combat the argument that Islam is incompatible with modernity and liberty, a notion he decries as “one of the most virulent ideas of our time.” He worries about anti-Islamic sentiment from outside the religion, as well as the rigid and defensive posture of some orthodox Islamists. (He pointedly avoids discussing his own faith. “I write as a scholar,” he says.)
Thanks in part to this careful navigation, Kuran’s scholarship gives economists, and perhaps political leaders in the Middle East, a way to talk about the Islamic world’s problems without resorting to crude stereotypes or heightened “clash of civilizations” rhetoric. In Kuran’s analysis, Islam itself is neither the problem nor the solution; indeed, most of the rigid practices have long been supplemented by or in some cases abandoned for more Western models.
However, his work does carry stark implications for countries such as Egypt, Libya, and Tunisia, whose emerging political futures are likely to be shaped by Islamist majorities or pluralities. A democratic renaissance could paradoxically lead to more stagnation if it imposes calcified institutions of Islamic yesteryear on modern society. If Kuran is right, the nations of the Arab Spring face a conundrum: The institutions most in keeping with a society’s religious principles could be the ones most likely to hold it back.
While Europe suffered centuries of decline and intellectual darkness, the early Islamic world bubbled with vitality. Competing schools of Islamic jurisprudence produced texts still consulted as references today, while merchants and caliphs left copious written records for future scholars to study. In the course of his work, Kuran was able to comb through business records and commercial ledgers spanning more than a millennium.
What he found, he says, was that two legal traditions pervasive in the Islamic world became especially limiting: the laws governing the accumulation of capital, and those governing how institutions were organized. The growth of capital was limited by laws of inheritance and Islamic partnership, which required that large fortunes and enterprises be split up with each passing generation. The waqf, or Islamic trust, had even greater ramifications, because it determined the structure of most social relationships and had wide-ranging consequences for civil society.
Under Islamic law, the trust—rather than the corporation—is the most common legal unit of organization for entities outside the government. Until modern times, cities, hospitals, schools, parks, and charities were all set up and governed by the immutable deed of an Islamic trust. Under its terms, the founder of the trust donates the land or other capital that funds it in perpetuity, and sets its rules in the deed. They can never be altered or amended. The waqf was developed by Islamic scholars in the centuries after the religion was established, drawing on Koranic principles barring usury and demanding justice in business. (It is not an institution stipulated by the Koran itself.) Much like a trust in the West, a waqf is not “governed” so much as executed. It is also limited in what it can do. A waqf is prohibited from engaging in politics, which means it cannot form coalitions, pool its resources with other organizations, or oppose the state.
Drawing on voluminous study of the mechanisms of money, power, and law going back to the 7th century founding of Islam, Kuran draws a picture of nations whose rulers wielded central and often highly authoritarian power, and faced little challenge from either business owners or a waqf-bound civil society.
Over time, he argues, this structure led to a radically different social system than the one that arose in the West. There, the rise of the corporation created a vehicle for prosperity and a civilian counterweight to state power—an institution that could adapt and grow, survive from one generation to the next, and pay benefits to its shareholding owners, who are thus motivated to steer it toward expansion and influence. Nonprofit corporations enjoy similar flexibility and freedom of action, though they don’t have shareholders.
“In the West, you had universities, unions, churches, organized as corporations that were free to make coalitions, engage in politics, advocate for more freedoms, and they became a civil society,” Kuran said in an interview. “Democracy is a system of checks and balances. It can’t develop if a population is passive.”
In modern times, Islamic nations have adopted Western institutions like corporations and banks to manage their affairs. Municipalities and private enterprise are now more commonly incorporated rather than set up as trusts. But the trust remains pervasive, especially in the realm of social services: Hospitals, schools, and aid societies are still almost always trusts rather than corporations—a factor that helps explain their quiescence as a check on state power.
In focusing on the specific legal institutions of Islamic civic life, Kuran’s thesis directly targets those “apologists” who blame the economic and political problems of the Middle East solely on colonialism and other outside forces. He also takes aim at essentialists who hold Islam as a religion responsible for the problems of Islamic countries. In fact, he argues, Islamic states that have embraced modernization programs and gone through the sometimes painful process of adopting new institutions, as Turkey and Indonesia have, have had great success in developing both democracy and economic prosperity widely shared among citizens.
While some critics, like Arshad Zaman, attack Kuran from an Islamic perspective, others share his approach but dispute his findings. Maya Shatzmiller, a historian at Western University in Canada, believes that the real causes of economic growth are particular to the circumstances of each individual region, and that by focusing on notional qualities common to the entire Islamic world, Kuran’s work generically indicts Islam without offering real insight into the economic problems of individual Middle Eastern states.
Kuran believes the evidence of a gap between the Islamic world and the West is undeniable and merits serious examination of what those countries have in common. And it’s patronizing, he writes, to suggest that the Islamic world will be offended by a vigorous debate on the subject.
Kuran’s approach has influenced other social scientists to use similar tools in the hope of offering more precise and useful answers. Eric Chaney, a Harvard economist, uses the mathematical modeling of econometrics to pinpoint the historical factors that correlate with lagging democratization and development in the Islamic world. Jared Rubin, an economist at Chapman University in California, studies the effect of technologies like the printing press on economic disparities. Jan Luiten van Zanden, a renowned Dutch historian, has begun a deep comparative study of the organization of cities in the Islamic world and the West.
For the new architects of Islamic politics, Kuran’s work offers a clear blueprint, though perhaps a difficult one to follow. It suggests that states heavily reliant on Islamic law may need to reformulate their approach, extending Western-style rules to organize their nonstate entities: banks, companies, nonprofits, political parties, religious societies. Over time, this will seed a more empowered civic society and ultimately pull greater numbers of citizens into the fabric of political life.
It’s a challenge, however. Authoritarian states are unlikely to promote reforms that will weaken their control. And the resurgence of Islamist politics has created a new wave of support for a more doctrinaire application of Islamic law and traditions.
Another barrier to reform is the slow pace of cultural change. Once modern institutions are in place, Kuran warns, it takes a long time for their use to become widespread and for people to trust them. Simply put, for an institution to grow powerful and influential, whether it’s a bank or a political party, it needs to build support from a large, trusting public of strangers. Much of the Middle East still operates on smaller units, in which customers or citizens expect to know who’s running the company or institution that serves them. For example, Kuran points out that despite the prevalence of banks, only one in five families in the Arab world actually has a bank account. (By comparison, three in five Turkish families have bank accounts, and nearly every US family does.)
Most broadly, change requires a shift in the constraints on civil society. In recent decades, Middle Eastern regimes have systematically destroyed any opposition and kept rigid control over the media, official religious groups, and any body that might develop a political identity, from university faculty to labor unions. The most effective dissent survived deep within mosques, where even the most repressive police states hesitated to go.
In the long run, to end the cycle of autocracy and violence, the Islamic world will need space for civil society to grow outside the constraints of the state and the mosque. Only then will citizens grow accustomed to making decisions that have traditionally been made on their behalf. And breaking the old habits, on the street and in election booths, will likely take time.
“The state itself,” Kuran says, “cannot change the way people relate to each other.”
[Published in The Boston Globe Ideas section.]
When President Obama and Mitt Romney cross swords on defense policy, it can sound like a schoolyard fight: Who loves the military more? Who is tougher? Who would lead a more muscular America?
This is the way we expect candidates to talk about defense: in terms of power, force, even national pride. But increasingly, when it comes to the role the Department of Defense actually plays for the nation, it misses the point. Over the past decade, the Pentagon has become far more complex than the conversation about it would suggest. What “military” means has changed sharply as the Pentagon has acquired an immense range of new expertise. What began as the world’s most lethal strike force has grown into something much more wide-ranging and influential.
Today, the Pentagon is the chief agent of nearly all American foreign policy, and a major player in domestic policy as well. Its planning staff is charting approaches not only toward China but toward Latin America, Africa, and much of the Middle East. It’s in part a development agency, and in part a diplomatic one, providing America’s main avenue of contact with Africa and with pivotal oil-producing regimes. It has convened battalions of agriculture specialists, development experts, and economic analysts that dwarf the resources at the disposal of USAID or the State Department. It’s responsible for protecting America’s computer networks. In May of this year, the Pentagon announced it was creating its own new clandestine intelligence service. And the Pentagon has emerged as a surprisingly progressive voice in energy policy, openly acknowledging climate change and funding research into renewable energy sources.
The huge expansion of the Pentagon’s mission has, not surprisingly, rung plenty of alarm bells. In the policy sphere, critics worry about the militarization of American foreign policy, and the fact that much of the world—especially the most volatile and unstable parts—now encounters America almost exclusively in the form of armed troops. Hawkish critics worry that the Pentagon’s ballooning responsibilities are a distraction from its main job of providing a focused and prepared fighting force. But this new reality will be with us for a while, and in the short term it creates an opportunity for the next president. Super-empowered and quickly deployable, the Pentagon has become a one-stop shop for any policy objective, no matter how far removed from traditional warfare.
That means the next administration will have ample room to shape the priorities, and even perhaps to reimagine the mission of a Pentagon that plays a leading role in areas from language research to fighting the drug trade. And it means that voters will need to consider the full breadth of its capabilities when they hear candidates talk about “defense.”
In campaigning so far, neither candidate has seriously engaged with the real challenges of steering the most diverse and powerful entity under his control. For both Obama and Romney, the most central question about foreign policy—and even some of their domestic priorities—may be how creatively and effectively they can use the Pentagon to further their aims.
The current balance of power in Washington runs counter to most of American history. Traditionally, the United States has related to the world chiefly through diplomats: A civilian president set the policy, civilian envoys worked to implement it, and gunboats stepped in only when diplomacy failed. Indeed, until World War II, the Department of State outranked Defense in size as well as influence. That began to change in the 1940s, first with the huge mobilization of World War II and then the Cold War. Funding and power began to accumulate permanently in the Pentagon.
In the decade since 9/11, the Pentagon has undergone another transformation. The military was asked to fight two complex wars in Afghanistan and Iraq, while also engaging in a sprawling operation dubbed the Global War on Terror. In practice, this meant soldiers and other troops were asked to design nation-building operations on the fly; produce the kind of pro-democracy propaganda that decades earlier was the province of the Voice of America; and do police, intelligence, and development work in conflict zones that had long bedeviled experts in far more stable locales. In Iraq, the Pentagon was essentially expected to provide the full gamut of services normally offered by a national government. Army commanders in provincial outposts dispensed cash grants to business start-ups, supervised building renovations, managed police forces, and built electricity plants.
As American involvement in those wars winds down, we are left with a Department of Defense that has become Washington’s default tool for getting things done in the world. Unlike diplomats, who serve abroad for limited stints and who can refuse to work in dangerous places, military personnel have to go where ordered, and stay as long as the government needs them. They haven’t always succeeded, leaving any number of failed governance projects in their wake. But it’s understandable why the White House has turned more and more often to warriors. The military is undeniably good at taking action: A lieutenant colonel can spend a hundred thousand dollars on a day’s notice to dig a well or refurbish a mayor’s office or rebuild a village market. In contrast, civilian USAID specialists operating under the agency’s rules would take months, or even years, to put out bids and hire a local subcontractor to do the same job.
“The president who comes into office and thinks about what he wants to do, when he looks around for capabilities he tends to see someone in uniform,” says Gordon Adams, an American University political scientist and expert in the defense budget. “The uniformed military are really the only global operational capacity the president has.”
And that capacity stretches into some surprising domains. The Pentagon maintains an international rule of law office staffed with do-gooder lawyers. It has trained and deployed agriculture battalions. Its regional commands, as well as its war-fighting generals in Afghanistan and Iraq, have tapped hundreds of economists, anthropologists, and other field experts as unconventional military assets. Its special operators conduct the kind of clandestine operations once reserved for the CIA, but also do a lot of in-the-field political advising for local leaders in unstable countries. The US Cyber Command runs a kind of geek tech shop in charge of protecting America’s computer networks. The world’s most high-tech navy runs counter-piracy missions off the coast of Somalia, essentially serving as a taxpayer-funded security force for private shipping companies. Much of drug policy is executed by the military, which is in charge of intercepting drug shipments and has been the key player in drug-supplying countries like Colombia.
With little fanfare, the Pentagon—currently the greatest single consumer of fossil fuels in all of America, accounting for 1 percent of all use—has begun promoting fuel efficiency and alternate energy sources through its Office of Operation Energy Plans and Programs. Using its gargantuan research and development budget, and its market-making purchasing power, the Defense Department has demanded more efficient motors and batteries. Its approach amounts to a major official policy shift and huge national investment in green energy, sidestepping the ideological debate that would likely hamstring any comparable effort in Congress.
This huge expansion of what the Department of Defense does is not the same thing as a runaway military, though there are critics who see it that way. At the height of the Cold War, the United States dedicated far more of its budget to defense—around 60 percent, compared to 20 percent now. It is more a matter of vast “mission creep.” Inevitably, it is to the Pentagon that the government will turn when it faces urgent, unexpected needs: Hurricane Andrew, the 2004 Indian Ocean tsunami, propaganda in the Islamic world. Men and women in camouflage uniforms can be found helping domestic law enforcement pursue cattle rustlers in North Dakota using loaned military drones, or working with Afghan farmers to increase crop yields.
Paul Eaton, a major general in the US Army who retired in 2006 and now advises a Washington think tank called the National Security Network, describes a meeting he attended in Kampala, Uganda, this May, convened by the American general in charge of the Africa Command, or AfriCom. The top commanders of 35 militaries on the continent gather every other year, hash out policy matters, and forge personal ties.
“It was as much diplomacy and politics as anything else,” Eaton said. “Nobody could give me an example of the State Department doing anything like that.”
What SHOULD a president do about this metamorphosed Pentagon? Or more practically, what should be done with it? A question to watch for in the coming presidential debates is whether either candidate is willing to discuss reorienting the Pentagon toward its core mission of armed defense, shedding its new capacities in the interest of keeping it focused or saving money. Neither candidate has suggested so far that he will.
Pentagon cutbacks are politically difficult. No president likes to argue against national defense, and Pentagon spending by design sprawls across congressional districts, creating a built-in bipartisan lobby against cuts. But a president who tried to return the Pentagon to a more strictly military mission could expect at least some support from the Department of Defense itself. Many career officers view the extra missions with dismay, fearing that the Pentagon will get worse at fighting wars as it spends more and more time patrolling cyberspace, organizing diplomatic retreats, and deploying agricultural battalions to train farmers in war zones. Eaton, whose three children all serve in the armed forces, has been a vocal critic of the new military, and thinks the best thing the next administration could do is defund and shut down all the niche capacities that have sprung up since 9/11.
The past two US defense secretaries, including George W. Bush appointee Robert Gates, have also expressed concern about the department’s expansion. In a 2008 speech, while still in office, Gates ripped into the “creeping militarization” of foreign policy, expressing concern that the Pentagon was like an “800-pound gorilla” taking over the intelligence community, foreign aid, and diplomacy in conflict zones. Both Gates and his successor, Leon Panetta, have vociferously advocated for a bigger, better-funded State Department more capable of deploying around the world, conducting diplomacy in hot zones, and dispensing emergency relief and development aid.
It’s hard to imagine the Pentagon shedding capacity anytime soon. As the wars in Iraq and Afghanistan subside, the Defense Department appears likely to keep most of its enormous budget. During a period when most branches of government will be struggling to survive budget cutting, the Pentagon will more than ever have the global reach and the policy planning muscle to set the agenda and execute foreign policy.
So in the next several months, we should be on the lookout for specific ideas from candidates about what to do with this excess power—at the least, an acknowledgment that it exists. Domestically, the Pentagon has the opportunity to shape university research priorities; it influences White House policy planning anytime a crisis erupts in a new place. Abroad, the military can do considerable good by using its money and expertise to improve quality of life, burnishing America’s reputation as a font of positive development rather than just counter-terrorism and counter-insurgency.
But while it lasts, the breadth of the current military presents grave challenges, not least for a democratic country that in principle, if not always in policy, opposes military dictatorships around the world. Even Pentagon insiders worry about this dissonance. Whatever good our deployments can do, it will be harder to promote civilian ideals so long as our foreign policy wears a uniform.
[Originally published in The Boston Globe.]
President Obama and his presumptive challenger Mitt Romney agree on at least one important matter: the world these days is a terrifying place. Romney talks about the “bewildering” array of threats; Obama about the perils of nuclear weapons in the wrong hands. They differ only on the details.
A bipartisan emphasis on threats from outside has always been a hallmark of American foreign-policy thinking, but it has grown more widespread and more heightened in the decade since 9/11. General Martin E. Dempsey, chairman of the Joint Chiefs of Staff, captured the spirit when he spoke in front of Congress recently: “It’s the most dangerous period in my military career, 38 years,” he said. “I wake up every morning waiting for that cyberattack or waiting for that terrorist attack or waiting for that nuclear proliferation.”
The unpredictability and extremism of America’s enemies today, the thinking goes, make them even more threatening than our old conventional foes. The old enemies were distant armies and rival ideologies; today, it’s a theocracy with missiles, or a lone wolf trying to detonate a suitcase nuke in an American city.
But what if the entire political and foreign policy elite is wrong? What if America is safer than it ever has been before, and by focusing on imagined and exaggerated dangers it is misplacing its priorities?
That’s the bombshell argument put forth by a pair of policy thinkers in the influential journal Foreign Affairs. In an essay entitled “Clear and Present Safety: The United States Is More Secure Than Washington Thinks,” authors Micah Zenko and Michael A. Cohen argue that American policy leaders have fallen into a nearly universal error. Across the ideological board, our leaders and experts genuinely believe that the world has gotten increasingly dangerous for America, while all available evidence suggests exactly the opposite: we’re safer than we’ve ever been.
“The United States faces no serious threats, no great-power rival, and no near-term competition for the role of global hegemon,” Cohen says. “Yet this reality of the 21st century is simply not reflected in US foreign policy debates or national security strategy.”
It might seem that the extra caution couldn’t hurt. But Zenko and Cohen argue that excessive worry about security leads America to focus money and attention on the wrong things, sometimes exacerbating the problems it seeks to prevent. If Americans and their leaders recognized just how safe they are, they would spend less on the military and more on the slow nagging problems that undermine our economy and security in less dramatic ways: creeping threats like refugee flows, climate change, and pandemics. More important, the United States would avoid applying military solutions to non-military problems, which they argue has made containable problems like terrorism worse. In effect, they argue, the United States should keep a pared-down military in reserve for traditional military rivals. The bulk of America’s security efforts could then be spent on remedies like policing and development work — more appropriate responses to the terrorism and global crime syndicates that understandably drive our fears.
Why should Americans feel so secure right now? Zenko and Cohen write that a calm appraisal of global trends belies the danger consensus. There are fewer violent conflicts than at almost any point in history, and a greater number of democracies. None of the states that compete with America come close to matching its economic and military might. Life expectancy is up, and so is prosperity. As vulnerable as the nation felt in the wake of 9/11, American soil is still remarkably insulated from attack.
Nonetheless, more than two-thirds of the members of the Council on Foreign Relations — as good a cross-section of the foreign-policy brain trust as there is — said in a 2009 Pew Survey that the world today was as dangerous, or even more so, than during the Cold War. Other surveys of experts and opinion-makers showed the same thing: the overwhelming majority of experts believe the world is becoming more dangerous. Zenko and Cohen claim, essentially, that the entire foreign policy elite has fallen prey to a long-term error in thinking.
“More people have died in America since 9/11 crushed by furniture than from terrorism,” Zenko says in an interview. “But that’s not an interesting story to tell. People have a cognitive bias toward threats they can perceive.”
The paper’s authors are, in effect, skewering their own peers. Zenko is a Council on Foreign Relations political scientist with a PhD from Brandeis. Cohen (a colleague of mine at The Century Foundation, and a previous contributor to Ideas) worked as a speechwriter in the Clinton administration and has been a mainstay in the think tank world for a decade.
Zenko and Cohen point out that there are plenty of good-faith reasons that experts tend to overestimate our national risk. A raft of psychological research from the last two decades shows that human nature is biased to exaggerate the threat of rare events like terrorist attacks and underestimate the threat from common ones like heart attacks. And security policy in general is extremely risk-averse: we expect our military and intelligence community to tolerate no failures. A cabinet secretary who pledged to reduce terrorist attacks to just a few per year would not last long in the job: the only acceptable goal is zero.
Electoral politics, of course, is the greatest driver of what Zenko and Cohen call “domestic threat-mongering.” In an endless contest for votes, Republicans do well by claiming to be tough in a scary world. Democrats adopt the same rhetoric in order to shield themselves from political attacks. Both sides see an advantage in a politically risk-averse strategy. If a minor threat today turns into a sizeable one tomorrow, better to have sounded the alarm early than to have appeared naïve or feckless.
But Zenko and Cohen make the politically uncomfortable argument that it’s wrong to govern based on the prospect of unlikely but extreme events. Instead of marshalling our resources for the 1 percent risk of a nuclear jihadist, as Vice President Dick Cheney argued we should, we should really set our security policy based on the 99 percent of the time when things go America’s way.
They point to the number of wars between major states and the number of people killed in wars every year, both of which have been steadily declining for decades. They also point to the historical, systematic growth of the global economy and spread of financial and trade links, which have undergirded an unprecedented period of peace among rival great powers and within the West.
Their thinking follows in the footsteps of a small but persistent group of contrarian security scholars, who have noted the post-9/11 spike in America’s already long history of threat exaggeration. Best known among them is Ohio State University political scientist John Mueller, who has argued that American alarmism about terrorism can cause more harm to our well-being and national economy than terrorism itself. In that vein, Zenko and Cohen claim that America’s over-militarization prompts avoidable wars and has in fact created far more problems than it solves, from terrorist blowback to huge drains on the Treasury.
The implications, as they see it, are clear: spend less money on the military; spend more on the boring, international initiatives that actually make America safe and powerful. Zenko’s favorite example is loose nukes, which pose hardly any threat today but were a real cause for concern as the Soviet Union collapsed in the early 1990s, leaving poorly secured weapons across an entire hemisphere.
“There was a solution: limit nuclear stockpiles and secure them,” Zenko says. “We took common-sense steps, none of which involved the US military. We send contractors to these facilities in Russia, and they say ‘The fence doesn’t work, the cameras don’t work, the guards are drunk.’ It’s cheap, and it works. This is what keeps us safe.”
Not everyone agrees. Robert Kagan, the most influential proponent of robust American power, argues that America is safe today precisely because it throws its military might around. President Obama said he relied for his most recent state of the union address on Kagan’s newest book, “The World America Made.” Mackenzie Eaglen, a defense expert at the American Enterprise Institute, says America benefits even when it appears to overreact. According to this thinking, even if the Pentagon designs the military for improbable threats and deploys at the drop of a hat, it is performing a service keeping the world stable and deterring would-be rivals. Like most establishment defense thinkers, Eaglen believes American dominance could easily and quickly come to an end without this kind of power projection.
“Power abhors a vacuum,” she says. “If we don’t fill it, others will, and we won’t like what that looks like.”
Other critics of Zenko and Cohen’s argument, like defense policy writer Carl Prine, say the comforting data about declining wars and violent deaths is misleading. Today’s circumstances, they argue, don’t preclude something drastic happening next — say, if China, or even a rising power like Brazil, veers onto an unpredictably bellicose path and clashes violently with American interests. Simply put, safe today doesn’t mean safe tomorrow.
Even if Zenko and Cohen are right, however, and the big-military crowd is wrong, it is nearly impossible to imagine a spirit of “threat deflation” taking hold in American politics. The already alarmist expert community that shapes US government thinking was further electrified by 9/11. Anyone in either party who argues for a leaner military, or pruning back intelligence infrastructure, risks being portrayed as inviting another attack on the homeland. The result is an unshakeable institutional inertia.
To get a sense of what they’re up against, listen to James Clapper, the director of national intelligence, presenting the “Worldwide Threat Assessment” to Congress, as his office does every year. The latest version, in January this year, reads like a catalogue of nightmares, describing macabre possibilities ranging from Hezbollah sleeper cells attacking inside America to a cyberattack that could turn our own infrastructure into murderous drones. Are these science fiction visions, or simply the wise man’s anticipation of the next war? It’s impossible to know, but one thing is striking: there’s no attempt in the intelligence czar’s report to rank the threats or assess their real likelihood. He simply and clinically presents every possible, terrifying thing that could happen, and signs off. It’s truly frightening reading.
This is what Zenko dismissively deems the “threat smorgasbord.” It makes clear how high the stakes are in planning for war and terrorist attacks, and how much emotional power the issue has. The reasonable thing to do might simply never be politically palatable. The current debate about Iran’s nuclear intentions is a perfect example, says Eaglen, who debated Cohen about his thesis on Capitol Hill in April.
“I can say Iran’s a threat, Michael can say it’s not,” she says. “But if he gets it wrong, we’re in trouble.”
Thanassis Cambanis, a fellow at The Century Foundation, is the author of “A Privilege to Die: Inside Hezbollah’s Legions and Their Endless War Against Israel” and blogs at thanassiscambanis.com. He is an Ideas columnist.
[Originally published in The Boston Globe, subscribers only.]
CAIRO — It might have seemed naïve to an outsider, but one of the great hopes among the revolutionaries who humiliated Egyptian dictator Hosni Mubarak more than a year ago was that the country’s strongman regime would finally yield to a democratic variety of voices. Both Islamic revolutionaries and secular liberals spoke up for modern ideas of pluralism, tolerance, and minority rights. Tahrir Square was supposed to turn the page on a half-century or more of one-party rule, and open the door to something new not just for Egypt, but for the Arab world: a genuine diversity of opinion about how a nation should govern itself.
“We disagree about many things, but that is the point,” one of the protest organizers, Moaz Abdelkareem, said in February 2011, the week that Mubarak quit. “We come from many backgrounds. We can work together to dismantle the old regime and make a new Egypt.”
A year later, it is still far from clear what that new Egypt will look like. The country awaits a new constitution, and although a competitively elected parliament sat in January for the first time in contemporary Egyptian history, it is still subordinate to a secretive military regime. Like all transitions, the struggle against Egyptian authoritarianism has been messy and complex. But for those who hoped that Egypt would emerge as a beacon of tolerant, or at least diverse, politics in the Arab world, there has been one big disappointment: It’s safe to say that one early casualty of the struggle has been that spirit of pluralism.
“I do not trust the military. I do not trust the Muslim Brothers,” Abdelkareem says in an interview a year later. In the past year, he helped establish the Egyptian Current, a small liberal party that wants to bring direct democracy to Egyptian government. Despite his inclusive principles, he’s urged the dismissal from public life of major political constituencies with whom he disagrees: former regime supporters, many Islamists, old-line liberals, and supporters of the military. He shrugs: “It’s necessary, if we’re going to change.”
He’s not the only one who has left the ideal of pluralism behind. A survey of the major players in Egyptian politics yields a list of people and groups who have worked harder to shut down their opponents than to engage them. The Muslim Brotherhood — the Islamist party that is the oldest opposition group in Egypt, and the one with by far the most popular support — has run roughshod over its rivals, hoarding almost every significant procedural power in the legislature and cutting a series of exclusive deals with the ruling generals. Secular liberals, for their part, have suggested that an outright coup by secular officers would be better than a plural democracy that ended up empowering bearded fundamentalists who disagree with them.
When pressed, most will still say they want Egypt to be the birthplace of a new kind of Arab political culture — one in which differences are respected, minorities have rights, and dissent is protected. However, their behavior suggests that Egypt might have trouble escaping the more repressive patterns of its past.
IN A COUNTRY that had long barred any meaningful politics at all, Tahrir’s leaderless revolution begat a brief but golden moment of political pluralism. Activists across the spectrum agreed to disagree — this, it was widely believed, was the very practice that would lead Egypt from dictatorship to democracy. During the first month of maneuvering after Mubarak resigned in February 2011, Muslim Brotherhood leaders vowed to restrain their quest for political power; socialists and liberals emphasized due process and fair elections. The revolution took special pride in its unity, inclusiveness, and plethora of leaders: It included representatives of every part of society, and aspired to exclude nobody.
“Our first priority is to start rebuilding Egypt, cooperating with all groups of people: Muslims and Christians, men and women, all the political parties,” the Muslim Brotherhood’s most powerful leader, Khairat Al-Shater, told me in an interview last March. “The first thing is to start political life in the right, democratic way.”
Within a month, however, that commitment had begun to fray. Jostling factions were quick to question the motives and patriotism of their rivals, as might be expected from political movements trying to position themselves in an unfolding power struggle. More surprising, and more dangerous, has been the tendency of important groups to seek the silencing or outright disenfranchisement of competitors.
The military sponsored a constitutional referendum in March 2011 that supposedly laid out a path to transfer power to an elected, civilian government, but which depended on provisions poorly understood by Egyptian voters. The Islamists sided with the military, helping the referendum win 77 percent of the votes, and leaving secular liberal parties feeling tricked and overpowered. The real winner turned out to be the ruling generals, who took the win as an endorsement of their primacy over all political factions. The military promptly began rewriting the rules of the transition process.
With the army now the country’s uncontested power, some leading liberal political parties entered negotiations with the generals over the summer to secure one of their primary goals — a secular state — in a most illiberal manner: a deal with the army that would preempt any future constitution written by a democratically selected assembly.
The Muslim Brotherhood responded by branding the liberals traitors and scrapping its conciliatory rhetoric. The Islamists, with huge popular support, abandoned their initial promise of political restraint and instead moved to contest all seats and seek a dominant position in the post-Mubarak order. The Brotherhood now holds 46 percent of the seats in parliament, and with the ultra-Islamist Salafists holding another 24 percent, the Brotherhood effectively controls enough of the body to shut down debate. Within the Brotherhood, Khairat Al-Shater has led a ruthless purge of members who sought internal transparency and democracy — and is now considered a front-runner to be Egypt’s next prime minister.
The army generals in charge, meanwhile, have been using state media to demonize secular democracy activists and street protesters as paid foreign agents, bent on destroying Egyptian society in the service of Israel, the United States, and other bogeymen.
Since the parliament opened deliberations in January, the rupture has been on full and sordid display. The military has sought legal censure against a liberal member of parliament, Zyad Elelaimy, because he criticized army rule. State prosecutors have gone after liberals and Islamists who have voiced controversial political positions. Islamists and military supporters have also filed lawsuits against liberal politicians and human rights activists, while the military-appointed government has mounted a legal and public relations campaign against civil society groups.
THE CENTRAL QUESTION for Egypt’s future is whether these increasingly intolerant tactics mean that the country’s next leaders will govern just as repressively as its last. Scholars of political transition caution that for states shedding authoritarian regimes, it can take years or decades to assess the outcome. Still, there are some hallmarks of successful transitions that Egypt appears to lack. States do better if they have an existing tradition of political dissent or pluralism to fall back on, or strong state institutions independent of the political leadership. Egypt has neither.
“Transitions are always messy, and the Egyptian one is particularly messy,” said Habib Nassar, a lawyer who directs the Middle East and North Africa program at the International Center for Transitional Justice in New York. “To be honest, I’m not sure I see any prospects of improvement for the short term. You are transitioning from dictatorship to majority rule in a country that never experienced real democracy before.”
Outside the parliament’s early sessions in January, liberal demonstrators chanted that the Muslim Brotherhood members were illegitimate “traitors.” In response, paramilitary-style Brotherhood supporters formed a human cordon that kept protesters from getting close enough to the parliament building to even be heard by their representatives.
At a rally for the revolution’s one-year anniversary in Tahrir Square, Muslim Brotherhood leaders preached and made speeches from a high stage to celebrate their triumph. It was unclear whether they were referring to the revolution, or to their party’s dominance at the polls. It was too much for the secular activists. “This is a revolution, not a party,” some chanted. “Leave, traitors, the square is ours not yours,” sang others.
Hundreds of burly brothers linked arms, while a leader with a microphone seemed to taunt the crowd. “You can’t make us leave,” he said. “We are the real revolution.” In response, outraged members of the secular audience tore apart the Brotherhood’s sound system and pelted the stage with water bottles, corn cobs, rocks, even shards of glass.
The Arab world is watching closely to see what happens in the next several months, when Egypt will write a new constitution and elect a president in a truly competitive ballot, and the military will cede power, at least formally. Even in the worst-case scenario, it’s worth remembering that Egypt’s next government will be radically more representative than Mubarak’s Pharaonic police state.
Sadly, though, political discourse over the last year has devolved into something that looks more like a brawl than a negotiation. If it continues, the constitution drafting process could end up more ugly than inspiring. The shape of the new order will emerge from a struggle among the Islamists, the secular liberals, and the military — all of whom, it now appears, remain hostage to the culture of the regime they worked so hard to overthrow.
Wanted: A grand strategy
In search of a cohesive foreign policy plan for America
By Thanassis Cambanis
NOVEMBER 13, 2011
GREG KLEE/GLOBE STAFF PHOTO ILLUSTRATION
President Obama campaigned on the promise of change, and ended up with a world much fuller of it than he and his advisers expected. In the three years since he was elected, the foreign-policy landscape has shifted dramatically, not least because the financial crisis has spread worldwide and the Arab world has risen up against its autocratic rulers.
When the unexpected occurs in the realm of foreign policy–like this year’s Arab revolts, or a hypothetical military crisis in Asia–the nation’s leaders don’t always have ready-made plans on the shelf, and don’t have the luxury of time to start crafting a policy from scratch. What they can rely on, however, is what’s known as a grand strategy. A term from academia, “grand strategy” describes the real-world framework of basic aims, ideals, and priorities that govern a nation’s approach to the rest of the world. In short, a grand strategy lays out the national interest, in a leader’s eyes, and says what a state will be willing to do to advance it. A grand strategy doesn’t prescribe detailed solutions to every problem, but it gives a powerful nation a blueprint for how to act, and brings a measure of order to the rest of the world by making its expectations more clear.
In the absence of a clear road map for how, and why, America should be engaging with the world, a number of big-picture thinkers have recently rushed into the gap, filling the pages of the most prominent journals in the field and putting grand strategy at the center of the conversation at influential institutions from the National Defense University to America’s top policy schools.
What are the competing visions for an American grand strategy for the 21st century? All the proposed new strategies try to deal with a world in which America still holds the preponderance of power, but can no longer dominate the entire globe. The two-way Cold War contest is over, but it will be some time before another pretender to power–whether China, Russia, India, or the European Union–becomes a meaningful rival. One simple version has America stepping back from the world, husbanding its resources by projecting power at an imperial remove rather than through attempts to micromanage the affairs of far-flung foreign nations. Another has it acting more like a multinational corporation, delegating authority to allies most of the time, but involving itself deeply and decisively wherever its interests are threatened. And some unapologetic America-firsters argue the United States can do better by openly trying to dominate the world, rather than by negotiating with it.
Whatever grand strategy emerges to guide 21st century America, the answer is likely to grow out of America’s history, rather than markedly depart from it. All successful American grand strategies–manifest destiny, Wilsonian idealism and self-determination, Cold War-era containment–were driven in part by a sense of American exceptionalism, the notion that America “stands tall” and acts as a beacon in the world. But they included a dose of Machiavelli as well, nakedly seeking to contain security threats against America, while using international allies to further American interests.
One influential strain of thinking about grand strategy comes from the realm of small-l liberal realism. Liberal realists are unsentimental in their desire to see America maximize its power, but also restrained about how much America should get involved. They want America to reap more and spend less, in both financial and military terms. Their ideas tend to dominate at policy schools, where much of the applied thinking about foreign affairs comes from, and seem to be getting a thorough hearing within Hillary Rodham Clinton’s State Department.
As a potential grand strategy, the front-runner emerging from these realist thinkers is off-shore balancing. Essentially, it advocates keeping America strong by keeping the rest of the world off balance. Military intervention, in this line of thinking, should always be a last resort rather than a first move; America’s military should lurk over the horizon, more powerful if it’s on the minds of its rivals rather than their territory. This grand strategy prescribes a “divide and conquer” approach, advocating that Washington use its diplomatic and commercial power to balance rising powers against one another, so that none can dominate a single region and proceed to threaten America. A prominent group of theorists has embraced this idea, including John Mearsheimer at the University of Chicago and Stephen Walt at Harvard University’s Kennedy School.
“Our first recourse should be to have local allies uphold the balance of power, out of their own self-interest,” Walt wrote in the most recent edition of The National Interest. “Rather than letting them free ride on us, we should free ride on them as much as we can, intervening with ground and air forces only when a single power threatens to dominate some critical region.”
A slightly different version could be called “internationalist tinkering”: It argues that America should focus on rebuilding its international relationships and coalitions, while reserving America’s right on occasion to act decisively and alone. Writing this summer in Foreign Affairs, another Boston-area political scientist, Daniel W. Drezner of the Fletcher School of Law and Diplomacy at Tufts University, argued that America can quite effectively maintain its international power by relying on cooperation and gentle persuasion most of the time, and force when challenged by other countries. This approach, Drezner believes, will have the result of “reassuring allies and signaling resolve to rivals.” An example is America’s investment in closer relations with Asian states, putting China on notice and forcing it to adjust its expansionist foreign policy. Drezner argues that although Obama might not have articulated it well, he is already driven by this approach, which Drezner calls “counterpunching.”
Princeton University’s G. John Ikenberry makes a similar argument in his latest book, “Liberal Leviathan.” America, he argues, can and should police the world order so long as it abides by the same rules as other nations. As a sort of first among equals, America holds the balance of power but cannot overtly dominate the world. It’s a position that demands considerable care to maintain, and requires investment in alliances, institutions like the UN, and international regimes like the ones that govern trade and certain types of crime.
Another line of thinking comes from scholars and writers who are more pessimistic about America’s prospects; they see an empire at the beginning of a long period of decline, and believe America needs to dramatically curtail its international engagements and get its own house in order first. It could be called the school of empire in eclipse, and its proponents have been labeled declinists. In effect, they argue that America’s economy and domestic infrastructure are collapsing, and the nation’s global influence will follow, unless America concentrates its resources on rebuilding at home. Many of the most influential thinkers in this strain aren’t against a robust foreign policy, but they argue that it’s impractical to worry too much about the rest of the world until we address the problems at home. The late historian Tony Judt was one of the leading exponents of this view, and the writer George Packer has taken up many of the same themes. A folk version of this thinking drives much of Thomas Friedman’s writing, and underpins his most recent book, “That Used to Be Us: How America Fell Behind in the World It Invented and How We Can Come Back,” which he wrote with Michael Mandelbaum, an American foreign policy expert at Johns Hopkins University.
On the opposite end of the spectrum, neo-imperialism holds that America’s grand strategy need only be more brash and more demanding. (Neo-imperialist thinkers are often called “global dominators” as well.) The recent mistake, in this view, is that America asks too little of the world, and thereby invites frustrating challenges. Mitt Romney’s claim that he “won’t apologize for America” reflects this view, which has its intellectual underpinnings in the writings of conservatives such as Robert Kagan, Niall Ferguson, Robert Kaplan, and Charles Krauthammer. Ferguson, a British historian, has made the provocative suggestion that America should pick up where the British Empire left off, directly managing the entire world’s affairs. In this view, even long and expensive entanglements like the ones in Iraq and Afghanistan aren’t major setbacks: Militarism, these expansionists argue, has propelled American economic growth and international influence for centuries, and has a long future as long as we aren’t shy about using the force we have.
A grand strategy has to match means with goals; it can’t merely assert American power, but needs to account for America’s own depth of resources. The precarious state of America’s economy and the wear and tear on its military suggest that any successful new strategy would allow for a modest period of retrenchment, one in which America continues to fancy itself the world’s leader but adopts the tone of a hard-boiled CEO rather than a field marshal–Jack Welch rather than George Patton.
Bush’s strategy foundered for those reasons; he boldly and clearly asserted that America would secure itself through preemptive war and the spread of democracy, but found that he simply didn’t have the resources to deliver–and that the world didn’t respond as he hoped to our prodding and aspirations.
America’s limits have grown apparent as it has discovered a surprising shortage of leverage, even over close allies. The liberal grand strategists are feeding into a process already underway in the Obama administration to systematize foreign policy into a coherent framework, something more akin to a grand strategy than the jumble of policies that has marked America’s foreign policy for the last three years. The conservatives, meanwhile, are looking for intellectual toeholds among the Republican presidential contenders.
Earlier this year a top Obama aide seemed to belittle the very idea of a grand strategy as a simplistic “bumper sticker,” something that reduced the world’s complexity to a slogan. But, in a sense, that’s exactly the point of having one. To be truly helpful in time of crisis, a grand strategy must be based on incredibly thorough and detailed thinking about how America will rank its competing interests, and what tools it might use to project power in the rest of the world. But it also demands simplicity: a principle, even a simple sentence, reflecting our values as well as our interests, based on right as well as might, and as clear to America’s enemies as it is to the American electorate.
My latest column in The Boston Globe.
Fundamentalism’s rise just shows that the secular state has won, says political scientist Olivier Roy
If you read the news, it can feel impossible to escape the growing influence of religious fundamentalism. Christian evangelicals set the talking points of the Republican primaries, and America’s leaders speak about God with an insistence that might have made the authors of the Constitution blush. Religious parties have become steady players in the politics of secular Europe. Meanwhile, Islamists from Morocco to Malaysia campaign to eliminate all barriers between the mosque and the state. With religion so ubiquitous in politics, it sometimes appears that the world may be crossing the threshold from the age of secularism into one of religious states.
But what if all this religious activism and faith-based politics herald not a new dominance but a passing into irrelevance? That’s the counterintuitive assertion of Olivier Roy, a French political scientist and influential authority on Islam, who has now turned his eye on the growth of fundamentalism across all religions.
As Roy argued in the book “Holy Ignorance,” published late last year, old-line religions with organized power structures are in a global decline, overtaken by fundamentalist sects such as Islamic Salafis and Christian Pentecostals, which make a charismatic appeal to the individual. But in fact, Roy argues, this trend suggests that religion has effectively given up the struggle for power in the public square. Despite the extremist rhetoric of televangelists in the American Midwest or the Persian Gulf, these growing religions are really just a reaction to the triumph of secularism. They demand a total personal commitment from their adherents: abstinence, sobriety, frequent prayer. But, though their leaders may talk of making America an officially Christian nation or of establishing pure Islamic states in places like Egypt, they carry none of the real institutional or military weight of established religions in history.
“The religious movements are here, they have an agenda,” Roy says. “I never deny that. I say that political ideologies based on religion don’t work. You can’t manage a country in the name of a religion.” In other words, as he interprets it, the world is not in the grip of a clash of civilizations; religious movements are just growing more strident because they are detached from the culture of nations. An age of vocal religious fundamentalists, contrary to our expectations, may also be an age of safely secular states.
If Roy’s hypothesis proves correct, it stands to change international politics by making us much less concerned about religious extremism. It could give secular forces new power to deal with fundamentalist organizations like the Muslim Brotherhood; since their religious agendas are not likely to dominate any nation’s political life, they could be dealt with on a purely material level. At home, American policy makers could rest assured that extreme fundamentalist positions on gay rights or teaching evolution in schools would inevitably be leavened by the political realities of a pluralistic, secular society. For those who treasure secular governance and separation of church and state, Roy’s hypothesis–if the 21st century bears it out–would mean there is far less in the world to fear.
In “Holy Ignorance,” Roy presents what reads like an alternate history of the last half century, narrating some familiar facts but drawing counterintuitive conclusions. Fundamentalist religion has been on the rise worldwide since the 1950s. In the Islamic world the fastest growing sect is the Salafis, who advocate a return to 7th-century piety. Among Christians, evangelical Christian sects like Mormons, Pentecostals, and Jehovah’s Witnesses are the fastest growing, while ultra-Orthodox Jews are gaining within the Jewish community. Almost nowhere, with the exception of Ayatollah Khomeini’s revolution in Iran, has the fundamentalist revival taken the form of a single religious group, unified behind one leader, trying to take over a country. Instead, ascendant fundamentalists are a fractious group. Moreover, their growth comes at a time when pluralistic, secular politics predominate, and religious institutions have less power than ever before. In a borderless, globalized world, the only religions that grow are exactly those that sever themselves from a specific national or cultural context, and appeal to universal humanity. Such religions might sound louder or more shrill, but they’re also by definition less significant to the political and cultural life of nations.
Roy’s argument has grown out of decades studying the interaction between religion, politics and culture. As a young man, Roy writes, he was jarred by the arrival of a born-again Christian in his religious youth group. It was his first encounter with charismatic religion. After he established himself as an authority on the Islamic world, Roy began to argue by the 1990s that Islamic fundamentalists would not manage to win over their own societies, despite their violent heyday; his 1994 book, “The Failure of Political Islam,” was criticized by some as premature, but now most scholars agree with its conclusions.
From today’s vantage point, religious politicians and commentators seem like a force to be reckoned with, but it’s instructive to compare them to the mainline religious institutions that used to wield power. During the Middle Ages, institutional religions possessed tremendous real-world power. The Catholic Church decided which leaders had legitimacy and triggered massive wars. With the Industrial Revolution, Christian missionary activity became inextricably intertwined with European colonization. Even as recently as 1960, the Vatican held such sway that American voters feared John F. Kennedy would take orders from the pope, whose edicts set the parameters of European social policy well into the 20th century. Meanwhile, in the Islamic world, until colonialism eroded the authority of the Ottoman Empire in the 19th century, the supposedly all-powerful caliphs and their governors could be overthrown when religious scholars withdrew their support.
Gradually, however, secularism took root. After the French Revolution, the number of European leaders claiming a divine right to rule dwindled. Even the caliphate in Istanbul began to look like a secular empire with vestigial religious trappings. Modern nation states were more concerned with holding territory and controlling populations than with matters of faith. Roy argues that even when they invoked religion to gain legitimacy, modern states were engaged in an inherently political quest, in which religion was marginalized.
It was the loss of direct religious power that prompted the rise of what Roy calls “pure” religions: sects that appeal to the universal individual. Old-line religions talked explicitly about power and considered themselves more powerful than states. As they declined, Roy says, the public began to move to groups that were more charismatic and focused on faith in the private sphere–Mormons, Pentecostals, Tablighis, Salafis, ultra-Orthodox Jews. We call these sects “fundamentalist” because they claim to return to the founding texts of their faith, stripping out the practices that had accreted in the centuries since the founding of their respective religions.
This return to basics has visceral appeal to those seeking a faith. But millions of born-again evangelicals, seeking salvation from a mosaic of political backgrounds, are unlikely to develop the clout the Vatican once had. Look closely at the surge of Salafists around the Islamic world, and you’ll find extreme fragmentation, with rivalries between the followers of competing sheikhs often taking priority over concern with unbelievers. The religions that are actually growing and winning converts are resolutely not political monoliths in the making.
In Roy’s view, the pure new religions employ simplistic ideas that sound universal and appeal to new converts, but whose vigor can be hard to sustain generation to generation. These charismatic, fundamentalist religions require constant renewals of faith, Roy says, and draw their growth and dynamism from the zeal of new converts. Fundamentalist sects can afford to take extreme positions on belief precisely because they have none of the constraints of governance and authority.
Not everyone is convinced by Roy’s interpretation. Even if there are hardly any real theocracies left in the world, religious hard-liners still influence political life directly in countless places. One need look no further than the role of born-again Christian activists in the Republican Party, or the growing enrollment of Salafist political parties in Egypt.
Karen Barkey, a Columbia University sociologist, argues that Roy has overlooked the resilience of fundamentalism. Yes, secular culture has largely taken the reins of power, but it’s a two-way process; religion has fought back, and tailors its message and approach to local constraints. Roy is too optimistic when he claims that fundamentalism always wears itself out, Barkey says. Religion is asserting itself within evolving cultures, including democratic ones, and will continue to have great influence. “Some forms of religious fundamentalism may well be disappearing into the ether of abstraction,” Barkey wrote in a barbed essay contesting Roy’s view in Foreign Affairs, “but in most cases, religion, culture, and politics are still meeting on the ground.”
For those watching that clash, too, it can be hard to credit Roy’s interpretation. In phenomena like the rise of the evangelical right in American politics and Islamic fundamentalists in the Arab world, Roy sees a sign of defeated religion that has been pushed from the political playing field. But others argue that religion crucially shapes politics in ways that scholars, most of whom are secular, don’t well understand. Grandees of political science recently convened an initiative to systematically increase their field’s study of religion, and published a volume through Columbia University Press this year called “Religion and International Relations Theory.”
Still, many scholars agree with Roy that the importance of religious extremists has been overstated. Alan Wolfe, director of the Boisi Center for Religion and American Public Life at Boston College, says that Roy provides a convincing riposte to those like Jerry Falwell, who called for religion to directly shape legislation. Wolfe endorses Roy’s assertion that religious influence on politics will continue to wane.
The implications of Roy’s theory are stark. If he is correct, American policy makers can step away from the entire framework of “the clash of civilizations” and “the rise of Islam,” and finally begin to deal with political actors at face value. Iran’s ayatollahs, for example, are better understood as extreme nationalists rather than religious fanatics, Roy has written. While religion matters to Hindu nationalists in India, Shas Party members in Israel, and followers of Hezbollah in Lebanon, it’s their specific nationalist aims that have made them politically powerful players. In political terms, their faith is essentially irrelevant.
Most of all, Roy’s theory offers the possibility of no longer seeing our conflicts as religious wars. In 2003, Lieutenant General William Boykin scandalized secular Americans when he described battling a Muslim fighter and said, “I knew that my God was bigger than his. I knew that my God was a real God, and his was an idol.” If Roy is correct, such rhetoric is simply beside the point; as our century wears on, extreme religious belief will more and more be just a fringe phenomenon, unworthy of the attention of governments.
How consequential was 9/11? How much of our response was theatrics, how much a lost opportunity, and to what extent is our brain trust doing its job? My column last week about America’s failure to adapt to a world no longer dominated by traditional states has prompted some interesting responses. Dan Drezner at Foreign Policy opens a conversation about my contention that international affairs thinking still hasn’t caught up with a fluid post-Cold War order whose arrival was emphasized by 9/11. One question he raises:
Is it that international relations theory has gone stale… or is it simply that the wrong set of existing theories are in vogue today?
Isn’t it possible that both are the case? Folks like Nye, Walt, and Doyle were wrong about some big things in the 1990s (like the Democratic Peace; it turns out that democracies, or democratizing nations, do sometimes go to war with each other). And in the 2000s, there was a big idea, Bush’s Freedom Agenda, but it was the wrong idea, and it didn’t work.
Roland Paris adds another salient question to the discussion. Is there really a need for a bumper sticker, a grand unifying theory? Or doesn’t a complicated world require complicated thinking? To that I say, of course – but if you don’t have a grand strategy that’s easy to articulate, akin to containment after World War II, then you probably are adrift. Complex ideas can be summarized clearly and compellingly without being reduced to a silly bullet point. Paris also points out some recent thinking I didn’t mention in my column:
… more thinking has taken place on the changing nature of international affairs than Cambanis acknowledges. He mentions the works of Nye, Doyle and Walt, all of whom are great scholars. However, as my former University of Colorado colleague, Dan Drezner, points out, the ideas that Cambanis mentions are not new. Nye’s insights into ‘soft power’ were first published in 1990, for example, while Doyle’s pioneering work on the ‘democratic peace’ appeared in the late 1970s. Drezner could have added that in the ensuing years there has been a flurry of scholarly writing addressing the issues that Cambanis raises – including works about the role of non-state actors and about ‘fuzzier’ problems such as transnational networks and failed states. Admittedly, much of this literature is full of academic jargon, but an interested reader with a bit of perseverance will discover rich veins of gold to mine. (If you are interested in taking up pick and shovel, this course syllabus, which lists some recent writings, is a good place to start. If, however, you prefer to have a shiny nugget handed to you, take a look at this thoughtful article by Princeton’s John Ikenberry.)
It’s possible that the great ideas that will carry us into the future are circulating quietly and will soon hold sway in the salons of power. It’s also possible that America is strategically adrift.
EVEN BEFORE WE knew who had committed the attacks of Sept. 11, 2001, it was clear that the event would be transformative, startling Americans into reconsidering how they understood matters from religion and culture to war and civil liberties.
For the nation’s foreign policy brain trust, it announced one thing in particular: A new species of power finally had come of age. The force behind the 9/11 attacks wasn’t an enemy nation but a small band of resourceful zealots scattered around the world. Al Qaeda was what experts call, for lack of a more elegant term, a “non-state actor” – one of a new species of powers that operate outside the umbrella of traditional governments. These powers run the gamut from terrorist cells to Google, from globe-spanning charities to oil conglomerates. As much as anything, the 9/11 strikes illustrated the profound influence that non-state actors could have on world affairs.
You might think that the sudden demonstration of the radical power of a non-state actor would have triggered an equally bold reaction at the highest levels of policy thinking: a coherent shift in grand strategy, in America’s thinking about how it should contend with the wider world, on the scale of the one that developed after World War II.
Surprisingly, though, if there is one thing that 9/11 didn’t change, it was this. Instead of a flurry of new thinking at the highest echelons of the foreign policy establishment, the major decisions of the past two administrations have been generated from the same tool kit of foreign policy ideas that have dominated the world for decades. Washington’s strategic debates – between neoconservatives and liberals, between interventionists and realists – are essentially struggles among ideas and strategies held over from the era when nation-states were the only significant actors on the world stage. As ideas, none of them were designed to deal effectively with a world in which states are grappling with powerful entities that operate beyond their control.
“Great power relations, war, diplomacy, we know how to do that,” says Stephen Krasner, a Stanford University international relations specialist who ran the policy planning department in George W. Bush’s State Department from 2005 to 2007. “We don’t know how to deal with these tricky non-state questions.”
A look at the major foreign crises of the last two decades, the ones that have demanded the most attention, money, and lives from America, reveals that virtually all of them were provoked not by some hostile scheming government in Moscow or Beijing, but by an entity that would have been beneath the radar of a classic global strategist. In the 1990s, warlords in Somalia and Bosnia ripped apart their regions while bedeviling old-line powers like the United States, Britain, and NATO. On the eve of the millennium, the United States struggled to grapple with the fallout from a failed hedge fund, Long-Term Capital Management, and Argentina’s economic collapse – financial crises whose political repercussions outstripped those of many wars. The Sept. 11 attacks only made obvious a change that had already been apparent to many.
Ten years later, we are still waiting for the intellectual response to that change. And already we can see the costs of its absence. The United States responded to the 9/11 attacks with strategies that made sense in the old days of state-vs-state conflict, deploying the world’s most powerful armed forces and spending more than $1 trillion trying to subdue bands of simply trained and lightly armed men in Afghanistan and Iraq. Many strategists see a rising China as the next major threat in a traditional sense, but even China’s challenge to American interests is largely taking place outside traditional state channels: in contests for oil concessions, and struggles for high-tech revenue, cultural influence, and intellectual property. The cost of getting these types of struggles wrong can, in the long term, be even greater than a runaway war.
There’s been some interesting criticism of my piece in The Boston Globe on Sunday about the increasing calls for violence among revolutionaries in Egypt. The best-known scholars of nonviolence, whose research I cite in my column, take exception to my analysis and raise some valid objections to the way I framed some of the other research. Other writers have gone further, accusing me of advocating violence. Briefly: the question is not whether nonviolent revolt is preferable – of course it is, and when successful it leads to more liberal, stable political transitions than violent revolt. The pertinent question is, in what situations are nonviolent revolts doomed to fail? And against recalcitrant regimes that will use all possible force to suppress dissent, what avenues are open to the revolutionary? In short, what works against the most oppressive and violent regimes? This inquiry is more topical than ever today, as revolts have failed, are fizzling, or are headed toward full-scale war in countries including China, Iran, Syria, Yemen and Libya. It’s not a question of averages (what works most of the time?), but one of specifics – what works against the most brutal states?
Scholars Erica Chenoweth and Maria Stephan have studied more than a century of uprisings and found that nonviolence is twice as likely to succeed as violence. In a post on Rational Insurgent, Chenoweth says I misread the literature. She’s right that I somewhat sloppily described the work of Robert Pape and Ivan Arreguín-Toft; their research argues that violence works and makes strategic sense, but doesn’t explicitly compare it to nonviolence. I know that Chenoweth and Stephan’s work followed the research of Pape and Arreguín-Toft, but the Globe essay makes it sound the other way around. Chenoweth goes on to write:
Contrary to Cambanis’s argument, the historical record reveals rather dramatically that nonviolent resistance is strategically superior, and, in the end, often leads to much more democratic and stable societies than violent insurgency. Although Egyptians may be rightly frustrated with the pace and direction of the transition, they need only look to other recent cases—such as Libya or Yemen—to see the risks of using violence to attempt to improve their strategic positions. Our research indicates that if Egyptians resort to violence, their chances of success will drop by about half, the risk of civil war will steeply rise, and the chances for democracy in the foreseeable future will be considerably reduced.
Other writers take up a related form of argument. Tom H. Hastings writes that nonviolence is more likely to overturn a repressive government than violent revolt, which might well replace one tyranny with another. I think he has a good point, and I personally don’t think a violent revolt in Egypt is more likely than a peaceful one to lead to a liberal state; nowhere in my Globe piece do I make this contention. Hastings also nastily suggests that I must be a fan of mass killings in the name of revolution, a pointless and unfounded slander which merits no response but says something about the low standards to which people hold themselves in public writing these days.
Eric Stoner asks whether The Boston Globe “would prefer violence in Egypt.” Speaking for myself, again, I would not. But I’m a journalist, not an ideologue, so I’m asking questions like what actually works, and what is actually happening. If frustrated Egyptians start taking out hits on police officers, it won’t be because of my column. And if Egypt’s admirable nonviolent revolutionaries manage finally to push the ruling military out of power and install an elected, accountable, civilian government in its place, without widespread violence, it won’t be because of our essays. Although it will be an outcome to cheer.
The genesis of my Globe essay was entirely organic. In more than a dozen interviews, Egyptian activists told me without my even raising the subject that they were considering adopting violent resistance (mostly vigilante killings of police officers who had killed civilians) because non-violent protest hadn’t accomplished as much as they would like. This widespread organic sentiment prompted me to explore the idea as a story. None of the strategic organizers of the protests in Tahrir Square advocated a turn to violence, although several of them told me that they thought some limited violence might prompt further concessions from the military junta presently ruling Egypt.
The phenomenon of arguing for violence led me to history and political science. Historically, had successful revolutions employed violence? And what had political scientists found?
The record, of course, is mixed. The French Revolution became increasingly violent and radical, and of course, led to a backlash and an illiberal century. The Russian Revolution began comparatively peacefully, toppled the czar, and then was overtaken by a violent Bolshevik coup that created a brutal police state. Eastern Europe in the late twentieth century offered up a cast of “Velvet Revolutions,” in which violent states were quickly overturned by peaceful protest.
A few scholars I contacted offered very thoughtful comments that were sadly edited out of the published piece for length.
Mark Kramer, director of the Project on Cold War Studies at Harvard’s Davis Center for Russian and Eurasian Studies, said that proponents of violence often misunderstood the legacy of the Russian Revolution.
First of all, the real Russian Revolution was in March 1917, and it was peaceful. The Tsar’s regime collapsed in the face of escalating protests and mutinies in the army, and a new, democratic government (the provisional government headed by Aleksandr Kerensky) came to power. What happened in November 1917, when the Bolsheviks seized power, was really a coup d’etat and not a revolution. … The real revolution (in March 1917 with the overthrow of the Tsar’s regime) was a good case in which peaceful protests worked. The Bolsheviks’ subsequent willingness to rely on mass violent terror to seize power was a disaster for Russia. I have no doubt at all that Russia would be a much better country nowadays if it had not been forced to endure seven decades of rule under one of the most violent dictatorships that ever existed.
About the research of Chenoweth and Stephan he says:
I find their argument very intriguing, but one clear problem is that their database unavoidably omits countless non-violent resistance campaigns that never begin (because they are deterred) or that are crushed at a very early stage before they become widely known. Hence, the database is biased toward successful cases of non-violence, leaving ample room for debate about the authors’ conclusions. Moreover, even if Stephan and Chenoweth are correct in their aggregate analysis of non-violent resistance campaigns unadjusted for size, the existence of crucial outliers — China in June 1989, Burma in 2007, Zimbabwe in 2005 and 2008, and Iran in June-July 2009 — raises further questions about the validity of their argument. Suffice to say that more research will be needed.
For now, the question of whether the use of violence by a protest movement is likely to contribute to the ouster of a regime remains open. The downfall of regimes in numerous countries after non-violent protests over the past 35 years, most recently this year in the Arab world, is extremely important to bear in mind, but in many instances, including China, Uzbekistan, Burma, Zimbabwe, Iran, and Belarus among others, autocratic governments have been willing to rely on ruthless violence to crush mass non-violent protests and prevent further challenges.
On the other hand, the inefficacy of non-violence in many cases does not necessarily mean that the use of violence would have been more conducive to success. Clearly in some cases the use of violence has simply made things worse. The Chechens’ resumption of violent attacks in August 1999, rather than proceeding with non-violent negotiations to settle the final status of Chechnya as called for under the August 1996 Khasavyurt accords, ended what was arguably the best chance Chechnya has ever had (or will ever have) to achieve independence. The Chechen guerrillas’ decision to resort to violence proved disastrous.
Personally, I don’t advocate violence, torture, or extrajudicial killing, whether by states or insurgents. It’s still worth asking, though – and the research of Chenoweth and Stephan in no way ends the debate – whether nonviolent protest works against a regime truly determined to stay in power at all costs (cf. China, Iran, Syria, and Libya).
CAIRO – When Egyptian president Hosni Mubarak resigned after 18 days of public demonstrations here last winter, Tahrir Square instantly took its place in the world’s iconography of peaceful protest. Young men and women brandishing nothing more lethal than shoes and placards had toppled a dictator. One subversive slogan – “The people want the fall of the regime” – in the mouths of a million people overpowered a merciless police state.
That was half a year ago. Today, Mubarak’s military council runs the country, wielding even more power than before, when it had to share authority with the president’s family and civilian inner circle. The military has detained thousands of people after secret trials, accused protesters of sedition, and issued only opaque directives about the country’s path toward a constitution and a new elected civilian government.
As time passes and revolutionary momentum fades in the broader public, a new current of thought is arising among the protesters who still occupy Tahrir Square, demanding civilian rule and accountability for former regime figures. Many are now asking an unsettling question: What if nonviolence isn’t the solution? What if it’s the problem?
“We have not yet had a true revolution,” said Ayman Abouzaid, a 25-year-old cardiologist who has taken part in every stage of the revolution so far. At the start, Abouzaid wholeheartedly embraced nonviolence, but now believes that only armed vigilante attacks will force the regime to purge the secret police and other operatives who still retain their jobs from the Mubarak era. “We need to take our rights with our own hands,” he says.
Among the dedicated core of Egyptian street activists who have been at the forefront of the protests since the beginning, an increasing number have begun to argue that a regime steeped in violence will respond only to force. Egypt’s revolution appeared nonviolent, they argue, only because it wasn’t a revolution at all: it was a quiet military coup that followed the resignation of the president. They cast a glance at nearby Syria and Libya, still racked by sustained violent revolts against their authoritarian leaders, and wonder if that may be what a true revolution looks like.
Leftist political thinkers have turned to the history of the French and Russian Revolutions to argue that a full break from Egypt’s authoritarian past will ultimately require the use of force against the regime. Rank-and-file activists in Tahrir Square invoke a more visceral rule of power, pointing out that riot troops and secret police agents will yield only to the raw strength of popular confrontation.
Egypt’s trajectory is also raising a bigger question about revolutions: Is the modern view of regime change naive and inaccurate, reading too much into the uprisings that swept Eastern Europe after the fall of the Soviet Union? Perhaps these swift and largely peaceful overthrows of former Communist regimes are the exception rather than the rule when it comes to revolution. And if that’s true, Egypt and the other countries driving the Arab Awakening might be heading not toward something better, but something worse.
Nonviolence was philosophically at the heart of Egypt’s revolution from the beginning, and it’s part of why Tahrir Square appealed not only to millions of Egyptians but to so many in the West. January 25 fit nicely on a bumper sticker, signifying a gentle, acceptable kind of popular uprising for the modern age.
But some in Egypt – even among those who don’t want to see a more violent turn now – are already saying we need to see last winter’s events differently. The days that led to Mubarak’s fall were starkly violent, they point out, and the youth who battled their way into Tahrir Square in January did so by overpowering riot police with rocks and Molotov cocktails.
“We did use violence, but we never started violence,” says Alaa Abd El Fattah, an influential labor activist and blogger who has been organizing teach-ins and impromptu conferences on his country’s future. He has pushed a less utopian narrative of the revolution’s origins, although he still believes that protesters today need to remain nonviolent to achieve their goals.
Recent events, however, have convinced some revolutionaries to feel otherwise. Since Mubarak resigned in February, the military has taken charge of internal security and run the show with the same caprice and impunity that characterized the reign of Mubarak’s secret police. Little headway has been made on the demand that unifies protesters and the Egyptian public – that police officers who killed or abused civilians under the old regime be removed from their jobs and held accountable. Egyptian citizens who express political dissent are still routinely denounced on state-run television as foreign agents and spies. And on June 28, the riot police deployed for the first time since Mubarak left office, and with apparent relish pummeled demonstrators with tear gas, birdshot, and plastic bullets. YouTube videos capture police in and out of uniform taunting demonstrators with swords and sarcastically chanting one of the uprising’s own slogans back to them: “Raise your head, you’re Egyptian.”
Since then, a persistent chorus has started to call for a more violent challenge to the regime’s behavior. Core activists in Tahrir Square point out that it was the brute force of people fighting riot police in January that startled the regime and forced Mubarak’s resignation; they argue that the mostly peaceful manifestations since then have allowed the military dictatorship to survive intact. During the initial uprising, Abouzaid, the cardiologist, slept in front of Egyptian Army tanks to stop them advancing into Tahrir Square. In the past month he has come to embrace an even more radical approach. “Freedom means death,” Abouzaid said. “That is the equation of a true revolution. You know the police officer who killed your son? You go and kill him.”
Families of those killed in the initial uprising have organized a major pressure campaign to force the government to account for the missing and put on trial all the officials involved in attacking demonstrators. Otherwise, they say, they will have every right to take justice into their own hands.
“I want them all to go to hell,” says Sayed Goma, 52, who has been wandering through Tahrir Square since January with a picture of his son, who disappeared in the Jan. 28 clashes with police and is presumed dead. Goma blames the government coroner he accuses of burying unidentified bodies of slain protesters in secret mass graves.
“I will kill his son and not give him the body,” Goma says.
Some street-fight veterans talk of assaulting police stations. Others, including some leaders, say that targeted assassinations of murderous police would spur the regime into action.
The pressure to escalate resistance and employ violence has driven a rift through the disparate coalition of groups still occupying Tahrir Square. Leaders of the April 6 movement, a driving force behind the revolution that commands deep support in working-class areas because of its history of labor activism, have studiously shut down any talk of violent action. April 6 has embraced mainstream positions, even tempering criticism of the military junta while generals were accusing the movement of subversion. But rank-and-file members of the group in Egypt, like Joe Gabra, chafe: “We need to take up arms,” says Gabra, a young organizer. “If you get shot at, this peaceful stuff doesn’t work.”
Even Abd El Fattah, the labor activist and blogger who argues against a revolutionary embrace of violence, muses publicly about using it. “I talk about killing police officers all the time. It’s a fantasy,” he says. “It could solve our problems, but it doesn’t mean I am planning to do it.”
The argument in Tahrir Square is more than a local debate over tactics; it reflects a real divide among political thinkers over what works best when challenging a police state. Though revolutions have long been associated with bloodshed, the astonishing success of the “soft” revolutions of Poland, Czechoslovakia, and later the Ukraine drove much policy and social science research in the post-Cold War era. An influential study published in 2008 made a strong case that peaceful protest was the most effective way to challenge authoritarian regimes. In “Why Civil Resistance Works: The Strategic Logic of Nonviolent Conflict,” Maria J. Stephan and Erica Chenoweth studied 323 resistance campaigns from 1900 to 2006, and concluded that nonviolent efforts succeeded twice as often as violent ones. Violent crackdowns on peaceful protesters tended to backfire, Stephan and Chenoweth argued, and nonviolent resistance garnered greater international support.
This line of thinking has apparently been compelling not only to academics but to authoritarian rulers in places like China and Russia, who have deployed the full force of the state against even tiny, marginal civil disobedience campaigns, unnerved at the prospect that they might swell into massive nonviolent uprisings. And the philosophy of the Eastern European “velvet” or “color” revolutions infused some of the drivers of the Arab Awakening itself: April 6 leaders even traveled to Europe to meet leaders of Otpor, the nonviolent Serbian movement that overthrew Slobodan Milosevic.
But these gentle revolutions, it turns out, might be exceptions rather than the rule. There’s a backlash among some historians and political scientists that echoes the gut feeling of Egypt’s frustrated revolutionaries. They suggest, sometimes reluctantly, that regimes that insist on ruling by the gun, so to speak, might only be pushed aside by the gun.
Robert Pape, a University of Chicago political scientist, studied terrorist attacks, aerial bombing, and other forms of coercion, and concluded that violence achieves strategic goals far more effectively than peaceful means. Ivan Arreguín-Toft, a political scientist at Boston University, makes a similar argument about the critical role of violence for opposition movements in his book “How the Weak Win Wars: A Theory of Asymmetric Conflict.”
Some analysts and academics seeking to understand the forces at play in the Arab Awakening look less to the gentle power transitions in contemporary Europe than to the fiery, radical and violent regime changes in Iran in 1979 and France in 1789. These scholars say that the French Revolution, with its guillotine and counterrevolutionary backlash, might be a more useful example of a true break with the past than the measured transitions two centuries later in Eastern Europe. The French Revolution swept aside an entire system, not only removing the royal family from power but smashing the feudal economy and the monarchical philosophy on which it was built. Arab activists have also noted that Iran’s revolution – whose theocratic aims they do not share – successfully remade the entire state because it brooked no dissent and warmly embraced violent tactics.
The gentler Eastern European uprisings, by contrast, are now seen as a different kind of regime change: not so much revolutions as restorations, a return to a broader European trajectory interrupted by Soviet domination at the end of World War II. Other uprisings, like the “people power” waves that overthrew dictators in the Philippines and Indonesia, largely avoided violence because they demanded incremental reform rather than the toppling of an entire system, and the military didn’t defend the regime. It is usually when the dictator’s ruling apparatus refuses any compromise at all that reformist opposition morphs into violent revolution – as is the case today in Syria, where rebellion is veering toward open conflict, and Libya, fully in the throes of civil war. Most Egyptians, it is clear from the public conversation here, would prefer the smoother kind of transition: a revolution without the Sturm und Drang. The question is whether that will be enough to unseat the military.
Theda Skocpol, the Harvard sociologist who redefined the study of comparative revolutions, cautioned that “there is no one ‘paradigm’ for a revolution.”
“Egypt seems to me to be going through a kind of political change, but not a full-blown revolution,” Skocpol says. With the army still in charge, she argues, nonviolent protests may yet be the most effective approach, especially since the military establishment, which depends on US support, will be sensitive to international public opinion.
Back in Tahrir Square, conversations are rife with historical analogies, as activists trade theories about revolutions past, in France, Russia, and Eastern Europe, and ongoing fights elsewhere in the Arab world.
“We need patience,” Sayed Radwan, a 52-year-old airline counter representative, said on a recent afternoon. “The French Revolution took 30 years.”
“Yes,” snapped his friend, “but first, they killed all the leaders.”