Burj Khalifa soars above the other buildings in Dubai.
[Originally published in The Boston Globe Ideas section.]
BEIRUT, Lebanon — The Palestinian poet and filmmaker Hind Shoufani moved to Dubai for the same reasons that have attracted millions of other expatriates to the glitzy emirate. In 2009, after decades in the storied and mercurial Arab capital cities of Damascus and Beirut and a sojourn in New York, she wanted to live somewhere stable and cosmopolitan where she also could earn a living.
Five years later, she’s won a devoted following for the Poeticians, a Dubai spoken-word literary performance collective she founded. The group has created a vibrant subculture of writers, all of them expats.
To its critics—and even many of its fans—“culture” and “Dubai” barely belong in the same sentence. The city is perhaps the world’s most extreme example of a business-first, built-from-the-sand boomtown. But Shoufani and her fellow Poeticians have become a prime exhibit in a debate that has broken out with renewed vigor in the Arab world and among urban theorists worldwide: whether the gleaming boomtowns of the Gulf are finally establishing themselves as true cities with a sustainable economy and an authentic culture, and, in the process, creating a genuine new path for the Middle East.
Burj Khalifa, the world’s tallest tower.
This is a question of both economic interest and huge sentimental importance. The Arab world is already home to a series of capitals whose greatness reaches deep into antiquity. The urban fabric and dense ancient quarters of Baghdad, Damascus, Cairo, and Beirut have long nourished Arab culture and politics. But, racked by insurrection, unemployment, and fading fortunes, they have also begun to seem, to many observers, more mired in the past than a template for the future.
The Dubai debate broke out again in October when Sultan Al Qassemi, a widely read gadfly and member of one of the United Arab Emirates’ ruling families, wrote a provocative essay arguing that the new Gulf cities, most notably Dubai, had once and for all eclipsed the ancient capitals as the “new centers of the Arab world.” A flurry of withering essays, newspaper articles, and denunciations followed. “I touched a sensitive nerve,” Al Qassemi said in an interview.
His critics object that Dubai is hardly a model—as they point out, 95 percent of the city’s population is not even naturalized, but made up of expatriates with limited rights. And there’s another problem as well. Every one of the Gulf boomtowns—besides Dubai, they include Abu Dhabi, Doha, Manama, and Kuwait City—has been underwritten, directly or indirectly, by windfall oil profits that won’t last forever.
A mosque in Cairo.
In her seminal work “Cities and the Wealth of Nations,” Jane Jacobs argued that cities that were a “monoculture” last only as long as the boom that created them, whether it involved bauxite, rubber plants, or oil. To thrive in the long term, cities need adaptable, productive economies with diverse, high-quality workers and enough capitalist free-for-all so that unsuccessful businesses fail and new ones spring up. Otherwise they risk the fate of single-industry cities like New Bedford, Detroit, or the completely abandoned onetime mining city at Hashima Island in Japan.
Can Dubai and its peers successfully make that transition? They started as precisely the kind of monocultures that Jacobs argued are doomed to fail; now they are trying to harness their money and top-down management to create a broader web of interconnected industries in the cities and their surrounds.
Dubai is the cutting edge of this experiment. With its reserves depleted, its growth comes from a diverse, post-oil economy, although it still receives significant financial support from other Emirates that are still pumping oil and gas. Its rulers are determined to make their city a center for culture and education, building museums and institutes, sponsoring festivals and conferences, with the expectation that they can successfully promote an artistic ecosystem through the same methods that attracted new business. What happens next stands to tell us a lot about whether an artificial urban economy can be molded into one that is complex and sustainable. If it can, that may matter not just for the Middle East, but for cities everywhere.
JACOBS, A PIONEERING WRITER on cities and urban economics who died in 2006, is perhaps best known for “The Death and Life of Great American Cities,” her paean to Greenwich Village and small-scale urban planning. But in her 1984 follow-up about the economies of cities and their surrounding hinterlands, Jacobs showed a harder nose for business. To be wealthy and dynamic, she argued, cities needed to avoid both dependence on military contracts and the burden of subsidizing other, poorer territories—pitfalls that have driven the decline of many a capital city. In her book, she touted Boston and Tokyo as creative, diversified economic engines. But many of the world’s storied capital cities, like Istanbul and Paris, she wrote, were fatally bound to declining industries and poor, dependent provinces.
Today, that description perfectly encapsulates the burden carried by the Arab world’s great cities. Baghdad, Damascus, and Cairo historically hosted multiple vibrant economic sectors: finance, research, manufacturing, design, and architecture. Eventually, though, they were hollowed out. Oil money, aid, and trade eliminated local industry, and the profits of these cities were siphoned away to support the poverty-stricken rural areas around them.
As these cities fell behind, a very different new urban model was rising nearby, along the Persian Gulf. As the caricature of the Gulf states goes, nomadic tribes unchanged for millennia suddenly found themselves enriched beyond belief when oil was discovered. The nouveaux riches cities of the Gulf were born of this encounter between the Bedouin and the global oil market.
The reality is more nuanced and interesting. The small emirates along the Gulf coast had long been trade entrepôts, and Dubai was among the most active. Its residents were renowned smugglers, with connections to Persia, the Arabian peninsula, and the Horn of Africa. When oil came, the Emirates already had a flourishing economy. And because their reserves were relatively small, they moved quickly to invest the petro-profits into other sectors that could keep them wealthy when the oil and gas ran out. Dubai, Abu Dhabi, and Sharjah (all in the Emirates) pioneered this model, with neighboring Manama, Doha, and Kuwait City following it closely.
Skeptics have decried the new Gulf cities, often vociferously, ever since the oil sheikhs announced their grand ambitions to build them in the 1970s. In his 1984 classic “Cities of Salt,” the great novelist Abdelrahman Munif chronicled the rise of the Arab monarchs in the Gulf. He explained the title to Tariq Ali in an interview: “Cities of salt means cities that offer no sustainable existence,” Munif said. “When the waters come in, the first waves will dissolve the salt and reduce these great glass cities to dust. With no means of livelihood they won’t survive.”
And yet, despite the apparent contempt of cultural elites, when civil war swept Lebanon, the Arab world’s financial center moved to the Gulf. Soon other sectors blossomed: light manufacturing, tourism, technology, eventually music and television production.
Dubai led the way. It built the infrastructure for business, and business quickly came. Over the decades, investment and workers flowed to a desert city of malls and gated communities, which had a huge airport, well-maintained streets, and clear rules of the road. Abu Dhabi, Manama, and Doha followed suit, although they took it more slowly; with continuing oil and gas revenue, they didn’t need to take the risk of growth as explosive as Dubai’s. Unlike the austere cities of Saudi Arabia, all the Gulf’s coastal trading cities had a tradition of a kind of tolerance. Other religions were welcome, and so were foreigners, so long as they didn’t question the absolute authority of the ruling family.
In the last four decades of the oil era, that model has evolved into the peculiar institution of a city-state dependent on a short-term foreign labor pool from top to bottom. The most extreme case is Dubai, where less than 5 percent of the 2 million people are citizens. Citizens form a minority in all the other Gulf cities as well. Wages for expatriates—especially workers in construction and service sectors like the airlines—are kept low, and foreign laborers are isolated from better-off city residents in labor camps. Construction workers who complain or try to unionize have been deported. White-collar residents who have criticized Emirati rulers or who have supported movements like the Muslim Brotherhood have had their contracts canceled or their residencies not renewed.
The economic crash of 2008 wiped out some of Dubai’s more excessive projects (although the signature underwater hotel finally opened this year). The real estate bubble burst; expats abandoned their fancy cars at the airport. “There was this glee that the city was over. But it was resilient,” said Yaser Elsheshtawy, a professor of architecture at the UAE University. The Gulf cities bounced back. Millions of new workers, from Asia and Europe as well as the Arab world, have migrated to the Gulf since then.
“The Dubai model might be good, it might be bad, but it deserves to be looked at with respect,” Elsheshtawy said. Egyptian by birth, Elsheshtawy has lived on three continents, and he’s grown tired of having to defend his choice to work in the Emirates. After he read dozens of ripostes to Al Qassemi’s polemic, including many that he felt smacked of cultural snobbery toward anyone who lived in the “superficial” Gulf cities, Elsheshtawy penned an eloquent defense of Dubai called “Tribes with Cities” on his blog Dubaization. He doesn’t like everything about the Gulf, but Elsheshtawy believes that Dubai and the other booming Gulf cities, “unburdened by ancient history” and blessed by a mix of cultures, can provide the world “the blueprint for our urban future.”
DUBAI AND ABU DHABI, the showcase cities of the Emirates, often seem like they’re run by a sci-fi chamber of commerce. They’ve got the world’s tallest building, the biggest new art collections in starchitect-designed museums, the busiest airports, and growing populations. Beneath that surface, though, lies a structure that worries even many supporters: Freedoms are tightly constrained, and most of the population is made up of explicitly second-class noncitizens. Other growing cities chafe under censorship or political restrictions—Beijing, Hong Kong, and Singapore spring to mind. But there’s a difference between those places, where citizen-stakeholders live out their entire lifetimes, and a city where almost everyone is fundamentally a visitor.
Even Al Qassemi, the Emirati who believes the new cities have pioneered a better economic model, has argued that the citizenship restriction will hurt Dubai and cities that follow its model. “Without naturalization, all the Arabs who move here and are creating these cities will see them only as stepping stones to greener pastures,” Al Qassemi said. “People make money and they leave.”
There’s a glaring moral problem with a city ruled by a tiny clan where most of the workers have no rights. But the last few decades suggest that citizenship and political freedom aren’t prerequisites for GDP growth. Jacobs wrote a lot about what cities need, but the only kind of freedom she wrote about was the freedom to innovate and create wealth. The new Gulf cities have carefully provided a state-of-the-art, fairly enforced body of regulations for corporations—precisely the kind of rule of law they actively deny to foreign workers.
In treating businesses more solicitously than individuals, the Gulf city model may depend on a twist that Jacobs never foresaw: They don’t care whether people stick around. In fact, these new cities assume they will be able to innovate precisely because they won’t be encumbered by citizens whose skills are no longer needed. If Dubai needs fewer construction and more service workers, or fewer film producers and more computer programmers, it simply lets its existing contracts lapse and hires the people it needs on the global market. The churn isn’t a flaw in the model; it’s part of its foundation.
That may explain why even Dubai’s defenders are not planning to stick around. Shoufani, the poet, says she cherishes the secure space to create that Dubai has given her, but she still plans to move on in a few years. So does Elsheshtawy, the architecture professor whose academic studies of urban space have helped counter the narrative of Dubai as a joyless, dystopian city interested only in the pursuit of money. He plans to retire somewhere else. It may not matter to Dubai’s fortunes, however, as long as people arrive to take their place.
The next few years will begin to tell how this experiment has turned out. Just as Jane Jacobs said, it doesn’t matter so much how a city was born. It matters how its economy operates. If Dubai and its imitators outlive the oil revenues and regional instability that helped them boom, it will be a lesson for cities everywhere in how to invent a viable urban economy—even if it leads to a kind of city that Jacobs herself might have loathed to live in.
[Originally published in The Boston Globe Ideas section.]
BEIRUT—In any American university, what the six researchers in this room are doing would be totally unremarkable: launching a new project to use the tools of social science to solve urgent problems in their home countries.
But for the young Arab Council for the Social Sciences, this work is anything but routine. The group’s fall meeting, initially planned for Cairo, was canceled at the last minute when Egyptian state security agents demanded a list of participants and research questions. The group relocated its meeting to Beirut, meaning some researchers couldn’t come because of visa problems. Still others feared traveling to Lebanon during a week when the United States was considering air strikes against neighboring Syria. Ultimately, the team met in an out-of-the-way hotel, with one participant joining via Skype.
Their first, impromptu agenda item: whether any Arab country could host their future meetings without any political or security risk.
The council’s struggle reflects, in microcosm, a much bigger problem facing the Middle East. With old regimes overthrown or tottering, for the first time in generations a spirit of optimism about change has swept through the region. The Arab uprisings cracked open the door to new ideas for how a modern Arab nation should govern itself—how it could rebalance authority and freedom, religious tradition and civil rights. But a key source of those new ideas is almost completely shut off. With few exceptions, universities and think tanks have been yoked under tight state control for decades. Military and intelligence officials closely monitor research, fearing subversion from political scientists, historians, anthropologists, and other scholars whose work might challenge official narratives and government power.
Just one year after its launch, the ACSS is hoping to fill that vacuum, giving backing and support to scholars who want to do independent, even critical thinking on the problems their societies face. It’s a tall order in a region where for generations talented scholars have routinely fled abroad, mostly to Europe and North America.
“There’s been almost a criminalization of research here,” says Seteney Shami, the Jordanian-born, American-trained anthropologist tapped to get the new council running. “We want to change the way people think about the region.”
It took nearly five years of planning, but the new council is now finishing its first year of operation. It has awarded roughly $500,000 to 50 researchers. Its goal is ambitious: to build an enduring network that will connect individual researchers who until now have mostly labored alone, under censorship, or overseas. The council aims to give those thinkers institutional punch—not just funding for projects that aren’t popular with local regimes or Western universities, but muscle to fight authorities who still maneuver to block even the most innocuous-sounding research missions. Today issues of ethnic, sectarian, and sexual identity are still taboo for governments—and they’re precisely the focus for most of the researchers who met in Beirut.
Shami and the other founders envision the council as only a first step, helping push for new openness in universities and in publishing. Ultimately, they believe, it stands to change not only how Arab countries are perceived abroad, but the way those countries are governed. Given the incredible turbulence in the Arab world today, it’s easy to see why a new flowering of social science is needed—and also why it’s a risky proposition for whoever hopes to set it in motion.
WHEN WE THINK about where ideas come from, we often picture lone thinkers toiling indefatigably until they achieve their “eureka” moments. But in fact, the ideas that change the way we organize or understand our daily world grow most readily from ecosystems that can train such scholars, test their claims, and ultimately spread and promote their new thinking. The modern West takes this system for granted; universities are perhaps the most important nodes in a network that also includes foundations, think tanks, and relatively hands-off government funding agencies.
Even a society like China, with its tradition of centralized state control, supports a vigorous web of universities and institutes that produce and test ideas. In China, the government might drive the research agenda—rural educational outcomes, say, or international trade negotiations—but researchers are expected to produce rigorous results that stand up to outside scrutiny.
Not so in the Arab world. Despite a historic scholarly tradition, and a vigorous cohort of contemporary thinkers, the intellectual institutions in Arab countries are today almost universally subordinated to state control. As the dictators of the 1950s matured and grew stronger, they feared—correctly—that universities would nurture political dissent and that students were susceptible to free-thinking. (Even today, groups like the Muslim Brotherhood induct most of their leaders into politics through university student unions.) So they dispatched intelligence officers to control what professors taught, researched, and published, and to curtail student activism of a political flavor.
To the extent that think tanks, institutes, and journals were allowed to exist at all, they became either government mouthpieces or patronage sinecures. Plenty of individual scholars have continued to work in an independent vein, and many do research or publish work outside the influence of the ruling regime. For the most part, though, they do so abroad; those who speak openly in their home countries have often encountered gross repression, like the Egyptian sociologist Saad Eddin Ibrahim, an establishment thinker who was imprisoned in 2000 for taking foreign grant money. His prosecution—despite his close ties to the ruling dictator’s family—served as a reminder to other scholars not to stray too far from the state’s goals.
The obstacles to researchers who hope to confront the region’s problems run even deeper than that: in most Arab nations, even basic data on public issues like water consumption, childhood education, and women’s health are treated as state secrets. So are government budgets, and anything to do with the military, the police, and industry. Egypt still tightly guards access to land registries running as far back as the Ottoman Era; Lebanon famously hasn’t conducted a census since 1932, and refuses to release any government data about the population size or its religious composition. International agencies like the World Bank are allowed to conduct surveys and research as part of development aid projects, but only on the condition that they keep the data confidential.
As a result, great Arab capitals like Damascus, Beirut, Baghdad, and Cairo, once among the world’s great centers of learning, have suffered a systematic impoverishment of intellectual life, especially in the realms in which it is now most needed.
IN MANY WAYS Seteney Shami’s career illuminates those challenges precisely. She left her native Jordan first for the American University of Beirut and then for an anthropology PhD at the University of California at Berkeley. Even as a young scholar, she knew she might face problems at home simply for writing about matters of identity among her own ethnic group, the minority Circassians, so she had her dissertation removed from public circulation. In the 1990s, she returned to the Middle East and built a well-regarded graduate program in anthropology at Jordan’s Yarmouk University—one of the few rigorous graduate-level social science departments in the region. The experiment was short-lived, collapsing after only a few years when government patronage hires swamped the university faculty. Several scholars, including Shami, left.
She spent the next decade at US-funded foundations, working in Cairo for the Population Council and later in New York for the Social Science Research Council. She found that the scholars with whom she collaborated—and to whom she sometimes gave grants—relished the networks they built and the ideas that flowed when they had a chance to work with colleagues from different Arab countries, as well as in Europe and the United States. But there was a hitch: When the grant money ended, so did the network. In the West, such a change wouldn’t matter so much; the scholars would always have their home institutions to fall back on. Not so for a scholar in Jordan or Egypt, whose home institution might be far from supportive of his or her work.
“There is no institutional incentive to produce research in most Arab universities,” says Sari Hanafi, a sociologist at the American University of Beirut who is on the board of the new council. Hanafi conducted his own study of academic elites in the region, and discovered that most regional research was confined to safe descriptive projects—“production that will not question religious authority or the political system.” Essentially, it’s scholarship that doesn’t produce any new knowledge, or offer the possibility of change.
So a group of scholars including Shami and Hanafi began to conceive of something that would rectify the problem. Their vision isn’t a “think tank” in the Washington sense, but really almost a substitute for what universities do, creating a new and permanent forum to research, talk about, and solve serious social problems, outside government influence.
They secured funding from the Swedish government and later from Canada, the Carnegie Corporation, and the Ford Foundation. Among Arab countries, only Lebanon had laws that allowed an international organization to operate free from government interference; even here, it took two full years for ACSS to clear all the red tape. In a way, the group’s timing was fortuitous: By the time the organization came online in 2011, with a few dozen scholars participating, the Arab uprisings were in full swing.
The first projects illustrate the flair ACSS is bringing to the sometimes stuffy world of social science. One study focuses on inequality, mobility, and development, and has brought together economists and other “hard” social scientists to explore issues like the impact of refugees from Syria’s civil war. A second, called “New Paradigms Factory,” unabashedly seeks to midwife the region’s next big ideas, asking scholars to challenge its dominant notions of sovereignty, national identity, and government power. The third major project, “Producing the Public in Arab Societies,” probes minority identity, community organizing, and alternative media sources—sensitive issues that governments steer away from, but that will be key to any new ordering of Arab civic life.
SOME AMONG the inaugural crop of researchers worry that despite its forward-looking visions, the window for an experiment like the Arab Council for the Social Sciences might already be closing. Two years ago, when Shami recruited her first team of scholars and grant recipients, a wave of optimism was washing over the Arab world. “It was a time to be brave, to test the boundaries of free expression,” she said.
Now, many of the same old restrictions have returned: Egyptian secret police, who had backed off after dictator Hosni Mubarak’s fall, resumed their open scrutiny of intellectuals after the military coup in July. Qatar, Bahrain, and the United Arab Emirates, fearing revolts from their own citizens, have also tightened their approach to academic inquiry, shutting down local research institutes and banning critical academics from attending conferences and teaching at international universities. Several of the scholars who met in Beirut as part of the “Producing the Public” working group declined to speak on the record even in general terms about the ACSS, fearing that any publicity at all might interfere with their work.
The first ACSS projects will take several years to yield results, and Shami says the long-term impact will depend, in part, on whether Arab governments continue to actively persecute intellectuals they perceive as critics. Meanwhile, Shami says, the group intends to go where it is most acutely needed—funding the edgiest and most relevant scholarship, and pushing for more open access to archives and data. To make up for the lack of regional peer-reviewed journals, it might begin publishing research itself. With the grandiose-sounding goal of transforming the entire discourse about the Arab world, Shami is aware that ACSS might fail; but she’s not interested, she said, “in biding our time and holding annual conferences.” If it proves necessary, she said, ACSS might create its own university.
Already, though, ACSS has had some visible impact. Omar Dahi, an economist at Hampshire College in Massachusetts, won a grant from the new council to study the way Syrian refugees are supporting themselves and how they are transforming the nations to which they’ve fled. He’s spending the semester in Beirut, and he expects to produce a series of articles that will bring rigor to a heated and topical policy debate. The council, he says, allows Arab scholars “to answer their own questions, and not only the questions being asked in North American and Western European academia.”
“It’s not easy, and ACSS alone is not going to be able to do it,” Dahi said. “But you have to start somewhere.”
Supporters of ousted President Mohammed Morsi protested at the Republican Guard building in Nasr City, Cairo.
IS DEMOCRACY POSSIBLE in the Middle East? When observers worry about the future of the region, it’s in part because of the dispiriting political narrative that has held sway for much of the last half century.
The conventional wisdom is that secular liberalism has been all but wiped out as a political idea in the Middle East. The strains of the 20th century—Western colonial interference, wars with Israel, windfall oil profits, impoverished populations—long ago extinguished any meaningful tradition of openness in its young nations. Totalitarian ideas won the day, whether in the form of repressive Islamic rule, capricious secular dictatorships, or hereditary oligarchs. As a result, the recent flowerings of democracy are planted in such thin soil they may be hopeless.
This understanding shapes policy not only in the West, but in the Middle East itself. The American government approaches “democracy promotion” in the Middle East as if it’s introducing some exotic foreign species. Reformists in the Arab world often repeat the canard that politicized Islam is incompatible with democracy to justify savage repression of religious activists. And even after the revolts that began in 2010, a majority of the power brokers in the wider Middle East govern as if popular forces were a nuisance to be placated rather than the source of sovereignty.
An alternative strain of thinking, however, is starting to turn those long-held assumptions on their head. Historians and activists are unearthing forgotten chapters of the region’s history, and reassessing well-known figures and incidents, to find a long, deep, indigenous history of democracy, justice, and constitutionalism. They see the recent uprisings in the Arab world as part of a thread that has run through its story for more than a century—and not, as often depicted, a historical fluke.
The case is most clearly and recently laid out in a new book called “Justice Interrupted: The Struggle for Constitutional Government in the Middle East” by Elizabeth F. Thompson, a historian at the University of Virginia, who tries to provide a scholarly historical foundation to a view gaining traction among activists, politicians, and scholars.
Thompson sees the thirst for justice and reform blossoming as far back as 400 years ago, when the region was in the hands of the Ottoman Empire. In the generations since, bureaucrats, intellectuals, workers, and peasants have seized on the language of empire, law, and even Islam to agitate for rights and due process. Though Thompson is an academic historian, she sees her work as not just descriptive but useful, helping Arabs and Iranians revive stories that were deliberately suppressed by political and religious leaders. “A goal of this book is to give people a toolkit to take up strands of their own history that have been dropped,” Thompson said in an interview.
Not everyone agrees with her view: Canonical Middle Eastern history, exemplified by Albert Hourani’s 1962 study “Arabic Thought in the Liberal Age,” holds that liberalism did flourish briefly, but was extinguished as a meaningful force in the early years of the Cold War. Even today Hourani’s analysis is invoked to argue that there’s no authentic democratic current to fuel contemporary Arab politics.
But Thompson’s work resonates with a host of Middle Eastern academics, as well as activists, who are advocating new forms of government and who see their efforts as consistent with local culture and history. It may offer a way out of the pessimism gripping many Arab political activists today, finding connections between apparently disparate reformist forces in the region, and political ideas that are often seen as irreconcilably opposed. Most intriguing, she finds elements of this constitutional liberalism even within fundamentalist Islamist movements that democratizers most worry about. These threads suggest a possible way forward, a way to build a constitutional, democratic consensus on indigenous if often overlooked traditions. Islamists and secular Arabs, it turns out, have found common ground in the past, even written constitutions together. The same could happen again now.
NO ONE, including Thompson, would claim that democracy and individual freedom have been the main driver of Middle Eastern politics. Before World War I, almost the entire region lay under the dominion of absolute monarchs claiming a mandate from God—either the Ottoman Sultan, or the Shah of Iran. Later, Western colonial powers divided up the region in search of cheap resources and markets for their goods.
Yet lost in this history of despots and corrupt dealers is a long stream of democratizing ideas, sometimes percolating from common citizens and sometimes from among the ruling elite. In the 19th and 20th centuries, Western countries were beginning to move away from authoritarian monarchies and toward the belief that more people deserved legal rights. During this same time period in the Middle East, a similar conversation about law, sovereignty, and democracy was taking place, encompassing everything from the role of religion in the state to the right of women to vote.
Although authoritarian governments largely won the day, Thompson argues that the story doesn’t end there: Instead, she weaves together a series of biographies to trace the persistence of more liberal notions of Middle Eastern society. She begins with an Ottoman civil servant named Mustafa Ali who, in 1599, wrote a passionate memo exhorting the Sultan to reform endemic corruption and judicial mismanagement, because injustices were causing subjects to revolt—thus making the empire less profitable.
From 1858 to 2011, a series of leaders—most of them politicians, and many of them prolific writers—amassed substantial public followings and pushed, though usually without success, for constitutional reforms, transparent and accountable governments, and the institutions key to a sustainable democracy. Thompson was surprised, she said, to find the case for liberal democracy and rights in the writings of Iranian clerics, Zionist Jews, Palestinian militants, and early Arab Islamists.
With support from the Maronite church, a group of Lebanese peasants formed a short-lived breakaway mountain republic in 1858, dedicated to egalitarian principles. The blacksmith who led the revolt, Tanyus Shahin, insisted on fair taxation and equal protection of the law. His followers took over the great estates and evicted the landlords, but their main demand was for legal equality between peasants and landowners.
An Egyptian colonel named Ahmed Urabi led a revolt in 1882 against Egypt’s khedive, the Ottomans’ viceroy, inaugurating a tradition of mass revolt that had its echo in Tahrir Square in 2011. Urabi recounts in his memoir that when the khedive dismissed his demands for popular sovereignty in their final confrontation, he replied: “We are God’s creation and free. He did not create us as your property.” Decades later, in 1951, Akram Hourani rallied 10,000 peasants in Syria to resist Western colonialism and local corruption. Eventually, he and his followers in the Baath Party were sidelined by generals who turned the party into a military vehicle.
Some of the stories that Thompson tells are less obscure, like those of the founders of modern Turkey, the one sizable Islamic democracy to emerge from the former Ottoman Empire, or of the Iraqi Communist Party, which had its heyday in the decade after World War II and whose constitutional traditions remain an important force today even if the party itself is almost completely irrelevant.
Perhaps most encouraging, in a region known for clashes of absolutes, she finds a strain of compromise—in particular in the early 20th century, when secular nationalists in Syria negotiated with Islamists to hammer out a constitution both could support. It was swept aside when France took over in 1923.
“The Middle East is going to see these crises in Tahrir and Taksim and Iran until it can get back to a moment of compromise, which existed a hundred years ago with Islamic liberalism, where you can have your religion and your democracy, too,” Thompson said.
Thompson said she was surprised to find support for constitutionalism and due process in the writings of Hassan El-Banna, the founder of the Muslim Brotherhood, and even Sayyid Qutb, the ideologue whose writings inspired Al Qaeda. They believed that consensual constitutions could achieve even their religious aims, without disenfranchising citizens who opposed them.
Some of the characters in this tale have largely been lost to history. Others remain hotly contested symbols in today’s politics. The name of Halide Edib, a feminist and avatar of Turkish nationalism in the early 1900s, is still invoked by the governing Islamist party as well as its secular critics. In Egypt, which enjoyed a period of boisterous liberal parliamentary politics between the two world wars, activists today are trying to revive the writings of early Islamists who believed that an accountable constitutional state, with rights for all, would be better than theocracy.
IN THOMPSON’S VIEW , this world did not simply vanish: It lives on in contemporary Arab political thought, most interestingly in Islamist politics.
It’s easy to assume that religiously driven movements are all antidemocratic—and indeed, some have proven so in practice, like the ayatollahs in Iran or the Muslim Brotherhood in Egypt. But Thompson offers a more nuanced view, showing that many of these religious movements have internalized central elements of liberal discourse. The Muslim Brothers wanted to dominate Egypt, but they attempted to do so not by fiat but through a new constitution and a free-market economy.
Princeton historian Max Weiss says his own study of the Levant backs Thompson’s central argument that constitutionalism thrives in the Middle East: For more than a century, a powerful contingent of thinkers, activists, and politicians in the region have embraced rule of law, constitutional checks and balances, and liberal economics. Even when they’ve lost the political struggles of the day, they’ve remained active, shaped institutions like courts and universities, and provided an important pole within national debates.
For those in power, “constitutional” government can often be used as a fig leaf: Nathan Brown, an expert on Islamism and Arab legal systems at The George Washington University, observes that leaders like the monarchs in the Persian Gulf have often wielded constitutions as just another means of extending their absolute rule. And they’re not alone: Egyptian judges, Syrian rebels, and Gulf sheikhs often use law and constitution to “entrench and regularize authoritarianism, not to limit it,” he says.
But among the people themselves, there is a longstanding hope for the rule of law rather than the rule of generals, or of imams. Knowing this history is important, Thompson argues, because it establishes that democracy is a local tradition, with roots among secular as well as religious Middle Easterners. Reformers, liberals, even otherwise conservative advocates for transparency and human rights are often tainted as “foreign” or “Western agents,” imposing alien ideas on Middle Eastern culture. This slur is especially potent given the West’s checkered history in the region, which more often than not involved intervention on behalf of despots rather than reformers.
Even if democracy is far from winning the race, its supporters can take courage from how many Middle Easterners have demanded it in their own vernacular. As Thompson’s book demonstrates, it’s very much a local legacy to claim.
[Originally published in The Boston Globe Ideas section.]
BEIRUT — Alex, a Swiss bicyclist and Internet geek, thought he’d get a welcome break from his work as a computer engineer and a teacher when he moved to Damascus for a year to study Arabic. It was January 2011, a few months before the Arab uprisings spread to Syria.
But once he was there, Alex noticed with irritation that he couldn’t access Facebook and a seemingly random assortment of websites. Some Google search results were blocked, especially if they turned up pages containing forbidden terms like “Israel.”
He developed tricks to navigate the Internet freely, and sharpened his online evasion skills. If the government was so heavily monitoring and censoring Web surfing, he reasoned, it was surely spying on Internet users in other ways as well. He beefed up ways to encrypt his e-mail and Skype, and learned how to scour his own computer for remote eavesdropping software.
These skills ended up being more than just a personal hobby. When Syrians began to demonstrate against the regime of Bashar Assad, Alex found that his techniques were of urgent use to the friends he had made in the cafes of Damascus. Syrians were turning to activism, and they needed help.
“What was before a nuisance for me was now a danger to my friends,” said Alex, who didn’t want his last name published so as not to endanger any of his Syrian contacts.
Alex ended up spending two years in Beirut training Syrian antiregime activists on how to encrypt their data and protect their phones and laptops from the secret police, in what turned into a full-time job. Alex had become one of a small and secretive group of Internet security experts who work not with governments or companies but with individuals, teaching dissidents the skills they need to evade regime surveillance. Internet activists estimate there are about a hundred technical experts worldwide who work directly with dissidents.
As surveillance steps up and activists get more wired, the practical challenges for these digital security experts offer a unique glimpse of the frontline struggle between free speech and government control, or, as many of them put it, between freedom and authoritarianism. And with surveillance more than ever a concern for Americans at home, the knowledge of these security activists casts a revealing light on the peculiar role of the United States, home of both a powerful tech sector that has generated some of the most skillful evaders of surveillance and a government with an unparalleled ability to peer into our activities.
Indeed, even the people who know how to keep e-mail secret from the Syrians or Iranians say that it would be difficult to make sure the American government cannot eavesdrop on you. “It’s hard to find a service that isn’t vulnerable to the CIA or NSA,” Alex said in an interview in Beirut. “It’s easier if you’re here, or in Syria.”
ACTIVISTS IN AUTHORITARIAN states face a range of basic problems when they sit down at a computer. They need to communicate privately in an environment where the regime likely runs the local Internet service. They may want to send news about domestic problems to international audiences; they may want to mobilize their fellow citizens for a cause the regime is trying to suppress. Whatever they do, they need to keep themselves out of trouble, and also avoid endangering their collaborators by unwittingly revealing their identities to the government.
Tech experts like Alex offer them a mix of standard security protocols and tools designed specifically with lone activists in mind. The first step is “threat modeling”: Where does the danger come from, and what are you trying to hide? A well-known dissident might not be worried about revealing her identity, but might want to protect the content of her phone conversations or e-mails. A relatively unknown activist might be more concerned with hiding her online identity, so that the government won’t connect her real-life identity to her blog posts.
Users new to the world of surveillance and evasion must master a new set of tools. There are proxy servers that allow access to blocked websites without tracking users’ browsing history or revealing their IP address. Security trainers teach activists how to encrypt all their data and communications. And because circumstances change, Web security advocates emphasize the importance of multiple, redundant channels—different e-mail accounts, messaging programs, and social media platforms—so that when one is compromised, others remain.
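The principle behind all of this encryption training can be seen in a toy example. The Python sketch below is a one-time-pad XOR cipher, purely an illustration of the idea that a shared secret key scrambles a message and unscrambles it again; the tools trainers actually distribute rely on vetted ciphers and protocols, not this classroom exercise.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the key; applying it twice restores the original."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"march starts at noon in the old square"
key = secrets.token_bytes(len(message))   # random key as long as the message
ciphertext = xor_cipher(message, key)     # unreadable without the key
recovered = xor_cipher(ciphertext, key)   # XOR is its own inverse

assert recovered == message
```

The hard part in practice is exactly what trainers like Alex emphasize: the key has to reach the other party over a channel the regime cannot read, and, like any password, it can be extracted from a captured activist.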
A repressive regime like Bashar Assad’s can effectively stymie dissent with crude, old-fashioned ruses. On one occasion, the government arrested a rebel doctor while he was logged in to his Skype account. Agents posed as the doctor, sending all his contacts a file that supposedly contained a list of field hospitals. Instead, it installed a program called a keylogger that allowed the regime to monitor everything the doctor’s contacts did on their computers.
Alex warns all the activists he trains that all their encryption measures could come to naught if they are caught, like the doctor was, while their computer is running—or if they give up their encryption password under interrogation. “They can always torture you for your password, and then all your data is compromised,” he said. There’s no foolproof protection against that.
Though these security measures can go a long way, consultants also find themselves needing to balance the effort it takes with the unique urgency of some of the dissidents’ lives. In the heat of violent conflict, encryption doesn’t always take priority. “Many of them are just too busy to care, to follow all the disciplined procedures,” Alex said. “It got to the point where it felt useless to teach them how to encrypt Skype when thousands of tons of TNT were falling from the sky.”
AS ACTIVISTS have tapped online resources in their struggles, a range of security specialists have sprung up to assist them. Some, like Alex, are independent operators; many of them arose loosely around a single crisis and then expanded their efforts.
In response to Tehran’s Web censorship in 2009, a group of Iranian-Americans established an organization called Access Now to train human rights groups and other organizations in more secure communications. In the four years since, it has expanded worldwide and now sends technical specialists to work with activists in the former Soviet Union, the Middle East, and Africa. It also acts as a lobbying group, pressing for uncensored access to the Internet. “Access to an unfiltered and unsurveilled Internet is a human right,” says Katherine Maher, the group’s spokeswoman. “We should have the rights to free speech and assembly online as we have offline in the real world.”
A few years later, when the Arab uprisings began, activists again faced crucial concerns about technology and surveillance. Activists throughout the Arab world planned demonstrations online, and used social media as a major artery of communication. In Egypt, the government was so desperate to thwart the protest movement that in January 2011 it briefly cut off the entire nation’s Internet. Telecomix, a freewheeling collective that began in response to privacy concerns in Europe, was one of many groups that helped build workarounds so that Egyptians could communicate with one another and with the outside world in the early days of the uprising.
In Egypt, Alix Dunn cofounded a sort of nerd-wonk research group called The Engine Room in early 2011 to study and improve the ways that activists get tech support from the small community of available experts. “There are people who got really excited because all of a sudden IT infrastructure suddenly became part of something so political,” Dunn said. “They could be geeky and politically supportive at the same time.”
The advice is not always technical. In Egypt, for instance, Alaa Abdel Fattah, one of the country’s first bloggers and later a strategist for the 2011 uprising, championed a strategy of “radical openness.” He convinced other activists to assume that any meeting or communication could be monitored by the secret police—that they were always being overheard. Secret planning for protests should take place person to person, off the grid; in all other matters, activists should be completely open and swamp the secret police with more information than they could process. In the early stages of Egypt’s revolution, that strategy arguably worked: Activists were able to outwit the authorities, starting marches in out-of-the-way locations before police could get there.
GIVEN THE RECENT revelations about the US government’s online surveillance programs, it’s striking to note that much of the effort to improve international digital security for dissidents has been spurred by aid from the US government. The month after the Arab uprisings began, the US Department of State pledged $30 million in “Internet Freedom” grants; most of that money has gone, directly or indirectly, to the sort of activist training that Alex was doing in Damascus.
In some ways, the latest American surveillance revelations haven’t changed the calculus for activists on the ground. Maher notes that almost all the State Department-funded training instructs activists around the world to assume that their communications are being intercepted. (Her organization doesn’t take any US government funding.)
“It’s broadly known that almost every third-party tool that you can take is fundamentally compromised, or could be compromised with enough time and computing power,” Maher said.
But there are new wrinkles. Some of the safest channels for dissidents have been Skype and Gmail—two services to which the US government apparently has unfettered access. It’s virtually impossible for a government like Iran’s to break the powerful encryption used by these companies. Alex, the trainer who worked with Syrians, says that a doctor in Aleppo doesn’t need to worry about the NSA listening to Skype calls, but an activist doing battle with a US corporation might.
Officially, American policy promotes a surveillance-free Internet around the world, although Washington’s actual practices have undercut the credibility of the US government on this issue. How will Washington continue to insist, for example, that Iranian activists should be able to plan protests and have political discussions online without government surveillance, when Americans cannot be sure that they are free to do the same?
For activists grappling with real-time emergencies in places like Syria or long-term repression in China, Russia, and elsewhere, the latest news doesn’t change their basic strategy—but it may make the outlook for Internet freedom darker.
“These revelations set a terrible precedent that could be used to justify pervasive surveillance elsewhere,” Maher said. “Americans can go to the courts or their legislators to try and challenge these programs, but individuals in authoritarian states won’t have these options.”
[Originally published in The Boston Globe Ideas section.]
EVER SINCE AMERICANS had to briefly ration gas in 1973, “energy independence” has been one of the long-range goals of US policy. Presidents since Richard Nixon have promised that America would someday wean itself off foreign oil and gas, a reliance long seen as a gaping hole in America’s national security because it leaves us vulnerable to the outside world. It also handcuffs our foreign policy, entangling America in unstable petroleum-producing regions like the Middle East and West Africa.
Given the United States’ huge appetite for fuel, energy independence has always seemed more of a dream than a realistic prospect. But today, nearly four decades later, energy independence is starting to loom in sight. Sustained high oil prices have made it economically viable to exploit harder-to-reach deposits. Techniques pioneered over the last decade, with US government support, have made it possible to extract shale oil more efficiently. It helps, too, that Americans have kept reducing their petroleum consumption, a trend driven as much by high prices as by official policy. Total oil consumption peaked at 20.7 million barrels per day in 2004. By 2010, the most recent year tracked in the CIA Factbook, consumption had fallen by nearly a tenth.
Last year, the United States imported only 40 percent of the oil it consumed, down from 60 percent in 2005. And by next year, according to the US Energy Information Administration, the United States will need to import only 30 percent of its oil. That’s been driven by an almost overnight jump in domestic oil production, which had remained static at about 5 million barrels per day for years, but is at 7 million now and will be at 8.5 million by the end of 2014. If these trends continue, the United States will be able to supply all its own energy needs by 2030 and be able to export oil by 2035. In fact, according to the government’s latest projections, the country is on track to become the world’s largest oil producer in less than a decade.
Yet as this once unimaginable prospect becomes a realistic possibility, it’s far from clear that it will solve all the problems it was supposed to. As much as boosters hope otherwise, energy independence isn’t likely to free America from its foreign policy entanglements. And at worst, say some skeptics who specialize in energy markets, it might create a whole new host of them, subjecting America to the same economic buffeting that plagues most oil exporters, and handing China even more global influence as the world’s behemoth consumer.
As much as the shift brings opportunities, however, it is also likely to open the United States up to liabilities we have not yet had to face. The consequences may be both good and bad, enriching and destabilizing for US interests—but they will certainly have a major impact on our geopolitics, in ways that the policy world is only just beginning to understand.
WHEN RICHARD NIXON was president, America consumed about one-third of the world’s oil, importing about 8.4 million barrels per day, chiefly from the Middle East. The status quo hummed along until the Arab-Israeli war of 1973. The United States sent weapons to Israel, and the Arab states retaliated with a six-month oil embargo, refusing to sell oil to America. It was the only time in history that the “oil weapon” was effectively used, and it made a permanent impression on the United States.
Over time, the American response to the embargo came to include three major initiatives that still shape energy policy today. First, the government promoted lower oil consumption by pushing coal and natural gas power plants, home insulation, and mileage standards for cars. Second, the country drilled for more of its own oil. Third, and perhaps most important from a foreign-policy standpoint, the United States promoted a unified global oil market in which any country had the practical means to buy oil from any other. That meant that even if some countries couldn’t do business with each other—say, Iran and the United States—it wouldn’t affect the overall price and availability of oil. Other countries could fill in the gap.
The dreams of energy independence crossed party lines. Though liberals and conservatives differ on the means—how much we should rely on new drilling versus energy conservation—both parties have endorsed the quest. It was one of the few issues on which Presidents Carter and Reagan agreed.
America has made steady progress over the years, to the point where the nation’s total oil consumption has actually begun to drop. As this has happened, the high cost of global energy has also made it profitable to increase domestic production of natural gas and oil. A few months ago, both the US Energy Information Administration and the International Energy Agency predicted that if current production trends continue, the United States will overtake Saudi Arabia and Russia as the world’s largest oil producer in 2017.
Taken together, our slowing appetite and booming production mean that with a suddenness that has surprised many observers, the prospect of energy independence—technically speaking, at least—looms in the windshield.
Energy independence looks different today, however, than it did in the oil-shocked 1970s. For one thing, the energy market is a linchpin of the world order, and any big shift is likely to have costs to stability. Some analysts have warned that America’s growing oil production will create a glut that lowers prices, eating up the profits of oil countries and destabilizing their regimes. (That’s in the short term, anyway; worldwide, oil demand is still rising fast.) Falling prices mean that countries that depend on oil will face sudden cash shortages. It’s easy to imagine how destabilizing that could be for a natural-resource power like Russia, for the monarchs of the Persian Gulf, or for the dictators in Central Asia. No matter how distasteful their rule, the prospect of an unruly transition, or worse still, a protracted conflict, in any of those countries could cause havoc.
In the long term, this is not necessarily a bad thing: Weakening oppressive or corrupt governments could ultimately be beneficial for the people of those countries. And a shift in the balance of power away from the Gulf monarchies of OPEC and toward the United States could have a democratizing effect. In any event, though, lower oil prices and a dynamic energy market make the current stable order unpredictable.
China’s economic rise has also changed the global energy equation. For now, China is largely without its own petroleum supplies and is replacing the United States as the largest importer. As China steps into the United States’ shoes as the world’s largest oil customer, it will gain influence in oil-producing regions as American influence wanes. It might also feel compelled to invest more heavily in an aggressive navy, fearing that the United States will no longer shoulder the responsibility of policing shipping lanes in the Persian Gulf and elsewhere—a costly security service that America pays for but which benefits the entire network of global trade.
Domestically, there’s also the “resource curse,” which afflicts countries that depend too heavily on extracted commodities like minerals or petroleum. Such industries don’t add much value to a society beyond the price the commodity fetches at market, and that price is notoriously fickle, meaning fortunes and jobs rise and fall with swings in global prices. The resource curse often implies corruption and autocracy as well. But economists are less concerned about that, since the United States already has an effective government and laws to thwart corruption, and because oil will still make up a minuscule overall share of the economy. Last year oil and gas extraction amounted to just 1.2 percent of the American gross domestic product.
THERE ARE STILL plenty of people who think that energy self-sufficiency will be an unalloyed good. Jay Hakes, who has pursued the goal as an energy official under the last three Democratic presidents, says that America will reap countless political and economic dividends. It will help the trade deficit, give American companies and workers benefits when oil prices are high, and insulate the country from supply shocks. It will also give Washington wider latitude when dealing with oil-producing countries, on which it will depend less. “There are some downsides,” he acknowledges, “but they’re outweighed by all the positives.”
One benefit that self-sufficiency won’t bring, it seems clear, is a sudden independence from the politics of the Middle East. The region produces about half the world’s oil, and Saudi Arabia alone has so much oil that it can raise its capacity at a moment’s notice to make up for a shortfall anywhere else in the world.
Already, America is largely independent of Middle Eastern oil as a consumer: Only about 15 percent of our supply comes from the region. But we do depend on a stable world market—even more so if we become a net exporter ourselves. So even if we don’t buy Saudi oil, we’ll still need a stable Saudi regime that can add a few million barrels a day to world flows, at a moment’s notice, to offset a disruption somewhere else.
Michael Levi, a fellow at the Council on Foreign Relations and author of the book “The Power Surge: Energy, Opportunity, and the Battle for America’s Future,” believes that the biggest risk of achieving a goal like energy independence is complacency: Without the pressures that importing oil has brought, we may have little reason to innovate our way out of fossil fuels altogether. The policies themselves have achieved a great deal of good, he points out—stabilizing the world’s energy markets, reducing consumption, and pushing us beyond “independence” toward renewable sources like wind and solar power (though today these still make up a vanishingly small portion of the US energy supply).
Levi argues that an American oil bonanza could easily remove the political incentives for long-term planning and sacrifice. “I get scared that we’ll become complacent and make foolish decisions because we believe we’ve become energy independent,” Levi says. Energy independence was a useful slogan to motivate America, but in reality, a sensible energy policy has to balance a plethora of competing concerns, from geopolitics and the environment to consumer demand and fuel’s importance to the economy.
“The real way to be energy independent,” he said, “is actually to not use oil.”
Blast barriers in Baghdad, 2008. DONOVAN WYLIE / MAGNUM PHOTOS
[Originally published in The Boston Globe Ideas.]
BEIRUT — Everything that people love and hate about cities stems from the agglomeration of humanity packed into a tight space: the traffic, the culture, the chance encounters, the anxious bustle. Along with this proximity come certain feelings, a relative sense of security or of fear.
Over the last 13 years I have lived in a magical succession of cities: Boston, Baghdad, New York, and Beirut. They all made lovely and surprising homes—and they all were distorted to varying degrees by fear and threat.
At root, cities depend on constant leaps of faith. You cross paths every hour with people of vastly different backgrounds and trust that you’ll emerge safely. Each act of violence—a mugging, a murder, a bombing—erodes that faith. Over time, especially if there are more attacks, a city will adapt in subtle but profound and insidious ways.
The bombing last week was a shock to Boston, and a violation. As long as it’s isolated, the city will recover. But three people have died, and more than 170 have been wounded, and the scar will remain. As we think about how to respond, it’s worth also considering what happens when cities become driven by fear.
NEW YORK, WASHINGTON, and to a lesser extent the rest of America have exchanged some of the trappings of an open society for extra security since the Sept. 11 attacks. Metal detectors and guards have become fixtures at government buildings and airports, and it’s not unusual to see a SWAT team with machine guns patrolling in Manhattan. Police conduct random searches in subways, and new buildings feature barriers and setbacks that isolate them from the city’s pedestrian life.
Baghdad and Beirut, however, are reminders of the far greater changes wrought by wars and ubiquitous random violence. A city of about 7 million, low-slung Baghdad sprawls along the banks of the Tigris River. It’s the kind of place where almost everyone has a car, and it blends into expansive suburbs on its fringes. When I first arrived in 2003, the city was reeling from the shock-and-awe bombing and the US invasion. But it was the year that followed that slowly and inexorably transformed the way people lived. First, the US military closed roads and erected checkpoints. Then, militants started ambushing American troops and planting roadside bombs; the ensuing shootouts often engulfed passersby. Finally, extremists and sectarian militias began indiscriminately targeting Iraqi civilians and government personnel—in queues at public buildings, at markets, in mosques, virtually everywhere. Order crumbled.
Baghdad became a city of walls. Wealthy homeowners blocked their own streets with piles of gravel. Jersey barriers sprang up around every ministry, office, and hotel. As the conflict widened, entire neighborhoods were sealed off. People adjusted their commutes to avoid tedious checkpoints and areas with frequent car bombings. Drive times doubled or tripled. Long lines, with invasive searches, became an everyday fact of life. The geography of the city changed. Markets moved, even the old-fashioned outdoor kind where merchants sell livestock or vegetables. Entire sub-cities sprang up to serve Shia and Sunni Baghdadis who no longer could travel through each other’s areas.
Simple civic pleasures atrophied almost overnight. No one wanted to get blown up because they insisted on going out for ice cream. The famed riverfront restaurants went dormant; no more live carp hammered with a mallet and grilled before our eyes. The water-pipe joints in the parks went out of business. Most of the social spaces that defined the city shut down. Booksellers fled Mutanabe Street, the intellectual center of the city with its antique cafes. The amusement park at the Martyrs Monument shut its gates. Hotel bar pianists emigrated. Dust settled over the playgrounds and fountains at the strip of grassy family restaurants near Baghdad University.
In Beirut, where I moved with my family earlier this year, a generation-long conflict has Balkanized the city’s population. Here, most groups no longer trust each other at all. From 1975 to 1991 the city was split by civil war, and people moved where they felt safest. A cosmopolitan city fragmented into enclaves. Christians flocked to East Beirut, spawning a dozen new commercial hubs. Shiites squatted in the village orchards south of Beirut, and within a decade had built a city-within-a-city almost a million strong—the Dahieh, or “the Suburb.” The original downtown became a demilitarized zone, its Arabesque arcades reduced to rubble, and today has been rebuilt as a sterile, Disney-like tourist and office sector. Ras Beirut, my neighborhood, deteriorated from proudly diverse (and secular) to “Sunni West Beirut,” although it still boasts pockets of stubborn coexistence. In today’s Beirut, my block, where a Druze warlord lives across the street from a church and subsidizes the parking fees of his Shia, Sunni, and Christian neighbors, is a stark exception.
Mixing takes place, but tentatively, and because of frequent outbreaks of violence over the years, Beirutis have internalized the reflexes to fight, defend, and isolate. The result is a city alight with street life, cafes, and boutiques but which can instantaneously shift to war footing. One friend ran into his bartender on a night off at a checkpoint with bandoliers of bullets strapped to his chest. Even when the city appears calm, most people have laid in supplies in case an armed flare-up forces them to stay in their homes for a week. My friend’s teenage daughter keeps a change of clothes in her schoolbag in case she can’t return to her house. Uniformed private guards are everywhere, in every park, on every promenade, at every mall.
WITHIN A FEW weeks of our move to Beirut this year, my 5-year-old son traded his old fantasy, in which he played the doctor and assigned us roles as patients and nurses, for a new one: security guard. While we were setting up for a yard party, he arranged a few plastic chairs by the door. “I’ll check people here,” he declared. He also asked me a lot of questions about the heavily armed soldiers who stand watch on our street: “Will the army shoot me if I make a mistake?”
This is not the childhood he would have in Boston, even after this week. War-molded cities are nightmare reflections of failed states, places where government has gone into free fall, police don’t or can’t do their jobs, and normal life feels out of reach. Beirut is a kind of warning: Physically it appears normal on most days. But trust is gone. The public sphere feels wobbly and impermanent.
Boston is still lucky, with assets that Beirut lost generations ago; it has functional institutions, old communities with tangled but shared histories, and unifying cultural traditions. Boston has police that can get a job done and a baseball team that ties together otherwise divided corners of the city. One lesson of the city I live in now is that circumstances can sever these lifelines faster than we expect. Our connections require continued, perhaps redoubled, care. Without trust, a city can still be a magnificent place to live. Until all at once, it isn’t.
Syrian rebel fighters posed for a photo after several days of intense clashes with the Syrian army in Aleppo, Syria, in October. (AP: NARCISO CONTRERAS)
[Originally published in The Boston Globe Ideas.]
THE NEWS FROM SYRIA keeps getting worse. As it enters its third year, the civil war between the ruthless Assad regime and groups of mostly Sunni rebels has taken nearly 100,000 lives and settled into a violent, deadly stalemate. Beyond the humanitarian costs, it threatens to engulf the entire region: Syria’s rival militias have set up camp beyond the nation’s borders, destabilizing Turkey, Lebanon, and Jordan. Refugees have made frontier areas of those countries ungovernable.
United Nations peace talks have never really gotten off the ground, and as the conflict gets worse, voices in Europe and America, from both the left and right, have begun to press urgently for some kind of intervention. So far the Obama administration has largely stayed out, trying to identify moderate rebels to back, and officially hoping for a negotiated settlement—a peace deal between Assad’s regime and its collection of enemies.
Given the importance of what’s happening in Syria, it might seem puzzling that the United States is still so much on the sidelines, waiting for a resolution that seems more and more elusive with each passing week. But it is also becoming clear that for America, there’s another way to look at what’s happening. A handful of voices in the Western foreign policy world are quietly starting to acknowledge that a long, drawn-out conflict in Syria doesn’t threaten American interests; to put it coldly, it might even serve them. Assad might be a monster and a despot, they point out, but there is a good chance that whoever replaces him will be worse for the United States. And as long as the war continues, it has some clear benefits for America: It distracts Iran, Hezbollah, and Assad’s government, traditional American antagonists in the region. In the most purely pragmatic policy calculus, they point out, the best solution to Syria’s problems, as far as US interests go, might be no solution at all.
If it’s true that the Syrian war serves American interests, that unsettling insight leads to an even more unsettling question: what to do with that knowledge. No matter how the rest of the world sees the United States, Americans like to think of themselves as moral actors, not the kind of nation that would stand by as another country destroys itself through civil war. Yet as time goes on, it’s starting to look—especially to outsiders—as if America is enabling a massacre that it could do considerably more to end.
For now, the public debate over intervention in America has a whiff of hand-wringing theatricality. We could intervene to stanch the suffering but for circumstances beyond our control: the financial crisis, worries about Assad’s successor, the lingering consequences of the Iraq war. These might explain why America doesn’t stage a costly outright invasion. But they don’t explain why it isn’t sending vastly more assistance to the rebels.
The more Machiavellian analysis of Syria’s war helps clarify the disturbing set of choices before us. It’s unlikely that America would alter the balance in Syria unless the situation worsens and protracted civil war begins to threaten, rather than quietly advance, core US interests. And if we don’t want to wait for things to get that bad, then it is time for America’s policy leaders to start talking more concretely—and more honestly—about when humanitarian concerns should trump our more naked state interests.
MANY AMERICAN observers were heartened when the Arab uprisings spread to Syria in the spring of 2011, starting with peaceful demonstrations against Bashar al-Assad’s police state. Given Assad’s long and murderous reign, a democratic revolution seemed to offer hope. But the regime immediately responded with maximum lethality, arresting protesters and torturing some to death.
Armed rebel groups began to surface around the country, harassing Assad’s military and claiming control over a belt of provincial cities. Assad has pursued a scorched-earth strategy, raining shells, missiles, and bombs on any neighborhood that rises up. Rebel areas have suffered for the better part of a year under constant strafing and sniper fire, without access to water, health care, or electricity. Iran and Russia have kept the military pipeline open, and Assad has a major storehouse of chemical weapons. While some rebel groups have been accused of crimes, the regime is disproportionately responsible for the killing, which earlier this year passed the 70,000 mark by a United Nations estimate that close observers consider an undercount.
As the civil war has hardened into a bloody, damaging standoff, many have called for a military intervention, pressing for the United States to side with one of the moderate rebel factions and do whatever it takes to propel it to victory. Liberal humanitarians focus on the dead and the millions driven from their homes by the fighting, and have urged the United States to join the rebel campaign. The right wants intervention on different grounds, arguing that the regional security implications of a failed Syria are too dangerous to ignore; the country occupies a significant strategic location, and the strongest rebel coalition, the Nusra Front, is an Al Qaeda affiliate. Given all those concerns, both sides suggest that it’s only a question of when, not if, the United States gets drawn in.
“Syria’s current trajectory is toward total state failure and a humanitarian catastrophe that will overwhelm at least two of its neighbors, to say nothing of 22 million Syrians,” said Fred Hof, an ambassador who ran Obama’s Syria policy at the State Department until last year, when he quit the administration and became a leading advocate for intervention. His feelings are widely shared in the foreign policy establishment: Liberals like Princeton’s Anne-Marie Slaughter and conservatives like Fouad Ajami have made the interventionist case, as have State Department officials behind the scenes.
Intervention is always risky, and in Syria it’s riskier than elsewhere. The regime has a powerful military at its disposal and major foreign backers in Russia and Iran. An intervention could dramatically escalate the loss of life and inflame a proxy struggle into a regional conflagration.
And yet there’s a flip side to the risks: The war is also becoming a sinkhole for America’s enemies. Iran and Hezbollah, the region’s most persistent irritants to the United States and Israel, have tied up considerable resources and manpower propping up Assad’s regime and establishing new militias. Russia remains a key guarantor of the government, a stance that is costing it support throughout the rest of the Arab world. Gulf monarchies, which tend to be troublesome American allies, have spent small fortunes on the rebel side, sending weapons and establishing exile political organizations. The more the Syrian war sucks up the attention and resources of its entire neighborhood, the greater America’s relative influence in the Middle East.
If that makes Syria an unattractive target for intervention, so too do the politics and position of the combatants. For now, jihadist groups have established themselves as the most effective rebel fighters—and their distaste for Washington approaches their rage against Assad. Egos have fractured the rebellion, with new leaders emerging and falling every week, leaving no unified government-in-waiting for outsiders to support. The violent regime, meanwhile, is no friend to the West.
“I’ll come out and say it,” wrote the American historian and polemicist Daniel Pipes, in an e-mail. “Western powers should guide the conflict to stalemate by helping whichever side is losing. The danger of evil forces lessens when they make war on each other.”
Pipes is a polarizing figure, best known for his broadsides against Islamists and his critique of US policy toward the Middle East, which he usually says is naive. But in this case he’s voicing a sentiment that several diplomats, policy makers, and foreign policy thinkers have expressed to me in private. Some are career diplomats who follow the Syrian war closely. None wants to see the carnage continue, but one said to me with resignation: “For now, the war is helping America, so there’s no incentive to change policy.”
Analysts who follow the conflict up close almost universally want more involvement because they are maddened by the human toll—but many of them see national interests clearly standing in the way. “Russia gets to feel like it’s standing up to America, and America watches its enemies suffer,” one complained. “They don’t care that the Syrian state is hollowing itself out in ways that will come back to haunt everyone.”
IS IT EVER ACCEPTABLE to encourage a war to continue? In the policy world it’s seen as the grittiest kind of realpolitik, a throwback to the imperial age when competing powers often encouraged distant wars to weaken rivals, or to keep colonized nations compliant. During the Cold War the United States fanned proxy wars from Vietnam to Afghanistan to Angola to Nicaragua but invoked the higher principle of stopping the spread of communism, rather than admitting it was simply trying to wear out the Soviet Union.
In Syria it’s impossible to pretend that the prolonging of the civil war is serving a higher goal, and nobody, not even Pipes, wants the United States to occupy the position of abetting a human-rights catastrophe. But the tradeoffs illustrate why Syria has become such a murky problem to solve. Even in an intervention that is humanitarian rather than primarily self-interested, a country needs to weigh the costs and risks of trying to help against the benefit it might realistically expect to bring—and it’s a difficult decision to get involved when those potential costs include threats to our own political interests.
So just what would be bad enough to induce the United States to intervene? An especially egregious massacre—a present-day Srebrenica or Rwanda—could fan such outrage that the White House changes its position. So too could a large-scale violation of the Chemical Weapons Convention—signed by most states in the world, but not Syria. But far more likely is that the war simmers on, ever deadlier, until one side scores a military victory big enough to convince the outside powers to pick a winner. The White House hopes that with time, rebels more to its liking will gain influence and perhaps eclipse the alarming jihadists. That could take years. Many observers fear that Assad will fall and open the way to a five- or ten-year civil war between his successor and a well-armed coalition of Islamist militias, turning Syria into an Afghanistan on the Euphrates. The only thing that seems likely is that whatever comes next will be tragic for the people of Syria.
Because this chilly if practical logic is largely unspoken, the current hands-off policy continues to bewilder many American onlookers. It would be easier to navigate the conversation about intervention if the White House and the policy community admitted what observers are starting to describe as the benefits of the war. Only then can we move forward to the real moral and political calculations at stake: for example, whether giving Iran a black eye is worth having a hand in the tally of Syria’s dead and displaced.
For those up close, it’s looking unhappily like a trip to a bygone era. Walid Jumblatt, the Lebanese Druze warlord, spent much of the last two years trying fruitlessly to persuade Washington and Moscow to midwife a political solution. Now he’s given up. Atop the pile of books on his coffee table sits “The Great Game,” a tale of how superpowers coldly schemed for centuries over Central Asia, heedless of the consequences for the region’s citizens. When he looks at Syria he sees a new incarnation of the same contest, where Russia and America both seek what they want at the expense of Syrians caught in the conflict.
“It’s cynical,” he said in a recent interview. “Now we are headed for a long civil war.”
A street poster from Cairo that reads, “My God, my freedom, O my country.” Photo: NEMO.
CAIRO — Every night, Egypt’s current comedic sensation, a doctor hailed as his country’s Jon Stewart, lambastes the nation’s president on TV, mocking his authoritarian dictates and airing montages that reveal apparent lies. On talk shows, opposition politicians hold forth for hours, excoriating government policy and new Islamist president Mohammed Morsi. Protesters use the earthiest of language to compare their political leaders to donkeys, clowns, and worse. Meanwhile, the president’s supporters in the Muslim Brotherhood respond in kind on their new satellite television station and in mass counter-rallies.
Before Egypt’s uprising two years ago, this kind of open debate about the president would have been unthinkable. For nearly three decades, former president Hosni Mubarak exerted near total control over the public sphere. In the twilight of his term, he imprisoned a famous newspaper editor who dared to publish speculation about the ailing president’s declining health. No one else touched the story again.
To Western observers, the freewheeling back-and-forth in Egypt right now might sound like the flowering of a young open society, one of the revolution’s few unalloyed triumphs. But amid the explosion of debate, something less wholesome has begun to arise as well. Though speech is far more open, it now carries a new and different kind of risk, one more unpredictable and sudden. Islamist officials and citizens have begun going after individuals for crimes such as blasphemy and insulting the president, and vaguer charges like sedition and serving foreign interests. The elected Islamist ruling party, the Muslim Brotherhood, pushed a new constitution through Egypt’s constituent assembly in December that expanded the number of possible free speech offenses—including insults to “all prophets.”
Worryingly, a recent report showed that President Morsi—a Brotherhood member, and Egypt’s first-ever genuinely elected, civilian leader—has invoked the law against insulting the presidency far more frequently than any of the dictators who preceded him, and has even directed a full-time prosecutor to summon journalists and others suspected of that crime.
The Muslim Brotherhood, as it rises to power, is playing host to conflicting ideas. It wants the United States to view it as a tolerant modern movement that doesn’t arbitrarily silence critics, but at the same time it needs to show its political base of socially conservative constituents in rural Egypt that it won’t tolerate irreligious speech at home. And it wants to argue that despite its religious pedigree, it is behaving within the constraints of the law.
For the time being, Egypt’s proliferating free expression still outstrips government efforts to shut it down. But as the new open society engenders pushback, what’s happening here is in many ways a test case for Islamist rule over a secular state. What’s at stake is whether Islamists—who are vying for elected power in countries around the Muslim world—really only respect the rules until they have enough clout to ignore them.
The text on this Cairo street poster reads, “As they breathe, they lie.” Photo: NEMO
EGYPTIANS ARE RENOWNED throughout the Arab world for jokes and wordplay, as likely to fall from the mouth of a sweet potato peddler as a society journalist. Much of daily life takes place in the crowded public social spaces where people shop, drink hand-pressed sugarcane juice, loiter with friends, or picnic with their families. But under the stifling police state built by Mubarak, that vitality was undercut by fear of the undercover police and informants who lurked everywhere, declaring themselves at sheesha joints or cafes when the conversation veered toward politics.
As a result, a prudent self-censorship ruled the day. State security officials had desks at all the major newspapers, but top editors usually saved them the trouble, restraining their own reporters in advance. In 2005, when one publisher took the bold step of publishing a judge’s letter critical of the regime, he confiscated the cellphones of all his editors and sequestered them in a conference room so they couldn’t tip off authorities before the paper reached the streets.
It wasn’t technically illegal to be a dissident in Egypt; that the paper could be published at all was testament to the fact that some tolerance existed. Egypt’s system was less draconian and violent than the police states in Syria and Iraq, where dissidents were routinely assassinated and tortured. But the limits of public speech were well understood, and Egyptians who cared to criticize the state carefully stayed on the accepted side of the line. Activists would speak out about electoral fraud by the ministry of the interior or against corruption by businesspeople, for example, but would carefully refrain from criticizing the military or Mubarak’s family. Political life as we understand it barely existed.
Egypt’s uprising marked an abrupt break in this long cultural balancing act. For the first time, millions of Egyptians expressed themselves freely and in public, openly defying the intelligence minions and the guns of the police. It was shocking when people in the streets called directly for the fall of the regime. Within weeks, previously unimaginable acts had become commonplace. Mubarak’s effigy hung in Tahrir Square. Military generals were mocked as corrupt, sadistic toadies in cartoons and banners. Establishment figures called for trials of former officials and limits on renegade security officials.
In the two years since, free speech has spread with dizzying speed—on buses, during marches, around grocery stalls, everywhere that people congregate. Today there are fewer sacred cows, although even at the peak of revolutionary fervor few Egyptians were willing to risk publicly impugning the military, which was imprisoning thousands without any due process. (An elected member of parliament faced charges when he compared the interim military dictator to a donkey.)
Mohammed Morsi was inaugurated in June, after a tight election that pitted him against a former Mubarak crony. Morsi campaigned on a promise to excise the old regime’s ways from the state, and on a grandiose Islamist platform called “The Renaissance.” His regime has fared poorly in its efforts to take control of the police and judiciary. Nor has it made much progress on its sweeping but impractical proposals to end poverty and save the Egyptian economy. It has proven easier to talk about Islamic social issues: allegations of blasphemy by Christians and atheist bloggers; alcohol consumption and the sexual norms of secular Egyptians; and the idea, widely held among Brotherhood supporters, that a godless cabal of old-regime supporters is secretly plotting to seize power.
Before it won the presidency, the Muslim Brotherhood emphasized it had been fairly elected; the party was Islamist, it said, but from the pragmatic, democratic end of the spectrum. But in recent months, there’s been more than a whiff of Big Brother about the Brotherhood. Supposed volunteers attacked demonstrators outside Morsi’s presidential palace—and then were videotaped turning over their victims to Brotherhood operatives. Allegations of torture, illegal detention, and murder by state agents pile up uninvestigated.
As revolutionaries and other critical Egyptians have turned their ire from the old regime to the new, the Brotherhood also has begun targeting political speech. The new constitution, authored by the Brotherhood and forced through Egypt’s constituent assembly in an overnight session over the objections of the secular opposition and even some mainstream religious clerics, criminalized blasphemy and expanded older statutes against insults to leaders, state institutions like the courts, and religious figures. Popular journalists have been threatened with arrest, while less famous individuals, including children improbably accused of desecrating a Koran, have been thrown into detention. Morsi’s presidential advisers regularly contact human rights activists and journalists to challenge their reports, a level of attention and pressure previously unknown here.
In addition to the old legal tools to limit free expression, which are now more heavily used by the Islamists than they were by Mubarak, the new constitution has added criminal penalties for insulting all religions and empowers courts to shut down media outlets that don’t “respect the sanctity of the private lives of citizens and the requirements of national security.”
The Egyptian government began an investigation of TV comedian Bassem Youssef but dropped its charges after a public outcry.
Egyptian human rights monitors have tracked dozens of such cases, including three that were filed by the president’s own legal team. Gamal Eid at the Arab Network for Human Rights Information charted 40 cases that prosecuted political critics for what amounted to dissenting speech in the first 200 days of Morsi’s regime. That’s more, he claims, than during Mubarak’s entire reign, and more charges of insulting the president than had been filed since the law was first written in 1909.
IT’S A WELL-KNOWN PRECEPT in politics that times of transition are the most unstable, and that the fight to establish civil liberties carries risks. The current speech crackdown may just be an expected symptom of the shift from an effective authoritarian state to competitive politics. Mubarak, of course, had less need to prosecute a population that mostly kept quiet.
It could also be a sign of desperation on the part of the Brotherhood, as it struggles to rule without buy-in from the police and state bureaucracy. Or it could, more alarmingly, mark a transition to a genuine new era of censorship in the most populous Arab country, this time driven as much by the Islamist cultural agenda as by the quest to keep a grip on power.
It is that last prospect that makes the path Egypt takes so important. By dint of its size and cultural heft, the country remains a major influence across the Arab world, and both in Egypt and elsewhere, the Muslim Brotherhood is at the front lines of political Islam—trying to balance the cultural conservatism of its rank-and-file supporters with the openness the world expects from democratic society.
There are signs that the Brotherhood wants to at least make gestures toward Western norms, though it remains hard to gauge exactly how open an Egypt its members would like to see. At one point the government began an investigation of Bassem Youssef, the Jon Stewart-like TV comedian, but abruptly dropped its charges in January after a public outcry.
During the wave of bad publicity around the investigation, one of President Morsi’s advisers issued a statement claiming that the state would never interfere in free speech—so long as citizens and the press worked to raise their “level of credibility.”
“Human dignity has been a core demand of the revolution and should not be undermined under the guise of ‘free speech,’” presidential adviser Esam El-Haddad said in a statement that placed ominous boundaries on the very idea of free speech that it purported to advance. “Rather, with freedom of speech comes responsibility to fellow citizens.”
What scares many people is how those in power define “responsibility.” A widely watched video clip portrays a Salafi cleric lecturing his followers about how Egypt’s new constitution will allow pious Muslims to limit Christian freedoms and silence secular critics (the cleric, Sheikh Yasser Borhami, is from a more fundamentalist current, separate from but allied with the Brotherhood). When critics look at the Brotherhood’s current spate of investigations and threatened prosecutions, they see the political manifestation of the same exclusionary impulse: the polarizing notion that the Islamists’ actions are blessed by God and, by implication, that to criticize them is sacrilege.
Modern Islamism hasn’t reckoned with this implicit conflict yet, even internally. Officially, one current of the Brotherhood’s ideology prioritizes social activism over politics, and eschews coercion in religious matters. But another, perhaps more popular strain in Brotherhood thinking agitates for a religious revolution in people’s daily lives, and that strain appears to be driving the behavior of the Brothers suddenly in charge of the nation. Their fervor is colliding squarely with the secular responsibility of running a state like Egypt, which for all its shortcomings has real institutions, laws, and a civil society that expects modern freedoms and protections. The first stage of Egypt’s transition from military dictatorship has ended, but the great clash between religious and secular politics is just beginning to unfold.
Fotini Christia with a Syrian girl in a camp for internally displaced persons in the village of Atmeh, Syria.
[Originally published in The Boston Globe.]
WHAT IS a civil war, really?
At one level the answer is obvious: an internal fight for control of a nation. But in the bloody conflicts that split modern states, our policy makers often understand something deeper to be at work. The vengeful slaughter that has ripped apart Bosnia, Rwanda, Syria, and Yemen is most often seen as the armed eruption of ancient and complex hatreds. Afghanistan is embroiled in a nearly impenetrable melee between Pashtuns and smaller ethnic groups, according to this thinking; Iraq is split by a long-suppressed Sunni-Shia feud. The coalitions fighting these wars are seen as motivated by the deepest sort of identity politics, ideologies concerned with group survival and the essence of who we are.
This view has long shaped America’s engagement with countries enmeshed in civil war. It is also wrong, argues Fotini Christia, an up-and-coming political scientist at MIT.
In a new book, “Alliance Formation in Civil Wars,” Christia marshals in-depth studies of the recent wars in Afghanistan, Iraq, and Bosnia, along with empirical data from 53 civil conflicts, to show that in one civil war after another, the factions behave less like enraged siblings and more like clinically rational actors, switching sides and making deals in pursuit of power. They might use compelling stories about religion or ethnicity to justify their decisions, but their real motives aren’t all that different from armies squaring off in any other kind of conflict.
How we understand civil wars matters. Most civil wars drag on until they’re resolved by a foreign power, which in this era almost always means the United States. If she’s right, and we’re mistaken about what motivates the groups fighting in these internecine free-for-alls, then we’re likely to misjudge our inevitable interventions—waiting too long, or guessing wrong about what to do.
CIVIL WARS have always loomed large in the collective consciousness. Americans still debate theirs so vociferously that a blockbuster film about Abraham Lincoln feels topical 150 years after his death. Eastern Europe saw several years of ferocious killing in the round of civil wars that followed World War II.
Such wars have been understood as fights over differences that can’t be resolved any other way: fundamental questions of ideology, identity, creed. A disputed border can be redrawn; not so an ethnic grudge. In the last two decades, identity has become the preferred explanation for persistent conflicts around the world, from Chechnya to Armenia and Azerbaijan to cleavages between Muslims and Christians in Nigeria.
This thinking allows for a simple understanding, and conveniently limits the prospect for a solution. Any identity-based cleavage—Jew vs. Muslim, Bosnian vs. Serb, Catholic vs. Orthodox—is so profoundly personal as to be immutable. The conventional wisdom is best exemplified by a seminal 1996 paper by political scientist Chaim Kaufmann, “Possible and Impossible Solutions to Ethnic Civil Wars,” which argues that bitterly opposed populations will only stop fighting when separated from each other, preferably by a major natural barrier like a river or mountain range.
During the 1990s, this sort of ethnic determinism drove American policy toward Bosnia and Rwanda. It was popularized by Robert Kaplan’s book “Balkan Ghosts,” which was read in the Clinton White House and presented the wars in the former Yugoslavia as just the latest chapter in an insoluble, four-century ethnic feud. Like Kaufmann, Kaplan suggested that the grievances in civil wars could only be managed, never reconciled.
After 9/11, policy makers in Washington continued to view civil wars through this prism, talking about tribes and sects and ethnic groups rather than minority rights, systems of government, and resource-sharing. That view was so dominant that President Bush’s team insisted on designing Iraq’s first post-Saddam governing council with seats designated by sect and ethnicity, against the advice of Iraqis and foreign experts. It became a self-fulfilling prophecy as Iraq’s ethnic civil war peaked in 2006; things settled down only after death squads had cleansed most of Iraq’s mixed neighborhoods, turning the country into a patchwork of ethnically homogenous enclaves. Similarly, this thinking has shaped US policy in Afghanistan, where the military even sent anthropologists to help its troops understand the local culture that was considered the driving factor in the conflict.
Christia grew up in the northern Greek city of Salonica in the 1990s, with the Bosnian war raging just over the border. “It was in our neighborhood and we discussed it vividly every night over dinner,” she says. The question of ethnicity seized her imagination: Were different peoples doomed to conflict by incompatible identities? Or were the decision-makers in civil wars working on a different calculus from their emotional followers? As a graduate student at Harvard, Christia flew to Afghanistan and tried to turn a dispassionate political scientist’s eye to the question of why warlords behave the way they do.
Christia spent years studying these warlords, the factional leaders in a civil war that broke out in the late 1970s. As a graduate student and later as a professor, she returned to Afghanistan to interview some of the nastiest war criminals in the country. She concluded that culture and identity, while important for their adherents, did not seem to factor into the motives of the warlords themselves, and specifically not in their choices of wartime allies. Despite the powerful rhetoric about ethnic alliances forged in blood, warlords repeatedly flipped and switched sides. They used the same language—about tribe, religion, or ethnicity—whether they were fighting yesterday’s foe or joining him.
If ethnicity, religion, and other markers of identity didn’t matter to warlords, Christia asked, what did? It turns out the answer was simple: power. After studying the cases of Afghanistan, Bosnia, and Iraq in intricate detail, Christia built a database of 53 conflicts to test whether her theory applied more widely. She ran regression analyses and showed that it did: Warlords adjusted their loyalties opportunistically, always angling for the best slice of the future government. It’s not quite as simple as siding with the presumed winner, she says: It’s picking the weakest likely winner, and therefore the one most likely to share power with an ally.
In this model of warlord behavior, the many factions in a civil war are less like Cain and Abel and more like the mafia families in “The Godfather” trilogy. Loyalties follow business interests, and business interests change; meanwhile, the talk about family and blood keeps the foot soldiers motivated. In Bosnia, one Muslim warlord joined forces with the Serbs after the Serbs’ horrific massacre of Muslims at Srebrenica, and justified his switch by saying that the central government in Sarajevo was run by fanatics while he represented the true, moderate Islam. In case after case of intractable civil wars—Afghanistan, Lebanon, Iraq, the former Yugoslavia—Christia found similar patterns of fluid alliances.
“The elites make the decision, and then sell it to the people who follow them with whatever narrative sticks,” Christia said. “We’re both Christians? Or we’re both minorities? Or we’re both anti-communist? Whatever sticks.”
CHRISTIA’S WORK has been received with great interest, though not all her academic colleagues agree with her conclusions. Critics say identity is more important in civil wars than she gives it credit for, and that we ignore it at our peril. Roger Petersen, an expert on ethnic war and Eastern Europe who is a colleague of Christia’s at MIT and supervised her dissertation, argues that in some conflicts, identity—ethnic, religious, or ideological—is truly the most important factor. Leaders might make a pact with the devil to survive, but as a conflict heads toward its conclusion, irreconcilable differences often end in a fight to the death. Communists and nationalists fought for total victory in Eastern Europe’s civil wars, regardless of the fleeting coalitions of opportunity they had formed against foreign occupiers during World War II. More recently, Bosnia’s war ended only after the country had split into ethnically cleansed cantons.
Christia acknowledges that her theory needs further testing to see if it applies in every case. She is currently studying how identity politics play out at the most local level in present-day Syria and Yemen.
If it holds up, though, Christia’s research has direct bearing on how we ought to view the conflict today in a nation like Syria. The teetering dictatorship is the stronghold of the minority Alawite sect in a Sunni-majority nation. And leader Bashar Assad has rallied his constituents on sectarian grounds, saying his regime offers the only protection for Syria’s minorities against an increasingly Sunni uprising. But Syria’s rebellion comprises dozens of armed factions, and Christia suggests that these militants, who run the gamut of ethnic and sectarian communities, will be swayed more by the prospect of power in a post-Assad Syria than by ethnic loyalty. That would mean the United States could win the loyalty of different fighting factions by ignoring who they are—Sunni, Kurd, secular, Armenian, Alawite—and by focusing instead on their willingness to side with America or international forces in exchange for guns, money, or promises of future political power.
For America, civil wars elsewhere in the world might seem like somebody else’s problem. But in reality we’re very likely to end up playing a role: Most civil wars don’t end without foreign intervention, and America is the lone global superpower, with huge sway at the United Nations. Christia suggests that Washington would do well to acknowledge early on that it will end up intervening in some form in any civil war that threatens a strategic interest. That doesn’t necessarily mean boots on the ground, but it means active funding of factions and shaping of the alliances that are doing the fighting. In a war like Syria’s, that means the United States has wasted precious time on the sidelines.
Despite her sustained look at the worst of human conflict, Christia says she considers herself an optimist: People spend most of their history peacefully coexisting with different groups, and only a tiny portion of the time fighting. And once civil wars do break out, the empirical evidence shows that hatreds aren’t eternal. “If identities mattered so much,” she says, “you wouldn’t see so much shifting around.”
[Originally published in The Boston Globe Ideas.]
ROCKETS AND MORTARS have stopped flying over the border between Gaza and Israel, a temporary lull in one of the most intractable, hot-and-cold wars of our time. The hostilities of late November ended after negotiators for Hamas and Israel—who refused to talk face-to-face, preferring to send messages via Egyptian diplomats—agreed to a rudimentary cease-fire. Their tenuous accord has no enforcement mechanism and doesn’t even nod to discussing the festering problems that underlie the most recent crisis. Both sides say they expect another conflict; experience suggests it’s just a question of when.
Generations of negotiators have cut their teeth trying to forge a peace agreement between Israel and the Palestinians, and their failures are as varied as they are numerous: Camp David, Madrid, the Oslo Accords, Wye River, Taba, the Road Map. For diplomats and deal-makers around the world—even those with no particular stake in Middle East peace—Israel and Palestine have become the ultimate test of international negotiations.
For Guy Olivier Faure, a French sociologist who has dedicated his career to figuring out how to solve intractable international problems, they’re something else as well: an almost unparalleled trove of insights into how negotiations can go wrong.
For more than 20 years, Faure has studied not only what makes negotiations around the world succeed, but how they break down. From Israel and the Palestinians to the Biological Weapons Convention protocol to the ongoing talks about Iran’s nuclear program, it’s far more common for negotiations to fail than to work out. And it’s from these failures, Faure says, that we can harvest a more pragmatic idea of what we should be doing instead. “In order to not endlessly repeat the same mistakes, it is essential to understand their causes,” he says.
TODAY’S INTERNATIONAL order turns on successful negotiation. When we think about what’s right in the world, we’re often thinking about the results of agreements like the START treaties, which ended the nuclear arms race between the United States and the Soviet Union; the Geneva Conventions, which govern the conduct of war; or even the General Agreement on Tariffs and Trade, drafted in 1948, which still underpins globalized free trade.
But in negotiations over the most vexing international problems—a hostage situation, a war between a central government and terrorist insurgents, a new multinational agreement—such successes are few and far between. Failure is the norm. Understandably, experts tend to focus on the wins. From US presidents to obscure third-party diplomats, negotiators pore over rare historical successes for tips rather than face the copious and dreary overall record.
Faure wants to change that focus. As an expert he straddles two worlds: He studies diplomacy academically as a sociologist at the Sorbonne, in Paris, and has also trained actual negotiators for decades, at the European Union, the World Trade Organization, and UN agencies. Over his career, he has produced 15 books spanning all the different theories behind negotiation, and ultimately concluded that negotiations that failed, or simply sputtered out inconclusively, were the most interesting. Each failure had multiple causes, but it was possible to compile a comprehensive list, and from that, consistent patterns.
Faure’s book “Unfinished Business” takes a look at what happened during a number of high-profile failures, and examines the underlying conditions of each set of talks: trust, cultural differences, psychology, the role of intermediaries, and outsiders who can derail negotiations or overload them with extraneous demands.
One of Faure’s insights concerns the mindset of negotiators—a factor negotiators themselves often believe is irrelevant, but which Faure and his colleagues believe can often determine the outcome. Incompatible values on the two sides of the table, he says, are much harder to bridge than practical differences, like an argument over a boundary or the mechanics of a cease-fire. As Faure says, “A quantity can be split, but not a value.” This is what Faure saw at work when the Palestinians and Israelis embarked on a rushed negotiation at the Egyptian seaside resort of Taba during Bill Clinton’s final month in office. The two sides had already reached an impasse at a lengthier negotiation in 2000 at Camp David. With the end of his presidency looming and Israeli elections coming up, Clinton summoned them back to the table for a no-nonsense session he hoped would bring speedy closure to disputes over borders, Jerusalem, and refugees. The Palestinians, however, felt that the two sides simply didn’t share the same view of justice and weren’t truly aiming at the same goal of two sovereign states—and so didn’t feel driven to make a deal. That mismatch of long-term beliefs, Faure says, doomed the talks.
There are other warning signs that emerge as patterns in failed talks. Time and again, parties embark on tough negotiations already convinced they will fail—a defeatism that becomes a self-fulfilling prophecy. In interviewing professional negotiators, Faure and his colleagues found that they often don’t pay much attention to the practical aspects of how to run a negotiation—a surprising lapse.
Faure and his team have found that a well-planned process is one of the best predictors of success, and that many negotiations are terrible at it. When the European Union and the United States talked to Iran about its nuclear program, various European countries kept adding extraneous issues to the talks, for instance linking Iran’s behavior with nukes to existing trade agreements. The additions made the negotiations unwieldy, and provoked crises over matters peripheral to the actual subject. In the case of the mediation over Cyprus, the Greek and Turkish sides didn’t bother coming up with any tangible proposed solution to negotiate over, instead talking vaguely about a Swiss model. Negotiations failed in part because neither side knew what that would mean for Cyprus.
Ultimately, Faure argues, mistrust and inflexibility tangle up negotiators more than any other factor. Negotiators often end up demonizing the other side, and as a result might embark on a process that by its structure encourages failure. For instance, Israeli and Palestinian reliance on mediators to ferry messages—even between delegations in the same resort—maximizes misunderstandings and minimizes the possibility that either side will sense a genuine opening.
WHAT EMERGES FROM Faure’s work, overall, is that the outlook for negotiations is usually pretty bleak—certainly bleaker than Faure himself prefers to highlight. In some cases, he suggests that diplomats should put off an outright negotiation until they’ve dealt with gaps in trust and cultural communication, or until the conflict feels “ripe” for solution to the parties involved. There’s no point, he suggests, in embarking on a negotiation if all the stakeholders are convinced it’s a waste of time—indeed, a failed negotiation can sometimes exacerbate a problem.
The most promising scenarios occur when both sides are suffering under the status quo, which creates what social scientists call a “mutually hurting stalemate,” with soldiers or civilians dying on both sides, and a “mutually enticing opportunity” if there’s a peace agreement or a prisoner swap. In that case, a decent deal will give both sides a chance to genuinely improve their lot.
Unfortunately for the many whose hopes are riding on negotiations, the truly challenging international problems of our age don’t always come with a strong incentive to compromise. In military conflicts, there is little incentive to resolve matters when a conflict is lopsided in one side’s favor (Shia versus Sunni in Iraq, Israel versus Hamas, the Taliban versus the United States in Afghanistan). The same holds in broader international agreements: They’re complicated and intractable largely because the states involved are—no matter what they say—quite comfortable with the status quo. Think about climate change: The biggest gas-guzzlers and polluters, the ones whose assent matters the most for a carbon-reduction treaty, are often the last states that will pay the price for rising oceans. Meanwhile, the poorer nations whose populations are most at the mercy of sea levels or changing weather have little clout. Just as it’s easy for a relatively secure Israel to stand pat on the Palestinian question, there are few immediate consequences for the United States and China if they sit out climate talks.
It’s not all bleak news. Even in cases where negotiations appear hamstrung—like climate change and Palestine—there are, Faure points out, plenty of other reasons to continue negotiating. Negotiations are a form of diplomacy, dialogue, and recognition, and even in failure can serve some other interests of the parties involved. But—as the impressive historical record of failed international agreements shows—it’s naive to think that they will always yield a solution.
[Originally published in The Boston Globe.]
During the presidential campaign, two issues often seemed like the only foreign policy topics in the entire world: the Middle East and China. Those are unquestionably important: The wider Middle East contains most of the world’s oil and, currently, much of its conflict; and China is the world’s manufacturing base and America’s primary lender. But there are a host of other issues that are going to demand Washington’s sustained attention over the next four years, and don’t occupy anywhere near the same amount of Americans’ attention.
You could call them the icebergs: largely hidden challenges that lie in wait for the second Obama administration. Like all of us, the people in Washington tend to assume that whatever comes to mind first must be the most important. The recent crises or tensions with Afghanistan, Benghazi, and China make these feel like the whole story. But they are really just a few chapters, and the ones we’re ignoring completely may hold the most surprises.
If the administration wants to stay ahead of the game, here’s what it will need to spend more of its time and energy dealing with in the coming four years.
Europe. The continent’s recovery needs to be managed, and that requires global cooperation and money. Washington and China, along with the International Monetary Fund and the World Bank, will have to be closely involved, and that won’t happen without American leadership. Though the European crisis has already been a front-burner problem for two years, in the United States it barely cracks the public agenda except as a rhetorical bludgeon: “That guy wants to turn America into Greece!” But Europe’s importance to the global economy, and to America, is staggering: It’s the world’s largest economic bloc, worth $17 trillion, and it’s the US’s largest trading partner. If Europe goes down, we all go down.
Climate change. No politician likes to talk about climate change. It’s depressing news. It’s become highly partisan in this country, and it has no obvious solution even for those who understand the threat. It requires discussion of all kinds of hugely complex, dull-sounding science. When we do talk about it as a political issue, it’s largely as a domestic one: saving energy, dealing with the increasing fury and frequency of storms like Hurricane Sandy, investing in new infrastructure.
In fact, climate change is a massive foreign policy issue as well. On the preventive side, any emissions reduction requires cooperation across borders—between small numbers of powerful nations, like America and China, along with massive worldwide accords like the failed Kyoto Protocol. The responses will often need to be global as well. Rising oceans and temperatures have no regard for national boundaries, and most of the world’s population lives near soon-to-be-vulnerable coastlines. Entire cities might have to move, or be rebuilt, often across borders. Sandy could cost the American Northeast close to $100 billion when all is said and done (current damage estimates already top $50 billion). Imagine the price of climate-proofing the cities where most of the world lives—Mumbai, Shanghai, Lagos, Alexandria, and so on. Climate change, if unaddressed, could well become an American security issue, propelling unrest and failed states that will spur threats against the US.
Pakistan. Like our tendency to obsess over shark attacks rather than, say, the more significant risk of getting hit by a car, our foreign policy elite is often preoccupied with rare, dramatic potential threats rather than actual, banal ones. You’ll keep hearing about Iran, which might one day have a bomb and which emits noxious rhetoric while supporting well-documented militant groups like Hezbollah. What we really need to hear more about, and do more about, however, is a regional power that already has nukes (90 to 120 warheads), that is reportedly planning for battlefield bombs that are easier to misplace or steal, and that sponsors rogue terrorist groups that have been regularly killing people in Afghanistan and India for years.
That country is Pakistan. Power there is split among an unstable cast of characters: a dictatorial military, super-empowered Islamic fundamentalists, and a corrupt civilian elite. A significant portion of its huge population has been radicalized, and can easily flit across borders with Iran, Afghanistan, and India. Pakistan isn’t a potential problem; it’s a huge actual problem, a driver of war in Afghanistan, a sponsor of killers of Americans, and perennially, the only actor in the world that actively poses the threat of nuclear war. (The hot war between India and Pakistan in Kargil in 1999 was the first active conflict between two nuclear powers. It’s not talked about much, but remains a genuine nightmare scenario.) Pakistan is also a huge recipient of American aid. We need to find leverage and work to contain, restrain, and stabilize Pakistan.
Transnational crime and drugs. When it comes to violence in the world, foreign-policy thinkers tend to think first about wars, militaries, and diplomacy. But to save money and lives, it would be smarter to think about drugs. In much of the world, the resources spent and lives lost to criminal syndicates in the drug war rival the costs of traditional conflict. Narco-states in the Andes and, increasingly, Central America, make life miserable for their own inhabitants. Criminal off-the-book profits symbiotically feed international crime and terrorism. And in every region of the world, drugs provide the economic engine and financing for militias and terrorist groups; they fuel innumerable security problems, such as human trafficking, illicit weapons sales, piracy, and smuggling. Ultimately, wherever the drug business flourishes, it tends to corrode state authority, leaving vast ungoverned swaths of territory and promoting political violence and weak policing.
The United States pays a lot of attention to this problem in Afghanistan and Mexico, but it’s a drain on resources in corners of the globe that get less attention, from Southeast Asia to Africa. Washington needs to approach the international illegal drug trade like the globalized, multifaceted problem that it is, requiring international law enforcement cooperation but also smart economic solutions to change the market, including legalization.
Mexico. It feels almost painfully obvious, but it’s been a long time since a US president has prioritized our next-door neighbor. Our economies are inextricably linked. America’s supposed problem with illegal immigration is actually the organic development of a fluid shared labor market across the US-Mexico border. Meanwhile, the distant war in Afghanistan eats up an enormous amount of resources while another conflict rages on next door: Mexico’s increasingly violent drug war. Since 2006, it has claimed 50,000 lives, and the violence regularly spills over the border. Washington has collaborated piecemeal with Mexico’s government, but this is a regional conflict, involving criminal syndicates indifferent to jurisdiction. The United States needs to persuade Mexico to pursue a less violent, more sustainable strategy to counter the drug gangs, and then partner with the government there wholeheartedly.
The dangerous Internet. Cyber security might sound like a boondoggle for defense contractors looking for more money to spend on a ginned-up threat. Yet in the last year we’ve seen the real-world consequences of cyber attacks on Iran’s nuclear program, apparently orchestrated by the United States and Israel, and an effective cyber response, apparently by Iran, that hobbled Saudi Arabia’s oil industry. Harvard’s Joseph Nye points out that cyber espionage and crime already pose serious transnational threats, and recent developments show how war and terrorism will spill into our online networks, potentially threatening everything from our power supply to our personal data.
The US budget. Elementary economics usually begins with the discussion of guns vs. butter: You can’t pay for everything given limited resources, so do you eat or defend yourself? For generations, America has had the luxury of not really having to choose: The economy has mostly boomed since World War II, meaning we never had to cut anything fundamentally important. But America now faces a contracting global economy and a world in which it increasingly has to share resources with other rising powers. This is unfamiliar, and unhappy, territory: America’s next defense and foreign affairs budgets will probably be the first since the Second World War to require serious downsizing at a time when there are actual credible threats to the United States.
The Americans who reelected President Obama didn’t care that much about his foreign policy, according to polls. And, perhaps fittingly, Obama dealt with the rest of world during his first term with competence and caution rather than with flair and executive drive. His impressive focus on Al Qaeda hasn’t been mirrored so far in the rest of his national security policy, made by a team better known for its meetings than for setting clear priorities.
In the wake of a decisive reelection, Obama will have the political latitude to shape a more creative and forward-thinking foreign policy in his second term. If he does, he’ll have to work around both deeply divided legislators and a constrained budget: We simply can’t pay for everything, from land wars to cyber threats to sea walls to protected American industries. The priorities the next administration chooses—and its ability to pass any budget—will dramatically shape the kind of foreign influence America wields over the next four years.
Cops say they figure out a suspect’s intentions by watching his hands, not by listening to what comes out of his mouth. The same goes for American foreign policy. Whatever Washington may be saying about its global priorities, America’s hands tend to be occupied in the Middle East, site of all America’s major wars since Vietnam and the target of most of its foreign aid and diplomatic energy.
How to handle the Middle East has become a major point in the presidential campaign, with President Obama arguing for flexibility, patience, and a long menu of options, and challenger Mitt Romney promising a tougher, more consistent approach backed by open-ended military force.
Lurking behind the debate over tactics and approach, however, is a challenge rarely mentioned. The broad strategy that underlies American policy in the region, the Carter Doctrine, is now more than 30 years old, and in dire need of an overhaul. Issued in 1980 and expanded by presidents from both parties, the Carter Doctrine now drives American engagement in a Middle East that looks far different from the region for which it was invented.
President Jimmy Carter confronted another time of great turmoil in the region. The US-supported Shah had fallen in Iran, the Soviets had invaded Afghanistan, and anti-Americanism was flaring, with US embassies attacked and burned. His new doctrine declared a fundamental shift. Because of the importance of oil, security in the Persian Gulf would henceforth be considered a fundamental American interest. The United States committed itself to using any means, including military force, to prevent other powers from establishing hegemony over the Gulf. In the same way that the Truman Doctrine and NATO bound America’s security to Europe’s after World War II, the Carter Doctrine elevated a crowded and contested Middle Eastern shipping lane to nearly the same status as American territory.
In 2012, we look back on a recent level of American engagement with the Middle East never seen before. Even the failures have been failures from which we can learn. The decade that began with the US invasion of Afghanistan and ended with a civil war in Syria holds some transformative lessons, ones that could point the next president toward a new strategy far better suited to what the modern Middle East actually looks like—and to America’s own values.
President Carter issued his new doctrine in what would turn out to be his final State of the Union speech, in January 1980. America had been shaken by the oil shocks of the 1970s, in which Arab-dominated OPEC asserted its control, and also by the fall of the tyrannical Mohammad Reza Pahlavi, Shah of Iran, who had been a stalwart security partner to the United States and Israel.
Nearly everyone in America and most Western economies shared Carter’s immediate goal of protecting the free flow of oil. What was significant was the path he chose to accomplish it. Carter asserted that the United States would take direct charge of security in this turbulent part of the world, rather than take the more indirect, diplomatic approach of balancing regional powers against each other and intervening through proxies and allies. It was the doctrine of a micromanager looking to prevent the next crisis.
Carter’s focus on oil unquestionably made sense, and the doctrine proved effective in the short term. Despite more war and instability in the Middle East, America was insulated from oil shocks and able to begin a long period of economic growth, in part predicated on cheap petrochemicals. But in declaring the Gulf region an American priority, the doctrine effectively tied us to a single patch of real estate, a shallow waterway the same size as Oregon, even when it was tangential, or at times inimical, to our greater goal of energy security. The result has been an ever-increasing American investment in the security architecture of the Persian Gulf, from putting US flags on foreign tankers during the Iran-Iraq war in the 1980s, to assembling a huge network of bases after Operation Desert Storm in 1991, to the outright regime-building effort of the Iraq War.
In theory, however, none of this is necessary. America doesn’t really need to worry about who controls the Gulf, so long as there’s no threat to the oil supply. What it does need is to maintain relations in the region that are friendly, or friendly enough, and able to survive democratic changes in regime—and to prevent any other power from monopolizing the region.
The Carter Doctrine, and the policies that have grown up to enforce it, are based on a set of assumptions about American power that might never have been wholly accurate. They assume America has relatively little persuasive influence in the region, but a great deal of effective police power: the ability to control major events like regional wars by supporting one side or even intervening directly, and to prevent or trigger regime change.
Our more recent experience in the Middle East has taught us the opposite lesson. It has become painfully clear over the last 10 years that America has little ability to control transformative events or to order governments around. Over the past decade, when America has made demands, governments have resolutely not listened. Israel kept building settlements. Saudi Arabia kept funding jihadis and religious extremists. Despots in Egypt, Syria, Tunisia, and Libya resisted any meaningful reform. Even in Iraq, where America physically toppled one regime and installed another, a costly occupation wasn’t enough to create the Iraqi government that Washington wanted. The long-term outcome was frustratingly beyond America’s control.
When it comes to requests, however, especially those linked to enticements, the recent past has more encouraging lessons. Analysts often focus on the failings of George W. Bush’s “freedom agenda” period in the Middle East; democracy didn’t break out, but the evidence shows that no matter how reluctantly, regional leaders felt compelled to respond to sustained diplomatic requests, in public and private, to open up political systems. It wasn’t just the threat of a big stick: Egypt and Israel weren’t afraid of an Iraq-style American invasion, yet they acceded to diplomatic pressure from the secretary of state to liberalize their political spheres. Egypt loosened its control over the opposition in 2005 and 2006 votes, while Israel let Hamas run in (and win) the 2006 Palestinian Authority elections. Even prickly Gulf potentates gave dollops of power to elected parliaments. It wasn’t all that America asked, but it was significant.
Paradoxically, by treating the Persian Gulf as an extension of American territory, Washington has reduced itself from global superpower to another neighborhood power, one that can be ignored, or rebuffed, or hectored from across the border. The more we are committed to the Carter Doctrine approach, which makes the military our central tool and physical control of the Gulf waters our top priority, the less we are able to shape events.
The past decade, meanwhile, suggests that soft power affords us some potent levers. The first is money. None of the Middle Eastern countries have sustainable economies; most don’t even have functional ones. The oil states are cash-rich but by no means self-sufficient. They’re dependent on outside expertise to make their countries work, and on foreign markets to sell their oil. Even Israel, which has a real and diverse economy, depends on America’s largesse to undergird its military. That economic power gives America lots of cards to play.
The second is defense. The majority of the Arab world, plus Israel, depends on the American military to provide security. In some cases the protection is literal, as in Bahrain, Qatar, and Kuwait, where US installations project power; elsewhere, as in Saudi Arabia, Egypt, and Jordan, it’s indirect but crucial. (American contractors, for instance, maintain Saudi Arabia’s air force.) America’s military commitments in the Middle East aren’t something it can take or leave as it suits; it’s a marriage, not a dalliance. A savvier diplomatic approach would remind beneficiaries that they can’t take it for granted, and that they need to respond to the nation that provides it.
The Carter Doctrine clearly hasn’t worked out as intended; America is more entangled than ever before, while its stated aims—a secure and stable Persian Gulf, free from any outside control but our own—seem increasingly out of reach. A growing, bipartisan tide of policy intellectuals has grappled with the question of what should replace it, especially given our recent experience.
One response has been to seek a more morally consistent strategy, one that seeks to encourage a better-governed Middle East. This idea has percolated on the left and the right. Alumni of Bush’s neoconservative foreign-policy brain trust, including Elliott Abrams, have argued that a consistent pro-democratic agenda would better serve US interests, creating a more stable region that is less prone to disruptions in the oil supply. Voices on the left have made a similar argument since the Arab uprisings; they include humanitarian interventionists like Anne-Marie Slaughter at Princeton, who argue for stronger American intervention in support of Syria’s rebels. Liberal fans of development and political freedoms have called for a “prosperity agenda,” arguing that societies with civil liberties and equitably distributed economic growth are not only better for their own citizens but make better American allies.
Then there’s a school that says the failures of the last decade prove that America should keep out of the Middle East almost entirely. Things turn out just as badly when we intervene, these critics argue, and it costs us more; oil will reach markets no matter how messy the region gets. This school includes small-footprint realists like Stephen Walt at Harvard and pugilistic anti-imperial conservatives like Andrew Bacevich at Boston University. (Bacevich argues that the more the US intervenes with military power to create stability in the oil-producing Middle East, the more instability it produces.)
While the realists think we should disentangle from the region because the US can exert strategic power from afar, others say we should pull back for moral reasons as well. That’s the argument made over the last year by Toby Craig Jones, a political scientist at Rutgers University who argues that the US Navy should close its Fifth Fleet base in Bahrain and cut ties with that country’s troublesome and oppressive regime. America’s military might guarantees that no power—not Iran, not Iraq, not the Russians—can sweep in and take control of the world’s oil supply. Therefore, the argument goes, there’s no need for America to attend to every turn of the screw in the region.
What’s clear, from any of these perspectives, is that the Carter Doctrine is a blunt tool from a different time. It’s now possible, even preferable, to craft a policy more in keeping with the modern Middle East, and also more in line with American values. It might sound obvious to say that Washington should be pushing for a liberalized, economically self-sufficient, stable, but democratic Middle East, and that there are better tools than military power to reach those aims. In fact, that would mark a radical change for the nation—and it’s a course that the next president may well find within his power to plot.
[Originally published in The Boston Globe Ideas section.]
The case that a political term has outlived its usefulness
To watch the Arab world’s political transformation over the past year has been, in part, to track the inexorable rise of Islamism. Islamist groups—that is, parties favoring a more religious society—are dominating elections. Secular politicians and thinkers in the Arab world complain about the “Islamicization” of public life; scholars study the sociology of Islamist movements, while theologians pick apart the ideological dimensions of Islamism. This March, the US Institute of Peace published a collection of essays surveying the recent changes in the Arab world, entitled “The Islamists Are Coming: Who They Really Are.”
From all this, you might assume that “Islamism” is the most important term to understand in world politics right now. In fact, the Islamist ascendancy is making it increasingly meaningless.
In Tunisia, Libya, and Egypt, the most important factions are led overwhelmingly by religious politicians—all of them “Islamist” in the conventional sense, and many in sharp disagreement with one another over the most basic practical questions of how to govern. Explicitly secular groups are an exception, and where they have any traction at all they represent a fragmented minority. As electoral democracy makes its impact felt on the Arab world for the first time in history, it is becoming clear that it is the Islamist parties that are charting the future course of the Arab world.
As they do, “Islamist” is quickly becoming a term as broadly applicable—and as useless—as “Judeo-Christian” in American and European politics. The important distinctions now emerging within Islamism suggest that the lifespan of “Islamist” as a useful term is almost at an end—that we’ve reached the moment when it’s time to craft a new language to talk about Arab politics, one that looks beyond “Islamist” to the meaningful differences among groups that would once have been lumped together under that banner.
Some thinkers already are looking for new terms that offer a more sophisticated way to talk about the changes set in motion by the Arab Spring. At stake is more than a label; it’s a better understanding of the political order emerging not just in the Middle East, but around the world.
THE TERM “ISLAMIST” came into common use in the 1980s to describe all those forces pushing societies in the Islamic world to be more religious. It was deployed by outsiders (and often by political rivals) to describe the revival of faith that flowered after the Arab world’s defeat in the 1967 war with Israel and its subsequent reflective inward turn. Islamist preachers called for a renewal of piety and religious study; Islamist social service groups filled the gaps left by inept governments, organizing health care, education, and food rations for the poor. In the political realm, “Islamist” applied to both Egypt’s Muslim Brotherhood, which disavowed violence in its pursuit of a wealthier and more powerful Islamic middle class, and radical underground cells that were precursors to Al Qaeda.
What they had in common was that they saw a more religious leadership, and more explicitly Islamic society, as the antidote to the oppressive rule of secular strongmen such as Hafez al-Assad, Hosni Mubarak, and Saddam Hussein.
Over the years, the term “Islamist” continued to be a useful catchall to describe the range of groups that embraced religion as a source of political authority. So long as the Islamist camp was out of power, the one-size-fits-all nature of the term seemed of secondary importance.
But in today’s ferment, such a broad term is no longer so useful. Elections have shown that broad electoral majorities support Islamism in one flavor or another. The most critical matters in the Arab world—such as the design of new constitutional orders in Egypt, Tunisia, and Libya—are now being hashed out among groups with competing interpretations of political Islam. In Egypt, the non-Islamic political forces are so shy about their desire to separate mosque from government that many eschew the term “secular,” calling instead for a “civil” state.
In Tunisia’s elections last fall, the Islamist Ennahda Party—an offshoot of the Muslim Brotherhood—swept to victory, but is having trouble dealing with its more doctrinaire Islamist allies to the right. In Libya, virtually every politician is a socially conservative Muslim. The country’s recent elections were won by a party whose leaders believe in Islamic law as a main reference point for legislation and support polygamy as prescribed by Islamic sharia law, but who also believe in a secular state—unlike their more Islamist rivals, who would like a direct application of sharia in drafting a new constitutional framework.
In Egypt, the two best-organized political groups since the fall of Mubarak have been the Muslim Brotherhood and the Salafi Noor Party—both “Islamist” in the broad sense, but dramatically different in nearly all practical respects. The Brotherhood has been around for 84 years, with a bourgeois leadership that supports liberal economics and preaches a gospel of success and education. The rival Salafi Noor Party, on the other hand, includes leaders who support a Saudi-style extremist view of Islam, which holds that the religious should live, as much as possible, a pre-modern lifestyle, and that non-Muslims should live under a special Islamic dispensation for minorities. A third Islamist wing in Egypt includes the jihadists—the organization that assassinated President Anwar Sadat in 1981, which has officially renounced violence and has surfaced as a political party. (Its main agenda item is to advocate the release of “the blind sheikh,” Omar Abdel-Rahman, imprisoned in the United States as the mastermind of the 1993 World Trade Center bombing.)
“ISLAMIST” MIGHT BE an accurate label for all these parties, but as a way to understand the real distinctions among them it’s becoming more a hindrance than a help. A useful new terminology will need to capture the fracture lines and substantive differences among Islamic ideologies.
In Egypt, for example, both the Muslim Brotherhood and the Salafis believe in the ultimate goal of a perfect society with full implementation of Islamic sharia. Yet most Brothers say that’s an abstract and unattainable aim, and in practice are willing to ignore many provisions of Islamic law—like those that would limit modern finance, or those that would outright ban alcohol—in the interest of prosperity and societal peace. The Salafis, by contrast, would shut down Egypt’s liquor industry and mixed-gender beaches, regardless of the consequences for tourism or the country’s Christian minority.
There’s a cleavage between Islamists who still believe in a secular definition of citizenship that doesn’t distinguish between Muslims and non-Muslims, and those who believe that citizenship should be defined by Islamic law, which in effect privileges Muslims. (Under Saudi Arabia’s strict brand of Islamist government, the practice of Christianity and Shiite Islam is actually illegal.) And there’s the matter of who would interpret religious law: Is it a personal matter, with each Muslim free to choose which cleric’s rulings to follow? Or should citizens be legally required to defer to doctrinaire Salafi clerics?
Many thinkers are trying to craft a new language for the emerging distinctions within Islamism. Issandr El Amrani, who edits The Arabist blog and has just started a new column for the news site Al-Monitor about Islamists in power, suggests we use the names of the organizations themselves to distinguish the competing trends: Ikhwani Islamists for the establishment Muslim Brothers and organizations that share its traditions and philosophy; Salafi Islamists for Salafis, whose name means “the predecessors” and refers to following in the path of the Prophet Mohammed’s original companions; and Wasati Islamists for the pluralistic democrats that broke away from the Brotherhood to form centrist parties in Egypt.
Gilles Kepel, the French political scientist who helped popularize the term “Islamist” in his writings on the Islamic revival in the 1980s, grew dissatisfied with its limits the more he learned about the diversity within the Islamist space. By the 1990s, he shifted to the more academic term “re-Islamification movements.” Today he suggests that it’s more helpful to look at the Islamist spectrum as coalescing around competing poles of “jihad,” those who seek to forcibly change the system and condemn those who don’t share their views, and “legalism,” those who would use the instruments of sharia law to shift the system gradually. But he’s still frustrated by the terminology’s inability to capture politics as they evolve. “I’ve tried to remain open-eyed,” he said.
It’s also helpful to look at what Islamists call themselves, but that offers only a perfunctory guide, since many Islamists consider religion so integral to their thinking that it doesn’t merit a name. Others might seek, for domestic political reasons, to downplay their religious aims. For example, Turkey’s ruling party, a coterie of veteran Islamists who adapted and subordinated their religious principles to their embrace of neoliberal economics, describes itself as a party of “values,” rather than of Islam. In Libya, the new government will be led by the personally conservative technocrat Mahmoud Jibril; though his party could be considered “Islamist” in the traditional sense, it’s often identified as secular in Western press reports, to distinguish it from its more religious rivals. Jibril himself prefers “moderate Islamic.”
The efforts to come up with a new language to talk about Islamic politics are just beginning, and are sure to evolve as competing movements sharpen their ideologies, and as the lofty rhetoric of religion meets the hard road of governing. The importance of moving beyond “Islamism” will only grow as these changes make themselves felt: What we call the “Islamic world” includes about a quarter of the world’s population, stretching from Muslim-majority nations in the Arab world, along with Turkey, Pakistan, and Indonesia, to sizable communities from China to the United States. For Islam, the current political moment could be likened to the aftermath of 1848 in Europe, when liberal democracy coalesced as an alternative to absolute monarchy. Only after that, once virtually every political movement was a “liberal” one, did it become important to distinguish between socialists and capitalists, libertarians and statists—the distinctions that have seemed essential ever since.
[Originally published in The Boston Globe.]
Right now, the Islamic world is in the midst of a grand experiment. After decades facing an unappetizing choice among secular dictatorship, monarchy, and Iranian-style theocracy, nations across the region are grappling with how to build genuinely modern governments and societies that take into account the Islamist principles shared by a majority of voters.
As they do, a shadow hangs over their prospects. Islamic nations in the Middle East on the whole have underperformed their counterparts in the West. Asian nations that were poorer than the Arab world at the beginning of the Cold War have overtaken the Middle East. And promising experiments with democracy have been few and far between.
The question of why is a contentious one. Has the Islamic world been held back by its treatment at the hands of history? Or could the roots of the problem lie in its shared religion—in the Koran, and Islamic belief itself?
A provocative new answer is emerging from the work of Timur Kuran, a Turkish-American economist at Duke University and one of the most influential thinkers about how, exactly, Islam shapes societies. In a growing body of work, Kuran argues that the blame for the Islamic world’s economic stagnation and democracy deficit lies with a distinct set of institutions that Islamic law created over centuries. The way traditional Islamic law handled finance, inheritance, and incorporation, he argues, held back both economic and political development. These practices aren’t inherent in the religion—they emerged long after the establishment of Islam, and have partly receded from use in the modern era. But they left a profound legacy in many societies where Islam held sway.
Kuran’s critics think he unfairly impugns religious law. Pakistani scholar Arshad Zaman argues that Kuran misunderstands the very nature of Islamic law and business practice, which elevate worthwhile economic goals such as income equality and social justice above growth. Others argue that the legacy of harm suffered at the hands of Western colonial powers is far more important in explaining why the Muslim world is struggling today.
Kuran himself sees his work as coming from a sympathetic perspective: He wants to combat the argument that Islam is incompatible with modernity and liberty, a notion he decries as “one of the most virulent ideas of our time.” He worries about anti-Islamic sentiment from outside the religion, as well as the rigid and defensive posture of some orthodox Islamists. (He pointedly avoids discussing his own faith. “I write as a scholar,” he says.)
Thanks in part to this careful navigation, Kuran’s scholarship gives economists, and perhaps political leaders in the Middle East, a way to talk about the Islamic world’s problems without resorting to crude stereotypes or heightened “clash of civilizations” rhetoric. In Kuran’s analysis, Islam itself is neither the problem nor the solution; indeed, most of the rigid practices have long been supplemented by or in some cases abandoned for more Western models.
However, his work does carry stark implications for countries such as Egypt, Libya, and Tunisia, whose emerging political futures are likely to be shaped by Islamist majorities or pluralities. A democratic renaissance could paradoxically lead to more stagnation if it imposes calcified institutions of Islamic yesteryear on modern society. If Kuran is right, the nations of the Arab Spring face a conundrum: The institutions most in keeping with their societies’ religious principles could be the ones most likely to hold those societies back.
While Europe suffered centuries of decline and intellectual darkness, the early Islamic world bubbled with vitality. Competing schools of Islamic jurisprudence produced texts still consulted as references today, while merchants and caliphs left copious written records for future scholars to study. In the course of his work, Kuran was able to comb through business records and commercial ledgers spanning more than a millennium.
What he found, he says, was that two legal traditions pervasive in the Islamic world became especially limiting: the laws governing the accumulation of capital, and those governing how institutions were organized. The growth of capital was limited by laws of inheritance and Islamic partnership, which required that large fortunes and enterprises be split up with each passing generation. The waqf, or Islamic trust, had even greater ramifications, because it determined the structure of most social relationships and had wide-ranging consequences for civil society.
Under Islamic law, the trust—rather than the corporation—is the most common legal unit of organization for entities outside the government. Until modern times, cities, hospitals, schools, parks, and charities were all set up and governed by the immutable deed of an Islamic trust. Under its terms, the founder of the trust donates the land or other capital that funds it in perpetuity, and sets its rules in the deed. They can never be altered or amended. The waqf was developed by Islamic scholars in the centuries after the religion was established, drawing on Koranic principles barring usury and demanding justice in business. (It is not an institution stipulated by the Koran itself.) Much like a trust in the West, a waqf is not “governed” so much as executed. It is also limited in what it can do. A waqf is prohibited from engaging in politics, which means it cannot form coalitions, pool its resources with other organizations, or oppose the state.
Drawing on voluminous study of the mechanisms of money, power, and law going back to the 7th century founding of Islam, Kuran draws a picture of nations whose rulers wielded central and often highly authoritarian power, and faced little challenge from either business owners or a waqf-bound civil society.
Over time, he argues, this structure led to a radically different social system than the one that arose in the West. There, the rise of the corporation created a vehicle for prosperity and a civilian counterweight to state power—an institution that could adapt and grow, survive from one generation to the next, and pay benefits to its shareholding owners, who are thus motivated to steer it toward expansion and influence. Nonprofit corporations enjoy similar flexibility and freedom of action, though they don’t have shareholders.
“In the West, you had universities, unions, churches, organized as corporations that were free to make coalitions, engage in politics, advocate for more freedoms, and they became a civil society,” Kuran said in an interview. “Democracy is a system of checks and balances. It can’t develop if a population is passive.”
In modern times, Islamic nations have adopted Western institutions like corporations and banks to manage their affairs. Municipalities and private enterprise are now more commonly incorporated rather than set up as trusts. But the trust remains pervasive, especially in the realm of social services: Hospitals, schools, and aid societies are still almost always trusts rather than corporations—a factor that correlates with their quiescence as counterweights to state power.
In focusing on the specific legal institutions of Islamic civic life, Kuran’s thesis directly targets those “apologists” who blame the economic and political problems of the Middle East solely on colonialism and other outside forces. He also takes aim at essentialists who hold Islam as a religion responsible for the problems of Islamic countries. In fact, he argues, Islamic states that have embraced modernization programs and gone through the sometimes painful process of adopting new institutions, as Turkey and Indonesia have, have had great success in developing both democracy and economic prosperity widely shared among citizens.
While some critics, like Arshad Zaman, attack Kuran from an Islamic perspective, others share his approach but dispute his findings. Maya Shatzmiller, a historian at Western University in Canada, believes that the real causes of economic growth are particular to the circumstances of each individual region, and that by focusing on notional qualities common to the entire Islamic world, Kuran’s work generically indicts Islam without offering real insight into the economic problems of individual Middle Eastern states.
Kuran believes the evidence of a gap between the Islamic world and the West is undeniable and merits serious examination of what those countries have in common. And it’s patronizing, he writes, to suggest that the Islamic world will be offended by a vigorous debate on the subject.
Kuran’s approach has influenced other social scientists to use similar tools in the hope of offering more precise and useful answers. Eric Chaney, a Harvard economist, uses the mathematical modeling of econometrics to pinpoint the historical factors that correlate with lagging democratization and development in the Islamic world. Jared Rubin, an economist at Chapman University in California, studies the effect of technologies like the printing press on economic disparities. Jan Luiten van Zanden, a renowned Dutch historian, has begun a deep comparative study of the organization of cities in the Islamic world and the West.
For the new architects of Islamic politics, Kuran’s work offers a clear blueprint, though perhaps a difficult one to follow. It suggests that states heavily reliant on Islamic law may need to reformulate their approach, extending Western-style rules to organize their nonstate entities: banks, companies, nonprofits, political parties, religious societies. Over time, this will seed a more empowered civil society and ultimately pull greater numbers of citizens into the fabric of political life.
It’s a challenge, however. Authoritarian states are unlikely to promote reforms that will weaken their control. And the resurgence of Islamist politics has created a new wave of support for a more doctrinaire application of Islamic law and traditions.
Another barrier to reform is the slow pace of cultural change. Once modern institutions are in place, Kuran warns, it takes a long time for their use to become widespread and for people to trust them. Simply put, for an institution to grow powerful and influential, whether it’s a bank or a political party, it needs to build support from a large, trusting public of strangers. Much of the Middle East still operates on smaller units, in which customers or citizens expect to know who’s running the company or institution that serves them. For example, Kuran points out that despite the prevalence of banks, only one in five families in the Arab world actually has a bank account. (By comparison, three in five Turkish families have bank accounts, and nearly every US family does.)
Most broadly, change requires a shift in the constraints on civil society. In recent decades, Middle Eastern regimes have systematically destroyed any opposition and kept rigid control over the media, official religious groups, and any body that might develop a political identity, from university faculty to labor unions. The most effective dissent survived deep within mosques, where even the most repressive police states hesitated to go.
In the long run, to end the cycle of autocracy and violence, the Islamic world will need space for civil society to grow outside the constraints of the state and the mosque. Only then will citizens grow accustomed to making decisions that have traditionally been made on their behalf. And breaking the old habits, on the street and in election booths, will likely take time.
“The state itself,” Kuran says, “cannot change the way people relate to each other.”
[Published in The Boston Globe Ideas section.]
When President Obama and Mitt Romney cross swords on defense policy, it can sound like a schoolyard fight: Who loves the military more? Who is tougher? Who would lead a more muscular America?
This is the way we expect candidates to talk about defense: in terms of power, force, even national pride. But increasingly, when it comes to the role the Department of Defense actually plays for the nation, that conversation misses the point. Over the past decade, the Pentagon has become far more complex than the conversation about it would suggest. What “military” means has changed sharply as the Pentagon has acquired an immense range of new expertise. What began as the world’s most lethal strike force has grown into something much more wide-ranging and influential.
Today, the Pentagon is the chief agent of nearly all American foreign policy, and a major player in domestic policy as well. Its planning staff is charting approaches not only toward China but toward Latin America, Africa, and much of the Middle East. It’s in part a development agency, and in part a diplomatic one, providing America’s main avenue of contact with Africa and with pivotal oil-producing regimes. It has convened battalions of agriculture specialists, development experts, and economic analysts that dwarf the resources at the disposal of USAID or the State Department. It’s responsible for protecting America’s computer networks. In May of this year, the Pentagon announced it was creating its own new clandestine intelligence service. And the Pentagon has emerged as a surprisingly progressive voice in energy policy, openly acknowledging climate change and funding research into renewable energy sources.
The huge expansion of the Pentagon’s mission has, not surprisingly, rung plenty of alarm bells. In the policy sphere, critics worry about the militarization of American foreign policy, and the fact that much of the world—especially the most volatile and unstable parts—now encounters America almost exclusively in the form of armed troops. Hawkish critics worry that the Pentagon’s ballooning responsibilities are a distraction from its main job of providing a focused and prepared fighting force. But this new reality will be with us for a while, and in the short term it creates an opportunity for the next president. Super-empowered and quickly deployable, the Pentagon has become a one-stop shop for any policy objective, no matter how far removed from traditional warfare.
That means the next administration will have ample room to shape the priorities, and even perhaps to reimagine the mission of a Pentagon that plays a leading role in areas from language research to fighting the drug trade. And it means that voters will need to consider the full breadth of its capabilities when they hear candidates talk about “defense.”
In campaigning so far, neither candidate has seriously engaged with the real challenges of steering the most diverse and powerful entity under his control. For both Obama and Romney, the most central question about foreign policy—and even some of their domestic priorities—may be how creatively and effectively they can use the Pentagon to further their aims.
The current balance of power in Washington runs counter to most of American history. Traditionally, the United States has related to the world chiefly through diplomats: A civilian president set the policy, civilian envoys worked to implement it, and gunboats stepped in only when diplomacy failed. Indeed, until World War II, the Department of State outranked Defense in size as well as influence. That began to change in the 1940s, first with the huge mobilization of World War II and then the Cold War. Funding and power began to accumulate permanently in the Pentagon.
In the decade since 9/11, the Pentagon has undergone another transformation. The military was asked to fight two complex wars in Afghanistan and Iraq, while also engaging in a sprawling operation dubbed the Global War on Terror. In practice, this meant soldiers and other troops were asked to design nation-building operations on the fly; produce the kind of pro-democracy propaganda that decades earlier was the province of the Voice of America; and do police, intelligence, and development work in conflict zones that had long bedeviled experts in far more stable locales. In Iraq, the Pentagon was essentially expected to provide the full gamut of services normally offered by a national government. Army commanders in provincial outposts dispensed cash grants to business start-ups, supervised building renovations, managed police forces, and built electricity plants.
As American involvement in those wars winds down, we are left with a Department of Defense that has become Washington’s default tool for getting things done in the world. Unlike diplomats, who serve abroad for limited stints and who can refuse to work in dangerous places, military personnel have to go where ordered, and stay as long as the government needs them. They haven’t always succeeded, leaving any number of failed governance projects in their wake. But it’s understandable why the White House has turned more and more often to warriors. The military is undeniably good at taking action: A lieutenant colonel can spend a hundred thousand dollars on a day’s notice to dig a well or refurbish a mayor’s office or rebuild a village market. In contrast, civilian USAID specialists operating under the agency’s rules would take months, or even years, to put out bids and hire a local subcontractor to do the same job.
“The president who comes into office and thinks about what he wants to do, when he looks around for capabilities he tends to see someone in uniform,” says Gordon Adams, an American University political scientist and expert in the defense budget. “The uniformed military are really the only global operational capacity the president has.”
And that capacity stretches into some surprising domains. The Pentagon maintains an international rule of law office staffed with do-gooder lawyers. It has trained and deployed agriculture battalions. Its regional commands, as well as its war-fighting generals in Afghanistan and Iraq, have tapped hundreds of economists, anthropologists, and other field experts as unconventional military assets. Its special operators conduct the kind of clandestine operations once reserved for the CIA, but also do a lot of in-the-field political advising for local leaders in unstable countries. The US Cyber Command runs a kind of geek tech shop in charge of protecting America’s computer networks. The world’s most high-tech navy runs counter-piracy missions off the coast of Somalia, essentially serving as a taxpayer-funded security force for private shipping companies. Much of drug policy is executed by the military, which is in charge of intercepting drug shipments and has been the key player in drug-supplying countries like Colombia.
With little fanfare, the Pentagon—currently the greatest single consumer of fossil fuels in all of America, accounting for 1 percent of all use—has begun promoting fuel efficiency and alternative energy sources through its Office of Operational Energy Plans and Programs. Using its gargantuan research and development budget, and its market-making purchasing power, the Defense Department has demanded more efficient motors and batteries. Its approach amounts to a major official policy shift and a huge national investment in green energy, sidestepping the ideological debate that would likely hamstring any comparable effort in Congress.
This huge expansion of what the Department of Defense does is not the same thing as a runaway military, though there are critics who see it that way. At the height of the Cold War, the United States dedicated far more of its budget to defense—around 60 percent, compared to 20 percent now. It is more a matter of vast “mission creep.” Inevitably, it is to the Pentagon that the government will turn when it faces urgent, unexpected needs: Hurricane Andrew, the 2004 Indian Ocean tsunami, propaganda in the Islamic world. Men and women in camouflage uniforms can be found helping domestic law enforcement pursue cattle rustlers in North Dakota using loaned military drones, or working with Afghan farmers to increase crop yields.
Paul Eaton, a major general in the US Army who retired in 2006 and now advises a Washington think tank called the National Security Network, describes a meeting he attended in Kampala, Uganda, this May, convened by the American general in charge of the Africa Command, or AfriCom. The top commanders of 35 militaries on the continent gather every other year, hash out policy matters, and forge personal ties.
“It was as much diplomacy and politics as anything else,” Eaton said. “Nobody could give me an example of the State Department doing anything like that.”
What SHOULD a president do about this metamorphosed Pentagon? Or more practically, what should be done with it? A question to watch for in the coming presidential debates is whether either candidate is willing to discuss reorienting the Pentagon toward its core mission of armed defense, shedding its new capacities in the interest of keeping it focused or saving money. Neither candidate has suggested so far that he will.
Pentagon cutbacks are politically difficult. No president likes to argue against national defense, and Pentagon spending by design sprawls across congressional districts, creating a built-in bipartisan lobby against cuts. But a president who tried to return the Pentagon to a more strictly military mission could expect at least some support from the Department of Defense itself. Many career officers view the extra missions with dismay, fearing that the Pentagon will get worse at fighting wars as it spends more and more time patrolling cyberspace, organizing diplomatic retreats, and deploying agricultural battalions to train farmers in war zones. Eaton, whose three children all serve in the armed forces, has been a vocal critic of the new military, and thinks the best thing the next administration could do is defund and shut down all the niche capacities that have sprung up since 9/11.
The past two US defense secretaries, including George W. Bush appointee Robert Gates, have also expressed concern about the department’s expansion. In a 2008 speech, while still in office, Gates ripped into the “creeping militarization” of foreign policy, expressing concern that the Pentagon was like an “800-pound gorilla” taking over the intelligence community, foreign aid, and diplomacy in conflict zones. Both Gates and his successor, Leon Panetta, have vociferously advocated for a bigger, better-funded State Department more capable of deploying around the world, conducting diplomacy in hot zones, and dispensing emergency relief and development aid.
It’s hard to imagine the Pentagon shedding capacity anytime soon. As the wars in Iraq and Afghanistan subside, the Defense Department appears likely to keep most of its enormous budget. During a period when most branches of government will be struggling to survive budget cutting, the Pentagon will more than ever have the global reach and the policy planning muscle to set the agenda and execute foreign policy.
So in the next several months, we should be on the lookout for specific ideas from candidates about what to do with this excess power—at the least, an acknowledgment that it exists. Domestically, the Pentagon has the opportunity to shape university research priorities; it influences White House policy planning anytime a crisis erupts in a new place. Abroad, the military can do considerable good by using its money and expertise to improve quality of life, burnishing America’s reputation as a font of positive development rather than just counter-terrorism and counter-insurgency.
But while it lasts, the breadth of the current military presents grave challenges, not least for a democratic country that in principle, if not always in policy, opposes military dictatorships around the world. Even Pentagon insiders worry about this dissonance. Whatever good our deployments can do, it will be harder to promote civilian ideals so long as our foreign policy wears a uniform.
[Originally published in The Boston Globe.]
President Obama and his presumptive challenger Mitt Romney agree on at least one important matter: the world these days is a terrifying place. Romney talks about the “bewildering” array of threats; Obama about the perils of nuclear weapons in the wrong hands. They differ only on the details.
A bipartisan emphasis on threats from outside has always been a hallmark of American foreign-policy thinking, but it has grown more widespread and more heightened in the decade since 9/11. General Martin E. Dempsey, chairman of the Joint Chiefs of Staff, captured the spirit when he spoke in front of Congress recently: “It’s the most dangerous period in my military career, 38 years,” he said. “I wake up every morning waiting for that cyberattack or waiting for that terrorist attack or waiting for that nuclear proliferation.”
The unpredictability and extremism of America’s enemies today, the thinking goes, make them even more threatening than our old conventional foes. The old enemies were distant armies and rival ideologies; today, it’s a theocracy with missiles, or a lone wolf trying to detonate a suitcase nuke in an American city.
But what if the entire political and foreign policy elite is wrong? What if America is safer than it ever has been before, and by focusing on imagined and exaggerated dangers it is misplacing its priorities?
That’s the bombshell argument put forth by a pair of policy thinkers in the influential journal Foreign Affairs. In an essay entitled “Clear and Present Safety: The United States Is More Secure Than Washington Thinks,” authors Micah Zenko and Michael A. Cohen argue that American policy leaders have fallen into a nearly universal error. Across the ideological board, our leaders and experts genuinely believe that the world has gotten increasingly dangerous for America, while all available evidence suggests exactly the opposite: we’re safer than we’ve ever been.
“The United States faces no serious threats, no great-power rival, and no near-term competition for the role of global hegemon,” Cohen says. “Yet this reality of the 21st century is simply not reflected in US foreign policy debates or national security strategy.”
It might seem that the extra caution couldn’t hurt. But Zenko and Cohen argue that excessive worry about security leads America to focus money and attention on the wrong things, sometimes exacerbating the problems it seeks to prevent. If Americans and their leaders recognized just how safe they are, they would spend less on the military and more on the slow nagging problems that undermine our economy and security in less dramatic ways: creeping threats like refugee flows, climate change, and pandemics. More important, the United States would avoid applying military solutions to non-military problems, which they argue has made containable problems like terrorism worse. In effect, they argue, the United States should keep a pared-down military in reserve for traditional military rivals. The bulk of America’s security efforts could then be spent on remedies like policing and development work — more appropriate responses to the terrorism and global crime syndicates that understandably drive our fears.
Why should Americans feel so secure right now? Zenko and Cohen write that a calm appraisal of global trends belies the danger consensus. There are fewer violent conflicts than at almost any point in history, and a greater number of democracies. None of the states that compete with America come close to matching its economic and military might. Life expectancy is up, and so is prosperity. As vulnerable as the nation felt in the wake of 9/11, American soil is still remarkably insulated from attack.
Nonetheless, more than two-thirds of the members of the Council on Foreign Relations — as good a cross-section of the foreign-policy brain trust as there is — said in a 2009 Pew Survey that the world today was as dangerous, or even more so, than during the Cold War. Other surveys of experts and opinion-makers showed the same thing: the overwhelming majority of experts believe the world is becoming more dangerous. Zenko and Cohen claim, essentially, that the entire foreign policy elite has fallen prey to a long-term error in thinking.
“More people have died in America since 9/11 crushed by furniture than from terrorism,” Zenko says in an interview. “But that’s not an interesting story to tell. People have a cognitive bias toward threats they can perceive.”
The paper’s authors are, in effect, skewering their own peers. Zenko is a Council on Foreign Relations political scientist with a PhD from Brandeis. Cohen (a colleague of mine at The Century Foundation, and a previous contributor to Ideas) worked as a speechwriter in the Clinton administration and has been a mainstay in the think tank world for a decade.
Zenko and Cohen point out that there are plenty of good-faith reasons that experts tend to overestimate our national risk. A raft of psychological research from the last two decades shows that human nature is biased to exaggerate the threat of rare events like terrorist attacks and underestimate the threat from common ones like heart attacks. And security policy in general is extremely risk-averse: we expect our military and intelligence community to tolerate no failures. A cabinet secretary who pledged to reduce terrorist attacks to just a few per year would not last long in the job: the only acceptable goal is zero.
Electoral politics, of course, is the greatest driver of what Zenko and Cohen call “domestic threat-mongering.” In an endless contest for votes, Republicans do well by claiming to be tough in a scary world. Democrats adopt the same rhetoric in order to shield themselves from political attacks. Both sides see an advantage in a politically risk-averse strategy. If a minor threat today turns into a sizeable one tomorrow, better to have sounded the alarm early than to have appeared naïve or feckless.
But Zenko and Cohen make the politically uncomfortable argument that it’s wrong to govern based on the prospect of unlikely but extreme events. Instead of marshalling our resources for the 1 percent risk of a nuclear jihadist, as Vice President Dick Cheney argued we should, we should really set our security policy based on the 99 percent of the time when things go America’s way.
They point to the number of wars between major states and the number of people killed in wars every year, both of which have been steadily declining for decades. They also point to the historical, systematic growth of the global economy and spread of financial and trade links, which have undergirded an unprecedented period of peace among rival great powers and within the West.
Their thinking follows in the footsteps of a small but persistent group of contrarian security scholars, who have noted the post-9/11 spike in America’s already long history of threat exaggeration. Best known among them is Ohio State University political scientist John Mueller, who has argued that American alarmism about terrorism can cause more harm to our well-being and national economy than terrorism itself. In that vein, Zenko and Cohen claim that America’s over-militarization prompts avoidable wars and has in fact created far more problems than it solves, from terrorist blowback to huge drains on the Treasury.
The implications, as they see it, are clear: spend less money on the military; spend more on the boring, international initiatives that actually make America safe and powerful. Zenko’s favorite example is loose nukes, which pose hardly any threat today but were a real cause for concern as the Soviet Union collapsed in the early 1990s, leaving poorly secured weapons across an entire hemisphere.
“There was a solution: limit nuclear stockpiles and secure them,” Zenko says. “We took common-sense steps, none of which involved the US military. We send contractors to these facilities in Russia, and they say ‘The fence doesn’t work, the cameras don’t work, the guards are drunk.’ It’s cheap, and it works. This is what keeps us safe.”
Not everyone agrees. Robert Kagan, the most influential proponent of robust American power, argues that America is safe today precisely because it throws its military might around. President Obama said he relied for his most recent State of the Union address on Kagan’s newest book, “The World America Made.” Mackenzie Eaglen, a defense expert at the American Enterprise Institute, says America benefits even when it appears to overreact. According to this thinking, even if the Pentagon designs the military for improbable threats and deploys at the drop of a hat, it is performing a service by keeping the world stable and deterring would-be rivals. Like most establishment defense thinkers, Eaglen believes American dominance could easily and quickly come to an end without this kind of power projection.
“Power abhors a vacuum,” she says. “If we don’t fill it, others will, and we won’t like what that looks like.”
Other critics of Zenko and Cohen’s argument, like defense policy writer Carl Prine, say the comforting data about declining wars and violent deaths is misleading. Today’s circumstances, they argue, don’t preclude something drastic happening next — say, if China, or even a rising power like Brazil, veers onto an unpredictably bellicose path and clashes violently with American interests. Simply put, safe today doesn’t mean safe tomorrow.
Even if Zenko and Cohen are right, however, and the big-military crowd is wrong, it is nearly impossible to imagine a spirit of “threat deflation” taking hold in American politics. The already alarmist expert community that shapes US government thinking was further electrified by 9/11. Anyone in either party who argues for a leaner military, or pruning back intelligence infrastructure, risks being portrayed as inviting another attack on the homeland. The result is an unshakeable institutional inertia.
To get a sense of what they’re up against, listen to James Clapper, the director of national intelligence, presenting the “Worldwide Threat Assessment” to Congress, as his office does every year. The latest version, in January this year, reads like a catalogue of nightmares, describing macabre possibilities ranging from Hezbollah sleeper cells attacking inside America to a cyberattack that could turn our own infrastructure into murderous drones. Are these science fiction visions, or simply the wise man’s anticipation of the next war? It’s impossible to know, but one thing is striking: there’s no attempt in the intelligence czar’s report to rank the threats or assess their real likelihood. He simply and clinically presents every possible, terrifying thing that could happen, and signs off. It’s truly frightening reading.
This is what Zenko dismissively deems the “threat smorgasbord.” It makes clear how high the stakes are in planning for war and terrorist attacks, and how much emotional power the issue has. The reasonable thing to do might simply never be politically palatable. The current debate about Iran’s nuclear intentions is a perfect example, says Eaglen, who debated Cohen about his thesis on Capitol Hill in April.
“I can say Iran’s a threat, Michael can say it’s not,” she says. “But if he gets it wrong, we’re in trouble.”
Thanassis Cambanis, a fellow at The Century Foundation, is the author of “A Privilege to Die: Inside Hezbollah’s Legions and Their Endless War Against Israel” and blogs at thanassiscambanis.com. He is an Ideas columnist.
[Originally published in The Boston Globe, subscribers only.]
CAIRO — It might have seemed naïve to an outsider, but one of the great hopes among the revolutionaries who humiliated Egyptian dictator Hosni Mubarak more than a year ago was that the country’s strongman regime would finally yield to a democratic variety of voices. Both Islamic revolutionaries and secular liberals spoke up for modern ideas of pluralism, tolerance, and minority rights. Tahrir Square was supposed to turn the page on a half-century or more of one-party rule, and open the door to something new not just for Egypt, but for the Arab world: a genuine diversity of opinion about how a nation should govern itself.
“We disagree about many things, but that is the point,” one of the protest organizers, Moaz Abdelkareem, said in February 2011, the week that Mubarak quit. “We come from many backgrounds. We can work together to dismantle the old regime and make a new Egypt.”
A year later, it is still far from clear what that new Egypt will look like. The country awaits a new constitution, and although a competitively elected parliament sat in January for the first time in contemporary Egyptian history, it is still subordinate to a secretive military regime. Like all transitions, the struggle against Egyptian authoritarianism has been messy and complex. But for those who hoped that Egypt would emerge as a beacon of tolerant, or at least diverse, politics in the Arab world, there has been one big disappointment: It’s safe to say that one early casualty of the struggle has been that spirit of pluralism.
“I do not trust the military. I do not trust the Muslim Brothers,” Abdelkareem says in an interview a year later. In the past year, he helped establish the Egyptian Current, a small liberal party that wants to bring direct democracy to Egyptian government. Despite his inclusive principles, he’s urged the dismissal from public life of major political constituencies with whom he disagrees: former regime supporters, many Islamists, old-line liberals, and supporters of the military. He shrugs: “It’s necessary, if we’re going to change.”
He’s not the only one who has left the ideal of pluralism behind. A survey of the major players in Egyptian politics yields a list of people and groups who have worked harder to shut down their opponents than to engage them. The Muslim Brotherhood — the Islamist party that is the oldest opposition group in Egypt, and the one with by far the most popular support — has run roughshod over its rivals, hoarding almost every significant procedural power in the legislature and cutting a series of exclusive deals with the ruling generals. Secular liberals, for their part, have suggested that an outright coup by secular officers would be better than a plural democracy that ended up empowering bearded fundamentalists who disagree with them.
When pressed, most will still say they want Egypt to be the birthplace of a new kind of Arab political culture — one in which differences are respected, minorities have rights, and dissent is protected. However, their behavior suggests that Egypt might have trouble escaping the more repressive patterns of its past.
IN A COUNTRY that had long barred any meaningful politics at all, Tahrir’s leaderless revolution begat a brief but golden moment of political pluralism. Activists across the spectrum agreed to disagree — this, it was widely believed, was the very practice that would lead Egypt from dictatorship to democracy. During the first month of maneuvering after Mubarak resigned in February 2011, Muslim Brotherhood leaders vowed to restrain their quest for political power; socialists and liberals emphasized due process and fair elections. The revolution took special pride in its unity, inclusiveness, and plethora of leaders: It included representatives of every part of society, and aspired to exclude nobody.
“Our first priority is to start rebuilding Egypt, cooperating with all groups of people: Muslims and Christians, men and women, all the political parties,” the Muslim Brotherhood’s most powerful leader, Khairat Al-Shater, told me in an interview last March, a year ago. “The first thing is to start political life in the right, democratic way.”
Within a month, however, that commitment had begun to fray. Jostling factions were quick to question the motives and patriotism of their rivals, as might be expected from political movements trying to position themselves in an unfolding power struggle. More surprising, and more dangerous, has been the tendency of important groups to seek the silencing or outright disenfranchisement of competitors.
The military sponsored a constitutional referendum in March 2011 that supposedly laid out a path to transfer power to an elected, civilian government, but which depended on provisions poorly understood by Egyptian voters. The Islamists sided with the military, helping the referendum win 77 percent of the votes, and leaving secular liberal parties feeling tricked and overpowered. The real winner turned out to be the ruling generals, who took the win as an endorsement of their primacy over all political factions. The military promptly began rewriting the rules of the transition process.
With the army now the country’s uncontested power, some leading liberal political parties entered negotiations with the generals over the summer to secure one of their primary goals — a secular state — in a most illiberal manner: a deal with the army that would preempt any future constitution written by a democratically selected assembly.
The Muslim Brotherhood responded by branding the liberals traitors and scrapping its conciliatory rhetoric. The Islamists, with huge popular support, abandoned their initial promise of political restraint and instead moved to contest all seats and seek a dominant position in the post-Mubarak order. The Brotherhood now holds 46 percent of the seats in parliament, and with the ultra-Islamist Salafists holding another 24 percent, the Brotherhood effectively controls enough of the body to shut down debate. Within the Brotherhood, Khairat Al-Shater has led a ruthless purge of members who sought internal transparency and democracy — and is now considered a front-runner to be Egypt’s next prime minister.
The army generals in charge, meanwhile, have been using state media to demonize secular democracy activists and street protesters as paid foreign agents, bent on destroying Egyptian society in the service of Israel, the United States, and other bogeymen.
Since the parliament opened deliberations in January, the rupture has been on full and sordid display. The military has sought legal censure against a liberal member of parliament, Zyad Elelaimy, because he criticized army rule. State prosecutors have gone after liberals and Islamists who have voiced controversial political positions. Islamists and military supporters have also filed lawsuits against liberal politicians and human rights activists, while the military-appointed government has mounted a legal and public relations campaign against civil society groups.
THE CENTRAL QUESTION for Egypt’s future is whether these increasingly intolerant tactics mean that the country’s next leaders will govern just as repressively as its last. Scholars of political transition caution that for states shedding authoritarian regimes, it can take years or decades to assess the outcome. Still, there are some hallmarks of successful transitions that Egypt appears to lack. States do better if they have an existing tradition of political dissent or pluralism to fall back on, or strong state institutions independent of the political leadership. Egypt has neither.
“Transitions are always messy, and the Egyptian one is particularly messy,” said Habib Nassar, a lawyer who directs the Middle East and North Africa program at the International Center for Transitional Justice in New York. “To be honest, I’m not sure I see any prospects of improvement for the short term. You are transitioning from dictatorship to majority rule in a country that never experienced real democracy before.”
Outside the parliament’s early sessions in January, liberal demonstrators chanted that the Muslim Brotherhood members were illegitimate “traitors.” In response, paramilitary-style Brotherhood supporters formed a human cordon that kept protesters from getting close enough to the parliament building to even be heard by their representatives.
At a rally for the revolution’s one-year anniversary in Tahrir Square, Muslim Brotherhood leaders preached and made speeches from a high stage to celebrate their triumph. It was unclear whether they were referring to the revolution, or to their party’s dominance at the polls. It was too much for the secular activists. “This is a revolution, not a party,” some chanted. “Leave, traitors, the square is ours not yours,” sang others.
Hundreds of burly brothers linked arms, while a leader with a microphone seemed to taunt the crowd. “You can’t make us leave,” he said. “We are the real revolution.” In response, outraged members of the secular audience tore apart the Brotherhood’s sound system and pelted the stage with water bottles, corn cobs, rocks, even shards of glass.
The Arab world is watching closely to see what happens in the next several months, when Egypt will write a new constitution and elect a president in a truly competitive ballot, and the military will cede power, at least formally. Even in the worst-case scenario, it’s worth remembering that Egypt’s next government will be radically more representative than Mubarak’s Pharaonic police state.
Sadly, though, political discourse over the last year has devolved into something that looks more like a brawl than a negotiation. If it continues, the constitution drafting process could end up more ugly than inspiring. The shape of the new order will emerge from a struggle among the Islamists, the secular liberals, and the military — all of whom, it now appears, remain hostage to the culture of the regime they worked so hard to overthrow.
Wanted: A grand strategy
In search of a cohesive foreign policy plan for America
By Thanassis Cambanis
NOVEMBER 13, 2011
GREG KLEE/GLOBE STAFF PHOTO ILLUSTRATION
President Obama campaigned on the promise of change, and ended up with a world much fuller of it than he and his advisers expected. In the three years since he was elected, the foreign-policy landscape has shifted dramatically, not least because the financial crisis has spread worldwide and the Arab world has risen up against its autocratic rulers.
When the unexpected occurs in the realm of foreign policy — like this year’s Arab revolts, or a hypothetical military crisis in Asia — the nation’s leaders don’t always have ready-made plans on the shelf, and don’t have the luxury of time to start crafting a policy from scratch. What they can rely on, however, is what’s known as a grand strategy. A term from academia, “grand strategy” describes the real-world framework of basic aims, ideals, and priorities that govern a nation’s approach to the rest of the world. In short, a grand strategy lays out the national interest, in a leader’s eyes, and says what a state will be willing to do to advance it. A grand strategy doesn’t prescribe detailed solutions to every problem, but it gives a powerful nation a blueprint for how to act, and brings a measure of order to the rest of the world by making its expectations more clear.
In the absence of a clear road map for how, and why, America should be engaging with the world, a number of big-picture thinkers have recently rushed into the gap, filling the pages of the most prominent journals in the field and putting grand strategy at the center of the conversation at influential institutions from the National Defense University to America’s top policy schools.
What are the competing visions for an American grand strategy for the 21st century? All the proposed new strategies try to deal with a world in which America still holds the preponderance of power, but can no longer dominate the entire globe. The two-way Cold War contest is over, but it will be some time before another pretender to power — whether China, Russia, India, or the European Union — becomes a meaningful rival. One simple version has America stepping back from the world, husbanding its resources by projecting power at an imperial remove rather than through attempts to micromanage the affairs of far-flung foreign nations. Another has it acting more like a multinational corporation, delegating authority to allies most of the time, but involving itself deeply and decisively wherever its interests are threatened. And some unapologetic America-firsters argue the United States can do better by openly trying to dominate the world, rather than by negotiating with it.
Whatever grand strategy emerges to guide 21st century America, the answer is likely to grow out of America’s history, rather than markedly depart from it. All successful American grand strategies — manifest destiny, Wilsonian idealism and self-determination, Cold War-era containment — were driven in part by a sense of American exceptionalism, the notion that America “stands tall” and acts as a beacon in the world. But they also included a dose of Machiavelli, nakedly seeking to contain security threats against America, while using international allies to further American interests.
One influential strain of thinking about grand strategy comes from the realm of small-l liberal realism. Liberal realists are unsentimental in their desire to see America maximize its power, but also restrained about how much America should get involved. They want America to reap more and spend less, in both financial and military terms. Their ideas tend to dominate at policy schools, where much of the applied thinking about foreign affairs comes from, and seem to be getting a thorough hearing within Hillary Rodham Clinton’s State Department.
As a potential grand strategy, the front-runner emerging from these realist thinkers is offshore balancing. Essentially, it advocates keeping America strong by keeping the rest of the world off balance. Military intervention, in this line of thinking, should always be a last resort rather than a first move; America’s military should lurk over the horizon, more powerful if it’s on the minds of its rivals rather than their territory. This grand strategy prescribes a “divide and conquer” approach, advocating that Washington use its diplomatic and commercial power to balance rising powers against one another, so that none can dominate a single region and proceed to threaten America. A prominent group of theorists has embraced this idea, including John Mearsheimer at the University of Chicago and Stephen Walt at Harvard University’s Kennedy School.
“Our first recourse should be to have local allies uphold the balance of power, out of their own self-interest,” Walt wrote in the most recent edition of The National Interest. “Rather than letting them free ride on us, we should free ride on them as much as we can, intervening with ground and air forces only when a single power threatens to dominate some critical region.”
A slightly different version could be called “internationalist tinkering”: It argues that America should focus on rebuilding its international relationships and coalitions, while reserving America’s right on occasion to act decisively and alone. Writing this summer in Foreign Affairs, another Boston-area political scientist, Daniel W. Drezner of the Fletcher School of Law and Diplomacy at Tufts University, argued that America can quite effectively maintain its international power by relying on cooperation and gentle persuasion most of the time, and force when challenged by other countries. This approach, Drezner believes, will have the result of “reassuring allies and signaling resolve to rivals.” An example is America’s investment in closer relations with Asian states, putting China on notice and forcing it to adjust its expansionist foreign policy. Drezner argues that although Obama might not have articulated it well, he is already driven by this approach, which Drezner calls “counterpunching.”
Princeton University’s G. John Ikenberry makes a similar argument in his latest book, “Liberal Leviathan.” America, he argues, can and should police the world order so long as it abides by the same rules as other nations. As a sort of first among equals, America holds the balance of power but cannot overtly dominate the world. It’s a position that demands considerable care to maintain, and requires investment in alliances, institutions like the UN, and international regimes like the ones that govern trade and certain types of crime.
Another line of thinking comes from scholars and writers who are more pessimistic about America’s prospects; they see an empire at the beginning of a long period of decline, and believe America needs to dramatically curtail its international engagements and get its own house in order first. It could be called the school of empire in eclipse, and its proponents have been labeled declinists. In effect, they argue that America’s economy and domestic infrastructure are collapsing, and the nation’s global influence will follow, unless America concentrates its resources on rebuilding at home. Many of the most influential thinkers in this strain aren’t against a robust foreign policy, but they argue that it’s impractical to worry too much about the rest of the world until we address the problems at home. The late historian Tony Judt was one of the leading exponents of this view, and the writer George Packer has taken up many of the same themes. A folk version of this thinking drives much of Thomas Friedman’s writing, and underpins his most recent book, “That Used to Be Us: How America Fell Behind in the World It Invented and How We Can Come Back,” which he wrote with Michael Mandelbaum, an American foreign policy expert at Johns Hopkins University.
On the opposite end of the spectrum, neo-imperialism holds that America’s grand strategy needs only to be more brash and more demanding. (Neo-imperialist thinkers are often called “global dominators” as well.) The recent mistake, in this view, is that America asks too little of the world, and thereby invites frustrating challenges. Mitt Romney’s claim that he “won’t apologize for America” reflects this view, which has its intellectual underpinnings in the writings of conservatives such as Robert Kagan, Niall Ferguson, Robert Kaplan, and Charles Krauthammer. Ferguson, a British historian, has made the provocative suggestion that America should pick up where the British Empire left off, directly managing the entire world’s affairs. In this view, even long and expensive entanglements like the ones in Iraq and Afghanistan aren’t major setbacks: Militarism, these expansionists argue, has propelled American economic growth and international influence for centuries, and has a long future as long as we aren’t shy about using the force we have.
A grand strategy has to match means with goals; it can’t merely assert American power, but needs to account for America’s own depth of resources. The precarious state of America’s economy and the wear and tear on its military suggest that any successful new strategy would allow for a modest period of retrenchment, one in which America continues to fancy itself the world’s leader but adopts the tone of a hard-boiled CEO rather than a field marshal–Jack Welch rather than George Patton.
Bush’s strategy foundered for those reasons; he boldly and clearly asserted that America would secure itself through preemptive war and the spread of democracy, but found that he simply didn’t have the resources to deliver–and that the world didn’t respond as he hoped to our prodding and aspirations.
America’s limits have grown apparent as it has discovered a surprising shortage of leverage, even over close allies. The liberal grand strategists are feeding into a process already underway in the Obama administration to systematize foreign policy into a coherent framework, something more akin to a grand strategy than the jumble of policies that has marked America’s foreign policy for the last three years. The conservatives, meanwhile, are looking for intellectual toeholds among the Republican presidential contenders.
Earlier this year a top Obama aide seemed to belittle the very idea of a grand strategy as a simplistic “bumper sticker,” something that reduced the world’s complexity to a slogan. But, in a sense, that’s exactly the point of having one. To be truly helpful in time of crisis, a grand strategy must be based on incredibly thorough and detailed thinking about how America will rank its competing interests, and what tools it might use to project power in the rest of the world. But it also demands simplicity: a principle, even a simple sentence, reflecting our values as well as our interests, based on right as well as might, and as clear to America’s enemies as it is to the American electorate.
My latest column in The Boston Globe.
Fundamentalism’s rise just shows that the secular state has won, says political scientist Olivier Roy
If you read the news, it can feel impossible to escape the growing influence of religious fundamentalism. Christian evangelicals set the talking points of the Republican primaries, and America’s leaders speak about God with an insistence that might have made the authors of the Constitution blush. Religious parties have become steady players in the politics of secular Europe. Meanwhile, Islamists from Morocco to Malaysia campaign to eliminate all barriers between the mosque and the state. With religion so ubiquitous in politics, it sometimes appears that the world may be crossing the threshold from the age of secularism into one of religious states.
But what if all this religious activism and faith-based politics herald not a new dominance but a passing into irrelevance? That’s the counterintuitive assertion of Olivier Roy, a French political scientist and influential authority on Islam, who has now turned his eye on the growth of fundamentalism across all religions.
As Roy argued in the book “Holy Ignorance,” published late last year, old-line religions with organized power structures are in a global decline, overtaken by fundamentalist sects such as Islamic Salafis and Christian Pentecostals, which make a charismatic appeal to the individual. But in fact, Roy argues, this trend suggests that religion has effectively given up the struggle for power in the public square. Despite the extremist rhetoric of televangelists in the American Midwest or the Persian Gulf, these growing religions are really just a reaction to the triumph of secularism. They demand a total personal commitment from their adherents: abstinence, sobriety, frequent prayer. But, though their leaders may talk of making America an officially Christian nation or of establishing pure Islamic states in places like Egypt, they carry none of the real institutional or military weight of established religions in history.
“The religious movements are here, they have an agenda,” Roy says. “I never deny that. I say that political ideologies based on religion don’t work. You can’t manage a country in the name of a religion.” In other words, as he interprets it, the world is not in the grip of a clash of civilizations; religious movements are just growing more strident because they are detached from the culture of nations. An age of vocal religious fundamentalists, contrary to our expectations, may also be an age of safely secular states.
If Roy’s hypothesis proves correct, it stands to change international politics by making us much less concerned about religious extremism. It could give secular forces new power to deal with fundamentalist organizations like the Muslim Brotherhood; since their religious agendas are not likely to dominate any nation’s political life, they could be dealt with on a purely material level. At home, American policy makers could rest assured that extreme fundamentalist positions on gay rights or teaching evolution in schools would inevitably be leavened by the political realities of a pluralistic, secular society. For those who treasure secular governance and separation of church and state, Roy’s hypothesis–if the 21st century bears it out–would mean there is far less in the world to fear.
In “Holy Ignorance,” Roy presents what reads like an alternate history of the last half century, narrating some familiar facts but drawing counterintuitive conclusions. Fundamentalist religion has been on the rise worldwide since the 1950s. In the Islamic world the fastest growing sect is the Salafis, who advocate a return to 7th-century piety. Among Christians, evangelical Christian sects like Mormons, Pentecostals, and Jehovah’s Witnesses are the fastest growing, while ultra-Orthodox Jews are gaining within the Jewish community. Almost nowhere, with the exception of Ayatollah Khomeini’s revolution in Iran, has the fundamentalist revival taken the form of a single religious group, unified behind one leader, trying to take over a country. Instead, ascendant fundamentalists are a fractious group. Moreover, their growth comes at a time when pluralistic, secular politics predominate, and religious institutions have less power than ever before. In a borderless, globalized world, the only religions that grow are exactly those that sever themselves from a specific national or cultural context, and appeal to universal humanity. Such religions might sound louder or more shrill, but they’re also by definition less significant to the political and cultural life of nations.
Roy’s argument has grown out of decades studying the interaction between religion, politics, and culture. As a young man, Roy writes, he was jarred by the arrival of a born-again Christian in his religious youth group; it was his first encounter with charismatic religion. After he established himself as an authority on the Islamic world, Roy was arguing by the 1990s that Islamic fundamentalists would not manage to win over their own societies, despite their violent heyday. His 1994 book, “The Failure of Political Islam,” was criticized by some as premature, but now most scholars agree with its conclusions.
From today’s vantage point, religious politicians and commentators seem like a force to be reckoned with, but it’s instructive to compare them to the mainline religious institutions that used to wield power. During the Middle Ages, institutional religions possessed tremendous real-world power. The Catholic Church decided which leaders had legitimacy and triggered massive wars. With the Industrial Revolution, Christian missionary activity inextricably intertwined with European colonization. Even as recently as 1960, the Vatican held such sway that American voters feared John F. Kennedy would take orders from the pope; his edicts set the parameters of European social policy well into the 20th century. Meanwhile, in the Islamic world, until colonialism eroded the authority of the Ottoman Empire in the 19th century, the supposedly all-powerful caliphs and their governors could be overthrown when religious scholars withdrew their support.
Gradually, however, secularism took root. After the French Revolution, the number of European leaders claiming a divine right to rule dwindled. Even the caliphate in Istanbul began to look like a secular empire with vestigial religious trappings. Modern nation states were more concerned with holding territory and controlling populations than with matters of faith. Roy argues that even when they invoked religion to gain legitimacy, modern states were engaged in an inherently political quest, in which religion was marginalized.
It was the loss of direct religious power that prompted the rise of what Roy calls “pure” religions: sects that appeal to the universal individual. Old-line religions talked explicitly about power and considered themselves more powerful than states. As they declined, Roy says, the public began to move to groups that were more charismatic and focused on faith in the private sphere–Mormons, Pentecostals, Tablighis, Salafis, ultra-Orthodox Jews. We call these sects “fundamentalist” because they claim to return to the founding texts of their faith, stripping out the practices that had accreted in the centuries since the founding of their respective religions.
This return to basics has visceral appeal to those seeking a faith. But millions of born-again evangelicals, seeking salvation from a mosaic of political backgrounds, are unlikely to develop the clout the Vatican once had. Look closely at the surge of Salafists around the Islamic world, and you’ll find extreme fragmentation, with rivalries between the followers of competing sheikhs often taking priority over concern with unbelievers. The religions that are actually growing and winning converts are resolutely not political monoliths in the making.
In Roy’s view, the pure new religions employ simplistic ideas that sound universal and appeal to new converts, but whose vigor can be hard to sustain generation to generation. These charismatic, fundamentalist religions depend on constant renewals of faith, Roy says, drawing their growth and dynamism from the zeal of new converts. Fundamentalist sects can afford to take extreme positions on belief, Roy says, precisely because they have none of the constraints of governance and authority.
Not everyone is convinced by Roy’s interpretation. Even if there are hardly any real theocracies left in the world, religious hard-liners still influence political life directly in countless places. One need look no further than the role of born-again Christian activists in the Republican Party, or the growing enrollment of Salafist political parties in Egypt.
Karen Barkey, a Columbia University sociologist, argues that Roy has overlooked the resilience of fundamentalism. Yes, secular culture has largely taken the reins of power, but it’s a two-way process; religion has fought back, and tailors its message and approach to local constraints. Roy is too optimistic when he claims that fundamentalism always wears itself out, Barkey says. Religion is asserting itself within evolving cultures, including democratic ones, and will continue to have great influence. “Some forms of religious fundamentalism may well be disappearing into the ether of abstraction,” Barkey wrote in a barbed essay contesting Roy’s view in Foreign Affairs, “but in most cases, religion, culture, and politics are still meeting on the ground.”
For those watching that clash, too, it can be hard to credit Roy’s interpretation. In phenomena like the rise of the evangelical right in American politics and Islamic fundamentalists in the Arab world, Roy sees a sign of defeated religion that has been pushed from the political playing field. But others argue that religion crucially shapes politics in ways that scholars, most of whom are secular, don’t well understand. Grandees of political science recently convened an initiative to systematically increase their field’s study of religion, and published a volume through Columbia University Press this year called “Religion and International Relations Theory.”
Still, many scholars agree with Roy that the importance of religious extremists has been overstated. Alan Wolfe, director of the Boisi Center for Religion and American Public Life at Boston College, says that Roy provides a convincing riposte to those like Jerry Falwell, who called for religion to directly shape legislation. Wolfe shares Roy’s belief that religious influence on politics will continue to wane.
The implications of Roy’s theory are stark. If he is correct, American policy makers can step away from the entire framework of “the clash of civilizations” and “the rise of Islam,” and finally begin to deal with political actors at face value. Iran’s ayatollahs, for example, are better understood as extreme nationalists rather than religious fanatics, Roy has written. While religion matters to Hindu nationalists in India, Shas Party members in Israel, and followers of Hezbollah in Lebanon, it’s their specific nationalist aims that have made them politically powerful players. In political terms, their faith is essentially irrelevant.
Most of all, Roy’s theory offers the possibility of no longer seeing our conflicts as religious wars. In 2003, Lieutenant General William Boykin scandalized secular Americans when he described battling a Muslim fighter and said, “I knew that my God was bigger than his. I knew that my God was a real God, and his was an idol.” If Roy is correct, such rhetoric is simply beside the point; as our century wears on, extreme religious belief will more and more be just a fringe phenomenon, unworthy of the attention of governments.