
Why the underlying drivers of change in the Middle East haven’t changed


Editors’ Note: In a recent interview with Foreign Policy Interrupted, Tamara Wittes was asked about how the situation in the Middle East has changed since she published her 2008 book Freedom’s Unsteady March. Five years after the Arab uprisings and the start of the Syrian civil war, and a year and a half after the Islamic State captured Mosul (along with the world’s attention), Tamara says that many of the same fundamental dynamics in the Middle East are still at work.

The situation in the region has changed dramatically since then, but I think that the fundamental insights that informed that book remain true. The underlying drivers of change in the Middle East are still there: the demographic drivers, the economic drivers, and the technological drivers that I described in the book are all still present.

Although there’s a lot of disorder and violence, which leads people on the ground to prioritize security and to search for it in different ways, that doesn’t mean that they’re going to be satisfied. It doesn’t mean that the “well, at least it’s not ISIS” line is going to suffice for governments in the Middle East for very long.

The underlying drivers of change are still present, the pressure for change is still present, and a lot of those pressures are about the simple fact of individual empowerment. Expectations shifted, and people, individuals, have the ability to act in ways that they didn’t before. States and governments have to accommodate that. It’s affecting politics all over the world, and the Middle East is not immune.

So the question becomes: how are governments going to learn to accommodate that and turn it into a strength? I think that the United States does have a really important role to play there. There were mistakes that the Bush administration made—setting aside its vision of Iraq, which has of course been very thoroughly discussed and assessed. But even in terms of non-military intervention to try to advance reform, the critique I made in the book is that the Bush administration was overly focused on political process, and elections in particular. I think that one of the other lessons of recent years is that the United States and other Western countries get very focused on political institutions and think, well, if we set up a judicial system, and a parliament, and a constitution, then the gears of the machine start to turn and the state starts to function. Look at the rebuilding efforts in Afghanistan, for example.

But what we see in the Middle East today is that formal institutions aren’t enough. People have to have trust in the institutions, and people and communities have to have sufficient agreement on the basic rules of the game to make those institutions legitimate and authoritative. That’s what’s missing in a lot of places around the region right now: there isn’t enough dialogue and debate and, ultimately, negotiated agreement on the basic rules of the game. So I think the challenge for the United States and others who care about stability in the Middle East going forward is how to help cultivate platforms for that kind of dialogue, and how to help cultivate the skills and mechanisms for resolving very fundamental questions about how government should be organized, what the role of religion in politics should be, and what the balance is between individual rights and collective identity.

These are big, big questions, and right now, in too many places, they are being fought over violently. But the questions still have to be answered, and so the challenge is helping develop ways to answer them peacefully.


Forging New Partnerships: Implementing Three New Initiatives in the Higher Education Act


The African Americans: Many Rivers to Cross

The African Americans: Many Rivers to Cross, a six-hour series written and presented by Professor Henry Louis Gates, Jr., examines the evolution of the African-American people, as well as the political strategies and religious and social perspectives they developed — shaping their own history, culture and society against unimaginable odds. The series moves through five…


The future of extractive industries’ governance in Latin America and the Caribbean


Using extractive industry data to fight inequality & strengthen accountability: Victories, lessons, future directions for Africa

With the goal of improving the management of oil, gas, and mineral revenues, curbing corruption, and fighting inequality, African countries—like Ghana, Kenya, Guinea, and Liberia—are stepping up their efforts to support good governance in resource-dependent countries. Long-fought-for gains in transparency—including from initiatives like the Extractive Industries Transparency Initiative (EITI)—have helped civil society and other accountability…


2020 and beyond: Maintaining the bipartisan narrative on US global development

It is timely to look at the dynamics that will drive the next period of U.S. politics and policymaking and how they will affect U.S. foreign assistance and development programs. Over the past 15 years, a strong bipartisan consensus—especially in the U.S. Congress—has emerged to advance and support U.S. leadership on global development as a…


Five observations on President Trump’s handling of Ukraine policy

Over the past two weeks, a CIA whistleblower’s complaint, a White House record of a July 25 telephone conversation between President Donald Trump and Ukrainian President Volodymyr Zelenskiy, and texts exchanged by American diplomats have dominated the news and raised questions about the president’s handling of policy toward Ukraine. Here are five observations: First, President…


Did Zelenskiy give in to Moscow? It’s too early to tell

For more than five years, Russia has used its military and proxy forces to wage a low-intensity but still very real war in eastern Ukraine. Newly-elected Ukrainian President Volodymyr Zelenskiy would like to end that conflict. On October 1, he announced an agreement based on the “Steinmeier Formula” to advance a settlement. Angry crowds took…


Five months into Ukrainian President Zelenskiy’s term, there are reasons for optimism and caution

How do Ukrainians assess the performance and prospects of President Volodymyr Zelenskiy, now five months in office, as he tackles the country’s two largest challenges: resolving the war with Russia and implementing economic and anti-corruption reforms? In two words: cautious optimism. Many retain the optimism they felt when Zelenskiy swept into office this spring, elected…


The gender and racial diversity of the federal government’s economists

The lack of diversity in the field of economics – in addition to the lack of progress relative to other STEM fields – is drawing increasing attention in the profession, but nearly all the focus has been on economists at academic institutions, and little attention has been devoted to the diversity of the economists employed…


Transformative Investments: Remaking American Cities for a New Century

Editor's Note: This article was first published in the June 2008 World Cities Summit edition of ETHOS.

At the dawn of a new century, broad demographic, economic and environmental forces are giving American cities their best chance in decades to thrive and prosper. The renewed relevance of cities derives in part from the very physical characteristics that distinguish cities from other forms of human settlement: density, diversity of uses and functions, and distinctive design.

Across the United States (U.S.), a broad cross section of urban practitioners—private investors and developers, government officials, community and civic leaders—are taking ambitious steps to leverage the distinctive physical assets of cities and maximise their economic, fiscal, environmental and social potential.

A special class of urban interventions—what we call “transformative investments”—is emerging from the millions of transactions that occur in cities every year. The hallmark of transformative investments is their catalytic nature and seismic impact on markets, on people, on the city landscape and urban possibilities—far beyond the geographic confines of the project itself.

Recognising and replicating the magic of transformative investments, and making the exception the norm, is important if U.S. cities are to realise their full potential.

THE URBAN MOMENT
The U.S. is undergoing a period of dynamic change, comparable in scale and complexity to the latter part of the nineteenth century. Against this backdrop, there is a resurgence in the importance of cities due to their fundamental and distinctive physical attributes.

Cities offer a broad range of physical choices—in neighbourhoods, housing stock, shopping venues, green spaces and transportation. These choices suit the disparate preferences of a growing population that is diverse by race, ethnicity and age.

Cities are also rich with physical amenities—mixed-use downtowns, historic buildings, campuses of higher learning, entertainment districts, pedestrian-friendly neighbourhoods, adjoining rivers and lakes—that are uniquely aligned with preferences in a knowledge-oriented, post-industrial economy. A knowledge economy places the highest premium on attracting and retaining educated workers, and an increasing proportion of these workers, particularly young workers, value urban quality of life when making their residential and employment decisions.

Finally, cities, particularly those built in the nineteenth and early twentieth centuries, are compactly constructed and laid out along dense lines and grids, enhancing the potential for the dynamic, random, face-to-face human exchange prized by an economy fuelled by ideas and innovation. Such density also makes cities perfect agents for the efficient delivery of public services as well as the stewardship of the natural environment.

Each of these elements—diversity, amenities and density—distinguishes cities from other forms of human settlement. In prior generations, these attributes were devalued in a nation characterised by the single family house, the factory plant, cheap gas, and environmental profligacy. In recent history, many U.S. cities responded by making the wrong physical bets or by replicating low-density, suburban development—further eroding the very strengths that make cities distinctly urban and competitive.

Yet, the U.S., a nation in demographic and economic transition, is revaluing the quality of life uniquely offered by cities and urban places, potentially altering the calculus by which millions of American families and businesses make location decisions every year.

DELIVERING "CITYNESS": THE RISE OF TRANSFORMATIVE INVESTMENTS
Across the U.S., a practice of city building is emerging that builds on the re-found value and purpose of the urban physical landscape, and recognises that cities thrive when they fully embrace what Saskia Sassen calls “cityness”.1

The move to recapture the American city can be found in all kinds of American cities: global cities like New York, Los Angeles and Chicago that lie at the heart of international trade and finance; innovative cities like Seattle, Austin and San Francisco that are leading the global economic revolution in technology; older industrial cities like Cleveland, Pittsburgh and Rochester that are transitioning to new economies; fast-growing cities like Charlotte, Phoenix and Dallas that are regional hubs and magnets for domestic and international migration.

The new urban practice can also be found in all aspects or “building blocks” of cities: in the remaking of downtowns as living, mixed-use communities; in the creation of neighbourhoods of choice that are attractive to households with a range of incomes; in the conversion of transportation corridors into destinations in their own right; in the reclaiming of parks and green spaces as valued places; and in the revitalisation of waterfronts as regional destinations, new residential quarters and recreational hubs.

Yet, as the new city building practice evolves, it is clear that a subset of urban investments is emerging as truly “transformative”, in that these investments have a catalytic, place-defining impact, creating an entirely new logic for portions of the city and a new set of possibilities for economic and social activity.

We define these transformative investments as “discrete public or private development projects that trigger a profound, ripple effect of positive, multi-dimensional change in ways that fundamentally remake the value and/or function of one or more of a city’s physical building blocks”.

This subset of urban investments shares important characteristics:

  • On the economic front, transformative investments uncover the hidden value in a part of the city, creating markets in places where markets either did not exist or were only partially realised.
  • On the fiscal front, transformative investments dramatically enhance the fiscal capacity of local governments, generating revenues through the rise in property values, the growth in city populations, and the expansion of economic activity.
  • On the cognitive front, transformative investments redefine the identity and image of the city. They effectively “re-map” previously forgotten or ignored places by residents, visitors and workers. They create nodes of new activities and new places for people to congregate.
  • On the environmental front, transformative investments enable cities to achieve their “green” potential by cleaning up the environmental residue from prior industrial uses or urban renewal efforts, by enabling repopulation at greater densities to occur and by providing residents, workers and visitors with transportation alternatives.
  • On the social front, transformative investments have the potential, while not always realised, to alter the opportunity structure for low-income residents. When carefully designed, staged and leveraged, they can expand the housing, employment and educational opportunities available to low-income residents and overcome the racial, ethnic and economic disparities that have inhibited city performance for decades.

DISSECTING SUCCESS: HOW AND WHERE TRANSFORMATIVE INVESTMENTS TAKE PLACE
The best way to identify and assess transformative investments is by examining exemplary interventions in the discrete physical building blocks of cities: downtowns, neighbourhoods, corridors, parks and green spaces, and waterfronts.

Downtowns
If cities are going to realise their true potential, downtowns are compelling places to start. Physically, downtowns are equipped to take on an emerging set of uses, activities and functions and have the capacity to absorb real increases in population. Yet, as a consequence of America’s appetite for sprawl, urban downtowns have lost their appeal. Economic interests, once the stronghold of downtowns, have moved to suburban town centres and office parks, depressing urban markets and urban value.

Across the US, downtowns are remaking themselves as residential, cultural, business and retail centres. Cities such as Chattanooga, Washington, DC and Denver have demonstrated how even one smart investment can inject new energy and jumpstart new markets. The strategic location of a new sports arena in a distressed area of downtown Washington, DC fits our definition of a transformative investment. Leveraging the proximity of a transit stop, the MCI Center was nestled within the existing urban fabric on a city-owned urban renewal site. The arena’s pedestrian-oriented design strengthened, rather than interrupted, the continuity of the 7th Street retail corridor.2 Today, the area has been profoundly transformed as scores of new restaurants, shops and bars dot the arena’s surroundings. Residents and visitors rely heavily on the nearby transit to reach this destination.

Neighbourhoods
Ever since the physical, economic and social agglomeration of “city” was established, the function of neighbourhoods has remained relatively untouched. While real estate values of neighbourhoods have shifted over time in response to micro- and macro-economic trends, a subset of inner city communities have remained enclaves of poverty. Victims of earlier urban renewal and public housing efforts, millions of people are consigned to living in neighbourhoods isolated from the economic and social mainstream.

Cities such as St. Louis, Louisville and Atlanta have been at the forefront of public housing (and hence neighbourhood) transformation, supported by smart federal investments in the 1990s. For example, the demolition of the infamous high-rise Vaughn public housing project in St. Louis enabled the construction of a new human-scale, mixed-income housing development in one of the poorest, most crime-ridden sections of the city. This redevelopment corrected the mistakes of failed public housing projects by restoring street grids, providing quality design, and injecting a sense of social and physical connection. Constructing a mix of townhouses, garden apartments and single family homes helped catalyse other public and private sector investments.

What made this investment transformative was that it included the reconstitution of Jefferson Elementary, a nearby public school. Working closely with residents, and with the financial support of corporate and philanthropic interests, the developer helped modernise the school, making it one of the most technologically advanced educational facilities in the region. A new principal, new curriculum, and new school programmes helped it become one of the highest performing inner city schools in the state of Missouri.

Corridors
City corridors are the physical tissue that knits disparate parts of a city together. In the best of conditions, corridors are multi-dimensional in purpose: they are destinations as much as facilitators of movement. In many cities, however, corridors simply shuttle traffic past blocks of desolate retail and residential areas, or they have become yet another cookie cutter image of suburbia—parking lots abutting the main street, standardised buildings and design, and oversized and cluttered signage.

Cities like Portland, Oregon and urban counties like Arlington, Virginia have used mass transit investments and land use reforms to create physically, economically and socially healthy corridors that give new residents reasons to choose to live nearby and existing residents reasons to stay.

Portland conceived a streetcar to spur high density housing in close-in neighbourhoods that were slowly shedding old industrial uses. The streetcars traverse a three-mile route through residential areas and along the waterfront to the university. Since its construction, the streetcar has not only expanded transportation choices, it has helped galvanise new destinations along its route—including new neighbourhoods, retail clusters, and economic districts.

Parks and Open Space
City green spaces (such as parks, nature trails, bike paths) were initially designed to provide the lungs of the city and an outlet for recreation, entertainment and social cohesion. As general conditions declined in many cities, the quality of urban parks also declined, to the great consternation of local residents. Green spaces turned into under-used, if not forgotten, areas of the city; or worse still, hot spots of crime and illegal activity. Such blight discouraged cities from transforming outmoded uses (such as manufacturing areas) into more green space. Even in cities with booming development markets, new parks often were not designed into the new urban fabric.

Across the US, cities are pursuing a variety of strategies to reclaim or augment urban green spaces. Cities like Atlanta, for example, have created transformative parks from outmoded economic uses, such as manufacturing land along urban waterfronts or by converting old railway lines into urban trail-ways.

Cities like Scranton have reclaimed existing urban parks consumed by crime and vandalism. This has required creative physical and programmatic investments, including: redesigning parks (removing physical and visibility barriers such as walls, thinning vegetation, and eliminating “dark corners”); increasing the presence of uniformed personnel; increasing the park amenities (such as evening movies and other events to increase patronage);3 and providing regular maintenance of the park and recreational facilities.4

Waterfronts
Many American cities owe their location and initial function to their proximity to water: rivers, lakes and oceans. Waterfronts enabled cities to manufacture, warehouse and ship goods and products. Infrastructure was built and zoning was aligned to carry out these purposes. In a knowledge-intensive economy, however, the function of waterfronts has dramatically changed, reflecting pent-up demand for new places of enjoyment, activities and uses.

As with the other building blocks, cities are pursuing a range of strategies to reclaim their waterfronts, often by addressing head-on the vestiges of an earlier era.

New York has overhauled the outdated zoning guidelines for development along the Brooklyn side of the East River, enabling the construction of mixed-income housing rather than prescribing manufacturing and light industry uses.

Pittsburgh and many of its surrounding municipalities have embarked on major efforts to re-mediate the environmental contamination found in former industrial sites, paving the way for new research centres, office parks and retail facilities.

Milwaukee, Providence and Portland have demolished the freeways that separated (or hid) the waterfront from the rest of the downtown and city, and unleashed a new wave of private investment and public activities.

WHAT IS THE RECIPE FOR SUCCESS?
The following are underlying principles that set these diverse investments apart from other transactions:

Transformative Investments advance “cityness”: Investments embrace the characteristics, attributes, and dynamics that embody “city”—its complexity, its intersection of activities, its diversity of populations and cultures, its distinctively varied designs, and its convergence of the physical environment at multiple scales. Project by project, transformative investments are reclaiming the true urban identity by strengthening aspects of the ‘physical’ that are intrinsically urban—be it density, rehabilitation of a unique building or historic row, or the incorporation of compelling, if not iconic, design.

Transformative Investments require a fundamental rethinking of land use and zoning conventions: In the midst of massive economic global change, 21st century American cities still bear the indelible markings of the 20th century. In the early 20th century, for example, government bodies enacted zoning to establish new rules for urban development. While originally intended to protect “light and air” from immense overbuilding, later versions of zoning added the segregation of uses—isolating housing, office, commercial and manufacturing activities from each other. Thus, transformative investments require, at a minimum, variances from the rigid, antiquated rules that still define the urban landscape. In many cases, examples of successful transformative investments have become the tool to overhaul outdated and outmoded frameworks and transform exceptions into new guidelines.

Transformative Investments require innovative, often customised financing approaches: Cities have distinctive physical forms (e.g., historic buildings) and distinctive physical visions (e.g., distinct districts). Yet private and even public financing of the American physical landscape, for the most part, is standardised and routinised, enabling the production of similar products (e.g., single family homes, commercial strips) at high volume, low cost and low quality. Transformative investments, however, require the marrying of multiple sources of financing (e.g., conventional debt, traditional equity, tax-driven equity investments, innovative financing arrangements, public subsidy, patient philanthropic capital), placing stress on project design and implementation. In addition, achieving social objectives often requires building innovative tax and shared equity approaches into particular transactions, so that appreciation in property values can serve higher community purposes (e.g., creating affordable housing trust funds). As with regulatory frameworks, the evolution from exceptional transactions to routinised forms of investment is required to ensure that transformative investments become the rule rather than the exception.

Transformative Investments often involve an empirically-grounded vision at the building block level: While a vision is not a necessary prerequisite for realising transformative investments, cities that proceed without one have a higher probability of making the wrong physical bets, siting them in the wrong places, or ultimately creating a physical landscape that fails to cumulatively add up to “cityness”. It is easy to find such examples around the country, such as isolated mega-projects (a new stadium or convention centre) or waterfront revitalisation efforts that constructed the wrong projects, having misunderstood the market and the diversifying demographics.

Telescoping the possibilities and developing a bold vision must be done through an empirically-grounded process. A visioning exercise should therefore include: an economic and market diagnostic of the building block; a physical diagnostic; an evaluation of existing projects; and the development of a vision to transform the landscape. From here, disparate actors (public, private, civic, not-for-profit) will have the best instruments to assess whether a physical project could meet specific market, demographic and physical needs—increasing its chances of becoming truly transformative.

Transformative Investments require integrative thinking and action: Transformative investments are often an act of “connecting the dots” between urban experiences (e.g., transportation, housing, economic activity, education and recreation), which are inextricably linked in reality but separated in action. This requires a significant change in how cities are both planned and managed.

On the public side, it means that transportation agencies must re-channel scarce infrastructure investments to leverage other city building goals beyond facilitating traffic. It means that agencies driving a social agenda, such as schools and libraries, have to re-imagine their existing and new facilities to integrate strong design and move away from isolated projects.

In the private sector, it means understanding the broader vision of the city and carefully siting and designing investments to increase successful city-building and not just project-building. It means increasing their own standards by using exemplary design and construction materials. It means finding financially beneficial approaches to mixed income housing projects and mixed use projects instead of just single uses. In all cases, it requires holistic thinking that cuts across the silos and stovepipes of specialised professions and fragmented bureaucracies.

BUILDING GREAT CITIES
For the first time in decades, American cities have a chance to experience a measurable revival. While broader macro forces have handed cities this chance, city builders are also learning from past mistakes. After investing billions of dollars into city revitalisation efforts, the principles underpinning particularly successful and catalytic projects—transformative investments—are beginning to be clarified. The most important lesson for cities, however, is to embrace “cityness”, to maximise what makes them physically and socially unique and distinctive. Only in this way will American cities reach their true greatness.


  • 1Saskia Sassen defined the term “cityness” to be the concept of embracing the characteristics, attributes, and dynamics that embody “city”: complexity, the convergence of the physical environment at multiple scales, the intersection of differences, the diversity of populations and culture, the distinctively varied designs and the layering of the old and the new. Sassen, S., “Cityness in the Urban Age”, Urban Age Bulletin 2 (Autumn 2005).
  • 2Strauss, Valerie, “Pollin Says He’ll Pay for Sports Complex District, Awaits Economic Boost, Upgraded Image”, Washington Post, Thursday, 29 December 1994.
  • 3Personal communication from Peter Harnik, Director, Center for City Park Excellence, Trust for Public Land, 6 June 2005.
  • 4Harnik, Peter, “The Excellent City Park System: What Makes it Great and How to Get There”. San Francisco, CA: The Trust for Public Land, 2003. Available online at http://www.tpl.org/tier3_cd.cfm?content_item_id=11428&folder_id=175

Publication: World Cities Summit Edition of ETHOS

The Arab Spring Five Years Later

The dilemma felt by Arab youth was captured in Tunisia by the self-immolation in 2010 of Mohamed Bouazizi, who was frustrated by restrictions on his small street-vending business. His death became the catalyst that seemed to light up revolts throughout the Middle East. The frustration had been building for some time:  large segments of society…


The Arab Spring Five Years Later: Vol 2

Volume 1 of The Arab Spring Five Years Later is based on extensive research conducted by scholars from a variety of backgrounds, including many associated with the Japan International Cooperation Agency. Now the original research papers are gathered in volume 2 and are available for readers who wish to go even further in understanding the…


The Arab Spring Five Years Later: Vol. 1 & Vol. 2

This two-volume set explores in-depth the economic origins and repercussions of the Arab Spring revolts. Volume 1 of The Arab Spring Five Years Later is based on extensive research conducted by scholars from a variety of backgrounds, including many associated with the Japan International Cooperation Agency (JICA). The original research papers are gathered in volume…


The Arab Spring five years later: Toward greater inclusiveness

Five years have passed since the self-immolation of Mohamed Bouazizi in Tunisia sparked revolts around the Arab world and the beginning of the Arab Spring. Despite high hopes that the Arab world was entering a new era of freedom, economic growth, and social justice, the transition turned out to be long and difficult, with the…


U.S. Productivity Growth: An Optimistic Perspective


ABSTRACT

Recent literature has expressed considerable pessimism about the prospects for both productivity and overall economic growth in the U.S. economy, based either on the idea that the pace of innovation has slowed or on concern that innovation today is hurting job creation. While recognizing the problems facing the economy, this paper offers a more optimistic view of both innovation and future growth: a potential return to the innovation- and employment-led growth of the 1990s. Technological opportunities remain strong in advanced manufacturing, and the energy revolution will spur new investment, not only in energy extraction but also in the transportation sector and in energy-intensive manufacturing. Education, health care, infrastructure (construction) and government are large sectors of the economy that have historically lagged behind in productivity growth. This is not because of a lack of opportunities for innovation and change, but because of weak incentives for change and because of institutional rigidity.


Publication: International Productivity Monitor

Why Isn’t Disruptive Technology Lifting Us Out of the Recession?


The weakness of the economic recovery in advanced economies raises questions about the ability of new technologies to drive growth. After all, in the years since the global financial crisis, consumers in advanced economies have adopted new technologies such as mobile Internet services, and companies have invested in big data and cloud computing. More than 1 billion smartphones have been sold around the world, making it one of the most rapidly adopted technologies ever. Yet nations such as the United States that lead the world in technology adoption are seeing only middling GDP growth and continue to struggle with high unemployment.

There are many reasons for the restrained expansion, not least of which is the severity of the recession, which wiped out trillions of dollars of wealth and more than 7 million US jobs. Relatively weak consumer demand since the end of the recession in 2009 has restrained hiring and there are also structural issues at play, including a growing mismatch between the increasingly technical needs of employers and the skills available in the labor force. And technology itself plays a role: companies continue to invest in labor-saving technologies that reduce demand for less-skilled workers.

So are we witnessing a failure of technology? Our answer is "no." Over the longer term, in fact, we see that technology continues to drive productivity and growth, a pattern that has been evident since the Industrial Revolution; steam power, mass-produced steel, and electricity drove successive waves of growth, which have continued into the 21st century with semiconductors and the Internet. Today, we see a dozen rapidly evolving technology areas that have the potential for economic disruption in the next decade. They fall into four groups: IT and how we use it; machines that work for us; energy; and the building blocks of everything (next-gen genomics and synthetic biology).

Wide ranging impacts

These disruptive technologies not only have potential for economic impact—hundreds of billions per year and even trillions for the applications we have sized—but also are broad-based (affecting many people and industries) and have transformative effects: they can alter the status quo and create opportunities for new competitors.

While these technologies will contribute to productivity and growth, we must look at economic impact in a broader sense, which includes measures of surplus created and value shifted (for instance, from producers to consumers, which has been a common result of Internet adoption). Consider autonomous vehicles—cars and trucks that can proceed from point A to point B with little or no human intervention. The largest economic impact we sized for autonomous vehicles is the enormous benefit to consumers that may be possible by reducing accidents caused by human error by 70 to 90 percent. That could translate into hundreds of billions a year in economic value by 2025.
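To make the sizing logic explicit, here is a deliberately stylized back-of-the-envelope sketch. Every input below is a placeholder assumption, not a figure from this article; only the 70 to 90 percent accident-reduction range comes from the discussion above.

```python
# Stylized sketch of the impact-sizing arithmetic for autonomous vehicles.
# The $500 billion annual accident-cost figure is a placeholder assumption,
# NOT an estimate from this article; only the 70-90% reduction range is.

def accident_savings_range(annual_accident_cost, low=0.70, high=0.90):
    """Annual value of accidents avoided, over a reduction-rate range."""
    return annual_accident_cost * low, annual_accident_cost * high

low, high = accident_savings_range(500e9)  # assume $500B/yr in accident costs
print(f"~${low / 1e9:.0f}B to ${high / 1e9:.0f}B per year")
```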

Predicting how quickly even the most disruptive technologies will affect productivity is difficult. When the first commercial microprocessor appeared there was no such thing as a microcomputer—marketers at Intel thought traffic signal controllers might be a leading application for their chip. Today we see that social technologies, which have changed how people interact with friends and family and have provided new ways for marketers to connect with consumers, may have a much larger impact as a way to raise productivity in organizations by improving communication, knowledge-sharing, and collaboration.

There are also lags and displacements as new technologies are adopted and their effects on productivity are felt. Over the next decade, advances in robotics may make it possible to automate assembly jobs that require more dexterity than machines have so far provided, or that have been assumed to be more economical to carry out with low-cost labor. Advances in artificial intelligence, big data, and user interfaces (e.g., computers that can interpret ordinary speech) make it possible to automate many knowledge worker tasks.

More good than bad

There are clearly challenges for societies and economies as disruptive technologies take hold, but the long-term effects, we believe, will continue to be higher productivity and growth across sectors and nations. In earlier work, for example, we looked at the relationship between productivity and employment, which are generally believed to be in conflict (i.e., when productivity rises, employment falls). And clearly, in the short term this can happen as employers find that they can substitute machinery for labor—especially if other innovations in the economy do not create demand for labor in other areas. However, if you look at the data for productivity and employment for longer periods—over decades, for example—you see that productivity and job growth do rise in tandem.

This does not mean that labor-saving technologies do not cause dislocations, but they also eventually create new opportunities. For example, the development of highly flexible and adaptable robots will require skilled workers on the shop floor who can program these machines and work out new routines as requirements change. And the same types of tools that can be used to automate knowledge worker tasks such as finding information can also be used to augment the powers of knowledge workers, potentially creating new types of jobs.

Over the next decade it will become clearer how these technologies will be used to raise productivity and growth. There will be surprises along the way—when mass-produced steel became practical in the 19th century nobody could predict how it would enable the automobile industry in the 20th. And there will be societal challenges that policy makers will need to address, for example by making sure that educational systems keep up with the demands of the new technologies.

For business leaders the emergence of disruptive technologies can open up great new possibilities and can also lead to new threats—disruptive technologies have a habit of creating new competitors and undermining old business models. Incumbents will want to ensure their organizations continue to look forward and think long-term. Leaders themselves will need to know how technologies work and see to it that tech- and IT-savvy employees are included in every function and every team. Businesses and other institutions will need new skill sets and cannot assume that the talent they need will be available in the labor market.

Publication: Yahoo! Finance

Alternative methods for measuring income and inequality


Editor’s note: The following remarks were prepared and delivered by Gary Burtless at a roundtable sponsored by the American Tax Policy Institute on January 7, 2016. Video of Burtless’ remarks is also available on the Institute’s website.

We are here to discuss income inequality, alternative ways to evaluate its size and trend over time, and how it might be affected by tax policy.  My job is to introduce you to the problem of defining income and to show how the definition affects our understanding of inequality.

To eliminate suspense from the start: Nothing I am about to say undermines the popular narrative about recent inequality trends.  For the past 35 years, U.S. inequality has increased.  Inequality has increased noticeably, no matter what income definition you care to use.  A couple of things you read in the newspaper are untrue under some income definitions. For example, under a comprehensive income definition it is false to claim that all the income gains of the past 2 or 3 decades have gone to the top 1 percent, or the top 5 percent, or the top 10 percent of income recipients.  Middle- and low-income Americans have managed to achieve income gains, too, as we shall see.

Tax policy certainly affects overall inequality, but I shall leave it for Scott, David, and Tracy to take that up. Let me turn to my main job, which is to distinguish between different reasonable income measures.

The crucial thing to know is that contradictory statements can be made about some income trends because of differences in the definition of income. In general, the most pessimistic statements about trends rely on an income definition that is restrictive in some way. The definition may exclude important income items, for example, items that tend to equalize or boost family incomes. Or the definition may leave out adjustments to income, adjustments that tend to boost the rate of income gain for low- or middle-income recipients, but not for top-income recipients.

The narrowest income definition commonly used to evaluate income trends is Definition #1 in my slide, “pretax private, cash income.”  Columnists and news reporters are unknowingly using this income definition when they make pronouncements about the income share of the “top 1 percent.”  The data about income under this definition are almost always based on IRS income tax returns, supplemented with a bit of information from the Commerce Department’s National Income and Product Account (NIPA) data file.

The single most common income definition used to assess income trends and inequality is the Census Bureau’s “money income” definition, Definition #2 on the slide.  It is just the same as the first definition I mentioned, except this income concept also includes government cash transfer payments – Social Security, unemployment insurance, cash public assistance, Veterans’ benefits, etc.

A slightly more expansive definition (#3) also adds food stamp (or SNAP) benefits plus other government benefits that are straightforward to evaluate. Items of this kind include the implicit rent subsidy low-income families receive in publicly-subsidized housing, school lunch subsidies, and means-tested home heating subsidies.

Now we come to subtractions from income. These typically reflect families’ tax obligations. The Census Bureau makes estimates of state and federal income tax liabilities as well as payroll taxes owed by workers (though not by their employers). Since income and payroll taxes subtract from the income available to pay for other stuff families want to buy, it seems logical to also subtract them from countable income. This is done under income Definition #4. Some tax provisions – notably the Earned Income Credit (EIC) – are in fact subtractions from taxes owed, which poses no problem for families that still owe positive taxes to the government. However, the EIC is refundable, meaning that some families have negative tax liabilities: the government owes them money. In this case, if you do not take taxes into account you understate low-income families’ incomes, even as you overstate the net incomes available to middle- and high-income families.
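As a minimal sketch of that arithmetic (the function name and dollar amounts below are hypothetical, invented for illustration; this is not Census Bureau code):

```python
# Income Definition #4: cash and near-cash income minus income/payroll taxes.
# All dollar figures are hypothetical and chosen only to illustrate the point.

def definition_4_income(cash_income, near_cash_benefits, net_taxes):
    """net_taxes may be negative when refundable credits (e.g., the EIC)
    exceed the taxes a family owes."""
    return cash_income + near_cash_benefits - net_taxes

# Low-wage family with a refundable EIC: ignoring taxes understates income.
print(definition_4_income(18_000, 4_000, -3_000))  # 25,000, not 22,000
# High-income family: ignoring taxes overstates net income.
print(definition_4_income(250_000, 0, 60_000))     # 190,000
```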

Now let’s get a bit more complicated.  Forget what I said about taxes, because our next income definition (#5) also ignores them.  It is an even-more-comprehensive definition of gross or pretax income.  In addition to all those cash and near-cash items I mentioned in Definition #3, Definition #5 includes imputed income items, such as: 

• The value of your employer’s premium contribution to your employee health plan;
• The value of the government’s subsidy to your public health plan – Medicare, Medicaid, state CHIP plans, etc.
• Realized taxable gains from the sale of assets; and
• Corporate income that is earned by companies in which you own a share even though it is not income that is paid directly to you.

This is the most comprehensive income definition of which I am aware that refers to gross or pre-tax income.

Finally we have Definition #6, which subtracts your direct and indirect tax payments.  The only agency that uses this income definition is principally interested in the Federal budget, so the subtractions are limited to Federal income and payroll taxes, Federal corporate income taxes, and excise taxes.

Before we go into why you should care about any of these definitions, let me mention a somewhat less important issue, namely, how we define the income-sharing group over which we estimate inequality.  The most common assessment unit for income included under Definition #1 (“Pre-tax private cash income”) is the Federal income tax filing unit.  Sometimes this unit has one person; sometimes 2 (a married couple); and sometimes more than 2, including dependents.

The Census Bureau (and, consequently, most users of Census-published statistics) mainly uses “households” as reference units, without any adjustment for variations in the size of different households.  The Bureau’s median income estimate, for example, is estimated using the annual “money income” of households, some of which contain 1 person, some contain 2, some contain 3, and so on.

Many economists and sociologists find this unsatisfactory because they think a $20,000 annual income goes a lot farther if it is supporting just one person rather than 12. Therefore, a number of organizations—notably, the Luxembourg Income Study (LIS), the Organisation for Economic Co-operation and Development (OECD), and the Congressional Budget Office (CBO)—assume household income is equally shared within each household, but that household “needs” increase with the square root of the number of people in the household. That is, a household containing 9 members is assumed to require 1½ times as much income to enjoy the same standard of living as a family containing 4 members. After an adjustment is made to account for the impact of household size, these organizations then calculate inequality among persons rather than among households.
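As a rough sketch of that adjustment (a minimal illustration of the square-root scale, not actual LIS, OECD, or CBO code; the dollar figures are hypothetical):

```python
# Square-root equivalence scale: household "needs" grow with sqrt(size).
from math import sqrt

def equivalized_income(household_income, household_size):
    """Income per equivalent adult under the square-root scale."""
    return household_income / sqrt(household_size)

# sqrt(9)/sqrt(4) = 3/2, so a 9-person household needs 1.5 times the income
# of a 4-person household to reach the same standard of living.
print(equivalized_income(60_000, 9))  # 20000.0
print(equivalized_income(40_000, 4))  # 20000.0 -> same living standard
```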

How are these alternative income definitions estimated? Who uses them? What do the estimates show? I’ll only consider two or three basic cases.

First, pretax, private, cash income. By far the most famous users of this definition are Professors Thomas Piketty and Emmanuel Saez.  Their most celebrated product is an annual estimate of the share of total U.S. income (under this restricted definition) that is received by the top 1 percent of tax filing units.

Here is their most famous chart, showing the income share of the top 1 percent going back to 1913. (I use the Piketty-Saez estimates that exclude realized capital gains in the calculation of taxpayers’ incomes.) The notable feature of the chart is the huge rise in the top income share between 1970—when it was 8 percent of all pretax private cash income—and last year—when the comparable share was 18 percent.  
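For readers who want the mechanics, here is a sketch of how a top-share statistic of this kind is computed. The incomes below are randomly generated toy data, not the IRS tax-return tabulations that Piketty and Saez actually use.

```python
# Compute the share of total income received by the top 1% of units.
import numpy as np

def top_share(incomes, top_fraction=0.01):
    """Share of total income going to the top `top_fraction` of units."""
    x = np.sort(np.asarray(incomes))           # ascending order
    cutoff = int(len(x) * (1 - top_fraction))  # start of the top group
    return x[cutoff:].sum() / x.sum()

rng = np.random.default_rng(0)
toy_incomes = rng.lognormal(mean=10.5, sigma=0.8, size=100_000)  # toy data
print(f"top 1% share: {top_share(toy_incomes):.1%}")
```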

I have circled one part of the line—between 1986 and 1988—to show you how sensitive their income definition is to changes in the income tax code.  In 1986 Congress passed the Tax Reform Act of 1986 (TRA86). By 1988 the reform was fully implemented.  Wealthy taxpayers noticed that TRA86 sharply reduced the payoff to holding corporate earnings inside a separately taxed corporate entity. Rich business owners or shareholders could increase their after-tax income by arranging things so their business income was taxed only once, at the individual level.  The result was that a lot of income, once earned by and held within corporations, was now passed through to the tax returns of rich individual taxpayers. These taxpayers appeared to enjoy a sudden surge in their taxable incomes between 1986 and 1988.  No one seriously believes rich people failed to get the benefits of this income before 1987.  Before 1987 the same income simply showed up on corporate rather than on individual income tax returns.

A final point:  The chart displayed in SLIDE #6 is the source of the widely believed claim that U.S. inequality is nowadays about the same as it was at the end of the Roaring 1920s, before the Great Depression.  That is close to being true – under this income definition.

Census “money income”: This income definition is very similar to the one just discussed, except that it includes cash government transfer payments.  The producer of the series is the Census Bureau, and its most famous uses are to measure trends in real median household income and the official U.S. poverty rate. Furthermore, the Census Bureau uses the income definition to compile estimates of the Gini coefficient of household income inequality and the income shares received by each one-fifth of households, ranked from lowest to highest income, and received by the top 5 percent of households.

Here is a famous graph based on the Bureau’s “median household income” series. I have normalized the historical series using the 1999 real median income level (1999 and 2000 were the peak income years according to Census data). Since 1999 and 2000, median income has fallen about 10 percent. If we accept this estimate without qualification, it certainly represents bad news for the living standards of the nation’s middle class. That conclusion is contradicted, however, by other government income statistics that use a broader, more inclusive income definition.

And here is the Bureau’s most widely cited distributional statistic (after its “official poverty rate” estimate).  Since 1979, the Gini coefficient has increased 17 percent under this income definition. (It is worth noting, however, that the portion of the increase that occurred between 1992 and 1993 is mainly the result of methodological changes in the way the Census Bureau ascertained incomes in its 1994 income survey.)
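For reference, here is a minimal sketch of how a Gini coefficient is computed from household incomes; the values below are toy numbers, not Census estimates.

```python
# Gini coefficient from sorted incomes (0 = perfect equality, 1 = maximal).
import numpy as np

def gini(incomes):
    x = np.sort(np.asarray(incomes, dtype=float))  # ascending order
    n = len(x)
    ranks = np.arange(1, n + 1)
    # Standard formula: G = 2*sum(i*x_i) / (n*sum(x)) - (n+1)/n
    return 2 * (ranks * x).sum() / (n * x.sum()) - (n + 1) / n

print(gini([20_000, 40_000, 60_000, 80_000, 200_000]))  # 0.40
```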

When you hear U.S. inequality compared with that in other rich countries, the numbers are most likely based on calculations of the LIS or OECD. Their income definition is basically “cash and near-cash public and private income minus income and payroll taxes owed by households.” Under this income definition, the U.S. looks very unequal, and America appears to have an exceptionally high poverty rate. U.S. inequality has been rising under this income definition, as indeed has also been the case in most other rich countries. The increase in the United States has been above average, however, helping us to retain our leadership position, both in income inequality and in relative poverty.

We turn last to the most expansive income definition:  CBO’s measure of net after-tax income.  I will use CBO’s tabulations using this income definition to shed light on some of the inequality and living standard trends implied by the narrower income definitions discussed above.

Let’s consider some potential limitations of a couple of those definitions.  The limitations do not necessarily make them flawed or uninteresting.  They do mean the narrower income measures cannot tell us some of the things that users claim they tell us.

An obvious shortcoming of the “cash pretax private income” definition is that it excludes virtually everything the government does to equalize Americans’ incomes. Believe it or not, the Federal tax system is mildly progressive. It claims a bigger percentage of the (declared) incomes of the rich than it does of the incomes of middle-income families and especially of the poor. Any pretax income measure will miss that redistribution.

More seriously, it excludes all government transfer payments.  You may think the rich get a bigger percentage of their income from government handouts compared with middle class and poorer households.  That is simply wrong.  The rich get a lot less.  And the percentage of total personal income that Americans derive from government transfer payments has gone way up over the years.  In the Roaring 1920s, Americans received almost nothing in the form of government transfers. Less than 1 percent of Americans’ incomes were received as transfer payments.  By 1970—near the low point of inequality according to the Piketty-Saez measure—8.3 percent of Americans’ personal income was derived from government transfers.  Last year, the share was 17 percent. None of the increase in government transfers is reflected in Piketty and Saez’s estimates of the trend in inequality.  Inequality is nowadays lower than it was in the late 1920s, mainly because the government does more redistribution through taxes and transfers.

Both the Piketty-Saez and the Census “money income” statistics are affected by the exclusion of government- and employer-provided health benefits from the income definition. This slide contains numbers, starting in 1960, that show the share of total U.S. personal consumption consisting of personal health care consumption.  I have divided the total into two parts. The first is the share that is paid for out of our own cash incomes (the blue part at the bottom).  This includes our out-of-pocket spending for doctors’ charges, hospital fees, pharmaceutical purchases, and other provider charges as well as our out-of-pocket spending on health insurance premiums. The second is the share of our personal health consumption that is paid out of government subsidies to Medicare, Medicaid, CHIP, etc., or out of employer subsidies to employee health plans (the red part). 

As everyone knows, the share of total consumption that consists of health consumption has gone way up.  What few people recognize is that the share that is directly paid by consumers—through payments to doctors, hospitals, and household health insurance premium payments—has remained unchanged.  All of the increase in the health consumption share since 1960 has been financed through government and employer subsidies to health insurance plans. None of those government or employer contributions is counted as “income” under the Piketty-Saez and Census “money income” definitions.  You would have to be quite a cynic to claim the subsidies have brought households no living standard improvements since 1960, yet that is how they are counted under the Piketty-Saez and Census “money income” definitions.

Final slide: How much has inequality gone up under income definitions that count all income sources and subtract the Federal income, payroll, corporation, and excise taxes we pay?  CBO gives us the numbers, though unfortunately its numbers end in 2011.

Here are CBO’s estimates of real income gains between 1979 and 2011.  These numbers show that real net incomes increased in every income category, from the very bottom to the very top.  They also show that real incomes per person have increased much faster at the top—over on the right—than in the middle or at the bottom—over on the left.  Still, contrary to a common complaint that all the income gains in recent years have been received by folks at the top, the CBO numbers suggest net income gains have been nontrivial among the poor and middle class as well as among top income recipients.

Suppose we look at trends in the more recent past, say, between 2000 and 2011. The lower panel in this slide presents a very different picture from the one implied by the Census Bureau’s “money income” statistics. Unlike the “money income” numbers [SLIDE #9], these show that inequality has declined since 2000. Unlike the “money income” numbers [SLIDE #8], these show that incomes of middle-income families have improved since 2000. There are a variety of explanations for the marked contrast between the Census Bureau and CBO numbers. But a big one is the differing income definitions the two conclusions are based on. The more inclusive measure of income shows faster real income gains among middle-income and poorer households, and it suggests a somewhat different trend in inequality.



Job gains even more impressive than numbers show


I came across an interesting chart in yesterday’s Morning Money tipsheet from Politico that struck me as something that sounded intuitively correct but was, in fact, not. It's worth a comment on this blog, which has served as a forum for discussion of jobs numbers throughout the recovery.

Between last week’s BLS employment report and last night’s State of the Union, we’ve heard a lot about impressive job growth in 2015. For my part, I wrote on this blog last week that the 2.6 million jobs created last year made 2015 the second-best calendar year for job gains of the current recovery.

The tipsheet’s "Chart of the Day," however, suggested that job growth in 2015 was actually lower-than-average if we adjust for the change in the size of the labor force. This is what was in the tipsheet from Politico:


CHART OF THE DAY: NOMINAL JOB GROWTH — Via Hamilton Place Strategies: "Adjusting jobs data to account for labor force shifts can help shed some light on voters' economic angst, even as we see good headline statistics. … Though 2015 was a good year in terms of job growth during the current recovery and had higher-than-average job growth as compared to recent recoveries, 2015 actually had lower-than-average job growth if we adjust for the change in the size of the labor force." http://bit.ly/1OnBXSm


I decided to look at the numbers.

The authors propose that we should "scale" reported job gains by the number of workers, which at first seems to make sense. Surely, an increase in monthly employment of 210,000 cannot mean the same thing when there are already 150 million employed people as when there are just 75 million employed people.

But this intuition is subtly wrong for a simple reason: the age structure of the population may also differ in the two situations I have just described. Suppose that when there are 75 million employed people, the population of 20-to-64 year-olds is growing by 300,000 every month. Suppose also that when there are 150 million employed people, the population of 20-to-64 year-olds is shrinking by 100,000 per month.

Most informed observers would say that job growth of 210,000 a month is much more impressive under the second set of assumptions than under the first, even though the number of employed people in the second scenario is twice as high.

BLS estimates show that in the seven years from December 2008 to December 2015, the average monthly growth in the 16-to-64 year-old (noninstitutionalized) U.S. population was 85,200. That is the lowest average growth of the working-age population going back to at least 1960. Here are the numbers:

Once we scale the monthly employment gain by the growth in the working-age population, the growth of jobs in recent years has been more impressive—not less—than suggested by the raw monthly totals. Gains in employer payrolls have far surpassed the growth in the number of working-age Americans over the past five years.
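To make the adjustment concrete, here is a minimal sketch of the scaling, assuming it amounts to dividing net monthly job gains by net monthly growth in the working-age population. The function name and the treatment of a flat or shrinking population are my own choices for illustration; the figures plugged in are the ones quoted above.

```python
# Illustrative only: scaling monthly job gains by working-age population growth.

def jobs_per_new_working_age_person(job_gain: float, pop_growth: float) -> float:
    """Net jobs added per net new working-age person in the same month.

    A ratio above 1.0 means employment is growing faster than the pool of
    potential workers. When that pool is flat or shrinking, any job gain
    outpaces it, so we report infinity instead of a meaningless negative.
    """
    if pop_growth <= 0:
        return float("inf")
    return job_gain / pop_growth

# Hypothetical scenario 1: working-age population growing 300,000 per month.
print(jobs_per_new_working_age_person(210_000, 300_000))   # 0.7: jobs lag population

# Hypothetical scenario 2: working-age population shrinking 100,000 per month.
print(jobs_per_new_working_age_person(210_000, -100_000))  # inf: jobs far outpace it

# BLS-based figure cited above: the 16-to-64 population grew 85,200 per month
# on average from December 2008 to December 2015.
print(jobs_per_new_working_age_person(210_000, 85_200))    # ~2.5
```

By this yardstick, 210,000 jobs a month against growth of 85,200 working-age people a month works out to roughly two and a half jobs per new potential worker, which is the sense in which recent gains look more impressive, not less.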

Headline writers have been impressed by recent job gains because the job gains have been impressive.


Remaking urban transportation and service delivery

Major changes are taking place in urban transportation and service delivery. There are shifts in car ownership, the development of ride-sharing services, investments in autonomous vehicles, the use of remote sensors for mobile applications, and changes in package and service delivery. New tools are being deployed to transport people, deliver products, and respond to a…


AI, predictive analytics, and criminal justice

As technology becomes more sophisticated, artificial intelligence (AI) is permeating new parts of society and being used in criminal justice to assess risks for those in pre-trial or on probation. Predictive analytics raise several questions concerning bias, accuracy, and fairness. Observers worry that these tools replicate injustice and lead to unfair outcomes in pre-trial…


The free-world strategy progressives need


U.S. manufacturing may depend on automation to survive and prosper


Can this sector be saved? We often hear sentiments like: "Does America still produce anything?" and "The good jobs in manufacturing have all gone." There is nostalgia for the good old days when there were plentiful well-paid jobs in manufacturing. And there is anger that successive U.S. administrations of both parties have negotiated trade deals, notably NAFTA and the admission of China into the World Trade Organization, that have undercut America's manufacturing base.

Those on the right suggest that if burdensome regulations were lifted, this would fire up a new era of manufacturing prowess. On the left, it is claimed that trade agreements are to blame and that, at the very least, we should not sign any more of them. Expanding union power and recruitment is another favorite solution. Despite his position on the right, Donald Trump has joined those on the left in blaming China for manufacturing’s problems.

What is the real story and what needs to be done to save this sector? The biggest factor transforming manufacturing has been technology; and technology will largely determine its future.

Disappearing jobs

Employment in the manufacturing sector declined slowly through the 1980s and 1990s, but since 2000 the decline has been much faster, with employment falling by over 6 million workers between 2000 and 2010. There were hopes that manufacturing jobs would regain much of their lost ground once the recession ended, but the number of jobs has climbed by less than a million in the recovery so far, and employment has been essentially flat since the first quarter of 2015. Manufacturing used to be a road to the middle class for millions of workers with just a high school education, but that road is much narrower today—more like a footpath. In manufacturing’s prime, although not all jobs were good jobs, many were well paid and offered excellent fringe benefits. Now there are many fewer of these.

Sustained but slow output growth

The real output of the manufacturing sector from 2000 to the present gives a somewhat more optimistic view, with output showing positive trend growth interrupted by sharp cyclical downturns. Manufacturing production peaked in 2000 with the boom in technology goods, most of which were still being produced in the U.S. But despite the technology bust and the shift of much high-tech manufacturing overseas, real output in the sector in 2007 was still nearly 11 percent higher than at its 2000 peak.

Production fell in the Great Recession at a breathtaking pace, dropping by 24 percent starting in the third quarter of 2008. Manufacturing companies were hit by a bomb that wiped out a quarter of their output. Consumers were scared and postponed the purchase of anything they did not need right away. The production of durable goods, like cars and appliances, fell even more than the total. Unlike employment in the sector, output has reclaimed its previous peak and, by the third quarter of 2015, was 3 percent above it. The auto industry has recovered particularly strongly. While manufacturing output growth is not breaking any speed records, it is positive.

Understanding the pattern

The explanation for the jobs picture is not simple, but the CliffsNotes version is as follows: manufacturing employment has been declining as a share of total economy-wide employment for 50 years or more—a pattern that holds for all advanced economies, even Germany, a country known for its manufacturing strength. The most important reason for U.S. manufacturing job loss is that the overall economy is not creating jobs the way it once did, especially in the business sector. This conclusion probably comes as a surprise to most Americans, who believe that international trade, and trade with China in particular, is the key reason for the loss of jobs. In reality, trade is a factor in manufacturing weakness, but not the most important one.

The existence of our large manufacturing trade deficit with Asia means output and employment in the sector are smaller than they would be with balanced trade. Germany, as noted, has also seen manufacturing employment decline, but its manufacturing sector is larger than ours and runs a huge trade surplus. In addition, global economic weakness has recently pushed financial capital into the U.S. in search of safety, raising the value of the dollar and thus hurting our exports. In the next few years, it is unlikely that the U.S. trade deficit will improve—and it may well worsen.

Even though it will not spark a jobs revival, manufacturing is still crucial for the future of the U.S. economy, remaining a center for innovation and productivity growth. If the U.S. trade deficit is to be substantially reduced, manufacturing must become more competitive. The services sector runs a small trade surplus, and new technologies are eliminating our energy trade deficit. Nevertheless, a substantial expansion of manufactured exports is needed if there is to be overall trade balance.

Disruptive innovation in manufacturing

The manufacturing sector is still very much alive and reports of its demise are not just premature but wrong. If we want to encourage the development of a robust competitive manufacturing sector, industry leaders and policymakers must embrace new technologies. The sector will be revived not by blocking new technologies with restrictive labor practices or over-regulation but by installing them—even if that means putting robots in place instead of workers. To speed the technology revolution, however, help must be provided to those whose jobs are displaced. If they end up as long-term unemployed, or in dead-end or low-wage jobs, then not only do these workers lose out but also the benefits to society of the technology investment and the productivity increase are lost.

The manufacturing sector performs 69 percent of all business R&D in the U.S., and that R&D is powering a revolution that will drive growth not only in manufacturing but in the broader economy as well. The manufacturing revolution can be described by three key developments:

  1. In the internet of things, sensors are embedded in machines, transmitting information that allows them to work together and report impending maintenance problems before there is a breakdown.
  2. Advanced manufacturing includes 3-D printing, new materials and the “digital thread” which connects suppliers to the factory and the factory to customers; it breaks down economies of scale allowing new competitors to enter; and it enhances speed and flexibility.
  3. Distributed innovation uses crowdsourcing to find radical solutions to technical challenges much more quickly and cheaply than traditional R&D.

In a June 2015 survey of Fortune 500 CEOs, 72 percent named rapidly changing technology as their number one challenge. That technology churn is especially acute in manufacturing. The revolution is placing heavy demands on managers, who must adapt their businesses to become software companies, big data companies, and even media companies (as they develop a web presence). Value and profit in manufacturing are shifting to digital assets. The gap between current practice and what it takes to be good at these skills is wide for many manufacturers, particularly in their ability to find the talent they need to transform their organizations.

Recent OECD analysis highlighted the large gap between best-practice companies and average companies. Although the gap is smaller in manufacturing than in services because of the heightened level of global competition in manufacturing, it is a sign that manufacturers must learn how to take advantage of new technologies quickly or be driven out of business.

Closing the trade deficit

A glaring weakness of U.S. manufacturing is its international trade performance. Chronic trade deficits have contributed to the sector’s job losses and have required large-scale foreign borrowing that has made us a net debtor to the rest of the world -- to the tune of nearly $7 trillion by the end of 2014. Running up endless foreign debts is a disservice to our children and was one source of the instability that led to the financial crisis. America should try to regain its balance as a global competitor and that means, at the least, reducing the manufacturing trade deficit. Achieving a significant reduction in the trade deficit will be a major task, including new investment and an adjustment of today’s overvalued dollar.

The technology revolution provides an opportunity, making it profitable to manufacture in the U.S. using highly automated methods. Production can be brought home, but it won’t bring back a lot of the lost jobs. Although the revolution in manufacturing is underway and its fate is largely in the hands of the private sector, the policy environment can help speed it up and make sure the broad economy benefits.

First, policymakers must accept that trying to bring back the old days and old jobs is a mistake. Continuing to chase yesterday’s goals isn’t productive, and at this point it only puts off the inevitable. Prioritizing competitiveness, innovativeness, and the U.S. trade position over jobs could be politically difficult, however, so policymakers should look for ways to help workers who lose jobs and communities that are hard hit. Government training programs have a weak track record, but if companies do the training or partner with community colleges, then the outcomes are better. Training vouchers and wage insurance for displaced workers can help them start new careers that will mostly be in the service sector where workers with the right skills can find good jobs, not just dead-end ones.

Second, a vital part of the new manufacturing is the ecosystem around large companies. There were 50,000 fewer manufacturing firms in 2010 than in 2000, with most of the decline among smaller firms. Some of that was inevitable as the sector downsized, but it creates a problem because as large firms transition to the new manufacturing, they rely on small local firms to provide the skills and even the technologies they do not have in-house. The private sector has the biggest stake in developing the ecosystems it needs, but government can and has helped, particularly at the state and local level. Sometimes infrastructure investment is needed, land can be set aside, mentoring programs can be established for young firms, help can be given in finding funding, and simplified and expedited permitting processes instituted.

It is hard to let go of old ways of thinking. Policymakers have been trying for years to restore the number of manufacturing jobs, but that is not an achievable goal. Yes, manufacturing matters; it is a powerhouse of innovation for our economy and a vital source of competitiveness. There will still be good jobs in manufacturing, but the sector is no longer a conveyor belt to the middle class. Policymakers need to focus on speeding up the manufacturing revolution, funding basic science and engineering, and ensuring that tech talent and best-practice companies want to locate in the United States.


The Imperial Presidency Is Alive and Well


Congress pushed out that massive emergency spending bill quickly. Here are four reasons why.


Partisanship in Perspective

Commentators and politicians from both ends of the spectrum frequently lament the state of American party politics, as our elected leaders are said to have grown exceptionally polarized — a change that has led to a dysfunctional government, writes Pietro Nivola. Nivola reexamines the nature and scope of contemporary partisanship, assesses its consequences, and compares the role of political parties today with the partisan divisions that prevailed during the first years of the republic.


The next COVID-19 relief bill must include massive aid to states, especially the hardest-hit areas

Amid rising layoffs and rampant uncertainty during the COVID-19 pandemic, it’s a good thing that Democrats in the House of Representatives say they plan to move quickly to advance the next big coronavirus relief package. Especially important is the fact that Speaker Nancy Pelosi (D-Calif.) seems determined to build the next package around a generous infusion…


Trump’s CDC directive isn’t just a war on words. It’s a war on science.

When it comes to science policy, we should take President Trump at his word. On Friday, the Trump administration prohibited officials at the Centers for Disease Control and Prevention from using seven words and phrases in 2018 budget documents: “vulnerable,” “entitlement,” “diversity,” “transgender,” “fetus,” “evidence-based,” and “science-based.” Public outrage flared up against the Orwellian-style censorship,…


How cities can thrive in the age of Trump

Bill Finan, director of the Brookings Institution Press, discusses “The New Localism: How Cities can Thrive in the Age of Populism” with authors Bruce Katz and Jeremy Nowak. In their book and in the interview, Katz and Nowak explain why cities and the communities that surround them are best suited to address many of the…


Assessing your innovation district: Five key questions to explore

Over the past two decades, a confluence of changing market demands and demographic preferences has led to a revaluation of urban places—and a corresponding shift in the geography of innovation. This trend has resulted in a clustering of firms, intermediaries, and workers—often near universities, medical centers, or other anchors—in dense innovation districts. Local economic development…


Educational equality and excellence will drive a stronger economy

This election taught me two things. The first is obvious: We live in a deeply divided nation. The second, while subtle, is incredibly important: The election was a massive cry for help. People across the country–on both sides of the political spectrum–feel they have been left behind and are fearful their basic needs will continue…


Turkey cannot effectively fight ISIS unless it makes peace with the Kurds


Terrorist attacks with high casualties usually create a sense of national solidarity and patriotic reaction in societies that fall victim to such heinous acts. Not in Turkey, however. Despite a growing number of terrorist attacks by the so-called Islamic State on Turkish soil in the last 12 months, the country remains as polarized as ever under strongman President Recep Tayyip Erdogan.

In fact, for two reasons, jihadist terrorism is exacerbating the division. First, Turkey's domestic polarization already has an Islamist-versus-secularist dimension. Most secularists hold Erdogan responsible for having created domestic political conditions that turn a blind eye to jihadist activities within Turkey.

It must also be said that polarization between secularists and Islamists in Turkey often fails to capture the complexity of Turkish politics, where not all secularists are democrats and not all Islamists are autocrats. In fact, there was a time when Erdogan was hailed as the great democratic reformer against the old secularist establishment under the guardianship of the military.

Yet, in the last five years, the religiosity and conservatism of the ruling Justice and Development Party, also known by its Turkish acronym AKP, on issues ranging from gender equality to public education has fueled the perception of rapid Islamization. Erdogan's anti-Western foreign policy discourse -- and the fact that Ankara has been strongly supportive of the Muslim Brotherhood in the wake of the Arab Spring -- exacerbates the secular-versus-Islamist divide in Turkish society.

The days Erdogan represented the great hope of a Turkish model where Islam, secularism, democracy and pro-Western orientation came together are long gone. Despite all this, it is sociologically more accurate to analyze the polarization in Turkey as one between democracy and autocracy rather than one of Islam versus secularism.

The second reason why ISIS terrorism is exacerbating Turkey's polarization is related to foreign policy. A significant segment of Turkish society believes Erdogan's Syria policy has ended up strengthening ISIS. In an attempt to facilitate Syrian President Bashar Assad's overthrow, the AKP turned a blind eye to the flow of foreign volunteers transiting Turkey to join extremist groups in Syria. Until last year, Ankara often allowed Islamists to openly organize and procure equipment and supplies on the Turkish side of the Syria border.

Making things worse is the widely held belief that Turkey's National Intelligence Organization, or MİT, facilitated the supply of weapons to extremist Islamist elements amongst the Syrian rebels. Most of the links were with organizations such as Jabhat al-Nusra, Ahrar al-Sham and Islamist extremists from Syria's Turkish-speaking Turkmen minority.

Turkey's support for Islamist groups in Syria had another rationale in addition to facilitating the downfall of the Assad regime: the emerging Kurdish threat in the north of the country. Syria's Kurds are closely linked with Turkey's Kurdish nemesis, the Kurdistan Workers' Party, or PKK, which has been conducting an insurgency for greater rights for Turkey's Kurds since 1984.

On the one hand, Ankara has hardened its stance against ISIS by opening the airbase at Incirlik in southern Turkey for use by the U.S.-led coalition targeting the organization with air strikes. On the other hand, Erdogan doesn't fully support the eradication of jihadist groups in Syria. The reason is simple: the Arab and Turkmen Islamist groups are the main bulwark against the expansion of the de facto autonomous Kurdish enclave in northern Syria. The AKP is concerned that the expansion and consolidation of a Kurdish state in Syria would both strengthen the PKK and further fuel similar aspirations amongst Turkey's own Kurds.

Will the most recent ISIS terrorist attack in Istanbul change anything in Turkey's main threat perception? When will the Turkish government finally realize that the jihadist threat in the country needs to be prioritized? If you listen to Erdogan's remarks, you will quickly realize that the real enemy he wants to fight is still the PKK. He tries hard after each ISIS attack to create a "generic" threat of terrorism in which all groups are bundled up together without any clear references to ISIS. He is trying to present the PKK as enemy number one.

Under such circumstances, Turkish society will remain deeply polarized between Islamists, secularists, Turkish nationalists and Kurdish rebels. Terrorist attacks, such as the one in Istanbul this week and the one in Ankara in July that killed more than 100 people, will only exacerbate these divisions.

Finally, it is important to note that the Turkish obsession with the Kurdish threat has also created a major impasse in Turkish-American relations in Syria. Unlike Ankara, Washington's top priority in Syria is to defeat ISIS. The fact that U.S. strategy consists of using proxy forces such as Syrian Kurds against ISIS further complicates the situation.

There will be no real progress in Turkey's fight against ISIS unless there is a much more serious strategy to get Ankara to focus on peace with the PKK. Only after a peace process with Kurds will Turkey be able to understand that ISIS is an existential threat to national security.

This piece was originally posted by The Huffington Post.


Why a proposed HUD rule could worsen algorithm-driven housing discrimination

In 1968 Congress passed and President Lyndon B. Johnson then signed into law the Fair Housing Act (FHA), which prohibits housing-related discrimination on the basis of race, color, religion, sex, disability, familial status, and national origin. Administrative rulemaking and court cases in the decades since the FHA’s enactment have helped shape a framework that, for…


Getting a High Five: Advancing Africa’s transformative agenda

At his swearing in, the new African Development Bank President Akinwumi Adesina set out an agenda for the economic transformation of the continent. Among the five pillars of that agenda—popularly known as the “high fives”—is one that may have surprised many, especially in the donor community: Industrialize Africa. Why the surprise? Beyond supporting improvements in…


Overcoming barriers: Sustainable development, productive cities, and structural transformation in Africa

Against a background of protracted decline in global commodity prices and renewed focus on the Africa rising narrative, Africa is proving resilient, underpinned by strong economic performance in non-commodity exporting countries. The rise of African cities contains the potential for new engines for the continent’s structural transformation, if harnessed properly. However, the susceptibility of Africa’s…


Secular divergence: Explaining nationalism in Europe

Executive summary The doctrine of nationalism will continue eroding Europe’s integration until its hidden cause is recognized and addressed. In order to do so, Europe’s policymakers must acknowledge a new, powerful, and pervasive factor of social and political change: divergence within countries, sectors, jobs, or local communities. The popularity of the nationalist rhetoric should not…


Five Myths About Turning Out the Vote

If you're an upstanding U.S. citizen, you'll stand up and be counted this Election Day, right? Well, maybe not. Just because Americans can vote doesn't mean they do. But who shows up is what decides the tight races, which makes turnout one of the most closely watched aspects of every election -- and one that has fostered a number of myths. Here are five, debunked:

1. Thanks to increasing voter apathy, turnout keeps dwindling.

This is the mother of all turnout myths. There may be plenty of apathetic voters out there, but the idea that ever fewer Americans are showing up at the polls should be put to rest. What's really happening is that the number of people not eligible to vote is rising -- making it seem as though turnout is dropping.

Those who bemoan a decline in American civic society point to the drop in turnout from 55.2 percent in 1972, when 18-year-olds were granted the right to vote, to the low point of 48.9 percent in 1996. But that's looking at the total voting-age population, which includes lots of people who aren't eligible to vote -- namely, noncitizens and convicted felons. These ineligible populations have increased dramatically over the past three decades, from about 2 percent of the voting-age population in 1972 to 10 percent today.

When you take them out of the equation, the post-1972 "decline" vanishes. Turnout rates among those eligible to vote have averaged 55.3 percent in presidential elections and 39.4 percent in midterm elections for the past three decades. There has been variation, of course, with turnout as low as 51.7 percent in 1996 and rebounding to 60.3 percent by 2004. Turnout in the most recent election, in fact, is on a par with the low-60 percent turnout rates of the 1950s and '60s.
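The arithmetic behind the correction is worth making explicit. Here is a minimal sketch, using made-up round numbers rather than the actual counts, of how the same ballots yield different turnout rates depending on whether the denominator is the voting-age population (VAP) or the voting-eligible population (VEP):

```python
# Illustrative only: identical ballots, two different turnout denominators.

def turnout_rate(ballots: float, base_population: float) -> float:
    """Turnout as a share of whichever base population we choose."""
    return ballots / base_population

vap = 100_000_000         # voting-age population (made-up round number)
ineligible_share = 0.10   # noncitizens and ineligible felons, ~10% today per the text
ballots = 49_000_000      # ballots cast (made-up)

vep = vap * (1 - ineligible_share)  # voting-eligible population

print(f"VAP turnout: {turnout_rate(ballots, vap):.1%}")  # 49.0%, looks like decline
print(f"VEP turnout: {turnout_rate(ballots, vep):.1%}")  # 54.4%, among the eligible
```

Because the ineligible share has grown from about 2 percent to about 10 percent, the gap between the two measures has widened over time, which is exactly why the voting-age series shows a spurious decline.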

2. Other countries' higher turnout indicates more vibrant democracies.

You can't compare apples and oranges. Voting rules differ from nation to nation, producing different turnout rates. Some countries have mandatory voting. If Americans were fined $100 for playing voter hooky on Election Day, U.S. participation might increase dramatically. But in fact, many people with a ballot pointed at their head simply cast a blank one or a nonsense vote for Mickey Mouse.

Moreover, most countries have national elections maybe once every five years; the United States has presidential or congressional elections every two years. Frequent elections may lead to voter fatigue. New European Union elections, for instance, seem to be depressing turnout in member countries. After decades of trailing turnout in the United Kingdom, U.S. turnout in 2004 was on a par with recent British elections, in which turnout was 59.4 percent in 2001 and 61.4 percent in 2005.

Americans are asked to vote more often -- in national, state, local and primary contests -- than the citizens of any other country. They can be forgiven for missing one or two elections, can't they? Even then, over the course of several elections, Americans have more chances to participate and their turnout may be higher than that in countries where people vote only once every five years.

3. Negative ads turn off voters and reduce turnout.

Don't be so sure. The case on this one is still open. Negative TV advertising increased in the mid-1980s, but turnout hasn't gone down correspondingly. The negative Swift boat campaign against Sen. John F. Kerry (D-Mass.) apparently did little to depress turnout in the 2004 presidential race.

Some academic studies have found that negative advertising increases turnout. And that's not so surprising: A particularly nasty ad grabs people's attention and gets them talking. People participate when they're interested. A recent GOP attack ad on Rep. Harold E. Ford Jr. (D-Tenn.), a Senate candidate, has changed the dynamic of the race, probably not because it changed minds or dissuaded Democrats, but because it energized listless Republicans.

We'll have to wait to see whether the attack on Ford backfires because voters perceive it as unfair. That's the danger of going negative. So campaigns tend to stick to "contrast ads," in which candidates contrast their records with those of their opponents. When people see stark differences between candidates, they're more likely to vote.

4. The Republican "72-hour campaign" will win the election.

Not necessarily. You can lead citizens to the ballot, but you can't make them vote.

Republicans supposedly have a super-sophisticated last-minute get-out-the-vote effort that identifies voters who'll be pivotal in electing their candidates. Studies of a campaign's personal contact with voters through phone calls, door-to-door solicitation and the like find that it does have some positive effect on turnout. But people vote for many reasons other than meeting a campaign worker, such as the issues, the closeness of the election and the candidates' likeability. Further, these studies focus on get-out-the-vote drives in low-turnout elections, when contacts from other campaigns and outside groups are minimal. We don't know what the effects of mobilization drives are in highly competitive races in which people are bombarded by media stories, television ads and direct mail.

Republican get-out-the-vote efforts could make a difference in close elections if Democrats simply sat on the sidelines. But this year Democrats have vowed to match the GOP mobilization voter for voter. So it'll take more than just knowing whether a prospective voter owns a Volvo or a BMW for Republicans to eke out victory in a competitive race.

5. Making voter registration easier would dramatically increase turnout.

Well, yes and no.

In 1993, the Democratic government in Washington enacted "Motor Voter," a program that allowed people to register to vote when they received their driver's license or visited a welfare office. Democrats thought that if everyone were registered, turnout rates would increase -- by as much as 7 percentage points.

But while many people registered to vote, turnout didn't go up much. Subsequent studies found only small increases in turnout attributable to Motor Voter, perhaps 2 percentage points.

Sizable increases in turnout can be seen in states with Election Day registration, which allows people to register when they vote. This may be related to the fact that lots of people don't make up their minds to vote until Election Day, rather than months in advance when they get a license.

Publication: The Washington Post

The Competitive Problem of Voter Turnout

On November 7, millions of Americans will exercise their civic duty to vote. At stake will be control of the House and Senate, not to mention the success of individual candidates running for office. President Bush's "stay the course" agenda will either be enabled over the next two years by a Republican Congress or knocked off kilter by a Democratic one.

With so much at stake, it is not surprising that the Pew Research Center found that 51 percent of registered voters have given a lot of thought to this November's election. This is higher than in any other recent midterm election, including 1994, when the figure was 44 percent and Republicans took control of the House. If that interest translates into votes, turnout should better the 1994 rate of 41 percent of eligible voters.

There is good reason to suspect that despite the high interest, turnout will not exceed 1994. The problem is that a national poll is, well, a national poll, and does not measure attitudes of voters within states and districts.

People vote when there is a reason to do so. Republican and Democratic agendas are in stark contrast on important issues, but voters also need to believe that their vote will matter in deciding who will represent them. It is here that the American electoral system is broken for many voters.

Voters have little choice in most elections. In 1994, Congressional Quarterly rated 98 House elections competitive. Today, it lists 51. To put it another way, we are already fairly confident of the winner in nearly 90 percent of House races. Although there is no similar tracking for state legislative offices, we know that the number of elections won by less than 60 percent of the vote has fallen since 1994.

The real damage to the national turnout rate is in the large states of California and New York, which together account for 17 percent of the country's eligible voters. Neither state has a competitive Senate or governor's election, and few House or state legislative races are competitive. Compare that to 1994: when Californians participated in competitive Senate and governor's races, the state's turnout was 5 percentage points above the national rate. The same year, New York's competitive governor's race helped boost turnout a point above the national rate.

Lacking stimulation from two of the largest states, turnout boosts will have to come from elsewhere. Texas has an interesting four-way governor's race that might draw infrequent voters to the polls. Ohio's competitive Senate race and some House races might also draw voters. However, in other large states like Florida, Illinois, Michigan and Pennsylvania, turnout will suffer from largely uncompetitive statewide races.

The national turnout rate will likely be less than in 1994 and fall shy of 40 percent. This is not to say that turnout will be poor everywhere. Energized voters in Connecticut get to vote in an interesting Senate race, and three of five Connecticut House seats are up for grabs. The problem is that turnout will be localized in these few areas of competition.

The fault does not lie with the voters; people's lives are busy, and a rational person will abstain when their vote does not matter to the election outcome. The political parties are also sensitive to competition and focus their limited resources where elections are competitive. Television advertising and other mobilizing efforts by campaigns will only be found in competitive races.

The old adage of "build it and they will come" is relevant. All but hardcore sports fans tune out a blowout. Building competitive elections -- and giving voters real choices -- will do much to increase voter turnout in American politics. There are a number of reforms on the table: redistricting to create competitive districts, campaign financing to give candidates equal resources, and even altering the electoral system to fundamentally change how a vote elects representatives. If voters want choice and a government more responsive to their needs, they should consider how these seemingly arcane election procedures have real consequences on motivating them to do the most fundamental democratic action: vote.

Publication: washingtonpost.com

Early Voting: A Live Web Chat with Michael McDonald


Event Information

September 26, 2012
12:30 PM - 1:00 PM EDT

Online Only
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC


Thousands of Americans are already casting their votes in the 2012 elections through a variety of vote-by-mail and in-person balloting options that allow citizens to cast their votes well in advance of November 6. From military personnel posted overseas to absentee voters, early voting lets voters make their voices heard even when they can’t stand in line on Election Day. However, there are pitfalls in the process.

Expert Michael McDonald says that while a great deal of attention has been focused on voter fraud, the untold story is that during the last presidential election, some 400,000 absentee ballots were discarded as improperly submitted. How can early voters make sure their voices are heard? What effect will absentee and other early voting programs have in this election year? On September 26, McDonald took your questions and comments in a live web chat moderated by Vivyan Tran of POLITICO.

12:30 Vivyan Tran: Welcome everyone, let's get started.

12:30 Michael McDonald: Early voting was 30% of all votes cast in the 2008 election. My expectation is that 35% of all votes in 2012 will be cast prior to Election Day. In some states, the volume will be much higher. In the battleground state of CO, about 85% of the votes will be cast early; 70% in FL; and 45% in Ohio.

What does it all mean? Hopefully I will be able to answer that question in today's chat!

12:30 Comment from JMC: At what point do you think that the in person early voters become less partisan types eager to cast their vote and more "regular folks" who would be more swayed by debate performances, TV ads, and the like?

12:30 Comment from Jason: 400,000 absentee ballots were discarded in 2008? How?

12:30 Michael McDonald: Reasons why election officials reject mail ballots: unsigned, envelope not sealed, multiple ballots in one envelope, etc. 400K rejected in 2008 does not include the higher rate of spoiled ballots that typically occur with paper mail ballots compared to electronic recording devices used in polling places. Moral: make sure you follow closely the proper procedures to cast your mail ballot!

12:31 Michael McDonald: @JMC: If they are going to vote early, most people wait until the week prior to the election. Those voting now have already made up their minds. But, the polls indicate many people have already done so, so maybe we see more early voting in 2012 as a consequence.

12:31 Comment from User: It was my understanding that absentee ballots are never counted unless the race is incredibly close in a particular state? Is that true - or do the rules for that vary by state?

12:32 Michael McDonald: No, all early votes are counted. What may not be counted, depending on state law and if the election is close enough for them to matter, are provisional ballots.

12:33 Comment from Damion: The blurb here says 400,000 early votes were discarded. Shouldn't the board of elections be reprimanded for that? Who was at fault and what consequences were there?

12:33 Michael McDonald: No, these are ballots "discarded" because people did not follow proper procedures and they must be rejected by law.

12:33 Comment from Shirley: Can you Facebook your vote in?

12:34 Michael McDonald: No. However, election officials are transmitting ballots electronically to overseas citizens and military voters. Voters must print the ballot, fill it out, sign it, scan it, and return it. There are ways for these voters to verify that their ballot was received.

12:35 Comment from Karen K: What kind of impact could these discards have on the 2012 election?

12:36 Michael McDonald: Difficult to say. More Republicans vote by mail (excluding all mail ballot states). But, we don't know much about those who fail to follow the procedures. They might be less educated or elderly, and thus might counter the overall trend we see in mail balloting. Who knows?

12:37 Comment from User: This is the first I've heard of so many early votes getting discarded. Is this an issue people are addressing in a serious way?

12:38 Michael McDonald: Unfortunately, we are too focused on issues like voter fraud, which are low occurrence events, when there are many more important ways in which votes are lost in the system. Hopefully we can get the message out so fewer people disenfranchise themselves.

12:39 Comment from Anonymous: What do we know so far about absentee votes for 2012? Can we tell who they're leaning toward in specific states and how?

12:40 Michael McDonald: It's a little early :) yet. One of the major changes from 2008 is that the overseas civilian ballots -- a population that leans D -- was sent ballots much earlier this year than in 2008. We'll get a much better sense of the state of play in the two weeks prior to the election.

12:41 Michael McDonald: That said, the number of absentee ballot requests is running about the same as in 2008, if not a little higher, suggesting that the early vote will indeed be higher than in 2008, and perhaps that overall turnout will be on par with 2008, too.

12:41 Comment from Leslie: So, how can I ensure my early ballot is counted? There are so many rules and regulations, I'm never sure I've brought/filled out the paperwork.

12:42 Michael McDonald: Many states and localities allow people to check on-line the status of their ballot. Do a search for your local election official's webpage to see if that is available to you.

12:42 Comment from Daryyl: Can you define provisional ballots then?

12:44 Michael McDonald: Provisional ballots are required under federal law to allow people to vote if there is a problem with their voter registration. Election officials work after the election to resolve the situation.

If you vote in-person early, then you can resolve provisional ballot situations much sooner, which is good.

12:45 Michael McDonald: Some states use provisional ballots for other purposes: e.g., for a person who does not have the required id or to manage a change in voter registration address. One of the untold stories of this cycle is that FL will manage change of reg. address through provisional ballots. OH does so, and 200K provisionals were cast in 2008. Expect 300K in FL, which may mean we will not know the outcome in FL until weeks after the election. Can you say 2000?

12:45 Comment from Mark, Greenbelt: Is early voting a new phenomenon, or is it increasing? It seems we should make it easier for people to vote when they can.

12:46 Michael McDonald: We are seeing more people vote early, particularly in states that offer the option. However, only MD changed its law from 2008 to allow in-person early voting. OH is sending absentee ballot requests to all registered voters, which is not a change in law, but a change in procedure that is expected to significantly increase early voting there.

12:47 Comment from Jennifer S. : Why do we vote on Tuesday? It seems inconvenient. Wouldn't more people vote if we did it on the weekend? Or over a period of days that offered both morning and evening hours?

12:48 Michael McDonald: We used to have early voting in the US! Back at the Founding, elections were held over several days to allow people living in remote areas to get to the courthouse (the polling place back in the day) to vote. In the mid-1840s, the federal gov't set the current single day for voting because -- what else? -- claims of vote fraud: that people could vote more than once.

12:49 Comment from Winston: What percentage of the U.S. population votes? And, if you could make one change that would increase voting in the U.S. what would be?

12:50 Michael McDonald: I also calculate turnout rates for the country for the media and academics. 62.2% of the eligible voters cast a ballot that counted in 2008. If I were to wave a magic wand, I would have election day registration. California just adopted it yesterday (but starting 2015). States with EDR have +5-7 percentage points of turnout.

12:50 Comment from Bernie S.: One of your colleagues at Brookings, Bill Galston, has suggested that we make voting mandatory, as they do in Australia. What do you think of that idea? Is it even possible here?

12:51 Michael McDonald: That will never happen in a country that values individual freedom so deeply as the US. Fun fact: a few years back, AZ voters rejected a ballot initiative to have voters entered into a lottery.

12:51 Comment from James: If early voting becomes more and more common, shouldn't candidates start campaigning earlier?

12:53 Michael McDonald: They do. In fact, you will see the presidential candidates visit battleground states that have in-person early voting at the start of the period. In 2008, you could see how early voting increased in places where Obama held rallies.

12:53 Comment from Devi P. : What are the factors that drive turnout? How do we get people to the polls? And what can you say about the "microtargeting" strategies the political parties are using to get their voters out?

12:54 Michael McDonald: One of the major ways in which elections have changed in the past decade is that campaigns now place more effort into voter contacts. Over 50% of people reported a contact in 2008. These contacts are known to increase turnout rates by upwards of 10 percentage points. Even contacts from Facebook friends seem to matter!

12:54 Comment from Wendy P, Ohio: What's your position on electronic voting? Can't every voting machine be hacked? Isn't plain old paper balloting more secure?

12:56 Michael McDonald: I went to Caltech, so I am sensitive to the potential for hacking. That said, I encourage experimentation so that we can build a better system. There are counties that do hold electronic elections!

12:56 Comment from Leslie: 400,000 seems like a lot - does this actually have impact on the electoral votes, and if so, should we be worried in this coming election that a lengthy recall may occur?

12:57 Michael McDonald: It could affect the outcome. So please spread the word through your networks. This is the #1 way in which votes are lost in the system!

12:57 Comment from JVotes: Perhaps we should microtarget with ballot issues. Many Americans seem disappointed with the two candidates we have to choose from.

12:58 Michael McDonald: Actually, ballot issues are known to increase turnout. But only a small amount in a presidential election, about 1 percentage point. People vote in the main show: the presidential election.

12:58 Michael McDonald: Interesting aside on that: early voting seems to have a small turnout effect in presidential election, but a larger effect in state and local elections.

12:58 Comment from Jaime Ravenet: Is there a reading of the new voter ID requirements (in at least the 9 most contested states) that does not constitute an "abridgment" of citizens' voting rights?

1:00 Michael McDonald: Perhaps under state constitutions. But the US Supreme Court has already ruled in favor of Indiana's id law. Still, that does not mean that lawyers will try to find some way under federal law to overturn them. TX was blocked because their law was determined to be discriminatory, per Sec. 5 of the Voting Rights Act.

1:00 Vivyan Tran: Thanks for the questions everyone, see you next week!


Technology Transfer: Highly Dependent on University Resources


Policymakers at the federal, state, and local levels are placing great faith in innovation as a driver of economic growth and job creation. In the knowledge economy, universities have been called on to play a central role as knowledge producers. Universities are actively seeking to accommodate those public demands, and many have undertaken ongoing reviews of their educational programs and research portfolios to make them more attuned to industrial needs. Technology transfer is one function that universities are seeking to make more efficient in order to better engage with the economy.

By law, universities can elect to take title to patents arising from federally funded research and then license them to the private sector. For years, the dominant model of technology transfer has been to market university patents with commercial promise to prospective partners in industry. Under this model, very few universities have been able to command high licensing fees, while the vast majority have never won the lottery of a “blockbuster” patent. Most technology transfer offices are cost centers for their universities.

However, upon closer inspection, the winners of this apparent lottery seem to be an exclusive club. Over the last decade, only 37 universities have rotated through the top 20 of the licensing-revenue ranking. What is more, 5 of the top 20 were barely covering the expenses of their tech transfer offices; the rest of the universities were not even making ends meet.[i] It may seem that the blockbuster patent lottery is rigged. See more detail in my Brookings report.

That appearance is due to the fact that landing a patent of high commercial value depends heavily on the resources available to universities. Federal research funding is a good proxy for those resources. Figure 1 below shows federal funding side by side with the net operating income of tech transfer offices. If high licensing revenues are a lottery, then it is one in which only universities with the highest federal funding can participate. Commercial patents may require a critical mass of investment to build the capacity to produce breakthrough discoveries that are, at the same time, mature enough for private investors to take an interest.

Figure 1. A rigged lottery?

High federal research funding is the ticket to enter the blockbuster patent lottery

Source: Author’s elaboration of AUTM data (2013) [ii]
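As a back-of-the-envelope illustration of the calculation behind these figures, the sketch below computes a tech transfer office’s net operating income and ranks universities by licensing revenue. The column names and every number are hypothetical stand-ins, not the actual AUTM survey schema or data:

```python
# Hypothetical data in the spirit of the AUTM survey discussed above;
# none of these figures are real.
import pandas as pd

df = pd.DataFrame({
    "university":        ["U. One", "U. Two", "U. Three"],
    "licensing_revenue": [45.0, 12.0, 1.5],     # $ millions
    "tto_expenses":      [10.0, 11.5, 2.0],     # cost of running the tech transfer office
    "federal_funding":   [900.0, 350.0, 60.0],  # federal research funding, $ millions
})

# Net operating income of the tech transfer office:
# licensing revenue minus the office's own expenses.
df["tto_net_income"] = df["licensing_revenue"] - df["tto_expenses"]

# Rank by licensing revenue, as in the top-20 ranking cited above. Note that
# U. Two sits high in the revenue ranking while barely covering its expenses.
print(df.sort_values("licensing_revenue", ascending=False))
```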

Now let’s turn to another view of the asymmetry between universities’ resources and licensing revenues: the geographic dimension. In Figure 2 we can see the degree of dispersion (or concentration) of both federal research investment and licensing revenue across the states. It is easy to recognize the well-funded universities on the East and West coasts receiving most federal funds, and it is easy to observe as well that licensing revenues are high around the same regions, albeit more scattered.

If policymakers are serious about fostering innovation, it is time to discuss the asymmetries of resources among universities across the nation. Licensing revenue is a poor measure of technology transfer activity, because universities engage in a number of interactions with the private sector that do not involve patent licensing contracts. However, this data hints at the larger challenge: if universities are expected to be engines of growth for their regions and if technology transfer is to be streamlined, federal support must be allocated by mechanisms that balance needs across states. This is not to suggest that research funding should be reallocated from top universities to the rest; that would be misguided policy. But it does suggest that without reform, the engines of growth will not roar throughout the nation, only in a few places.

Figure 2. Tech Transfer Activities Depend on Resources

Bubbles are based on Metropolitan Statistical Areas and are proportional to the size of the variable



[i] These figures are my calculations based on Association of University Technology Managers survey data (AUTM, 2013). In 2012, 155 universities reported data to the survey, a majority of the 207 universities that the Carnegie Classification lists as having high or very high research activity.

[ii] Note the patenting data is reported by some universities at the state system level (e.g. the UC system).  The corresponding federal funding was aggregated across the same reporting universe.


Innovation and manufacturing labor: a value-chain perspective


Policies and initiatives to promote U.S. manufacturing would be well advised to take a value-chain perspective on this economic sector. Currently, our economic statistics exclude pre-production services to manufacturing, such as research and development and design, as well as post-production services, such as repair, maintenance, and sales. Yet manufacturing firms invest heavily in these services because they are crucial to the success of their business.

In a new paper, Kate Whitefoot and Walter Valdivia offer a fresh insight into the sector’s labor composition and trends by examining employment in manufacturing from a value chain perspective. While the manufacturing sector shed millions of jobs in the 2002-2010 period—a period that included the Great Recession—employment in upstream services expanded 26 percent for market analysis, 13 percent for research and development, and 23 percent for design and technical services. Average wages for these services increased over 10 percent in that period. Going forward, this pattern is likely to be repeated. Technical occupations, particularly in upstream segments are expected to have the largest increases in employment and wages.

In light of the findings, the authors offer the following recommendations: 

  • Federal manufacturing policy: Expand PCAST’s Advanced Manufacturing Partnership recommendations—specifically, for developing a national system of certifications for production skills and establishing a national apprenticeship program for skilled trades in manufacturing—to include jobs outside the factory such as those in research and development, design and technical services, and market analysis.
  • Higher education: Institutions of higher education should consider adjustments to their curricula with a long view of the coming changes to high-skill occupations, particularly with respect to problem identification and the management of uncertainty in highly automated work environments. In addition, universities and colleges should disseminate information among prospective and current students about the occupations where the largest gains in employment and the highest wage premiums are expected.
  • Improve national statistics: Supplement the North American Industry Classification System (NAICS) with data that permits tracking the entire value chain, including the development of a demand-based classification system. This initiative could benefit from adding survey questions to replicate the data collection of countries with a Value Added Tax—without introducing the tax, that is—allowing in this manner a more accurate estimation of the value added by each participant in a production network.

Whitefoot and Valdivia stress that any collective efforts aimed at invigorating manufacturing must seize the opportunities throughout the entire value chain including upstream and downstream services to production.


University-industry partnerships can help tackle antibiotic resistant bacteria


Last January, an academic-industrial partnership published in the prestigious journal Nature the results of the development of the antibiotic teixobactin. The reported work is still at an early preclinical stage, but it is nevertheless good news. Over the last few decades the introduction of new antibiotics has slowed nearly to a halt, and over the same period we have seen a dangerous increase in antibiotic-resistant bacteria.

Such is the magnitude of the problem that it has attracted the attention of the U.S. government. Accepting several recommendations presented by the President’s Council of Advisors on Science and Technology (PCAST) in its comprehensive report, the Obama administration issued an Executive Order last September establishing an interagency Task Force for combating antibiotic-resistant bacteria and directing the Secretary of Health and Human Services (HHS) to establish an Advisory Council on the matter. More recently, the White House issued a strategic plan to tackle the problem.

Etiology of antibiotic resistance

Infectious diseases have been a major cause of morbidity and mortality from time immemorial. The discovery of sulfa drugs in the 1930s and then antibiotics in the 1940s significantly aided the fight against these scourges. Following World War II, society experienced extraordinary gains in life expectancy and overall quality of life. During that period, marked by optimism, many people presumed victory over infectious diseases. However, overuse of antibiotics and a slowdown in innovation allowed bacteria to develop resistance at such a pace that some experts now speak of a post-antibiotic era.

The problem is manifold: overuse of antibiotics, slow innovation, and bacterial evolution.

The overuse of antibiotics in both humans and livestock also facilitated the emergence of antibiotic-resistant bacteria. Responsibility falls to health care providers who prescribed antibiotics liberally and to patients who did not complete their prescribed dosages. Acknowledging this problem, the medical community has been training physicians to resist pressure to prescribe antibiotics for children (and their parents) with infections that are likely to be viral in origin. Educational efforts are also underway to encourage patients to complete the full course of every prescribed antibiotic and not to halt treatment when symptoms ease. The excessive use of antibiotics in food-producing animals is perhaps less manageable because it affects the bottom line of farm operations. For instance, the FDA reported that even though farmers were aware of the risks, antibiotic use in food-producing animals increased by 16 percent from 2009 to 2012.

The development of antibiotics—perhaps a more adequate term would be antibacterial agents—indirectly contributed to the problem by being incremental and by nearly stalling two decades ago. Many revolutionary antibiotics were introduced during a first period of development that started in the 1940s and lasted about two decades. Building upon the scaffolds and mechanisms discovered in that first era, a second period of incremental development followed over the next three decades, through the 1990s, with roughly three new antibiotics introduced every year. High competition and little differentiation rendered antibiotics less and less profitable, and over a third period covering the last 20 years, pharmaceutical companies have cut the development of new antibiotics down to a trickle.

The misguided overuse and misuse of antibiotics, together with the economics of antibiotic innovation, compounded a problem that is always taking place in nature: bacteria evolve and adapt rapidly.
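
A toy selection model conveys the pace of the problem. In the sketch below (Python, with purely illustrative parameters assumed for the example), a rare resistant subpopulation comes to dominate after just a few treatment cycles, because each course of antibiotics kills far more susceptible cells than resistant ones:

susceptible, resistant = 1_000_000, 10   # initial counts; assumed, not empirical

for generation in range(12):
    # Both strains double each generation...
    susceptible *= 2
    resistant *= 2
    # ...but every third generation a course of antibiotics is applied.
    if generation % 3 == 0:
        susceptible = int(susceptible * 0.01)  # drug kills 99% of susceptible cells
        resistant = int(resistant * 0.90)      # resistant cells largely survive

share = resistant / (susceptible + resistant)
print(f"resistant share of the population: {share:.1%}")   # roughly 99.9%

Under these assumptions, the resistant strain grows from roughly one cell in a hundred thousand to virtually the entire population; overuse of antibiotics simply multiplies the number of such selection episodes.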

Current policy initiatives

The PCAST report recommended federal leadership and investment to combat antibiotic-resistant bacteria in three areas: improving surveillance, increasing the longevity of current antibiotics through moderated usage, and picking up the pace of development of new antibiotics and other effective interventions.

To implement this strategy, PCAST suggested an oversight structure that includes a Director for National Antibiotic Resistance Policy, an interagency Task Force for Combating Antibiotic-Resistant Bacteria, and an Advisory Council to be established by the HHS Secretary. PCAST also recommended doubling federal support for core activities such as surveillance infrastructure and the development of transformative diagnostics and treatments, from $450 million to $900 million. In addition, it proposed $800 million in funding for the Biomedical Advanced Research and Development Authority to support public-private partnerships for antibiotics development.

The Obama administration took up many of these recommendations and directed their implementation through the aforementioned Executive Order. More recently, it announced a National Strategy for Combating Antibiotic-Resistant Bacteria to implement the recommendations of the PCAST report. The national strategy has five pillars:

  • Slow the emergence and spread of resistant bacteria by curbing the inappropriate use of antibiotics in health care as well as in farm animals.
  • Establish national surveillance efforts that build surveillance capability across human and animal environments.
  • Advance the development and use of rapid and innovative diagnostics to improve care delivery and data collection.
  • Accelerate the invention of new antibiotics, other therapeutics, and vaccines across all stages, from basic and applied research through development.
  • Emphasize international collaboration and endorse the World Health Organization Action Plan to address antimicrobial resistance.

University-industry partnerships

An important cause of our antibiotic woes, then, is economic logic. On one hand, pharmaceutical companies have by and large abandoned investment in antibiotic development; competition and high substitutability have led to low prices, and in their financial calculations pharmaceutical companies cannot justify new development efforts. On the other hand, farmers have found the use of antibiotics highly profitable and thus have no financial incentive to halt it.

There are nevertheless mirror explanations of a political character.

The federal government allocates about $30 billion for research in medicine and health through the National Institutes of Health. The government does not seek to crowd out private research investment; rather, the goal is to fund research the private sector would not conduct because its financial return is too uncertain. Economic theory prescribes government intervention to address this kind of market failure. However, it is also government policy to privatize patents on discoveries made with public monies in order to facilitate their transfer from public to private organizations. An unanticipated risk of this policy is the rebalancing of the public research portfolio to accommodate the growing demand for the kind of research that feeds into attractive market niches. The more aligned public research and private demand become, the less research attention is directed to medical needs without great market prospects. The development of new antibiotics seems to be just that kind of neglected public medical need. If antibiotics are unattractive to pharmaceutical companies, antibiotic development should be a research priority for the NIH. Congress is unlikely to increase public spending for antibiotic R&D in the proportion suggested by PCAST, but the NIH could step in and rebalance its own portfolio to increase antibiotic research. Either course, increasing NIH funding for antibiotics or rebalancing the NIH portfolio, is a political decision sure to meet organized resistance even stronger than antibiotic resistance.

The second mirror explanation is that farmers have a well-organized lobby. It is no surprise that the Executive Order treads gingerly over recommendations for the farming sector and avoids any hint of an outright ban on antibiotic use, lest the administration be perceived as heavy-handed. Considering the magnitude of the problem, a political solution is warranted. Farmers’ cooperation in addressing this national problem will have to be traded for subsidies and other extra-market incentives that compensate for lost revenues or higher costs. The administration would do well to work out the politics with farmer associations before they organize in strong opposition to any measure to curb antibiotic use in animal feed.

Addressing this challenge adequately will thus require working out solutions to both the economic and the political dimensions of the problem. Public-private partnerships, including university-industry collaboration, could prove a useful mechanism to balance the two sides of the equation. The development of teixobactin mentioned above is a good example of this prescription, as it resulted from a collaboration between the University of Bonn in Germany, Northeastern University, and NovoBiotic Pharmaceuticals, a start-up in Cambridge, Massachusetts.

If the NIH cannot secure an increase in research funding for antibiotics development and cannot substantially rebalance its portfolio, it can at least encourage Cooperative Research and Development Agreements as well as university start-ups devoted to developing new antibiotics. To promote such public-private and university-industry partnerships, policy coordination is advised. The nascent enterprises would be greatly assisted if the government helped them raise capital by connecting them to venture funding networks or by implementing a loan guarantee program specific to antibiotics. Expedited FDA approval would also lessen the regulatory burden. Likewise, farmers may be convinced to discontinue the risky practice if innovation in animal husbandry can effectively replace antibiotic use. Public-private partnerships, particularly through university extension programs, could provide an adequate framework to test alternative methods, scale them up, and subsidize a transition to new sustainable practices that is not financially painful to farmers.

Yikun Chi contributed to this post


NASA considers public values in its Asteroid Initiative


NASA’s Asteroid Initiative encompasses efforts for the human exploration of asteroids as well as the Asteroid Grand Challenge, which aims to enhance asteroid detection capabilities and mitigate the threat asteroids pose to Earth. The human space flight portion of the initiative primarily includes the Asteroid Redirect Mission (ARM), a proposal to place an asteroid in orbit around the moon and send astronauts to it. The program originally contemplated two alternatives for closer study: capturing a small asteroid about 10 meters in diameter, or simply recovering a boulder from a much larger asteroid. Late in March, NASA offered an update on its plans: it has decided to retrieve a boulder from an asteroid near Earth’s orbit—candidates are the asteroids 2008 EV5, Bennu, and Itokawa—and will place the boulder in the moon’s orbit for further study.

This mission will help NASA develop a host of technical capabilities. For instance, Solar Electric Propulsion uses solar power to ionize and accelerate a propellant; in the absence of gravity, even a modicum of force can alter the trajectory of a body in outer space. Another related capability under development is the gravity tractor, which is based on the notion that even the modest mass of a spacecraft can exert enough gravitational force on an asteroid to ever so slightly change its orbit. Capturing a boulder from an asteroid that is steering clear of Earth would further increase the ARM spacecraft’s mass, strengthening this effect and enabling a safe test of how humans might deflect asteroid threats in the future. NASA will thus have a second method for deflecting near-Earth objects on a hazardous trajectory. The first, demonstrated as part of the Deep Impact mission, is the kinetic impactor: crashing a spacecraft into an approaching object to change its trajectory.
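
The physics behind the gravity tractor can be checked with a back-of-the-envelope calculation. The sketch below (Python) applies Newton’s law of gravitation; the spacecraft mass and hover distance are hypothetical stand-ins, not mission figures:

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
spacecraft_mass = 7.0e4  # kg: spacecraft plus captured boulder (assumed)
hover_distance = 150.0   # m: separation from the asteroid's center (assumed)
seconds_per_year = 3.156e7

# Acceleration imparted on the asteroid: a = G * m / r^2
accel = G * spacecraft_mass / hover_distance**2
delta_v = accel * seconds_per_year   # velocity change after one year of hovering

print(f"acceleration: {accel:.1e} m/s^2")                 # ~2e-10 m/s^2
print(f"delta-v after one year: {delta_v*1e3:.1f} mm/s")  # a few mm/s

A velocity change of a few millimeters per second sounds negligible, but applied years ahead of a predicted close approach it can displace an asteroid’s arrival position by thousands of kilometers, which is why a heavier spacecraft hovering for longer makes a better tractor.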

The Asteroid Initiative is a partner of the agency’s Near Earth Object Observation (NEOO) program. The goal of this program is to discover and monitor space objects traveling on a trajectory that could pose the risk of hitting Earth with catastrophic effects. The program also seeks to develop mitigation strategies. The capabilities developed by ARM could also support other programs of NASA, such as the manned exploration of Mars.

NEOO has recently enjoyed an uptick in public support. It was funded at about $4 million in the 1990s, and in 2010 it was allocated a paltry $6 million. But then a redirection of priorities—linked to the transition from the Bush to the Obama administration—increased funding for NEOO to about $20 million in 2012 and $40 million in 2014, and NASA is seeking $50 million for 2015. NASA officials clearly made a compelling case for the importance of NEOO; indeed, what they are asking for seems a modest amount if asteroids truly pose an existential risk to life on Earth. At the same time, the instrumental importance of the program and the public funds devoted to it raise the question of whether taxpayers should have a say in the decisions NASA is making about how to proceed with the program.

NASA has done something remarkable to help answer this question.

Last November, NASA partnered with the ECAST network (Expert and Citizen Assessment of Science and Technology) to host a citizen forum assessing the Asteroid Initiative. ECAST is a consortium of science policy and advocacy organizations that specializes in citizen deliberations on science policy. The forum consisted of a dialogue with 100 citizens in Phoenix and Boston who learned more about the Asteroid Initiative and then commented on various aspects of the project.

The participants, who were selected to approximate the demographics of the U.S. population, were asked to assess mitigation strategies to protect against asteroids. They were introduced to four strategies: civil defense, gravity tractor, kinetic impactor, and nuclear blast deflection. As part of the deliberations, they were asked to consider the two aforementioned approaches to ARM. A consensus emerged around the boulder retrieval option, primarily because citizens thought it offered better prospects for developing planetary defense technologies. This preference held despite the excitement of capturing a full asteroid, which could potentially have additional economic impacts. The participants showed interest in promoting the development of mitigation capabilities at least as much as they wanted to protect traditional NASA goals such as the advancement of science and space flight technology. This is not surprising, given that concerns about doomsday should reasonably take precedence over traditional research and exploration concerns.

NASA could have decided to set ARM along the path of boulder retrieval exclusively on technical merits, but having conducted a citizen forum, the agency can now claim that this decision is also socially robust, which is to say, responsive to public values. In this manner, NASA has demonstrated a promising method by which federal research agencies can increase their public accountability.

In the same spirit of responsible research and innovation, a recent Brookings paper I authored with David Guston—who is a co-founder of ECAST—proposes a number of other innovative ways in which the innovation enterprise can be made more responsive to public values and social expectations.

Kudos to NASA for being at the forefront of innovation in space exploration and public accountability.


Patent infringement suits have a reputational cost for universities


This post originally appeared on the Center for Technology Innovation’s TechTank blog.

Universities collect handsome awards in infringement cases

This October, a jury found Apple Inc. liable for infringing a patent held by the University of Wisconsin-Madison (UW) and ordered the tech giant to pay $234 million. The university scored a big financial victory, but the award hardly meant any gain for the university’s good name.

The plaintiffs argued successfully in court that Apple infringed their 1998 patent on a predictor circuit that greatly improved the efficiency of microchips used in the popular iPhone 5s, 6, and 6 Plus. Apple first responded by challenging the validity of the patent, but the US Patent and Trademark Office ruled in favor of the university. Apple plans to appeal, but the appellate court is not likely to reverse the lower court’s decision.

This is not the first time this university has asserted its patent rights (UW sued Intel in 2008 over this exact same patent and reportedly settled for $110 million). Nor is this the first time universities in general have taken infringers to court. Prominent cases in recent memory include Boston University, which sued several companies for infringement of a patent on blue light-emitting diodes and settled out of court with most of them, and Carnegie Mellon, which was awarded $237 million by the federal appellate court in its infringement suit against Marvell, a semiconductor company, over its use of an enhanced detector of data in hard drives called Kavcic detectors.

Means not always aligned with aims in patent law

When patented university inventions emerge from federal research grants, infringement suits test the accepted interpretations of current patent law.

The Bayh-Dole Act of 1980 extended patent law, giving small businesses and universities the right to take title to patents arising from federal research grants; it was later amended to extend that right to all federal grantees regardless of size. The ostensible aim of the act is “to promote the utilization of inventions arising from federally supported research or development.” Under the law, a condition for universities (or any other government research performers) to keep their exclusive rights on those patents is that they or their licensees take “effective steps to achieve practical application” of those patents. Bayh-Dole was not designed to create a new source of revenue for universities. If companies are effectively using university technologies, Bayh-Dole’s purpose is served without need of patents.

To understand this point, consider a counterfactual: What if the text of Bayh-Dole had been originally composed to grant a conditional right to patents for federal research grantees? The condition could be stated like this: “This policy seeks to promote the commercialization of federally funded research and to this end it will use the patent system. Grantees may take title to patents if and only if other mechanisms for disseminating and developing those inventions into useful applications prove unsuccessful.” Under this imagined text, universities could still take title to patents on their inventions if they or the U.S. Patent and Trademark Office were not aware that the technologies were already being used in manufactured products.

But no court would find their infringement claim meritorious if the accused companies could demonstrate that, absent willful infringement, they had in fact used the technologies covered by university patents in their commercial products. In that case, other mechanisms for disseminating and developing the technologies would have proven successful indeed. That Bayh-Dole did not mandate such a contingent assignment of rights creates a contradiction between its aims and the means chosen to advance those aims for the subset of patents already in use by industry.

I should remark that UW’s predictor circuit resulted from grants from NSF and DARPA, and there is no indication that the university exercised its patent rights with any less vigor because the original research was publicly funded. In fact, universities are fully expected to aggressively assert their patent rights regardless of the source of funding for the original research.

You can have an answer for every question and still lose the debate

It is this litigious attitude that puts off many observers. While the law may very well allow universities to be litigious, universities could still refuse to exercise their rights under circumstances in which those rights are not easily reconciled with the public mission of the university.

University administrators, tech transfer personnel, and particularly the legal teams winning infringement cases have legitimate reasons to wonder why universities are publicly scorned. After all, they are acting within the law and simply protecting their patent rights; they are doing what any rational actor would do. They may be genuinely surprised when critics accuse universities of becoming allies of patent trolls, or of aiding and abetting their actions. Such accusations are unwarranted. Trolls are truants; universities are venerable institutions. Patent trolls exploit the ambiguities of patent law and the burdens of due process to their own benefit and to the detriment of truly productive businesses and persons. In stark contrast, universities are long-established partners of democracy, respected across ideological divides for their abundant contributions to society.

The critics may not be fully considering the intricacies of patent law. Or they may forget that universities are in need of additional revenue: higher education has not seen public financial support increase in recent years, with federal grants roughly stagnant and state funding falling drastically in some states. Critics may also ignore that revenues collected from patent licensing, favorable court rulings, and out-of-court settlements are to a large extent (usually two-thirds of the total) plugged back into the research enterprise.

University attorneys may have an answer for every point that critics raise, but the overall concern of critics should not be dismissed outright. Given that many if not most university patents can be traced back to research funded by tax dollars, there is a legitimate reason for observers to expect universities to manage their patents with a degree of restraint. There is also a legitimate reason for public disappointment when universities do not seem to endeavor to balance the tensions between their rights and duties.

Substantive steps to improve the universities’ public image

Universities can become more responsive to public expectations about their character not only by promoting their good work, but also by taking substantive steps to correct misperceptions.

First, when universities discover a case of proven infringement, they should take companies to court only as a measure of last resort. If a particular company refuses to negotiate in good faith and an infringement case ends up in court, universities should be prepared to demonstrate to the court of public opinion that they tried, with sufficient insistence and time, to negotiate a license and even made concessions in pricing it. In the case of the predictor circuit patent, it seems that the University of Wisconsin-Madison tried to license the technology and Apple refused, but the university would be in a much better position if it could demonstrate that the licensing deals offered to Apple would have turned out to be far less expensive for the tech company.

Second, universities would be well advised not to join any efforts to lobby Congress for stronger patent protection. At least two reasons substantiate this suggestion. First, as a matter of principle, the dogmatic belief that without patents there is no innovation is wrong. Second, as a matter of material interest, universities as a group do not have a financial interest in patenting. It’s worth elaborating these points a bit more.

Neither historians nor social science researchers have settled the question of the net effects of patents on innovation. While there is evidence of social benefits from patent-based innovation, there is also evidence of social costs associated with patent monopolies, and even more evidence of momentous innovations that required no patents. What’s more, the net social benefit varies across industries and over time. Research identifies economic sectors in which patents do spur innovation and others in which they actually hinder it. This research explains, for instance, why some computer and Internet giants lobby Congress in the opposite direction from the biotech and big pharma industries. Rigorous industrial surveys of the 1980s and 1990s found that companies in most economic sectors did not use patents as their primary tool to protect their R&D investments.

Yet patenting has increased rapidly over the past four decades, including in industries that were once uninterested in patents. Economic analyses have shown that much of this new patenting is a business strategy against patent litigation: companies are building patent portfolios as a defensive measure, not because they are innovating more. Universities’ public position on patent policy should acknowledge that the debate on the impact of patents on innovation is not settled, and that this impact cannot be observed in the aggregate but must be considered in the context of each specific economic sector, industry, or even market. From this vantage point, universities could then turn up or down the intensity with which they negotiate licenses and pursue compensation for infringement. Universities would better assert their commitment to their public mission if they computed, on a case-by-case basis, the balance between social benefits and costs for each of their controversial patents.

As to the material interest in patents, it is understandable that some patent attorneys or the biotech lobby publicly espouse the dogma of patents, that there is no innovation without patents. After all, their livelihood depends on it. However, research universities as a group do not have any significant financial interest in stronger patent protection. As I have shown in a previous Brookings paper, the vast majority of research universities earn very little from their patent portfolios and about 87% of tech transfer offices operate in the red. Universities as a group receive so little income from licensing and asserting their patents relative to the generous federal support (below 3%), that if the federal government were to declare that grant reviewers should give a preference to universities that do not patent, all research universities would stop the practice at once. It is true that a few universities (like the University of Wisconsin-Madison) raise significant revenue from their patent portfolio, and they will continue to do so regardless of public protestations. But the majority of universities do not have a material interest in patenting.

Time to get it right on anti-troll legislation

Last year, the House of Representatives passed legislation closing loopholes and introducing disincentives for patent trolls. Just as mirror legislation was about to be considered in the Senate, Sen. Patrick Leahy withdrew it from the Judiciary Committee. It was reported that Sen. Harry Reid forced Mr. Leahy’s hand to kill the bill in committee. In the public sphere, the shrewd lobbying efforts to derail the bill were perceived as serving pro-troll interests. The lobbying came from pharmaceutical companies, biotech companies, patent attorneys, and, to everyone’s surprise, universities. Little wonder that critics overreacted and suggested universities were in partnership with trolls: even if they were wrong, these accusations stung.

University associations took that position out of a sincere belief in the dogma of patents and out of fear that the proposed anti-troll legislation would limit universities’ ability to sue patent infringers. However, their convictions stand on shaky ground, and only a few universities sue for infringement. In taking that policy position, university associations are representing neither the interests nor the beliefs of the vast majority of universities.

A reversal of that position is not only possible but would be timely. When anti-troll legislation is again introduced in Congress, universities should distance themselves from efforts to protect a policy status quo that so benefits patent trolls. It is not altogether improbable that Congress would see fit to exempt universities from some of the requirements the law would impose. University associations could show Congress the merit of such exemptions in consideration of universities’ constant and significant contributions to states, regions, and the nation. But no such concessions can be expected if universities continue to place themselves in the company of those who profit from patent management.

No asset is more valuable for universities than their prestige. It is the ample recognition of their value in society that guarantees tax dollars will continue to flow into universities. While acting legally to protect their patent rights, universities are nevertheless toying with their own legitimacy. Let those universities that stand to gain from litigation act in their self-interest, but do not let them speak for all universities. When university associations advocate for stronger patent protection, they do the majority of universities a disservice. These associations should better represent the interests of all their members by advocating a more neutral position about patent reform, by publicly praising universities’ restraint on patent litigation, and by promoting a culture and readiness in technology transfer offices to appraise each patent not by its market value but by its social value. At the same time, the majority of universities that obtain neither private nor social benefits from patenting should press their political representatives to adopt a more balanced approach to policy advocacy, lest they squander the reputation of the entire university system.

Editor's Note: The post was corrected to state that UW’s predictor circuit did originate from federally funded research.


Alternative perspectives on the Internet of Things


Editor's Note: TechTakes is a new series that collects the diverse perspectives of scholars around the Brookings Institution on technology policy issues. This first post in the series features contributions from Scott Andes, Susan Hennessey, Adie Tomer, Walter Valdivia, Darrell M. West, and Niam Yaraghi on the Internet of Things.

In the coming years, the number of devices around the world connected to the Internet of Things (IoT) will grow rapidly. Sensors located in buildings, vehicles, appliances, and clothing will create enormous quantities of data for consumers, corporations, and governments to analyze. Maximizing the benefits of IoT will require thoughtful policies. Given that IoT policy cuts across many disciplines and levels of government, who should coordinate the development of new IoT platforms? How will we secure billions of connected devices from cyberattacks? Who will have access to the data created by these devices? Below, Brookings scholars contribute their individual perspectives on the policy challenges and opportunities associated with the Internet of Things.

The Internet of Things will be everywhere

Darrell M. West is vice president and director of Governance Studies and founding director of the Center for Technology Innovation.

Humans are lovable creatures, but prone to inefficiency, ineffectiveness, and distraction. They like to do other things while driving, such as listening to music, talking on the phone, texting, or checking email. Judging from the frequency of accidents, though, many individuals believe they are more effective at multitasking than is actually the case.

The reality of these all-too-human traits is encouraging a movement from communication between computers to communication between machines. Driverless cars will soon appear on highways in large numbers, and not just as demonstration projects. Remote monitoring devices will transmit vital signs to health providers, who can then let people know if their blood pressure has spiked or their heart rhythm has shifted in a dangerous direction. Sensors in appliances will let individuals know when they are running low on milk, bread, or cereal. Thermostats will adjust their energy settings to the times when people actually are in the house, saving substantial amounts of money while also protecting natural resources.

With the coming rise of 5G networks, the Internet of Things will unleash high-speed devices and a fully connected society. Advanced digital devices will enable a wide range of new applications, from energy and transportation to home security and health care. They will help humans manage the annoyances of daily life, such as traffic jams, not being able to find a parking place, or keeping track of physical fitness. The widespread adoption of smart appliances, smart energy grids, resource management tools, and health sensors will improve how people connect with one another and with their electronic devices. But these devices will also raise serious security, privacy, and policy issues.

Implications for surveillance

Susan Hennessey is a Fellow in National Security in Governance Studies at the Brookings Institution. She is the managing editor of the Lawfare blog, which is devoted to sober and serious discussion of “Hard National Security Choices.”

As the debate over encryption and diminished law enforcement access to communications enters the public arena, some posit the growing Internet of Things as a solution to “Going Dark.” A recently released Harvard Berkman Center report, “Don’t Panic,” concludes in part that losses of communication content will be offset by the growth of IoT and networked sensors. It argues IoT provides “prime mechanisms for surveillance: alternative vectors for information-gathering that could more than fill many of the gaps left behind by sources that have gone dark – so much so that they raise troubling questions about how exposed to eavesdropping the general public is poised to become.”

Director of National Intelligence James Clapper agrees that IoT has some surveillance potential. He recently testified before Congress that “[i]n the future, intelligence services might use the IoT for identification, surveillance, monitoring, location tracking, and targeting for recruitment, or to gain access to networks or user credentials.”

But intelligence gathering in the Internet age is fundamentally about finding needles in haystacks, and IoT is poised to add significantly more hay than needles. Law enforcement and the intelligence community will have to develop new methods to isolate and process this magnitude of information. And Congress and the courts will have to decide how laws should govern this type of access.

For now, the unanswered question remains: How many refrigerators does it take to catch a terrorist?

IoT governance

Scott Andes is a senior policy analyst and associate fellow at the Anne T. and Robert M. Bass Initiative on Innovation and Placemaking, a part of the Centennial Scholar Initiative at the Brookings Institution.

As with many new technology platforms, the Internet of Things is often approached as revolutionary, not evolutionary, technology. The refrain is that some scientific Rubicon has been crossed and that the impact of IoT will arrive soon regardless of public policy. On this view, the role of policymakers is merely to ensure the new technology is leveraged within public infrastructure and doesn’t adversely affect national security or aggravate inequality. While these goals are clearly important, they all assume the technological advances of IoT sit squarely within the realm of the private sector and do not justify policy intervention. Moreover, as with almost all new technologies that catch the public’s eye—robotics, clean energy, autonomous cars, etc.—hyperbolic news reporting overstates the market readiness of these technologies, further lowering the perceived need for policy support.

The problem with this perspective is twofold. First, greater scientific breakthroughs are still needed. The current rate of improvement in processing power and data storage, the miniaturization of devices, and more energy-efficient sensors only begin to scratch the surface of IoT’s full potential. Advances in next-generation computational power, autonomous devices, and interoperable systems still require scientific breakthroughs and are nowhere near deployment. Second, even if the necessary technological advances of IoT were already in hand, it is not clear the U.S. economy would be the prime recipient of their economic value. Nations that lead in advanced manufacturing, like Germany, may already be better poised to export IoT-enabled products. Policymakers in the United States should view technological advancement in IoT as a global economic race that can be won through sound science policies. These should include: accelerating basic engineering research; helping that research reach the market; supporting entrepreneurs’ access to capital; and training a science- and engineering-ready workforce that can scale up new technologies.

IoT will democratize innovation

Walter D. Valdivia is a fellow in the Center for Technology Innovation at Brookings.

The Internet of Things could be a wonderful thing, but not in the way we imagine it.

Today, the debate is dominated by cheerleaders and worrywarts. But their perspectives are merely two sides of the same coin: technical questions about the reliability of communications and operations, and questions about system security. Our public imagination about the future is being narrowly circumscribed by these questions. However, as the Internet of Things starts to become a thing—or multiple things, or a networked plurality—it is likely to intrude so intensely into our daily lives that alternative imaginations will emerge and demand a hearing.

A compelling vision of the future is necessary to organize and coordinate the various market and political agents who will integrate IoT into society. Technological success is usually measured in terms set by the purveyor of that vision. Traditionally, this is a small group with a financial stake in technological development: the innovating industry. However, the intrusiveness and pervasiveness of the Internet of Things will prompt ordinary citizens to augment that vision. Citizen participation will deny any group a monopoly on that vision of the future. Such a development would be a true step in the direction of democratizing innovation. It could make IoT a wonderful thing indeed.

Applications of IoT for infrastructure

Adie Tomer is a fellow at the Brookings Institution Metropolitan Policy Program and a member of the Metropolitan Infrastructure Initiative.

The Internet of Things and the built environment are a natural fit. The built environment is essentially a collection of physical objects—from sidewalks and streets to buildings and water pipes—that all need to be managed in some capacity. Today, we measure our shared use of those objects through antiquated analog or digital systems. Think of the electricity meter on a building, or a person manually counting pedestrians on a busy city street. Digital, Internet-connected sensors promise to modernize measurement, relaying a whole suite of indicators to centralized databases tuned to make sense of such big data.

But let’s not fool ourselves. Simply outfitting cities and metro areas with more sensors won’t solve any of our pressing urban issues. Without governance frameworks to apply the data towards goals around transportation congestion, more efficient energy use, or reduced water waste, these sensors could be just another public investment that doesn’t lead to public benefit.

The real goal for IoT in the urban space, then, is to ensure our built environment supports broader economic, social, and environmental objectives. And that’s not a technology issue—that’s a question around leadership and agenda-setting.

Applications of IoT for health care

Niam Yaraghi is a fellow in the Brookings Institution's Center for Technology Innovation.

Health care is one of the most exciting application areas for IoT. Imagine that your Fitbit could determine that you have fallen, are seriously hurt, and need to be rushed to the hospital. It automatically pings the closest ambulance and sends a brief summary of your medical status to EMT personnel so that they can prepare your emergency care even before reaching the scene. On the way, the ambulance does not need its sirens to make way, since other autonomous vehicles have already been notified of the approaching ambulance and cleared a path, while red lights automatically turn green.
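
A minimal sketch of the kind of event such a device might emit appears below (Python; the field names, values, and the pretend dispatch function are hypothetical, not a real device API):

import json
import time
from dataclasses import dataclass, asdict

@dataclass
class FallAlert:
    device_id: str
    timestamp: float
    location: tuple        # (latitude, longitude)
    vitals: dict           # brief medical summary for EMT personnel
    severity: str = "high"

def dispatch(alert: FallAlert) -> str:
    # Pretend dispatcher: serialize the alert as it might be posted
    # to an emergency-services endpoint (hypothetical).
    return json.dumps(asdict(alert))

alert = FallAlert(
    device_id="wearable-123",
    timestamp=time.time(),
    location=(42.36, -71.06),
    vitals={"heart_rate": 121, "blood_pressure": "150/95"},
)
print(dispatch(alert))

Every field in that payload is sensitive health or location data, which is exactly why the ownership and access questions discussed next matter.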

IoT will definitely improve the efficiency of health care services by reducing medical redundancies and errors, and this dream will come true sooner than you think. However, if we do not appropriately address the privacy and security issues of health care data, IoT could become our next nightmare. What if terrorist organizations, which are becoming increasingly technology savvy, find a way to hack into a Fitbit and send wrong information to an EMT? Who owns our medical data? Can we prevent Fitbit from selling our health data to third parties? Given these concerns, I believe we should design a policy framework that encourages accountability and responsibility with regard to health data. The framework should precisely define who owns data; who can collect, store, mine, and use it; and what penalties will be enforced if entities act outside of it.


The benefits of a knives-out Democratic debate

Stop whining about Democrats criticizing each other. The idea that Democrats attacking Democrats is a risk and an avenue that will deliver reelection to Donald Trump is nonsense. Democrats must attack each other and attack each other aggressively. Vetting presidential candidates, highlighting their weaknesses and the gaps in their record is essential to building a…


Five books you should read to better understand Islam


After a recent talk about my ISIS book, one of the audience members asked, “What can I read to help me not hate Islam?” I don’t think it’s a scholar’s job to persuade others to love or hate any culture. But the question was sincere, so I suggested some books that have helped me better understand Islam. I also put the question to Twitter. Below is some of what I and others came up with.

Two cautions before we dive in: First, the list is obviously not exhaustive, and I’ve left out overly apologetic books—in my experience, they only increase the skeptical reader’s suspicion that she’s being suckered. Second, people on Twitter gave me great suggestions, but I’ve only included those I’ve read and can vouch for:

Muhammad and the Quran: Two of the best books you’ll ever read about Muhammad and the Quran are also the shortest: The Koran: A Very Short Introduction and Muhammad, both by Michael Cook. He writes with great wit and deep scholarship.

Other scriptures: Most non-Muslims are unaware that Islamic scripture is more than the Quran. It includes a vast collection of words and deeds attributed to Muhammad by later authors. These scriptures are somewhat like the Gospels, and Muslim scholars fight over their authenticity much as Christian scholars debate the accuracy of Matthew, Mark, Luke, and John. These additional scriptures contain most of the teachings that make modern people (Muslims included) uncomfortable about Islam. One of the world’s experts on these scriptures, Jonathan Brown, has written a terrific book about them, Misquoting Muhammad.

Rumi: The medieval mystic’s poems about life and death are beautiful and moving, no matter your belief system. I loved his poems so much as an undergrad that I went on to study Middle Eastern languages just so I could read his work in the original. I’m glad I first viewed Islam through the eyes of Rumi and not a group like ISIS. Neither is solely representative of Islam, but both draw heavily on its scriptures and reach strikingly different conclusions.

The Bible: Many people recommended reading the Bible to decrease hate of Islam. The nerd in me leapt to the least obvious conclusion, “Ah, good idea! Reading some of the rough stuff in the Hebrew Bible is a good way to put a kindred ancient religion like Islam in perspective.” But they meant something a little less complicated:

It’s a worthy perspective today no matter your faith.


Podcast | Comparative politics & international relations: Lessons for Indian foreign policy