The Private Sector and Sustainable Development: Market-Based Solutions for Addressing Global Challenges

The private sector is an important player in sustainable global development. Corporations are finding that they can help encourage economic growth and development in the poorest of countries. Most importantly, the private sector can tackle development differently by taking a market-based approach. The private sector is providing new ideas in the fight to end global…

       





Congress, Nord Stream II, and Ukraine

Congress has long weighed sanctions as a tool to block the Nord Stream II gas pipeline under the Baltic Sea from Russia to Germany. Unfortunately, it has mulled the question too long, and time has run out. With some 85% of the pipeline already laid, new congressional sanctions aimed at companies participating in the pipeline’s…

       





The Arab Spring five years later: Toward greater inclusiveness

Five years have passed since the self-immolation of Mohamed Bouazizi in Tunisia sparked revolts around the Arab world and the beginning of the Arab Spring. Despite high hopes that the Arab world was entering a new era of freedom, economic growth, and social justice, the transition turned out to be long and difficult, with the…

       





Why Isn’t Disruptive Technology Lifting Us Out of the Recession?


The weakness of the economic recovery in advanced economies raises questions about the ability of new technologies to drive growth. After all, in the years since the global financial crisis, consumers in advanced economies have adopted new technologies such as mobile Internet services, and companies have invested in big data and cloud computing. More than 1 billion smartphones have been sold around the world, making it one of the most rapidly adopted technologies ever. Yet nations such as the United States that lead the world in technology adoption are seeing only middling GDP growth and continue to struggle with high unemployment.

There are many reasons for the restrained expansion, not least of which is the severity of the recession, which wiped out trillions of dollars of wealth and more than 7 million US jobs. Relatively weak consumer demand since the end of the recession in 2009 has held back hiring, and there are also structural issues at play, including a growing mismatch between the increasingly technical needs of employers and the skills available in the labor force. And technology itself plays a role: companies continue to invest in labor-saving technologies that reduce demand for less-skilled workers.

So are we witnessing a failure of technology? Our answer is "no." Over the longer term, in fact, we see that technology continues to drive productivity and growth, a pattern that has been evident since the Industrial Revolution: steam power, mass-produced steel, and electricity drove successive waves of growth, and that pattern has continued into the 21st century with semiconductors and the Internet. Today, we see a dozen rapidly evolving technology areas that have the potential for economic disruption over the next decade. They fall into four groups: IT and how we use it; machines that work for us; energy; and the building blocks of everything (next-gen genomics and synthetic biology).

Wide-ranging impacts

These disruptive technologies not only have potential for economic impact—hundreds of billions of dollars per year, and even trillions, for the applications we have sized—but are also broad-based (affecting many people and industries) and transformative: they can alter the status quo and create opportunities for new competitors.

While these technologies will contribute to productivity and growth, we must look at economic impact in a broader sense, which includes measures of surplus created and value shifted (for instance from producers to consumers, which has been a common result of Internet adoption). Consider autonomous vehicles—cars and trucks that can proceed from point A to point B with little or no human intervention. The largest economic impact we sized for autonomous vehicles is the enormous benefit to consumers that could come from reducing accidents caused by human error by 70 to 90 percent. That could translate into hundreds of billions of dollars a year in economic value by 2025.
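
To see how a figure of that size comes together, here is a back-of-the-envelope sketch in Python. It is purely illustrative: the baseline cost of human-error crashes is a hypothetical placeholder, not a number from the underlying research; only the 70 to 90 percent reduction range comes from the text above.

    # Back-of-envelope sketch only. The baseline figure is a hypothetical
    # placeholder, not a number taken from the research described above.
    annual_cost_of_human_error_crashes = 500e9   # assumed US baseline, dollars per year
    reduction_low, reduction_high = 0.70, 0.90   # accident-reduction range cited above

    value_low = reduction_low * annual_cost_of_human_error_crashes
    value_high = reduction_high * annual_cost_of_human_error_crashes
    print(f"Implied consumer benefit: ${value_low / 1e9:.0f}B to ${value_high / 1e9:.0f}B per year")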

Predicting how quickly even the most disruptive technologies will affect productivity is difficult. When the first commercial microprocessor appeared, there was no such thing as a microcomputer—marketers at Intel thought traffic signal controllers might be a leading application for their chip. Today we see that social technologies, which have changed how people interact with friends and family and have provided new ways for marketers to connect with consumers, may have a much larger impact as a way to raise productivity in organizations by improving communication, knowledge-sharing, and collaboration.

There are also lags and displacements as new technologies are adopted and their effects on productivity are felt. Over the next decade, advances in robotics may make it possible to automate assembly jobs that require more dexterity than machines have been able to provide, or that have been assumed to be more economical to carry out with low-cost labor. Advances in artificial intelligence, big data, and user interfaces (e.g., computers that can interpret ordinary speech) are making it possible to automate many knowledge worker tasks.

More good than bad

There are clearly challenges for societies and economies as disruptive technologies take hold, but the long-term effects, we believe, will continue to be higher productivity and growth across sectors and nations. In earlier work, for example, we looked at the relationship between productivity and employment, which are generally believed to be in conflict (i.e., when productivity rises, employment falls). And clearly, in the short term this can happen as employers find that they can substitute machinery for labor—especially if other innovations in the economy do not create demand for labor in other areas. However, if you look at the data for productivity and employment for longer periods—over decades, for example—you see that productivity and job growth do rise in tandem.

Labor-saving technologies will still cause dislocations, but they also eventually create new opportunities. For example, the development of highly flexible and adaptable robots will require skilled workers on the shop floor who can program these machines and work out new routines as requirements change. And the same types of tools that can be used to automate knowledge worker tasks such as finding information can also be used to augment the powers of knowledge workers, potentially creating new types of jobs.

Over the next decade it will become clearer how these technologies will be used to raise productivity and growth. There will be surprises along the way—when mass-produced steel became practical in the 19th century nobody could predict how it would enable the automobile industry in the 20th. And there will be societal challenges that policy makers will need to address, for example by making sure that educational systems keep up with the demands of the new technologies.

For business leaders the emergence of disruptive technologies can open up great new possibilities and can also lead to new threats—disruptive technologies have a habit of creating new competitors and undermining old business models. Incumbents will want to ensure their organizations continue to look forward and think long-term. Leaders themselves will need to know how technologies work and see to it that tech- and IT-savvy employees are included in every function and every team. Businesses and other institutions will need new skill sets and cannot assume that the talent they need will be available in the labor market.

Publication: Yahoo! Finance
Image Source: © Yves Herman / Reuters
      
 
 





Reassessing the internet of things


Nearly 30 years ago, the economists Robert Solow and Stephen Roach caused a stir when they pointed out that, for all the billions of dollars being invested in information technology, there was no evidence of a payoff in productivity. Businesses were buying tens of millions of computers every year, and Microsoft had just gone public, netting Bill Gates his first billion. And yet, in what came to be known as the productivity paradox, national statistics showed that not only was productivity growth not accelerating; it was actually slowing down. “You can see the computer age everywhere,” quipped Solow, “but in the productivity statistics.”

Today, we seem to be at a similar historical moment with a new innovation: the much-hyped Internet of Things – the linking of machines and objects to digital networks. Sensors, tags, and other connected gadgets mean that the physical world can now be digitized, monitored, measured, and optimized. As with computers before, the possibilities seem endless, the predictions have been extravagant – and the data have yet to show a surge in productivity.

A year ago, research firm Gartner put the Internet of Things at the peak of its Hype Cycle of emerging technologies.

As more doubts about the Internet of Things productivity revolution are voiced, it is useful to recall what happened when Solow and Roach identified the original computer productivity paradox. For starters, it is important to note that business leaders largely ignored the productivity paradox, insisting that they were seeing improvements in the quality and speed of operations and decision-making. Investment in information and communications technology continued to grow, even in the absence of macroeconomic proof of its returns.

That turned out to be the right response. By the late 1990s, the economists Erik Brynjolfsson and Lorin Hitt had disproved the productivity paradox, uncovering problems in the way service-sector productivity was measured and, more important, noting that there was generally a long lag between technology investments and productivity gains.

Our own research at the time found a large jump in productivity in the late 1990s, driven largely by efficiencies made possible by earlier investments in information technology. These gains were visible in several sectors, including retail, wholesale trade, financial services, and the computer industry itself. The greatest productivity improvements were not the result of information technology on its own, but of its combination with process changes and organizational and managerial innovations.

Our latest research, The Internet of Things: Mapping the Value Beyond the Hype, indicates that a similar cycle could repeat itself. We predict that as the Internet of Things transforms factories, homes, and cities, it will yield greater economic value than even the hype suggests. By 2025, according to our estimates, the economic impact will reach $3.9 trillion to $11.1 trillion per year, equivalent to roughly 11% of world GDP. In the meantime, however, we are likely to see another productivity paradox: the gains from changes in the way businesses operate will take time to be detected at the macroeconomic level.

One major factor likely to delay the productivity payoff will be the need to achieve interoperability. Sensors on cars can deliver immediate gains by monitoring the engine, cutting maintenance costs, and extending the life of the vehicle. But even greater gains can be made by connecting the sensors to traffic monitoring systems, thereby cutting travel time for thousands of motorists, saving energy, and reducing pollution. However, this will first require auto manufacturers, transit operators, and engineers to collaborate on traffic-management technologies and protocols.

Indeed, we estimate that 40% of the potential economic value of the Internet of Things will depend on interoperability. Yet some of the basic building blocks for interoperability are still missing. Two-thirds of the things that could be connected do not use standard Internet Protocol networks.

Other barriers standing in the way of capturing the full potential of the Internet of Things include the need for privacy and security protections and long investment cycles in areas such as infrastructure, where it could take many years to retrofit legacy assets. The cybersecurity challenges are particularly vexing, as the Internet of Things increases the opportunities for attack and amplifies the consequences of any breach.

But, as in the 1980s, the biggest hurdles for achieving the full potential of the new technology will be organizational. Some of the productivity gains from the Internet of Things will result from the use of data to guide changes in processes and develop new business models. Today, little of the data being collected by the Internet of Things is being used at all, and what is used is applied only in basic ways – detecting anomalies in the performance of machines, for example.

It could be a while before such data are routinely used to optimize processes, make predictions, or inform decision-making – the uses that lead to efficiencies and innovations. But it will happen. And, just as with the adoption of information technology, the first companies to master the Internet of Things are likely to lock in significant advantages, putting them far ahead of competitors by the time the significance of the change is obvious to everyone.

Editor's Note: This opinion originally appeared on Project Syndicate August 6, 2015.

Publication: Project Syndicate
Image Source: © Vincent Kessler / Reuters
      
 
 





Job gains even more impressive than numbers show


I came across an interesting chart in yesterday’s Morning Money tipsheet from Politico that struck me as something that sounded intuitively correct but was, in fact, not. It's worth a comment on this blog, which has served as a forum for discussion of jobs numbers throughout the recovery.

Between last week’s BLS employment report and last night’s State of the Union, we’ve heard a lot about impressive job growth in 2015. For my part, I wrote on this blog last week that the 2.6 million jobs created last year make 2015 the second-best calendar year for job gains of the current recovery.

The tipsheet’s "Chart of the Day," however, suggested that job growth in 2015 was actually lower-than-average if we adjust for the change in the size of the labor force. This is what was in the tipsheet from Politico:


CHART OF THE DAY: NOMINAL JOB GROWTH — Via Hamilton Place Strategies: "Adjusting jobs data to account for labor force shifts can help shed some light on voters' economic angst, even as we see good headline statistics. … Though 2015 was a good year in terms of job growth during the current recovery and had higher-than-average job growth as compared to recent recoveries, 2015 actually had lower-than-average job growth if we adjust for the change in the size of the labor force." http://bit.ly/1OnBXSm


I decided to look at the numbers.

The authors propose that we should "scale" reported job gains by the number of workers, which at first seems to make sense. Surely, an increase in monthly employment of 210,000 cannot mean the same thing when there are already 150 million employed people as when there are just 75 million employed people.

But this intuition is subtly wrong for a simple reason: The age structure of the population may also differ in the two situations I have just described. Suppose when there are 75 million employed people, the population of 20-to-64 year-old people is growing 300,000 every month. Suppose also when there are 150 million employed people, the population of 20-to-64 year-olds is shrinking 100,000 per month. 

Most informed observers would say that job growth of 210,000 a month is much more impressive under the latter assumptions than it is under the first set of assumptions, even though under the latter assumptions the number of employed people is twice as high as it is under the first assumptions.
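
A quick calculation with the hypothetical figures above makes the point concrete. Scaling the 210,000 monthly gain by the level of employment (the tipsheet's approach) makes the second scenario look weaker, while comparing the gain with the growth of the working-age population makes it look far stronger. The sketch below simply runs both comparisons on the invented numbers from the preceding paragraphs:

    # Hypothetical scenarios from the text: the same 210,000 monthly job gain,
    # but different employment levels and working-age population growth.
    job_gain = 210_000
    scenarios = {
        "75M employed, population +300k/month":  {"employed": 75_000_000,  "pop_growth": 300_000},
        "150M employed, population -100k/month": {"employed": 150_000_000, "pop_growth": -100_000},
    }

    for name, s in scenarios.items():
        share_of_employment = job_gain / s["employed"]        # the tipsheet's scaling
        gain_net_of_pop_growth = job_gain - s["pop_growth"]   # gain relative to working-age growth
        print(f"{name}: gain = {share_of_employment:.2%} of employment, "
              f"{gain_net_of_pop_growth:+,} jobs net of population growth")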

BLS estimates show that in the seven years from December 2008 to December 2015, the average monthly growth in the 16-to-64-year-old (noninstitutionalized) U.S. population was 85,200. That is the lowest average growth rate of the working-age population going back to at least 1960. Here are the numbers:

Once we scale the monthly employment gain by the growth in the working-age population, the growth of jobs in recent years has been more impressive—not less—than suggested by the raw monthly totals. Gains in employer payrolls have far surpassed the growth in the number of working-age Americans over the past five years.

Headline writers have been impressed by recent job gains because the job gains have been impressive.


Should Congress raise the full retirement age to 70?


No. We should exempt workers earning the lowest wages.

Social Security faces a serious funding problem. The program takes in too little money to pay all that has been promised to future beneficiaries. Government forecasters predict Social Security’s reserve fund will be depleted between 2030 and 2034. There are two basic ways we can eliminate the funding gap: cut benefits or increase contributions. A common proposal is to increase the age at which workers can claim full retirement benefits. For people nearing retirement today, the full retirement age is 66. As a result of a 1983 law, that age will rise to 67 for workers born after 1959.

When policymakers urge us to raise the retirement age, they are proposing to increase the full retirement age beyond 67, possibly to 70, for workers now in their 30s or 40s. This saves money, but it also cuts monthly retirement benefits by the same percentage for every worker, unless workers delay claiming benefits. The policy might seem fair if workers in future generations could all expect to share in gains in life expectancy. However, new research shows that gains in life expectancy have been very unequal, with the biggest improvements among workers who earn top incomes. Life expectancy gains for workers with the lowest incomes have been small or negligible.

If the full retirement age were raised, future retirees with high lifetime earnings could expect to receive some compensation when their monthly benefits are cut. Because they can expect to live longer than today’s retirees, they will receive benefits for a longer span of years after 65. For low-wage workers, there is no such compensation. Since they are not living longer, their lifetime benefits will fall by the same proportion as their monthly benefits. Thus, “raising the retirement age” is a policy that cuts the lifetime benefits of future low-wage workers by a bigger percentage than it cuts those of future high-wage workers.
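
A stylized example, with invented benefit levels and life expectancies rather than actual Social Security parameters, shows why the same percentage cut in monthly benefits translates into a bigger lifetime cut for workers whose life expectancy has not improved:

    # Stylized illustration with invented numbers; not actual Social Security parameters.
    def lifetime_benefits(monthly_benefit, claim_age, life_expectancy):
        return monthly_benefit * 12 * (life_expectancy - claim_age)

    monthly_cut = 0.15  # assumed cut in monthly benefits from a higher full retirement age
    workers = {
        "high earner (life expectancy 84 -> 87)": {"monthly": 2_000, "le_old": 84, "le_new": 87},
        "low earner (life expectancy 78 -> 78)":  {"monthly": 1_200, "le_old": 78, "le_new": 78},
    }

    for name, w in workers.items():
        before = lifetime_benefits(w["monthly"], 65, w["le_old"])
        after = lifetime_benefits(w["monthly"] * (1 - monthly_cut), 65, w["le_new"])
        print(f"{name}: lifetime benefits change {after / before - 1:+.1%}")

Under these assumptions, the low-wage worker's lifetime benefits fall by the full 15 percent, while the longer retirement largely offsets the cut for the high-wage worker.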

The fact that low-wage workers have seen small or negligible gains in life expectancy signals that their health when they are past 60 is no better than that of low-wage workers born 20 or 30 years ago. This suggests their capacity to work past 60 is no better than it was for past generations. A sensible policy for cutting future benefits should therefore preserve current benefit levels for workers who have contributed to Social Security for many years but have earned low wages.

Editor's note: This piece originally appeared in CQ Researcher.

Publication: CQ Researcher
Image Source: © Lucy Nicholson / Reuters
      
 
 





Labor force dynamics in the Great Recession and its aftermath: Implications for older workers


Unlike prime-age Americans, who have experienced declines in employment and labor force participation since the onset of the Great Recession, Americans past 60 have seen their employment and labor force participation rates increase.

In order to understand the contrasting labor force developments among the old, on the one hand, and the prime-aged, on the other, this paper develops and analyzes a new data file containing information on monthly labor force changes of adults interviewed in the Current Population Survey (CPS).

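To make the approach concrete, the sketch below shows how monthly transition probabilities can be computed from a matched panel of CPS-style records. The file layout and column names are invented for illustration; the paper's actual data construction is more involved.

    import pandas as pd

    # Hypothetical matched panel: one row per person per month, with labor force
    # status E (employed), U (unemployed), or N (not in the labor force).
    # Column names are illustrative, not the paper's actual file layout.
    df = pd.DataFrame({
        "person_id": [1, 1, 2, 2, 3, 3],
        "month":     ["2015-01", "2015-02"] * 3,
        "age_group": ["60-64", "60-64", "25-54", "25-54", "60-64", "60-64"],
        "status":    ["E", "E", "E", "U", "E", "N"],
    })

    # Pair each person-month with the same person's status in the following month.
    df = df.sort_values(["person_id", "month"])
    df["next_status"] = df.groupby("person_id")["status"].shift(-1)
    pairs = df.dropna(subset=["next_status"])

    # Monthly transition probabilities by age group and origin status,
    # e.g., P(E -> N) for people aged 60-64.
    rates = (pairs.groupby(["age_group", "status"])["next_status"]
                  .value_counts(normalize=True)
                  .rename("transition_rate"))
    print(rates)
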
The paper documents notable differences among age groups with respect to the changes in labor force transition rates that have occurred over the past two decades. Crucial to understanding the surprising strength of old-age labor force participation and employment are changes in labor force transition probabilities within and across age groups. The paper identifies several shifts that help account for the increase in old-age employment and labor force participation:

  • Like workers in all age groups, workers in older groups saw a surge in monthly transitions from employment to unemployment in the Great Recession.
  • Unlike workers in prime-age and younger groups, however, older workers also saw a sizeable decline in exits to nonparticipation during and after the recession. While the surge in exits from employment to unemployment tended to reduce the employment rates of all age groups, the drop in employment exits to nonparticipation among the aged tended to hold up labor force participation rates and employment rates among the elderly compared with the nonelderly. Among the elderly, but not the nonelderly, the exit rate from employment into nonparticipation fell more than the exit rate from employment into unemployment increased.
  • The Great Recession and slow recovery from that recession made it harder for the unemployed to transition into employment. Exit rates from unemployment into employment fell sharply in all age groups, old and young.
  • In contrast to unemployed workers in younger age groups, the unemployed in the oldest age groups also saw a drop in their exits to nonparticipation. Compared with the nonaged, this tended to help maintain the labor force participation rates of the old.
  • Flows from out-of-the-labor-force status into employment have declined for most age groups, but they have declined the least or have actually increased modestly among older nonparticipants.

Some of the favorable trends seen in older age groups are likely to be explained, in part, by the substantial improvement in older Americans’ educational attainment. Better educated older people tend to have lower monthly flows from employment into unemployment and nonparticipation, and they have higher monthly flows from nonparticipant status into employment compared with less educated workers.

The policy implications of the paper are:

  • A serious recession inflicts severe and immediate harm on workers and potential workers in all age groups, in the form of layoffs and depressed prospects for finding work.
  • Unlike younger age groups, however, workers in older groups have high rates of voluntary exit from employment and the workforce, even when labor markets are strong. Consequently, reduced rates of voluntary exit from employment and the labor force can have an outsize impact on their employment and participation rates.
  • The aged, as a whole, can therefore experience rising employment and participation rates even as a minority of aged workers suffer severe harm as a result of permanent job loss at an unexpectedly early age and exceptional difficulty finding a new job.
  • Between 2001 and 2015, the old-age employment and participation rates rose, apparently signaling that older workers did not suffer severe harm in the Great Recession.
  • Analysis of the gross flow data suggests, however, that the apparent improvements were the combined result of continued declines in age-specific voluntary exit rates, mostly from the ranks of the employed, and worsening reemployment rates among the unemployed. The older workers who suffered involuntary layoffs were more numerous than before the Great Recession, and they found it much harder to get reemployed than laid-off workers in the years before 2008. The turnover data show that it has proved much harder for these workers to recover from late-career job loss.

Download "Labor Force Dynamics in the Great Recession and its Aftermath: Implications for Older Workers" »

Publication: Center for Retirement Research at Boston College
      
 
 





Lessons of history, law, and public opinion for AI development

Artificial intelligence is not the first technology to concern consumers. Over time, many innovations have frightened users and led to calls for major regulation or restrictions. Inventions such as the telegraph, television, and robots have generated everything from skepticism to outright fear. As AI technology advances, how should we evaluate AI? What measures should be…

       





Bonding for Clean Energy Progress


With Washington adrift and the United Nations climate change panel again calling for action, the search for new clean energy finance solutions continues.  

Against this backdrop, the Metro Program has worked with state- and city-oriented partners to highlight such responses as repurposing portions of states’ clean energy funds and creating state green banks. Likewise, the Center for American Progress recently highlighted the potential of securitization and investment yield vehicles, called "yield cos." And last week an impressive consortium of financiers, state agencies, and philanthropies announced the creation of the Warehouse for Energy Efficiency Loans (WHEEL), aimed at bringing low-cost capital to loan programs for residential energy efficiency. WHEEL is the country’s first true secondary market for home energy loans—and a very big deal.

Another big deal is the potential of bond finance as a tool for clean energy investment at the state and local level. That’s the idea advanced in a new paper released this morning that we developed with practitioners at the Clean Energy Group and the Council for Development Finance Authorities.

For over 100 years, the nation’s state and local infrastructure finance agencies have issued trillions of dollars’ worth of public finance bonds to fund the construction of the nation’s roads, bridges, hospitals, and other infrastructure—and literally built America. Now, as clean energy subsidies from Washington dwindle, these agencies are increasingly willing to finance clean energy projects, if only the clean energy community will embrace them.

So far, these authorities are only experimenting. However, the bond finance community has accumulated significant experience in getting to scale and knows how to raise large sums for important purposes by selling bonds to Wall Street. Accordingly, the clean energy community—working at the state and regional level—should leverage that expertise. The challenge is for the clean energy and bond finance communities to work collaboratively to create new models for clean energy bond finance in states, and so to establish a new clean energy asset class that can easily be traded in capital markets.

Along these lines, our new brief argues that state and local bonding authorities, clean energy leaders, and other partners should do the following: 

  • Establish mutually useful partnerships between development finance experts and clean energy officials at the state and local government levels
  • Expand and scale up bond-financed clean energy projects using credit enhancement and other emerging tools to mitigate risk and through demonstration projects
  • Improve availability of data and develop standardized documentation so that the risks and rewards of clean energy investments can be better understood
  • Create a pipeline of rated and private placement deals, in effect a new clean energy asset class, to meet the demand by institutional investors for fixed-income clean energy securities

And it’s happening. Already, bonding has been embraced in smart ways in New York; Hawaii; Morris County, NJ; and Toledo, among other locations featured in our paper. Now, it’s time for states and municipalities to increase the use of bonds for clean energy purposes. If they can do that, it will be yet another instance of the nation’s states, metro areas, and private sector stepping up with a major breakthrough at a moment of federal inaction.

Image Source: © ERIC THAYER / Reuters
      
 
 





Clean Energy Finance Through the Bond Market: A New Option for Progress


State and local bond finance represents a powerful but underutilized tool for future clean energy investment.

For 100 years, the nation’s state and local infrastructure finance agencies have issued trillions of dollars’ worth of public finance bonds to fund the construction of the nation’s roads, bridges, hospitals, and other infrastructure—and literally built America. Now, as clean energy subsidies from Washington dwindle, these agencies are increasingly willing to finance clean energy projects, if only the clean energy community will embrace them.

So far, these authorities are only experimenting. However, the bond finance community has accumulated significant experience in getting to scale and knows how to raise large amounts for important purposes by selling bonds to Wall Street. The challenge is therefore to create new models for clean energy bond finance in states and regions, and so to establish a new clean energy asset class that can easily be traded in capital markets. To that end, this brief argues that state and local bonding authorities and other partners should do the following:

  • Establish mutually useful partnerships between development finance experts and clean energy officials at the state and local government levels
  • Expand and scale up bond-financed clean energy projects using credit enhancement and other emerging tools to mitigate risk and through demonstration projects
  • Improve the availability of data and develop standardized documentation so that the risks and rewards of clean energy investments can be better understood
  • Create a pipeline of rated and private placement deals, in effect a new clean energy asset class, to meet the demand by institutional investors for fixed-income clean energy securities

Image Source: © Steve Marcus / Reuters
      
 
 





The free-world strategy progressives need

      
 
 





Not just for the professionals? Understanding equity markets for retail and small business investors


Event Information

April 15, 2016
9:00 AM - 12:30 PM EDT

The Brookings Institution
Falk Auditorium
1775 Massachusetts Ave., N.W.
Washington, DC 20036

The financial crisis is now eight years behind us, but its legacy lingers on. Many Americans are concerned about their financial security and are particularly worried about whether they will have enough for retirement. Guaranteed benefit pensions are gradually disappearing, leaving households to save and invest for themselves. What role could equities play for retail investors?

Another concern about the lingering impact of the crisis is that business investment and overall economic growth remain weak compared to expectations. Large companies are able to borrow at low interest rates, yet many of them have large cash holdings. However, many small and medium-sized enterprises face difficulty funding their growth, paying high risk premiums on their borrowing and, in some cases, being unable to fund investments they would like to make. Equity funding can be an important source of growth financing.

On Friday, April 15, the Initiative on Business and Public Policy at Brookings examined what role equity markets can play in individual retirement security and small business investment, and whether they can help jump-start American innovation culture by fostering the transition from startups to billion-dollar companies.

You can join the conversation and tweet questions for the panelists at #EquityMarkets.


      
 
 





How a U.S. embassy in Jerusalem could actually jump-start the peace process

President-elect Donald Trump has said that he aspires to make the “ultimate deal” to end the Israeli-Palestinian conflict, while also promising to move the U.S. embassy in Israel to Jerusalem. As I wrote in a recent op-ed in The New York Times, those two goals seem at odds, since relocating the embassy under current circumstances […]

      
 
 





Moving to Access: Is the current transport model broken?

For several generations, urban transportation policymakers and practitioners around the world favored a “mobility” approach, aimed at moving people and vehicles as fast as possible by reducing congestion. The limits of such an approach, however, have become more apparent over time, as residents struggle to reach workplaces, schools, hospitals, shopping, and numerous other destinations in […]

      
 
 





Congress pushed out that massive emergency spending bill quickly. Here are four reasons why.

       





The politics of Congress’s COVID-19 response

In the face of economic and health challenges posed by COVID-19, Congress, an institution often hamstrung by partisanship, quickly passed a series of bills allocating trillions of dollars for economic stimulus and relief. In this episode, Sarah Binder joins David Dollar to discuss the politics behind passing that legislation and lingering uncertainties about its oversight…

       





Congress and Trump have produced four emergency pandemic bills. Don’t expect a fifth anytime soon.

       





Two Cheers for Our Peculiar Politics: America’s Political Process and the Economic Crisis

Pietro Nivola offers two cheers, instead of three, for the American political system in light of the latest global economic concerns. He argues that since 2008, the federal government has not committed many basic economic blunders, but fiscal policy could improve on the state and local levels.

      
 
 





Reckless in Riyadh

       





The places a COVID-19 recession will likely hit hardest

At first blush, it seems like the coronavirus pandemic is shutting down the economy everywhere, equally, with frightening force and totality. In many respects, that’s true: Across the country, consumer spending—which supports 70% of the economy—is crashing in community after community, as people avoid stores, restaurants, movie theaters, offices, and other public places. Already, the…

       





Stimulus steps the US should take to reduce regional economic damages from the COVID-19 recession

The coronavirus pandemic seems likely to trigger a severe worldwide recession of uncertain length. In addition to responding to the public health needs, policymakers are debating how they can respond with creative new economic policies, which are now urgently needed. One strategy they should consider is both traditional and yet oddly missing from the current…

       





The robots are ready as the COVID-19 recession spreads

As if American workers don’t have enough to worry about right now, the COVID-19 pandemic is resurfacing concerns about technology’s impact on the future of work. Put simply, any coronavirus-related recession is likely to bring about a spike in labor-replacing automation. What’s the connection between recessions and automation? On its face, the transition to automation may…

       





Class Notes: Unequal Internet Access, Employment at Older Ages, and More

This week in Class Notes: The digital divide—the correlation between income and home internet access—explains much of the inequality we observe in people's ability to self-isolate. The labor force participation rate among older Americans and the age at which they claim Social Security retirement benefits have risen in recent years. Higher minimum wages lead to a greater prevalence…

       





The effect of COVID-19 and disease suppression policies on labor markets: A preliminary analysis of the data

World leaders are deliberating when and how to re-open business operations amidst considerable uncertainty as to the economic consequences of the coronavirus. One pressing question is whether or not countries that have remained relatively open have managed to escape at least some of the economic harm, and whether that harm is related to the spread…

       





Assessing your innovation district: A how-to guide

“Assessing your innovation district: A how-to guide,” is a tool for public and private leaders to audit the assets that comprise their local innovation ecosystem. The guide is designed to reveal how to best target resources toward innovative and inclusive economic development tailored to an area’s unique strengths and challenges. Over the past two decades,…

       





Assessing your innovation district: Five key questions to explore

Over the past two decades, a confluence of changing market demands and demographic preferences has led to a revaluation of urban places—and a corresponding shift in the geography of innovation. This trend has resulted in a clustering of firms, intermediaries, and workers—often near universities, medical centers, or other anchors—in dense innovation districts. Local economic development…

       





People In Transition: Assessing the Economies of Central and Eastern Europe and the CIS

After 17 years of transition to market economies in central and eastern Europe and the Commonwealth of Independent States (CIS), are people better off now than they were in 1989? Brookings Global recently hosted a presentation by Senior Fellow and European Bank for Reconstruction & Development (EBRD) Chief Economist, Erik Berglöf, on the 2007 Transition…

       





Progress in Emerging Markets is Being Put at Risk

Finance ministers of the Group of Eight leading economies have commissioned a study on the role of financial market speculation in recent oil price rises. In India, the regulator recently suspended trade in futures markets for several commodities, blaming speculators for price rises. The global credit crisis has made the financial sector vulnerable to populist…

       





The future of school accountability under ESSA

With the Every Student Succeeds Act (ESSA) replacing No Child Left Behind as the new federal education law, states have gained greater freedom to personalize their education policies. ESSA’s promise of decentralization is a victory for state education leaders, but also transfers to them the responsibility of ensuring that school systems are held accountable. During…

       





Comments on “How automation and other forms of IT affect the middle class: Assessing the estimates” by Jaimovich and Siu

Nir Jaimovich and Henry Siu have written a very helpful and useful paper that summarizes the empirical literature by labor economists on how automation affects the labor market and the middle class. Their main arguments can be summarized as follows: The labor markets in the US (and other industrialized countries) have become increasingly “polarized” in…

       





Turkey cannot effectively fight ISIS unless it makes peace with the Kurds


Terrorist attacks with high casualties usually create a sense of national solidarity and patriotic reaction in societies that fall victim to such heinous acts. Not in Turkey, however. Despite a growing number of terrorist attacks by the so-called Islamic State on Turkish soil in the last 12 months, the country remains as polarized as ever under strongman President Recep Tayyip Erdogan.

In fact, for two reasons, jihadist terrorism is exacerbating the division. First, Turkey's domestic polarization already has an Islamist-versus-secularist dimension. Most secularists hold Erdogan responsible for having created domestic political conditions under which authorities turned a blind eye to jihadist activities within Turkey.

It must also be said that polarization between secularists and Islamists in Turkey often fails to capture the complexity of Turkish politics, where not all secularists are democrats and not all Islamists are autocrats. In fact, there was a time when Erdogan was hailed as the great democratic reformer against the old secularist establishment under the guardianship of the military.

Yet, in the last five years, the religiosity and conservatism of the ruling Justice and Development Party, also known by its Turkish acronym AKP, on issues ranging from gender equality to public education has fueled the perception of rapid Islamization. Erdogan's anti-Western foreign policy discourse -- and the fact that Ankara has been strongly supportive of the Muslim Brotherhood in the wake of the Arab Spring -- exacerbates the secular-versus-Islamist divide in Turkish society.

The days Erdogan represented the great hope of a Turkish model where Islam, secularism, democracy and pro-Western orientation came together are long gone. Despite all this, it is sociologically more accurate to analyze the polarization in Turkey as one between democracy and autocracy rather than one of Islam versus secularism.

The second reason why ISIS terrorism is exacerbating Turkey's polarization is related to foreign policy. A significant segment of Turkish society believes Erdogan's Syria policy has ended up strengthening ISIS. In an attempt to facilitate Syrian President Bashar Assad's overthrow, the AKP turned a blind eye to the flow of foreign volunteers transiting Turkey to join extremist groups in Syria. Until last year, Ankara often allowed Islamists to openly organize and procure equipment and supplies on the Turkish side of the Syria border.

Making things worse is the widely held belief that Turkey's National Intelligence Organization, or MİT, facilitated the supply of weapons to extremist Islamist elements amongst the Syrian rebels. Most of the links were with organizations such as Jabhat al-Nusra, Ahrar al-Sham and Islamist extremists from Syria's Turkish-speaking Turkmen minority.

Turkey's support for Islamist groups in Syria had another rationale in addition to facilitating the downfall of the Assad regime: the emerging Kurdish threat in the north of the country. Syria's Kurds are closely linked with Turkey's Kurdish nemesis, the Kurdistan Workers' Party, or PKK, which has been conducting an insurgency for greater rights for Turkey's Kurds since 1984.

Ankara has hardened its stance against ISIS by opening the airbase at Incirlik in southern Turkey for use by the U.S.-led coalition targeting the organization with air strikes. However, Erdogan doesn't fully support the eradication of jihadist groups in Syria. The reason is simple: the Arab and Turkmen Islamist groups are the main bulwark against the expansion of the de facto autonomous Kurdish enclave in northern Syria. The AKP is concerned that the expansion and consolidation of a Kurdish state in Syria would both strengthen the PKK and further fuel similar aspirations amongst Turkey's own Kurds.

Will the most recent ISIS terrorist attack in Istanbul change anything in Turkey's main threat perception? When will the Turkish government finally realize that the jihadist threat in the country needs to be prioritized? If you listen to Erdogan's remarks, you will quickly realize that the real enemy he wants to fight is still the PKK. He tries hard after each ISIS attack to create a "generic" threat of terrorism in which all groups are bundled up together without any clear references to ISIS. He is trying to present the PKK as enemy number one.

Under such circumstances, Turkish society will remain deeply polarized between Islamists, secularists, Turkish nationalists and Kurdish rebels. Terrorist attacks, such as the one in Istanbul this week and the one in Ankara in July that killed more than 100 people, will only exacerbate these divisions.

Finally, it is important to note that the Turkish obsession with the Kurdish threat has also created a major impasse in Turkish-American relations in Syria. Unlike Ankara, Washington's top priority in Syria is to defeat ISIS. The fact that U.S. strategy consists of using proxy forces such as Syrian Kurds against ISIS further complicates the situation.

There will be no real progress in Turkey's fight against ISIS unless there is a much more serious strategy to get Ankara to focus on peace with the PKK. Only after a peace process with Kurds will Turkey be able to understand that ISIS is an existential threat to national security.

This piece was originally posted by The Huffington Post.

Publication: The Huffington Post
Image Source: © Murad Sezer / Reuters
      
 
 





Algorithms and sentencing: What does due process require?

There are significant potential benefits to using data-driven risk assessments in criminal sentencing. For example, risk assessments have rightly been endorsed as a mechanism to enable courts to reduce or waive prison sentences for offenders who are very unlikely to reoffend. Multiple states have recently enacted laws requiring the use of risk assessment instruments. And…

       





Products liability law as a way to address AI harms

Artificial intelligence (AI) is a transformative technology that will have a profound impact on manufacturing, robotics, transportation, agriculture, modeling and forecasting, education, cybersecurity, and many other applications. The positive benefits of AI are enormous. For example, AI-based systems can lead to improved safety by reducing the risks of injuries arising from human error. AI-based systems…

       





The US-Africa Business Forum: Africa’s “middle class” and the “in-between” sector—A new opening for manufacturing?

Editor’s Note: On September 21, the Department of Commerce and Bloomberg Philanthropies are hosting the second U.S.-Africa Business Forum. Building on the forum in 2014, this year’s meeting again hosts heads of state, U.S. CEOs, and African business leaders, but aims to go beyond past commitments and towards effective implementation. This year’s forum will focus on six sectors important…

      
 
 





The Law Firm Business Model Is Dying

Clifford Winston and Robert Crandall say that the bankruptcies of major, long-standing law firms signal a change in how businesses and the public are choosing to find legal services. Winston and Crandall argue that deregulation would revitalize the industry, bringing new ideas, technologies, talents and operating procedures into the practice of law.

      
 
 





Party Fundraising Success Continues Through Mid-Year

With only a few months remaining before the 2004 elections, national party committees continue to demonstrate financial strength and noteworthy success in adapting to the more stringent fundraising rules imposed by the Bipartisan Campaign Reform Act (BCRA). A number of factors, including the deep partisan divide in the electorate, the expectations of a close presidential race, and the growing competition in key Senate and House races, have combined with recent party investments in new technology and the emergence of the Internet as a major fundraising tool to produce what one party chairman has described as a "perfect storm" for party fundraising.[1] Consequently, both national parties have exceeded the mid-year fundraising totals achieved in 2000, and both approach the general election with substantial amounts of money in the bank.

After eighteen months of experience under the new rules, the national parties are still outpacing their fundraising efforts of four years ago. As of June 30, the national parties have raised $611.1 million in federally regulated hard money alone, as compared to $535.6 million in hard and soft money combined at a similar point in the 2000 election cycle. The Republicans lead the way, taking in more than $381 million as compared to about $309 million in hard and soft money by the end of June in 2000. The Democrats have also raised more, bringing in $230 million as compared to about $227 million in hard and soft money four years ago. Furthermore, with six months remaining in the election cycle, both national parties have already raised more hard money than they did in the 2000 election cycle.[2] In fact, by the end of June, every one of the Democratic and Republican national party committees had already exceeded its hard money total for the entire 2000 campaign.[3]

This surge in hard money fundraising has allowed the national party committees to replace a substantial portion of the revenues they previously received through unlimited soft money contributions. Through June, these committees have already taken in enough additional hard money to compensate for the $254 million of soft money that they had garnered by this point in 2000, which represented a little more than half of their $495 million in total soft money receipts in the 2000 election cycle.
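
That replacement claim can be checked roughly with the mid-year figures cited above; the hard-money-only figure for mid-2000 is derived here by subtracting the soft money portion from the combined total:

    # Rough check using the mid-year figures cited in the text (millions of dollars).
    hard_2004 = 611.1            # hard money raised by June 30, 2004
    hard_and_soft_2000 = 535.6   # hard plus soft money raised by June 30, 2000
    soft_2000 = 254.0            # soft money portion of the mid-2000 total

    hard_2000 = hard_and_soft_2000 - soft_2000   # about 281.6
    additional_hard = hard_2004 - hard_2000      # about 329.5
    print(f"Hard-money gain of ${additional_hard:.1f} million exceeds the "
          f"${soft_2000:.0f} million of soft money forgone: {additional_hard > soft_2000}")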

View the accompanying data tables (PDF - 11.4 KB)


[1] Terrence McAuliffe, Democratic National Committee Chairman, quoted in Paul Fahri, "Small Donors Grow Into Big Political Force," Washington Post, May 3, 2004, p. A11.
[2] In 2000, the Republican national party committees raised $361.6 million in hard money, while the Democratic national committees raised $212.9 million. These figures are based on unadjusted data and do not take into account any transfers of funds that may have taken place among the national party committees.
[3] The election cycle totals for 2000 can be found in Federal Election Commission, "FEC Reports Increase in Party Fundraising for 2000," press release, May 15, 2001. Available at http://www.fec.gov/press/press2001/051501partyfund/051501partyfund.html (viewed July 28, 2004).


     
 
 





Financing the 2008 Election: Assessing Reform


Brookings Institution Press, 2011, 341 pp.

The 2008 elections were by any standard historic. The nation elected its first African American president, and the Republicans nominated their first female candidate for vice president. More money was raised and spent on federal contests than in any election in U.S. history. Barack Obama raised a record-setting $745 million for his campaign, and federal candidates, party committees, and interest groups also raised and spent record-setting amounts. Moreover, the way money was raised by some candidates and party committees has the potential to transform American politics for years to come.

The latest installment in a series that dates back half a century, Financing the 2008 Election is the definitive analysis of how campaign finance and spending shaped the historic presidential and congressional races of 2008. It explains why these records were set and what it means for the future of U.S. politics. David Magleby and Anthony Corrado have assembled a team of experts who join them in exploring the financing of the 2008 presidential and congressional elections. They provide insights into the political parties and interest groups that made campaign finance history and summarize important legal and regulatory changes that affected these elections.

Contributors: Allan Cigler (University of Kansas), Stephanie Perry Curtis (Brigham Young University), John C. Green (Bliss Institute at the University of Akron), Paul S. Herrnson (University of Maryland), Diana Kingsbury (Bliss Institute at the University of Akron), Thomas E. Mann (Brookings Institution).

ABOUT THE EDITORS

Anthony Corrado
David B. Magleby
David B. Magleby is dean of the College of Family, Home, and Social Sciences and Distinguished Professor of Political Science at Brigham Young University. He is the author of Financing the 2000 Election, a coeditor with Corrado of Financing the 2004 Election, and coauthor of Government by the People (Pearson Prentice Hall), now in its 21st edition.

Ordering Information: ISBN 978-0-8157-0332-7, $32.95

@ Brookings Podcast: The Politics and Process of Congressional Redistricting

Now that the 2010 Census is concluded, states will begin the process of redistricting—redrawing voting district lines to account for reapportionment and population shifts. Nonresident Senior Fellow Michael McDonald says redistricting has been fraught with controversy and corruption since the nation’s early days, when the first “gerrymandered” district was drawn. Two states—Arizona and California—have instituted redistricting commissions intended to insulate the process from political shenanigans, but politicians everywhere will continue to work the system to gain electoral advantage and the best chance of re-election for themselves and their parties.


A Status Report on Congressional Redistricting


Event Information

July 18, 2011
10:00 AM - 11:30 AM EDT

Falk Auditorium
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC


A full video archive of this event is also available via C-SPAN.

The drawing of legislative district boundaries is arguably among the most self-interested and least transparent systems in American democracy. Every ten years redistricting authorities, usually state legislatures, redraw congressional and legislative lines in accordance with Census reapportionment and population shifts within states. Most state redistricting authorities are in the midst of their redistricting process, while others have already finished redrawing their state and congressional boundaries. A number of initiatives—from public mapping competitions to independent shadow commissions—have been launched to open up the process to the public during this round of redrawing district lines.

On July 18, Brookings hosted a panel of experts to review the results coming in from the states and discuss how the rest of the process is likely to unfold. Panelists focused on evidence of partisan or bipartisan gerrymandering, the outcome of transparency and public mapping initiatives, and minority redistricting.

After the panel discussion, participants took audience questions.


Social Security Smörgåsbord? Lessons from Sweden’s Individual Pension Accounts

President Bush has proposed adding optional personal accounts as one of the central elements of a major Social Security reform proposal. Although many details remain to be worked out, the proposal would allow individuals who choose to do so to divert part of the money they currently pay in Social Security taxes into individual investment…

       





Bridging the Social Security Divide: Lessons From Abroad

Executive Summary: Efforts by President George W. Bush to promote major reforms in the Social Security retirement program have not led to policy change, but rather to increased polarization between the two parties. And the longer we wait to address Social Security’s long-term funding problem, the bigger and more painful the changes will need to…

       





Federal R&D: Why is defense dominant yet less talked about?


Federal departments and agencies received just over $133 billion in R&D funds in 2013. To put that figure in perspective, World Bank data for 2013 shows that 130 countries had a GDP below that level; U.S. R&D is larger than the entire economy of 60 percent of all countries in the world.

The chart below shows how those funds are allocated among the most important federal departments and agencies in terms of R&D.

Those looking at these figures for the first time may be surprised to see that the Department of Defense takes about half of the pie. It should be noted, however, that not all federal R&D is destined to preserve U.S. military preeminence in the world. Of non-defense research, 42 percent goes to the much-needed research conducted by the National Institutes of Health, 17 percent to the research of the Department of Energy—owner of 17 celebrated national laboratories—16 percent to space exploration, and 8 percent to understanding the natural and social worlds at a fundamental level. The balance category is lumped together only for visual display, not because it is unimportant; it includes, for instance, the significant work of the National Oceanic and Atmospheric Administration and the National Institute of Standards and Technology.
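
To make those shares concrete, the short, illustrative Python sketch below converts the percentages quoted above into rough dollar figures. The $133 billion total and the percentage split come from the text; the assumption that defense takes exactly half, the attribution of the space and fundamental-science shares to NASA and NSF, and the resulting dollar amounts are simplifications for illustration only.

```python
# Illustrative conversion of the shares quoted above into rough dollar figures.
# The $133 billion total and the percentage shares come from the text; the
# 50 percent defense assumption and the agency attributions in parentheses
# are simplifications for illustration only.

total_rd = 133.0                      # total federal R&D, billions of dollars (2013)
defense_rd = total_rd * 0.5           # "about half of the pie"
nondefense_rd = total_rd - defense_rd

nondefense_shares = {
    "National Institutes of Health": 0.42,
    "Department of Energy": 0.17,
    "Space exploration (NASA)": 0.16,
    "Fundamental natural and social science (NSF)": 0.08,
    "Balance (NOAA, NIST, and others)": 0.17,
}

print(f"Defense R&D: ~${defense_rd:.1f} billion")
for category, share in nondefense_shares.items():
    print(f"{category}: ~${nondefense_rd * share:.1f} billion")
```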

Despite the impressive size of defense R&D, we hear little about it. Much of defense research and development is classified, although, in time, civilian applications find their way into mainstream commercial uses—the Internet and GPS, for example, emerged from defense research. Far more visible than defense R&D are biomedical research, clean energy research, and news about truly impressive discoveries either in distant galaxies or in the depths of our oceans.

What produces this asymmetry of visibility of federal R&D work?

In a recent Brookings paper, a colleague and I suggest that the answer lies in the prominence of R&D in the agencies’ accounting books. In short: how visible R&D is, and how much an agency seeks to discuss it in public fora, depends not on its relative importance but on how large a portion of the agency’s budget is dedicated to R&D.

From a budget perspective, we identified two types of agencies performing R&D: those whose main mission is to perform research and development, and those that perform many functions in addition to R&D. For the former, the share of R&D in the discretionary budget is consistently high, while for the latter group R&D is only a small part of the total budget (see the chart below). This distinction influences how agencies argue for their R&D money, because they make their case on the most important uses of their budget. Agencies with a low R&D share keep it mixed with other functions and programs; research efforts are justified only as supporting the main agency mission. In turn, agencies with a high R&D share must argue for their budgets by highlighting the social outcomes of their work. The latter group includes three agencies whose primary mission is research (NASA, NSF, and NIH) and a fourth (DoE) where research is a significant element of its mission.
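
A minimal sketch can make the two-type distinction concrete. In the snippet below, only the distinction itself comes from the argument above; the 50 percent threshold and the example budget shares are hypothetical placeholders, not actual budget data.

```python
# Minimal sketch of the budget-share distinction described above. The 0.5
# threshold and the example shares are hypothetical placeholders; only the
# two-type distinction itself comes from the text.

def classify_by_rd_share(rd_share_of_discretionary: float, threshold: float = 0.5) -> str:
    """Label an agency by how dominant R&D is in its discretionary budget."""
    if rd_share_of_discretionary >= threshold:
        return "research-mission agency (argues for R&D on its social outcomes)"
    return "multi-mission agency (folds R&D into its broader mission)"

# Hypothetical illustrative shares, not actual budget data.
example_agencies = {
    "NSF": 0.95,
    "NIH": 0.90,
    "NASA": 0.80,
    "DoE": 0.60,
    "DoD": 0.10,
    "USDA": 0.05,
}

for agency, share in example_agencies.items():
    print(f"{agency}: {classify_by_rd_share(share)} (R&D share {share:.0%})")
```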

There is little question that the four agencies with high R&D share produce greatly beneficial research for society. Their strategy of promoting their work publicly is not only smart budget politics but also civic and pedagogical in the sense of helping taxpayers understand that their tax dollars are well-spent. However, it is interesting to observe that other agencies may be producing research of equal social impact that flies under the public radar, mainly because those agencies prefer as a matter of good budget policy to keep a low profile for their R&D work.

One interesting conclusion for institutional design from this analysis is that promoting a research agency to the level of a government department, or its director to a cabinet-rank position, may bring prominence to its research, not because more and better research will necessarily get done but simply because that agency will seek public recognition for its work in order to justify its budget. Conversely, placing a research agency within a larger department may help conceal and protect its R&D funding; the politics of the department will focus on its main goals, and R&D will recede to a concern of secondary interest in political battles.

In The Politics of Federal R&D, we discuss in more detail the changing politics of the budget and how R&D agencies can respond. The general strategies of concealment and self-promotion are likely to become more important for agencies seeking to protect steady growth in their research and development budgets.

Data sources: R&D data from the American Association for the Advancement of Science's historical trends in federal R&D; total discretionary spending by federal agency from the Office of Management and Budget.

Image Source: © Edgar Su / Reuters
      
 
 





Gene editing: New challenges, old lessons


It has been hailed as the most significant discovery in biology since polymerase chain reaction allowed for the mass replication of DNA samples. CRISPR-Cas9 is an inexpensive and easy-to-use gene-editing method that promises applications ranging from medicine to industrial agriculture to biofuels. Currently, applications to treat leukemia, HIV, and cancer are under experimental development.1 However, new technical solutions tend to be fraught with old problems, and in this case, ethical and legal questions loom large over the future.

Disagreements on ethics

The uptake of this method has been so fast that many scientists have started to worry about inadequate regulation of research and its unanticipated consequences.2 Consider, for instance, the disagreement over research on human germ cells (eggs, sperm, or embryos), where an edited gene is passed on to offspring. Since the emergence of bioengineering applications in the 1970s, the scientific community has eschewed experiments to alter the human germline, and some governments have even banned them.3 Regulatory regimes are, as one would expect, not uniform: China, for instance, bans the implantation of genetically modified embryos in women but not research with embryos.

Last year, a group of Chinese researchers conducted gene-editing experiments on non-viable human zygotes (fertilized eggs) using CRISPR.4 News that these experiments were underway prompted a group of leading U.S. geneticists to meet in March 2015 in Napa, California, to begin a serious consideration of the ethical and legal dimensions of CRISPR; they called for a moratorium on research editing genes in the human germline.5 Disregarding that call, the Chinese researchers published their results later in the year, largely reporting a failure to precisely edit targeted genes without accidentally editing non-targets. CRISPR is not yet sufficiently precise.

CRISPR reignited an old debate on human germline research, which was one of the central motivations (though surely not the only one) for an international summit on gene editing hosted by the U.S. National Academies of Sciences, the Chinese Academy of Sciences, and the U.K.'s Royal Society in December 2015. About 500 scientists, as well as experts in the legal and ethical aspects of bioengineering, attended.6 Rather than consensus, the meeting highlighted the significant contrasts among participants about the ethics of inquiry and, more generally, about the governance of science. Illustrative of these contrasts are the views of prominent geneticists Francis Collins, director of the National Institutes of Health, and George Church, professor of genetics at Harvard. Collins argues that the “balance of the debate leans overwhelmingly against human germline engineering.” In turn, Church, while a signatory of the moratorium called for by the Napa group, has nevertheless suggested reasons why CRISPR is shifting the balance in favor of lifting the ban on human germline experiments.7

The desire to speed up discovery of cures for heritable diseases is laudable. But tinkering with the human germline is truly a human concern and cannot be presumed to be the exclusive jurisdiction of scientists, clinicians, or patients. All members of society have a stake in the evolution of CRISPR and must be part of the conversation about what kind of research should be permitted, what should be discouraged, and what should be disallowed. To relegate lay citizens to reacting to CRISPR applications—i.e., to voting with their wallets once applications hit the market—is to reduce their citizenship to consumer rights, and public participation to purchasing power.8 Yet neither the NAS summit nor the earlier Napa meeting sought to solicit the perspectives of citizens, groups, and associations other than those already tuned in to the CRISPR debates.9

The scientific community has a bond to the larger society in which it operates; in its most basic form, it is the bond of the scientist to her national community, the notion that the scientist is a citizen of society before she is a denizen of science. This bond entails liberties and responsibilities that transcend the ethos and telos of science and, consequently, subordinates science to the social compact. It is worth recalling this old lesson from the history of science as we continue the public debate on gene editing. Scientists are free to hold specific moral views and prescriptions about the proper conduct of research and the ethical limits of that conduct, but they are not free to exclude the rest of society from weighing in on the debate with their own values and moral imaginations about what should be permitted and what should be banned in research. The governance of CRISPR is a question of collective choice that must be answered by means of democratic deliberation and, when irreconcilable differences arise, by the due process of democratic institutions.

Patent disputes

More heated than the ethical debate is the legal battle for key CRISPR patents that has embroiled prominent scientists involved in perfecting this method. The U.S. Patent and Trademark Office initiated a formal contestation process, called interference, in March 2016 to adjudicate the dispute. The process is likely to take years and appeals are expected to extend further in time. Challenges are also expected to patents filed internationally, including those filed with the European Patent Office.

To put this dispute in perspective, it is instructive to consider the history of CRISPR authored by one of the celebrities of gene science, Eric Lander.10 This article ignited a controversy because it understated the role of one of the parties to the patent dispute (Jennifer Doudna and Emmanuelle Charpentier) while casting the other party (Feng Zhang, who is affiliated with Lander’s Broad Institute) as the one who truly culminated the development of this technology. Some gene scientists accused Lander of tendentious inaccuracies and of trying to spin the story in a manner that favors the legal argument (and economic interest) of Zhang.

Ironically, the contentious article could be read as an argument against any particular claim to the CRISPR patents, as it implicitly questions the fairness of granting exclusive rights to an invention. Lander recounts a genesis of CRISPR that extends over two decades and across various countries, in which the protagonists are the many researchers who contributed to the cumulative knowledge behind the method’s ongoing development. The very title of Lander’s piece, “The Heroes of CRISPR,” highlights that the technology has not one but a plurality of authors.

A patent is a legal instrument that recognizes certain rights of the patent holder (an individual, group, or organization) and at the same time denies those rights to everyone else, including the other contributors to the invention. Patent rights are thus arbitrary in the light of history. I am not suggesting that the bureaucratic rules used to grant a patent or to determine its validity are arbitrary; they have logical rationales anchored in practice and precedent. I am suggesting that, in principle, any exclusive assignment of rights that does not include the entire community responsible for the invention is arbitrary and thus unfair. The history of CRISPR highlights this old lesson from the history of technology: an invention does not belong to its patent holder, except in a court of law.

Some scientists may be willing to accept with resignation the unfair distribution of recognition granted by patents (or by prizes like the Nobel) and find consolation in the fact that their contribution to science has real effects on people’s lives as it materializes in things like new therapies and drugs. Yet patents are also instrumental in distributing those real effects quite unevenly. Patents create monopolies that, by selling their innovations at high prices, benefit only those who can afford them. The regular refrain to this charge is that without the promise of high profits there would be no investment in innovation and no advances in life-saving medicine. What’s more, the biotech industry reminds us that start-ups will secure capital injections only if they have exclusive rights to the technologies they are developing. Yet Editas Medicine, a biotech start-up that seeks to exploit commercial applications of CRISPR (Zhang is a stakeholder), was able to raise $94 million in its February 2016 initial public offering. That some of Editas’ key patents are disputed and were entering interference at the USPTO was patently not a deterrent for those investors.

Towards a CRISPR democratic debate

Neither the governance of gene-editing research nor the management of CRISPR patents should be the exclusive responsibility of scientists. Yet, they do enjoy an advantage in public deliberations on gene editing that is derived from their technical competence and from the authority ascribed to them by society. They can use this advantage to close the public debate and monopolize its terms, or they could turn it into stewardship of a truly democratic debate about CRISPR.

The latter choice can benefit from three steps. A first step would be openness: a public willingness to consider and internalize public values that are not easily reconciled with research values. A second step would be self-restraint: publicly affirming a self-imposed ban on research with the human germline and discouraging research practices that are contrary to received norms of prudence. A third useful step would be a public service orientation in the use of patents: scientists should press their universities, which hold title to their inventions, to preserve some degree of influence over research commercialization so that dissemination of and access to innovations remain consonant with the noble aspirations of science and the public service mission of the university. Openness, self-restraint, and an orientation to service from scientists will go a long way toward making CRISPR a true servant of society and an instrument of democracy.


Other reading: See media coverage compiled by the National Academies of Sciences.

1Nature: an authoritative and accessible primer. A more technical description of applications in Hsu, P. D. et al. 2014. Cell, 157(6): 1262–1278.

2For instance, see this reflection in Science, and this in Nature.

3More about ethical concerns on gene editing here: http://www.geneticsandsociety.org/article.php?id=8711

4Liang, P. et al. 2015. Protein & Cell, 6, 363–372

5Science: A prudent path forward for genomic engineering and germline gene modification.

6Nature: NAS Gene Editing Summit.

7While Collins and Church participated in the summit, their views quoted here are from StatNews.com: A debate: Should we edit the human germline. See also Sciencenews.org: Editing human germline cells sparks ethics debate.

8Hurlbut, J. B. 2015. Limits of Responsibility, Hastings Center Report, 45(5): 11-14.

9This point is forcefully made by Sheila Jasanoff and colleagues: CRISPR Democracy, 2015 Issues in S&T, 22(1).

10Lander, E. 2016. The Heroes of CRISPR. Cell, 164(1-2): 18-28.

Image Source: © Robert Pratta / Reuters
      
 
 





‘Essential’ cannabis businesses: Strategies for regulation in a time of widespread crisis

Most state governors and cannabis regulators were underprepared for the COVID-19 pandemic, a crisis that is affecting every economic sector. But because the legal cannabis industry is relatively new in most places and still evolving everywhere, the challenges are even greater. What’s more, there is no history that could help us understand how the industry will endure the current economic situation. And so, in many…

       





Realist or neocon? Mixed messages in Trump advisor’s foreign policy vision


Last night, retired Lieutenant General Michael Flynn addressed the Republican convention as a headline speaker on the subject of national security. One of Donald Trump’s closest advisors—so much so that he was considered for vice president—Flynn repeated many of the themes found in his new book, The Field of Fight: How We Can Win the Global War Against Radical Islam and Its Allies, which he coauthored with Michael Ledeen. (The book is published by St. Martin’s, which also published mine.)

Written in Flynn’s voice, the book advances two related arguments: First, the U.S. government does not know enough about its enemies because it does not collect enough intelligence, and it refuses to take ideological motivations seriously. Second, our enemies are collaborating in an “international alliance of evil countries and movements that is working to destroy” the United States despite their ideological differences.

Readers will immediately notice a tension between the two ideas. “On the surface,” Flynn admits, “it seems incoherent.” He asks: 

“How can a Communist regime like North Korea embrace a radical Islamist regime like Iran? What about Russia’s Vladimir Putin? He is certainly no jihadi; indeed, Russia has a good deal to fear from radical Islamist groups.” 

Flynn spends much of the book resolving the contradiction and proving that America’s enemies—North Korea, China, Russia, Iran, Syria, Cuba, Bolivia, Venezuela, Nicaragua, al-Qaida, Hezbollah, and ISIS—are in fact working in concert.

No one who has read classified intelligence or studied international relations will balk at the idea that unlikely friendships are formed against a common enemy. As Flynn observes, the revolutionary Shiite government in Tehran cooperates with nationalist Russia and communist North Korea; it has also turned a blind eye (at the very least) to al-Qaida’s Sunni operatives in Iran and used them as bargaining chips when negotiating with Osama bin Laden and the United States.

Flynn argues that this is more than “an alliance of convenience.” Rather, the United States’ enemies share “a contempt for democracy and an agreement—by all the members of the enemy alliance—that dictatorship is a superior way to run a country, an empire, or a caliphate.” Their shared goals of maximizing dictatorship and minimizing U.S. interference override their substantial ideological differences. Consequently, the U.S. government must work to destroy the alliance by “removing the sickening chokehold of tyranny, dictatorships, and Radical Islamist regimes.” Its failure to do so over the past decades gravely imperils the United States, he contends.


Some of Flynn’s evidence for the alliance veers into the conspiratorial—I’ve seen nothing credible to back up his assertion that the Iranians were behind the 1979 takeover of the Grand Mosque in Mecca by Sunni apocalypticists. And there’s an important difference between the territorially bounded ambitions of Iran, Russia, and North Korea, on the one hand, and ISIS’s desire to conquer the world on the other; the former makes alliances of convenience easier than the latter. Still, Flynn would basically be a neocon if he stuck with his core argument: tyrannies of all stripes are arrayed against the United States, so the United States should destroy them.

But some tyrannies are less worthy of destruction than others. In fact, Flynn argues there’s a category of despot that should be excluded from his principle: the “friendly tyrants” like President Abdel-Fatah el-Sissi in Egypt and former president Zine Ben Ali in Tunisia. Saddam Hussein should not have been toppled, Flynn argues, and even Russia could become an “ideal partner for fighting Radical Islam” if only it would come to its senses about the threat of “Radical Islam.” Taken alone, these arguments would make Flynn a realist, not a neocon.

The book thus offers two very different views of how to exercise American power abroad: spread democracies or stand with friendly strongmen. Neither is a sure path to security. Spreading democracy through the wrong means can bring to power regimes that are even more hostile and authoritarian; standing with strongmen risks the same. Absent some principle higher than just democracy or security for their own sakes, the reader is unable to decide between Flynn’s contradictory perspectives and judge when their benefits are worth the risks. 

It’s strange to find a book about strategy so at odds with itself. Perhaps the dissonance is due to the co-authors’ divergent views (Ledeen is a neocon, and Flynn is comfortable dining with Putin). Or perhaps it mirrors the confusion in the Republican establishment over the direction of conservative foreign policy. Whatever the case, the muddled argument offered in The Field of Fight demonstrates how hard it is to overcome ideological differences to ally against a common foe, regardless of whether that alliance is one of convenience or conviction.


The dark side of consensus in Tunisia: Lessons from 2015-2019

Executive Summary: Since the 2011 revolution, Tunisia has been considered a model for its pursuit of consensus between secular and Islamist forces. While other Arab Spring countries descended into civil war or military dictatorship, Tunisia instead chose dialogue and cooperation, forming a secular-Islamist coalition government in 2011 and approving a constitution by near unanimity in…

       





The lesser threat: How the Muslim Brotherhood views Shias and Shiism

       





Why Europe’s energy policy has been a strategic success story


For Europe, it has been a rough year, or perhaps more accurately a rough decade. The terrorist attacks in London, Madrid, and elsewhere have taken a toll, as did the Iraq and Afghanistan wars. But things really got tough beginning with the Great Recession—and its prolonged duration for Europe, including grave economic crises in much of the southern part of the continent. That was followed by Vladimir Putin’s aggression against Ukraine, as well as the intensification of the Syrian, Libyan, and Yemeni conflicts with their tragic human consequences, including massive displacement of people and the greatest flow of refugees since World War II. The recent attacks in Paris and Brussels have added to the gloom and fear. This recent history, together with the advent of nationalistic and inward-looking policies in virtually all European Union member states, makes it easy to get despondent—and worry that the entire European project is failing.

To be sure, these are not the best of times. Europe is perceived by some, including Republican presidential candidate Donald Trump, as failing to invest enough in its own security, since NATO allies spend less than 1.4 percent of GDP on their armed forces while the United States spends twice that. However, we must not lose sight of the key structural advantages—and the important policy successes—that have brought Europe where it is today. For example, Europe’s recent progress in energy policy has been significant—good not only for economic and energy resilience, but also for NATO's collective handling of the revanchist Russia threat. 


For many years, analysts and policymakers have debated the question of Europe's dependence on natural gas from Russia. Today, this problem is largely solved. Russia provides only one-third of Europe’s gas. Importantly, Europe’s internal infrastructure for transporting natural gas in all desired directions has improved greatly. So have its available storage options, as well as its possibilities to import alternatives either by pipeline or in the form of liquefied natural gas. As a result, almost all member states are currently well-positioned to withstand even a worst-case scenario. 

Indeed, European Commission analyses show that even a multi-month long supply disruption could be addressed, albeit at real economic cost, by diversification and fuel switching. Progress in energy efficiency and renewable energy investments also help. There is more to do to enhance European energy security, but much has been done already. The Europeans have shown that, with ups and downs, they can address energy security themselves.

Already this energy success has contributed to a strategic success. Europe has been heavily criticized for not standing up more firmly to Russia in response to the annexation of Crimea and the conflict in eastern Ukraine. In fact, all EU member states have agreed to keep economic sanctions in place against Moscow. In addition, lifting the sanctions has been firmly attached to the implementation of the Minsk II agreement—and despite recent cracks in European solidarity, we hope that this stance will hold going forward. 


The notion that Europe is weak and dependent on Russian natural gas is a relic from the past. Europe has a strong regulatory framework with which commercial entities, including Gazprom, have to abide. For those who doubt the impact of these regulations, just ask Google or Microsoft. With the end of so-called destination clauses, natural gas can be re-sold whenever required, as long as sufficient infrastructure is in place. Just last year, Germany re-exported over 30 billion cubic meters of gas, mostly Russian, in particular to Central and Eastern Europe (including Ukraine). That volume exceeds the annual consumption of every European state with the exceptions of Germany, Italy, France, and Britain.

In theory, Europe could even substantially wean itself off Russian gas if need be. To be sure, that would come at a major expense: over 200 billion euros of additional investments over a period of two years or more, and then an annual 35 billion euros, according to some calculations. That will almost surely not happen. But as a way of bounding the worst-case scenario, it is still informative. One might say that Europe has escalation dominance over Russia; the latter needs to export to Europe more than Europe needs Russian hydrocarbons.
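
As a rough illustration of that bounding exercise, the sketch below combines the two figures quoted above into an indicative multi-year total. The ten-year horizon and the assumption that the annual cost begins only after a two-year build-out are arbitrary choices for illustration, not part of the cited calculations.

```python
# Indicative cost bound for substituting Russian gas, using the figures quoted
# above ("according to some calculations"). The ten-year horizon and the
# assumption that the annual cost starts after a two-year build-out are
# arbitrary illustrative choices.

upfront_investment = 200.0   # billions of euros over roughly the first two years
annual_cost = 35.0           # billions of euros per year thereafter
horizon_years = 10           # hypothetical planning horizon
build_out_years = 2

total_cost = upfront_investment + annual_cost * (horizon_years - build_out_years)
print(f"Indicative {horizon_years}-year cost: ~{total_cost:.0f} billion euros")
```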

The internal energy market is not finished, but Europe’s energy security has significantly improved in recent years. Even though world markets are currently awash in resources, there is no time for complacence, and European leaders should finish the job, foremost by safeguarding the swift construction of the so-called Projects of Common Interest (key energy infrastructure projects that address the remaining bottlenecks in the EU market), so that the U.S. State Department can take new infrastructure projects like Nord Stream 2 off its priority list, and make energy policy another true European success story. It is already much of the way there, and Western security is the better for it.