Cost Effectiveness Analysis and Finding the Best Policies to Fight COVID-19

Robert Stavins





The Need for Creative and Effective Nuclear Security Vulnerability Assessment and Testing

Realistic, creative vulnerability assessment and testing are critical to finding and fixing nuclear security weaknesses and avoiding over-confidence. To ensure that nuclear security systems provide the required level of protection, they must be challenged by experts thinking like adversaries, trying to find ways to overcome them. Effective vulnerability assessment and realistic testing are more difficult in the case of insider threats, which demand special attention. Organizations need to find ways to give people the mission and the incentives to find nuclear security weaknesses and suggest ways they might be fixed. With the right approaches and incentives in place, effective vulnerability assessment and testing can be a key part of achieving and sustaining high levels of nuclear security.





More than price transparency is needed to empower consumers to shop effectively for lower health care costs


As the nation struggles with high healthcare costs that consume ever-larger shares of patient budgets and government coffers, the search for ways to get costs under control continues. Total healthcare spending in the U.S. now represents almost 18 percent of our entire economy. One promising cost-savings approach is called “reference pricing,” in which the insurer establishes a price ceiling on selected services (joint replacement, colonoscopy, lab tests, etc.). Often, this price cap is based on the average of the negotiated prices for providers in its network, and anything above the reference price has to be covered by the insured consumer.

A study published in JAMA Internal Medicine by James Robinson and colleagues analyzed grocery chain Safeway’s experience with reference pricing for laboratory services such as a lipid panel, comprehensive metabolic panel, or prostate-specific antigen test. Safeway’s non-union employees were given information on prices at all laboratories through a mobile digital platform and told what Safeway would cover. Patients who chose a lab charging above the payment limit were required to pay the full difference themselves.

Employers see this type of program as a way to incentivize employees to think through the price of services when making healthcare decisions. Employees enjoy savings when they switch to a provider whose negotiated price is below the reference price, whereas if they choose services above it, they are responsible for the additional cost.
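The cost-sharing rule behind reference pricing is simple arithmetic: the insurer pays up to the reference price, and the patient owes everything above it. Below is a minimal Python sketch of that rule; the prices, the optional coinsurance parameter, and the function name are illustrative assumptions, not the actual Safeway benefit design.

    def patient_cost(lab_price, reference_price, coinsurance=0.0):
        """Out-of-pocket cost under a simple reference-pricing rule (illustrative).

        The insurer covers charges up to the reference price (less any ordinary
        cost sharing); the patient pays the full excess above the reference price.
        """
        covered = min(lab_price, reference_price)
        excess = max(lab_price - reference_price, 0.0)
        return coinsurance * covered + excess

    # Hypothetical negotiated prices for the same test at two in-network labs,
    # with a $25.00 reference price:
    print(patient_cost(20.00, 25.00))  # 0.0  -> below the cap, plan pays in full here
    print(patient_cost(60.00, 25.00))  # 35.0 -> patient owes the entire excess

The patient's decision is thus reduced to a single comparison: choose a lab at or below the reference price and owe little or nothing, or choose one above it and pay the full difference.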

Robinson’s results show substantial savings to both Safeway and to its covered employees from reference pricing. Compared to trends in prices paid by insurance enrollees not subject to the caps of reference pricing, costs paid per test went down almost 32 percent, with a total savings over three years of $2.57 million – patients saved $1.05 million in out-of-pocket costs and Safeway saved $1.7 million.

I wrote an accompanying editorial in JAMA Internal Medicine focusing on different types of consumer-driven approaches to obtain lower prices; I argue that approaches that make the job simpler for consumers are likely to be even more successful. There is some work involved for patients to make reference pricing work, and many may have little awareness of price differences across laboratories, especially differences between those in some physicians’ offices, which tend to be more expensive but also more convenient, and those in large commercial laboratories. Safeway helped steer its employees with accessible information: it provided a smartphone app to compare lab prices.

But high-deductible plans like Safeway’s that provide extensive price information to consumers often have only limited impact because of the complexity of shopping for each service involved in a course of treatment, something close to impossible for inpatient care. In addition, high deductibles are typically met for most hospitalizations (which tend to be very expensive), so those consumers have less incentive to comparison shop.

Plans that have limited provider networks relieve the consumer of much complexity and steer them towards providers with lower costs. Rather than review extensive price information, the consumer can focus on whether the provider is in the network. Reference pricing is another approach that simplifies—is the price less than the reference price? What was striking about Robinson’s results is that reference pricing for laboratories was employed in a high-deductible plan, showing that the savings achieved—in excess of 30 percent compared to a control—were beyond what the high deductible had accomplished.

While promising, reference pricing cannot be applied to all medical services: it works best for standardized services where variation in quality is less of a concern. It also can be applied only to services that are “shoppable,” which account for only about one-third of privately insured spending. Even if reference pricing expanded to a number of other medical services, other cost containment approaches, including other network strategies, are needed to successfully contain health spending and lower costs for non-shoppable medical services.


Editor's note: This piece originally appeared in JAMA.


Publication: JAMA
       





Why local governments should prepare for the fiscal effects of a dwindling coal industry

       





Webinar: The effects of the coronavirus outbreak on marginalized communities

As the coronavirus outbreak rapidly spreads, existing social and economic inequalities in society have been exposed and exacerbated. State and local governments across the country, on the advice of public health officials, have shuttered businesses of all types and implemented other social distancing recommendations. Such measures assume a certain basic level of affluence, which many…

       





How Can We Most Effectively Measure Happiness?


Editor's Note: At a Zócalo Public Square* event, several experts were asked to weigh in on the following question: How can we most effectively measure happiness? Here is Carol Graham's response:

We must make it a measure that’s meaningful to the average person

Happiness is increasingly in the media. Yet it is an age-old topic of inquiry for psychologists, philosophers, and even the early economists (before the science got dismal). The pursuit of happiness is even written into the Declaration of Independence (and into the title of my latest Brookings book, I might add). Public discussions of happiness rarely define the concept. Yet an increasing number of economists and psychologists are involved in a new science of measuring well-being, a concept that includes happiness but extends well beyond it.

Those of us involved focus on two distinct dimensions: hedonic well-being, a daily experience component; and evaluative well-being, the way in which people think about their lives as a whole, including purpose or meaning. Jeremy Bentham focused on the former and proposed increasing the happiness and contentment of the greatest number of individuals possible in a society as the goal of public policy. Aristotle, meanwhile, thought of happiness as eudemonia, a concept that combined two Greek words: “eu” meaning abundance and “daimon” meaning the power controlling an individual’s destiny.

Using distinct questions and methods, we are able to measure both. We can look within and across societies and see how people experience their daily lives and how that varies across activities such as commuting time, work, and leisure time, on the one hand, and how they feel about their lives as a whole, including their opportunities and past experiences, on the other. Happiness crosses both dimensions of well-being. If you ask people how happy they felt yesterday, you are capturing their feelings during yesterday’s experiences. If you ask them how happy they are with their lives in general, they are more likely to think of their lives as a whole.

The metrics give us a tool for measuring and evaluating the importance of many non-income components of people’s lives to their overall welfare. The findings are intuitive. Income matters to well-being, and not having enough income is bad for both dimensions. But income matters more to evaluative well-being, as it gives people more ability to choose how to live their lives. More income cannot make them experience each point in the day better. Other things, such as good health and relationships, matter as much if not more to well-being than income. The approach provides useful complements to the income-based metrics that are already in our statistics and in the GDP. Other countries, such as Britain, have already begun to include well-being metrics in their national statistics. There is even a nascent discussion of doing so here.

Perhaps what is most promising about well-being metrics is that they seem to be more compelling for the average man (or woman) on the street than are complex income measures, and they often tell different stories. There are, for example, endless messages about the importance of exercising for health, the drawbacks of smoking, and the expenses related to long commutes. Yet it is likely that they are most often heard by people who already exercise, don’t smoke, and bicycle to work. And exercise does not really enter into the GNP, while cigarette purchases and the gasoline and other expenses related to commuting enter in positively. If you told people that exercising made them happier and that smoking and commuting time made them unhappy (and yes, these are real findings from nationwide surveys), then perhaps they might listen?

Read other responses to this question at zocalopublicsquare.org »

*Zócalo Public Square is a not-for-profit daily ideas exchange that blends digital humanities journalism and live events. 


Publication: Zócalo Public Square
Image Source: © Ho New / Reuters
     
 
 





New ideas for development effectiveness


Almost two years ago, I alerted readers to a contest, sponsored by the Bill and Melinda Gates Foundation through the Global Development Network, to develop new ideas to improve the impact of development cooperation. The Next Horizons Essay contest 2014 received 1,470 submissions from 142 countries, from which 13 winners were selected.

Four of the winners took part in a roundtable at the Brookings Institution yesterday. Here’s a quick synopsis of the main takeaways.

There is a lot of experimentation happening in the delivery of aid, and most aid agencies are thinking hard about how to position themselves to contribute more to the sustainable development goals. In part, this is because these agencies are mission-driven to improve impact. The current system of aid replenishments of multilateral institutions forces them to compete with each other by persuading donors that they are best deserving of the scarce aid budgets being allocated. Even bilateral aid agencies find themselves under budgetary stress, asked to justify the impact of their lending compared to a counterfactual of channeling the money through a multilateral agency or of contributing to an appeal from the United Nations for humanitarian assistance or climate financing.

Stephen Mwangi Macharia talked about using development assistance to promote social impact investing. He noted the problems of sustainability, dependence, and ownership that can arise in traditional aid relationships and argued that social entrepreneurs can avoid such pitfalls. The question then becomes how donors can best help build the market infrastructure to support such efforts. Stephen’s idea: develop a social impact network initiative to build entrepreneurs’ capacity to develop “bankable” projects and to have a database to help match entrepreneurs and funders. 

There is certainly a lot of interest in social impact investing. According to the Global Impact Investing Network, around $60 billion is already under management (although mostly in developed countries) and the market is growing rapidly. Some questioned the role of aid donors, however, noting that they could reduce incentives for others (universities, non-profits, etc.) who charge a fee for business development, awareness raising, and other market services. Others questioned the risk tolerance of donors for impact investing and a culture in many countries where business is viewed suspiciously when it tries to intentionally generate positive social and environmental impacts. As an aside, Judith Rodin, president of the Rockefeller Foundation, has noted that the development of impact investing was one of the accomplishments that she was most proud of.

Ray Kennedy suggested that vertical funds, because of better governance and a sharper focus, should be a preferred channel for development assistance. Interestingly, his argument was not based on advocacy for a particular sector, but on the improved adaptability of these institutions. His evidence provided several examples of how vertical funds changed in response to changing global conditions, and, he argued, such change is a highly desirable virtue in our rapidly changing times.

Of course, the recommendation to favor vertical funds did not go unchallenged. There was a lively discussion about the comparative advantage of different institutions and the dangers of mission creep by more effective institutions into space left open by less effective institutions. Yet, most agreed that new platforms were being fluidly created to solve new problems, and that a “mixed coalition,” to borrow a phrase from one of the participants, was part of the preferred solution.

Yuen Yuen Ang took on the problem of local ownership directly. It is easy to talk about local ownership, she said, but few agencies do anything about it in their actual operations. Instead, they promote best practice ideas, some of which may fail even the basic test of “do no harm.” Basing her arguments on the complexity of how organizations change, she advocates specific internal reforms: diversify staff experiences and backgrounds beyond economics and finance; carve out time for staff to pursue “non-standard” approaches; and build a bank of examples about “best-fit” approaches that have been shown to work in weak institutional settings.

A lively discussion followed on best-fit versus best-practice approaches and, indeed, on whether there is a trade-off between the two or whether the issue is how to balance both at the same time. There was agreement that best-practice applies to some issues, especially where global standards have developed (debt management or anti-money laundering, perhaps). Best-fit is more useful when judgement and a deep understanding of local conditions are required. Some questioned the role of external donor agencies in such contexts, however.

Dan Honig argued for greater autonomy of field-based staff. Based on an extensive and unique data set, he was able to test the impact of the degree of autonomy on project success. The econometrics show significant impact of autonomy on certain activities and in certain situations. When the context is fluid and unpredictable, as in fragile states for example, or when judgement is required, as in institutional development, then autonomy can help. But when desired outcomes are easily measurable, such as school or road construction, then autonomy makes little difference.

During the discussion, there was agreement that too much of a focus on metrics could be distortionary and, in fluid situations, could be damaging. The theme of donor risk aversion came up again, but this time coupled with the idea that metrics, however false and misleading they might be, provide comfort and cover for bureaucrats. A sympathetic hearing was given to former United States Agency for International Development Administrator Andrew Natsios’ concept of “obsessive measurement disorder.” But participants also warned that the costs of autonomy, in the form of a larger field presence and a limited ability to scale up, need to be weighed against the benefits.

It was refreshing to see new evidence and multidisciplinary approaches being brought to bear on development effectiveness. The four themes highlighted in these essays—making markets work for the poor, improving agency governance, local ownership and contextualization, and decentralization and autonomy—resonated with those participants who are, or had been, active in aid agencies. I thank the Global Development Network and the Bill and Melinda Gates Foundation for this initiative, and the winning scholars for injecting new ideas into the discourse.


      
 
 





Charts of the Week: Housing affordability, COVID-19 effects

In Charts of the Week this week, housing affordability and some new COVID-19 related research.

How to lower costs of apartment buildings to make them more affordable to build
In the first piece in a series on how improved design and construction decisions can lower the cost of building multifamily housing, Hannah Hoyt and Jenny…

       





The welfare effects of peer entry in the accommodation market: The case of Airbnb

The Internet has greatly reduced entry and advertising costs across a variety of industries. Peer-to-peer marketplaces such as Airbnb, Uber, and Etsy currently provide a platform for small and part-time peer providers to sell their goods and services. In this paper, Chiara Farronato of Harvard Business School and Andrey Fradkin of Boston University study the…

       





From Popular Revolutions to Effective Reforms: A Statesman's Forum with President Mikheil Saakashvili of Georgia


Event Information

March 17, 2011
2:00 PM - 3:00 PM EDT

Saul/Zilkha Rooms
The Brookings Institution
1775 Massachusetts Avenue, NW
Washington, DC 20036

Since the Rose Revolution in November 2003, Georgia has grappled with the many challenges of building a modern, Western-oriented state, including implementing political and economic reforms, fighting corruption, and throwing off the vestiges of the Soviet legacy. On the path toward a functioning and reliable democracy, Georgia has pursued these domestic changes in an often difficult international environment, as evidenced by the Russia-Georgia conflict in 2008.

On March 17, the Center on the United States and Europe at Brookings (CUSE) hosted President Mikheil Saakashvili to discuss Georgia’s approach to these challenges. A leader of Georgia’s 2003 Rose Revolution, Saakashvili was elected president of Georgia in January 2004 and reelected for a second term in January 2008.

Vice President Martin Indyk, director of Foreign Policy at Brookings, provided introductory remarks and Senior Fellow and CUSE Director Fiona Hill moderated the discussion. After the program, President Saakashvili took audience questions.


     
 
 





Experts assess the nuclear Non-Proliferation Treaty, 50 years after it went into effect

March 5, 2020 marks the 50th anniversary of the entry into effect of the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). Five decades on, is the treaty achieving what was originally envisioned? Where is it succeeding in curbing the spread of nuclear weapons, and where might it be falling short? Four Brookings experts on defense…

       





Faster, more efficient innovation through better evidence on real-world safety and effectiveness


Many proposals to accelerate and improve medical product innovation and regulation focus on reforming the product development and regulatory review processes that occur before drugs and devices get to market. While important, such proposals alone do not fully recognize the broader opportunities that exist to learn more about the safety and effectiveness of drugs and devices after approval. As drugs and devices begin to be used in larger and more diverse populations and in more personalized clinical combinations, evidence from real-world use during routine patient care is increasingly important for accelerating innovation and improving regulation.

First, further evidence development from medical product use in large populations can allow providers to better target and treat individuals, precisely matching the right drug or device to the right patients. As genomic sequencing and other diagnostic technologies continue to improve, postmarket evidence development is critical to assessing the full range of genomic subtypes, comorbidities, patient characteristics and preferences, and other factors that may significantly affect the safety and effectiveness of drugs and devices. This information is often not available or population sizes are inadequate to characterize such subgroup differences in premarket randomized controlled trials.

Second, improved processes for generating postmarket data on medical products are necessary for fully realizing the intended effect of premarket reforms that expedite regulatory approval. The absence of a reliable postmarket system to follow up on potential safety or effectiveness issues means that potential signals or concerns must instead be addressed through additional premarket studies or through one-off postmarket evaluations that are more costly, slower, and likely to be less definitive than would be possible through a better-established infrastructure. As a result, the absence of better systems for generating postmarket evidence creates a barrier to more extensive use of premarket reforms to promote innovation.

These issues can be addressed through initiatives that combine targeted premarket reforms with postmarket steps to enhance innovation and improve evidence on safety and effectiveness throughout the life cycle of a drug or device. The ability to routinely capture clinically relevant electronic health data within our health care ecosystem is improving, increasingly allowing electronic health records, payer claims data, patient-reported data, and other relevant data to be leveraged for further research and innovation in care. Recent legislative proposals released by the House of Representatives’ 21st Century Cures effort acknowledge and seek to build on this progress in order to improve medical product research, development, and use. The initial Cures discussion draft included provisions for better, more systematic reporting of and access to clinical trials data; for increased access to Medicare claims data for research; and for FDA to promulgate guidance on the sources, analysis, and potential use of so-called Real World Evidence. These are potentially useful proposals that could contribute valuable data and methods to advancing the development of better treatments.

What remains a gap in the Cures proposals, however, is a more systematic approach to improving the availability of postmarket evidence. Such a systematic approach is possible now. Biomedical researchers and health care plans and providers are doing more to collect and analyze clinical and outcomes data. Multiple independent efforts – including the U.S. Food and Drug Administration’s Sentinel Initiative for active postmarket drug safety surveillance, the Patient-Centered Outcomes Research Institute’s PCORnet for clinical effectiveness studies, the Medical Device Epidemiology Network (MDEpiNet) for developing better methods and medical device registries for medical device surveillance, and a number of dedicated, product-specific outcomes registries – have demonstrated the potential for large-scale, systematic postmarket data collection. Building on these efforts could provide unprecedented evidence on how medical products perform in the real world and on the course of underlying diseases that they are designed to treat, while still protecting patient privacy and confidentiality.

These and other postmarket data systems now hold the potential to contribute to public-private collaboration for improved population-based evidence on medical products on a wider scale. Action in the Cures initiative to unlock this potential will enable the legislation to achieve its intended effect of promoting quicker, more efficient development of effective, personalized treatments and cures.

What follows is a set of both short- and long-term proposals that would bolster the current systems for postmarket evidence development, create new mechanisms for generating postmarket data, and enable individual initiatives on evidence development to work together as part of a broad push toward a truly learning health care system.


      





Risk evaluation and mitigation strategies (REMS): Building a framework for effective patient counseling on medication risks and benefits

Event Information

July 24, 2015
8:45 AM - 4:15 PM EDT

The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC

Under the Food and Drug Administration Amendments Act (FDAAA) of 2007, the FDA has the authority to require pharmaceutical manufacturers to develop Risk Evaluation and Mitigation Strategies (REMS) for drugs or biologics that carry serious potential or known risks. Since that time, the REMS program has become an important tool in ensuring that riskier drugs are used safely, and it has allowed FDA to facilitate access to a host of drugs that may not otherwise have been approved. However, concerns have arisen regarding the effects of REMS programs on patient access to products, as well as the undue burden that the requirements place on the health care system. In response to these concerns, FDA has initiated reform efforts aimed at improving the standardization, assessment, and integration of REMS within the health care system. As part of this broader initiative, the agency is pursuing four priority projects, one of which focuses on improving provider-patient benefit-risk counseling for drugs that have a REMS attached.

Under a cooperative agreement with FDA, the Center for Health Policy at Brookings held an expert workshop on July 24 titled, “Risk Evaluation and Mitigation Strategies (REMS): Building a Framework for Effective Patient Counseling on Medication Risks and Benefits”. This workshop was the first in a series of convening activities that will seek input from stakeholders across academia, industry, health systems, and patient advocacy groups, among others. Through these activities, Brookings and FDA will further develop and refine an evidence-based framework of best practices and principles that can be used to inform the development and effective use of REMS tools and processes.


       





As coronavirus hits Latin America, expect serious and enduring effects

As COVID-19 passes across the globe, Latin America may be hard-hit, with deep humanitarian, economic, and political consequences. In early March, there was hope that the remoteness or the weather in Latin America might help it escape the virus. But within three weeks, the number of known infections jumped exponentially, spreading to every country in…

       





       





Scaling Up: A Path to Effective Development

Introduction

The global community has set itself the challenge of meeting the Millennium Development Goals (MDGs) by 2015 as a way to combat world poverty and hunger. In 2007, the halfway point, it is clear that many countries will not be able to meet the MDGs without undertaking significantly greater efforts. One constraint that needs to be overcome is that development interventions—projects, programs, policies—are all too often like small pebbles thrown into a big pond: they are limited in scale, short-lived, and therefore have little lasting impact. This may explain why so many studies have found that external aid has had weak or no development impact in the aggregate, even though many individual interventions have been successful in terms of their project- or program-specific goals.

Confronted with the challenge of meeting the MDGs, the development community has recently begun to focus on the need to scale up interventions. Scaling up means taking successful projects, programs, or policies and expanding, adapting, and sustaining them in different ways over time for greater development impact. This emphasis on scaling up has emerged from concern over how to deploy and absorb the substantially increased levels of official development assistance that were promised by the wealthy countries at recent G8 summits. A fragmented aid architecture complicates this task; multilateral, bilateral, and private aid entities have multiplied, leading to many more—but smaller—aid projects and programs and increasing transaction costs for recipient countries. In response, some aid donors have started to move from project to program support, and in the Paris Declaration, official donors committed themselves to work together for better coordinated aid delivery.

The current focus on scaling up is not entirely new, however. During the 1980s, as nongovernmental organizations (NGOs) increasingly began to engage in development activities, scaling up emerged as a challenge. NGO interventions were (and are) typically small in scale and often apply new approaches. Therefore, the question of how to replicate and scale up successful models gained prominence even then, especially in connection with participatory and community development approaches. Indeed, the current interest among philanthropic foundations and NGOs in how to scale up their interventions is an echo of these earlier concerns.

In response to this increased focus on scaling up—and its increased urgency—this policy brief takes a comprehensive look at what the literature and experience have to say about whether and how to scale up development interventions.


Publication: International Food Policy Research Institute
      
 
 





Scaling Up: A Framework and Lessons for Development Effectiveness from Literature and Practice

Abstract

Scaling up of development interventions is much debated today as a way to improve their impact and effectiveness. Based on a review of scaling up literature and practice, this paper develops a framework for the key dynamics that allow the scaling up process to happen. The authors explore the possible approaches and paths to scaling up, the drivers of expansion and of replication, the space that has to be created for interventions to grow, and the role of evaluation and of careful planning and implementation. They draw a number of lessons for the development analyst and practitioner. More than anything else, scaling up is about political and organizational leadership, about vision, values and mindset, and about incentives and accountability—all oriented to make scaling up a central element of individual, institutional, national and international development efforts. The paper concludes by highlighting some implications for aid and aid donors.

An annotated bibliography of the literature on scaling up and development aid effectiveness was created by Oksana Pidufala to supplement this working paper. Read more »


      
 
 





       





The polarizing effect of Islamic State aggression on the global jihadi movement

      
 
 





       





Global Leadership in Transition: Making the G20 More Effective and Responsive


Brookings Institution Press with the Korea Development Institute, 2011. 353 pp.

Global Leadership in Transition calls for innovations that "institutionalize" or consolidate the G20, helping to make it the global economy’s steering committee. The emergence of the G20 as the world’s premier forum for international economic cooperation presents an opportunity to improve economic summitry and make global leadership more responsive and effective, a major improvement over the G8 era.

The origin of Global Leadership in Transition—which contains contributions from three dozen top experts from all over the world—was a Brookings seminar on issues surrounding the 2010 Seoul G20 summit. That grew into a further conference in Washington and eventually a major symposium in Seoul.

“Key contributors to this volume were well ahead of their time in advocating summit meetings of G20 leaders. In this book, they now offer a rich smorgasbord of creative ideas for transforming the G20 from a crisis-management committee to a steering group for the international system that deserves the attention of those who wish to shape the future of global governance.”—C. Randall Henning, American University and the Peterson Institute

Contributors: Alan Beattie, Financial Times; Thomas Bernes, Centre for International Governance Innovation (CIGI); Sergio Bitar, former Chilean minister of public works; Paul Blustein, Brookings Institution and CIGI; Barry Carin, CIGI and University of Victoria; Andrew F. Cooper, CIGI and University of Waterloo; Kemal Derviş, Brookings; Paul Heinbecker, CIGI and Laurier University Centre for Global Relations; Oh-Seok Hyun, Korea Development Institute (KDI); Jomo Kwame Sundaram, United Nations; Homi Kharas, Brookings; Hyeon Wook Kim, KDI; Sungmin Kim, Bank of Korea; John Kirton, University of Toronto; Johannes Linn, Brookings and Emerging Markets Forum; Pedro Malan, Itau Unibanco; Thomas Mann, Brookings; Paul Martin, former prime minister of Canada; Simon Maxwell, Overseas Development Institute and Climate and Development Knowledge Network; Jacques Mistral, Institut Français des Relations Internationales; Victor Murinde, University of Birmingham (UK); Pier Carlo Padoan, OECD Paris; Yung Chul Park, Korea University; Stewart Patrick, Council on Foreign Relations; Il SaKong, Presidential Committee for the G20 Summit; Wendy R. Sherman, Albright Stonebridge Group; Gordon Smith, Centre for Global Studies and CIGI; Bruce Stokes, German Marshall Fund; Ngaire Woods, Oxford Blavatnik School of Government; Lan Xue, Tsinghua University (Beijing); Yanbing Zhang, Tsinghua University.

ABOUT THE EDITORS

Colin I. Bradford
Wonhyuk Lim
Wonhyuk Lim is director of policy research at the Center for International Development within the Korea Development Institute. He was with the Presidential Transition Committee and the Presidential Committee on Northeast Asia after the 2002 election in Korea. A former fellow with Brookings’s Center for Northeast Asian Policy Studies, he has written extensively on development and corporate governance issues.


Ordering Information: ISBN 978-0-8157-2145-1, $29.95
     
 
 





Proximity to the flagpole: Effective leadership in geographically dispersed organizations


The workplace is changing rapidly, and more and more leaders in government and private industry are required to lead those who are geographically separated. Globalization, economic shifts from manufacturing to information, the need to be closer to customers, and improved technological capabilities have increased the geographic dispersion of many organizations. While these organizations offer many exciting opportunities, they also bring new leadership challenges that are amplified because of the separation between leaders and followers. Although much has been researched and written on leadership in general, relatively little has been focused on the unique leadership challenges and opportunities presented in geographically separated environments. Furthermore, most leaders are not given the right tools and training to overcome the challenges or take advantage of the opportunities when leading in these unique settings.

A survey of leaders within a geographically dispersed military organization confirmed that there are distinct differences in how remote and local leaders operate, and that most leadership tasks are more difficult with followers who are remote than with those who are co-located. The tasks most difficult for remote leaders are related to communicating, mentoring and building personal relationships, fostering teamwork and group identity, and measuring performance. To be effective, leaders must be aware of the challenges they face when leading from afar and be deliberate in their engagement.

Although there are unique leadership challenges in geographically dispersed environments, most current leadership literature and training is based on work in face-to-face settings. Leading geographically dispersed organizations is not a new concept, but technological advances over the last decade have given leaders a greater ability to be influential and involved with distant teams than ever before. This advancement gives leaders not only the opportunity to be successful in the moment but also the means to ensure continued success by enhancing the way they build dispersed organizations and grow future leaders from afar.


Authors

  • Scott M. Kieffer
Image Source: © Edgar Su / Reuters
     
 
 





The effect of COVID-19 and disease suppression policies on labor markets: A preliminary analysis of the data

World leaders are deliberating when and how to re-open business operations amidst considerable uncertainty as to the economic consequences of the coronavirus. One pressing question is whether or not countries that have remained relatively open have managed to escape at least some of the economic harm, and whether that harm is related to the spread…

       





Turkey cannot effectively fight ISIS unless it makes peace with the Kurds


Terrorist attacks with high casualties usually create a sense of national solidarity and patriotic reaction in societies that fall victim to such heinous acts. Not in Turkey, however. Despite a growing number of terrorist attacks by the so-called Islamic State on Turkish soil in the last 12 months, the country remains as polarized as ever under strongman President Recep Tayyip Erdogan.

In fact, for two reasons, jihadist terrorism is exacerbating the division. First, Turkey's domestic polarization already has an Islamist-versus-secularist dimension. Most secularists hold Erdogan responsible for having created domestic political conditions that turn a blind eye to jihadist activities within Turkey.

It must also be said that polarization between secularists and Islamists in Turkey often fails to capture the complexity of Turkish politics, where not all secularists are democrats and not all Islamists are autocrats. In fact, there was a time when Erdogan was hailed as the great democratic reformer against the old secularist establishment under the guardianship of the military.

Yet, in the last five years, the religiosity and conservatism of the ruling Justice and Development Party, also known by its Turkish acronym AKP, on issues ranging from gender equality to public education has fueled the perception of rapid Islamization. Erdogan's anti-Western foreign policy discourse -- and the fact that Ankara has been strongly supportive of the Muslim Brotherhood in the wake of the Arab Spring -- exacerbates the secular-versus-Islamist divide in Turkish society.


The days Erdogan represented the great hope of a Turkish model where Islam, secularism, democracy and pro-Western orientation came together are long gone. Despite all this, it is sociologically more accurate to analyze the polarization in Turkey as one between democracy and autocracy rather than one of Islam versus secularism.

The second reason why ISIS terrorism is exacerbating Turkey's polarization is related to foreign policy. A significant segment of Turkish society believes Erdogan's Syria policy has ended up strengthening ISIS. In an attempt to facilitate Syrian President Bashar Assad's overthrow, the AKP turned a blind eye to the flow of foreign volunteers transiting Turkey to join extremist groups in Syria. Until last year, Ankara often allowed Islamists to openly organize and procure equipment and supplies on the Turkish side of the Syrian border.

Making things worse is the widely held belief that Turkey's National Intelligence Organization, or MİT, facilitated the supply of weapons to extremist Islamist elements amongst the Syrian rebels. Most of the links were with organizations such as Jabhat al-Nusra, Ahrar al-Sham and Islamist extremists from Syria's Turkish-speaking Turkmen minority.


Turkey's support for Islamist groups in Syria had another rationale in addition to facilitating the downfall of the Assad regime: the emerging Kurdish threat in the north of the country. Syria's Kurds are closely linked with Turkey's Kurdish nemesis, the Kurdistan Workers' Party, or PKK, which has been conducting an insurgency for greater rights for Turkey's Kurds since 1984.

On the one hand, Ankara has hardened its stance against ISIS by opening the airbase at Incirlik in southern Turkey for use by the U.S.-led coalition targeting the organization with air strikes. On the other hand, Erdogan doesn't fully support the eradication of jihadist groups in Syria. The reason is simple: the Arab and Turkmen Islamist groups are the main bulwark against the expansion of the de facto autonomous Kurdish enclave in northern Syria. The AKP is concerned that the expansion and consolidation of a Kurdish state in Syria would both strengthen the PKK and further fuel similar aspirations amongst Turkey's own Kurds.

Will the most recent ISIS terrorist attack in Istanbul change anything in Turkey's main threat perception? When will the Turkish government finally realize that the jihadist threat in the country needs to be prioritized? If you listen to Erdogan's remarks, you will quickly realize that the real enemy he wants to fight is still the PKK. He tries hard after each ISIS attack to create a "generic" threat of terrorism in which all groups are bundled up together without any clear references to ISIS. He is trying to present the PKK as enemy number one.


Under such circumstances, Turkish society will remain deeply polarized between Islamists, secularists, Turkish nationalists and Kurdish rebels. Terrorist attacks, such as the one in Istanbul this week and the one in Ankara in July that killed more than 100 people, will only exacerbate these divisions.

Finally, it is important to note that the Turkish obsession with the Kurdish threat has also created a major impasse in Turkish-American relations in Syria. Unlike Ankara, Washington's top priority in Syria is to defeat ISIS. The fact that U.S. strategy consists of using proxy forces such as Syrian Kurds against ISIS further complicates the situation.

There will be no real progress in Turkey's fight against ISIS unless there is a much more serious strategy to get Ankara to focus on peace with the PKK. Only after a peace process with Kurds will Turkey be able to understand that ISIS is an existential threat to national security.

This piece was originally posted by The Huffington Post.

Publication: The Huffington Post
Image Source: © Murad Sezer / Reuters
      
 
 





       





Simulating the effects of tobacco retail restriction policies

Tobacco use remains the single largest preventable cause of death and disease in the United States, killing more than 480,000 Americans each year and incurring over $300 billion per year in costs for direct medical care and lost productivity. In addition, of all cigarettes sold in the U.S. in 2016, 35% were menthol cigarettes, which…

       





       





Measuring effects of the Common Core


Part II of the 2015 Brown Center Report on American Education

Over the next several years, policy analysts will evaluate the impact of the Common Core State Standards (CCSS) on U.S. education.  The task promises to be challenging.  The question most analysts will focus on is whether the CCSS is good or bad policy.  This section of the Brown Center Report (BCR) tackles a set of seemingly innocuous questions compared to the hot-button question of whether Common Core is wise or foolish.  The questions all have to do with when Common Core actually started, or more precisely, when the Common Core started having an effect on student learning.  And if it hasn’t yet had an effect, how will we know that CCSS has started to influence student achievement? 

The analysis below probes this issue empirically, hopefully persuading readers that deciding when a policy begins is elemental to evaluating its effects.  The question of a policy’s starting point is not always easy to answer.  Yet the answer has consequences.  You can’t figure out whether a policy worked or not unless you know when it began.[i] 

The analysis uses surveys of state implementation to model different CCSS starting points for states and produces a second early report card on how CCSS is doing.  The first report card, focusing on math, was presented in last year’s BCR.  The current study updates state implementation ratings that were presented in that report and extends the analysis to achievement in reading.  The goal is not only to estimate CCSS’s early impact, but also to lay out a fair approach for establishing when the Common Core’s impact began—and to do it now before data are generated that either critics or supporters can use to bolster their arguments.  The experience of No Child Left Behind (NCLB) illustrates this necessity.

Background

After the 2008 National Assessment of Educational Progress (NAEP) scores were released, former Secretary of Education Margaret Spellings claimed that the new scores showed “we are on the right track.”[ii] She pointed out that NAEP gains in the previous decade, 1999-2009, were much larger than in prior decades. Mark Schneider of the American Institutes for Research (and a former Commissioner of the National Center for Education Statistics [NCES]) reached a different conclusion. He compared NAEP gains from 1996-2003 to 2003-2009 and declared NCLB’s impact disappointing. “The pre-NCLB gains were greater than the post-NCLB gains.”[iii] It is important to highlight that Schneider used the 2003 NAEP scores as the starting point for assessing NCLB. A report from FairTest on the tenth anniversary of NCLB used the same demarcation for pre- and post-NCLB time frames.[iv] FairTest is an advocacy group critical of high stakes testing—and harshly critical of NCLB—but if the 2003 starting point for NAEP is accepted, its conclusion is indisputable, “NAEP score improvement slowed or stopped in both reading and math after NCLB was implemented.”

Choosing 2003 as NCLB’s starting date is intuitively appealing.  The law was introduced, debated, and passed by Congress in 2001.  President Bush signed NCLB into law on January 8, 2002.  It takes time to implement any law.  The 2003 NAEP is arguably the first chance that the assessment had to register NCLB’s effects. 

Selecting 2003 is consequential, however. Some of the largest gains in NAEP’s history were registered between 2000 and 2003. Once 2003 is established as a starting point (or baseline), pre-2003 gains become “pre-NCLB.” But what if the 2003 NAEP scores were influenced by NCLB? Experiments evaluating the effects of new drugs collect baseline data from subjects before treatment, not after the treatment has begun. Similarly, evaluating the effects of public policies requires that baseline data are not influenced by the policies under evaluation.

Avoiding such problems is particularly difficult when state or local policies are adopted nationally.  The federal effort to establish a speed limit of 55 miles per hour in the 1970s is a good example.  Several states already had speed limits of 55 mph or lower prior to the federal law’s enactment.  Moreover, a few states lowered speed limits in anticipation of the federal limit while the bill was debated in Congress.  On the day President Nixon signed the bill into law—January 2, 1974—the Associated Press reported that only 29 states would be required to lower speed limits.  Evaluating the effects of the 1974 law with national data but neglecting to adjust for what states were already doing would obviously yield tainted baseline data.

There are comparable reasons for questioning 2003 as a good baseline for evaluating NCLB’s effects.  The key components of NCLB’s accountability provisions—testing students, publicizing the results, and holding schools accountable for results—were already in place in nearly half the states.  In some states they had been in place for several years.  The 1999 iteration of Quality Counts, Education Week’s annual report on state-level efforts to improve public education, entitled Rewarding Results, Punishing Failure, was devoted to state accountability systems and the assessments underpinning them. Testing and accountability are especially important because they have drawn fire from critics of NCLB, a law that wasn’t passed until years later.

The Congressional debate of NCLB legislation took all of 2001, allowing states to pass anticipatory policies.  Derek Neal and Diane Whitmore Schanzenbach reported that “with the passage of NCLB lurking on the horizon,” Illinois placed hundreds of schools on a watch list and declared that future state testing would be high stakes.[v] In the summer and fall of 2002, with NCLB now the law of the land, state after state released lists of schools falling short of NCLB’s requirements.  Then the 2002-2003 school year began, during which the 2003 NAEP was administered.  Using 2003 as a NAEP baseline assumes that none of these activities—previous accountability systems, public lists of schools in need of improvement, anticipatory policy shifts—influenced achievement.  That is unlikely.[vi]

The Analysis

Unlike NCLB, there was no “pre-CCSS” state version of Common Core.  States vary in how quickly and aggressively they have implemented CCSS.  For the BCR analyses, two indexes were constructed to model CCSS implementation.  They are based on surveys of state education agencies and named for the two years that the surveys were conducted.  The 2011 survey reported the number of programs (e.g., professional development, new materials) on which states reported spending federal funds to implement CCSS.  Strong implementers spent money on more activities.  The 2011 index was used to investigate eighth grade math achievement in the 2014 BCR.  A new implementation index was created for this year’s study of reading achievement.  The 2013 index is based on a survey asking states when they planned to complete full implementation of CCSS in classrooms.  Strong states aimed for full implementation by 2012-2013 or earlier.      
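To make the indexing concrete, here is a toy Python sketch of how a survey-based implementation rating might be assigned. The state names, activity lists, and the cutoff of three activities are hypothetical illustrations, not the BCR's actual coding rules.

    # Toy version of the 2011 implementation index: CCSS-adopting states that
    # reported spending federal funds on more implementation activities are
    # rated as stronger implementers. Data and cutoff are illustrative only.
    survey_2011 = {
        "State A": ["professional development", "new materials", "new assessments"],
        "State B": ["professional development"],
    }

    def rate_implementation(activities, strong_cutoff=3):
        """Classify an adopting state by the number of funded activities."""
        return "strong" if len(activities) >= strong_cutoff else "medium"

    for state, activities in survey_2011.items():
        print(state, rate_implementation(activities))  # State A strong, State B medium

    # Non-adopters (Alaska, Nebraska, Texas, Virginia) sit outside the index
    # and serve as the comparison group in the analysis that follows.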

Fourth grade NAEP reading scores serve as the achievement measure.  Why fourth grade and not eighth?  Reading instruction is a key activity of elementary classrooms but by eighth grade has all but disappeared.  What remains of “reading” as an independent subject, which has typically morphed into the study of literature, is subsumed under the English-Language Arts curriculum, a catchall term that also includes writing, vocabulary, listening, and public speaking.  Most students in fourth grade are in self-contained classes; they receive instruction in all subjects from one teacher.  The impact of CCSS on reading instruction—the recommendation that non-fiction take a larger role in reading materials is a good example—will be concentrated in the activities of a single teacher in elementary schools. The burden for meeting CCSS’s press for non-fiction, on the other hand, is expected to be shared by all middle and high school teachers.[vii] 

Results

Table 2-1 displays NAEP gains using the 2011 implementation index.  The four year period between 2009 and 2013 is broken down into two parts: 2009-2011 and 2011-2013.  Nineteen states are categorized as “strong” implementers of CCSS on the 2011 index, and from 2009-2013, they outscored the four states that did not adopt CCSS by a little more than one scale score point (0.87 vs. -0.24 for a 1.11 difference).  The non-adopters are the logical control group for CCSS, but with only four states in that category—Alaska, Nebraska, Texas, and Virginia—it is sensitive to big changes in one or two states.  Alaska and Texas both experienced a decline in fourth grade reading scores from 2009-2013.

The 1.11 point advantage in reading gains for strong CCSS implementers is similar to the 1.27 point advantage reported last year for eighth grade math.  Both are small.  The reading difference in favor of CCSS is equal to approximately 0.03 standard deviations of the 2009 baseline reading score.  Also note that the differences were greater in 2009-2011 than in 2011-2013 and that the “medium” implementers performed as well as or better than the strong implementers over the entire four year period (gain of 0.99).

Table 2-2 displays calculations using the 2013 implementation index.  Twelve states are rated as strong CCSS implementers, seven fewer than on the 2011 index.[viii]  Data for the non-adopters are the same as in the previous table.  In 2009-2013, the strong implementers gained 1.27 NAEP points compared to -0.24 among the non-adopters, a difference of 1.51 points.  The thirty-four states rated as medium implementers gained 0.82.  The strong implementers on this index are states that reported full implementation of CCSS-ELA by 2013.  Their larger gain in 2011-2013 (1.08 points) distinguishes them from the strong implementers in the previous table.  The overall advantage of 1.51 points over non-adopters represents about 0.04 standard deviations of the 2009 NAEP reading score, not a difference with real world significance.  Taken together, the 2011 and 2013 indexes estimate that NAEP reading gains from 2009-2013 were one to one and one-half scale score points larger in the strong CCSS implementation states compared to the states that did not adopt CCSS.
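Converting these scale-point differences into standard deviation units is simple arithmetic. The sketch below assumes a baseline standard deviation of roughly 37 NAEP scale points; that value is inferred from the ratios reported in this section (1.11 points ≈ 0.03 SD; 1.51 points ≈ 0.04 SD) rather than stated in it.

    # Effect sizes for the gain differences reported above, in SD units.
    # BASELINE_SD is an assumption implied by the section's own ratios,
    # not a figure reported in the text.
    BASELINE_SD = 37.0

    def effect_size(strong_gain, control_gain, sd=BASELINE_SD):
        """Difference in NAEP gains (strong implementers minus non-adopters), in SD units."""
        return (strong_gain - control_gain) / sd

    print(round(effect_size(0.87, -0.24), 2))  # 2011 index: 1.11 points -> 0.03 SD
    print(round(effect_size(1.27, -0.24), 2))  # 2013 index: 1.51 points -> 0.04 SD

Either way, the differences are an order of magnitude below the 0.20 SD threshold often invoked to judge whether a test score change is noticeable, a point taken up in the conclusion.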

Common Core and Reading Content

As noted above, the 2013 implementation index is based on when states scheduled full implementation of CCSS in classrooms.  Other than reading achievement, does the index seem to reflect changes in any other classroom variable believed to be related to CCSS implementation?  If the answer is “yes,” that would bolster confidence that the index is measuring changes related to CCSS implementation. 

Let’s examine the types of literature that students encounter during instruction.  Perhaps the most controversial recommendation in the CCSS-ELA standards is the call for teachers to shift the content of reading materials away from stories and other fictional forms of literature in favor of more non-fiction.  NAEP asks fourth grade teachers the extent to which they teach fiction and non-fiction over the course of the school year (see Figure 2-1). 

Historically, fiction has dominated fourth grade reading instruction.  It still does.  The percentage of teachers reporting that they teach fiction to a “large extent” exceeded the percentage answering “large extent” for non-fiction by 23 points in 2009 and 25 points in 2011.  In 2013, the difference narrowed to only 15 percentage points, primarily because of non-fiction’s increased use.  Fiction still dominated in 2013, but not by as much as in 2009.

The differences reported in Figure 2-1 are national indicators of fiction’s declining prominence in fourth grade reading instruction.  What about the states?  We know that they were involved to varying degrees with the implementation of Common Core from 2009-2013.  Is there evidence that fiction’s prominence was more likely to weaken in the states most aggressively pursuing CCSS implementation? 

Table 2-3 displays the data tackling that question.  Fourth grade teachers in strong implementation states decisively favored the use of fiction over non-fiction in 2009 and 2011.  But the prominence of fiction in those states experienced a large decline in 2013 (-12.4 percentage points).  The decline for the entire four year period, 2009-2013, was larger in the strong implementation states (-10.8) than in the medium implementation (-7.5) or non-adoption states (-9.8).  
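
One way to formalize “fiction’s prominence” is as the gap between the share of teachers who report teaching fiction to a large extent and the corresponding share for non-fiction, tracked over time.  The helper below is a hypothetical reconstruction of that bookkeeping using only the percentages reported above; it is not the report’s actual procedure, and the 63/48 split is invented solely to demonstrate the function.

```python
# Hypothetical indicator: gap (in percentage points) between teachers
# reporting fiction "to a large extent" and the same share for non-fiction.
def prominence_gap(pct_fiction: float, pct_nonfiction: float) -> float:
    return pct_fiction - pct_nonfiction

# Invented shares that reproduce the 15-point national gap reported for 2013.
assert prominence_gap(63.0, 48.0) == 15.0

# National gaps from the text and their 2009-2013 change (-8 points).
national_gap = {2009: 23.0, 2011: 25.0, 2013: 15.0}
print("National change:", national_gap[2013] - national_gap[2009])

# Change in the gap by implementation group, 2009-2013 (Table 2-3).
by_group = {"strong": -10.8, "medium": -7.5, "non-adopter": -9.8}
print("Steepest decline:", min(by_group, key=by_group.get))  # "strong"
```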

Conclusion

This section of the Brown Center Report analyzed NAEP data and two indexes of CCSS implementation, one based on data collected in 2011, the second from data collected in 2013.  NAEP scores for 2009-2013 were examined.  Fourth grade reading scores improved by 1.11 scale score points in states with strong implementation of CCSS compared to states that did not adopt CCSS.  A similar comparison in last year’s BCR found a 1.27 point difference on NAEP’s eighth grade math test, also in favor of states with strong implementation of CCSS.  These differences, although certainly encouraging to CCSS supporters, are quite small, amounting to (at most) 0.04 standard deviations (SD) on the NAEP scale.  A threshold of 0.20 SD—five times larger—is often invoked as the minimum size for a test score change to be regarded as noticeable.  The current study’s findings are also merely statistical associations and cannot be used to make causal claims.  Perhaps other factors are driving test score changes, unmeasured by NAEP or the other sources of data analyzed here. 

The analysis also found that fourth grade teachers in strong implementation states are more likely to be shifting reading instruction from fiction to non-fiction texts.  That trend should be monitored closely to see if it continues.  Other events to keep an eye on as the Common Core unfolds include the following:

1.  The 2015 NAEP scores, typically released in the late fall, will be important for the Common Core.  In most states, the first CCSS-aligned state tests will be given in the spring of 2015.  Based on the earlier experiences of Kentucky and New York, results are expected to be disappointing.  Common Core supporters can respond by explaining that assessments given for the first time often produce disappointing results.  They will also claim that the tests are more rigorous than previous state assessments.  But it will be difficult to explain stagnant or falling NAEP scores in an era when implementing CCSS commands so much attention.   

2.  Assessment will become an important implementation variable in 2015 and subsequent years.  For analysts, the strategy employed here, modeling different indicators based on information collected at different stages of implementation, should become even more useful.  Some states plan to use the Smarter Balanced assessments, others the Partnership for Assessment of Readiness for College and Careers (PARCC), and still others their own homegrown tests.  To capture variation among the states on this important dimension of implementation, analysts will need to use indicators that are up-to-date.

3.  The politics of Common Core injects a dynamic element into implementation.  The status of implementation is constantly changing.  States may choose to suspend, to delay, or to abandon CCSS.  That will require analysts to regularly re-configure which states are considered “in” Common Core and which states are “out.”  To further complicate matters, states may be “in” some years and “out” in others.

A final word.  When the 2014 BCR was released, many CCSS supporters commented that it is too early to tell the effects of Common Core.  The point that states may need more time operating under CCSS to realize its full effects certainly has merit.  But that does not discount everything states have done so far—including professional development, purchasing new textbooks and other instructional materials, designing new assessments, buying and installing computer systems, and conducting hearings and public outreach—as part of implementing the standards.  Some states are in their fifth year of implementation.  It could be that states need more time, but innovations can also produce their biggest “pop” earlier in implementation rather than later.  Kentucky was one of the earliest states to adopt and implement CCSS.  That state’s NAEP fourth grade reading score declined in both 2009-2011 and 2011-2013.  The optimism of CCSS supporters is understandable, but a one and a half point NAEP gain might be as good as it gets for CCSS.



[i] These ideas were first introduced in a 2013 Brown Center Chalkboard post I authored, entitled “When Does a Policy Start?”

[ii] Maria Glod, “Since NCLB, Math and Reading Scores Rise for Ages 9 and 13,” Washington Post, April 29, 2009.

[iii] Mark Schneider, “NAEP Math Results Hold Bad News for NCLB,” AEIdeas (Washington, D.C.: American Enterprise Institute, 2009).

[iv] Lisa Guisbond with Monty Neill and Bob Schaeffer, NCLB’s Lost Decade for Educational Progress: What Can We Learn from this Policy Failure? (Jamaica Plain, MA: FairTest, 2012).

[v] Derek Neal and Diane Schanzenbach, “Left Behind by Design: Proficiency Counts and Test-Based Accountability,” NBER Working Paper No. W13293 (Cambridge: National Bureau of Economic Research, 2007), 13.

[vi] Careful analysts of NCLB have allowed different states to have different starting dates: see Thomas Dee and Brian A. Jacob, “Evaluating NCLB,” Education Next 10, no. 3 (Summer 2010); Manyee Wong, Thomas D. Cook, and Peter M. Steiner, “No Child Left Behind: An Interim Evaluation of Its Effects on Learning Using Two Interrupted Time Series Each with Its Own Non-Equivalent Comparison Series,” Working Paper 09-11 (Evanston, IL: Northwestern University Institute for Policy Research, 2009).

[vii] Common Core State Standards Initiative. “English Language Arts Standards, Key Design Consideration.” Retrieved from: http://www.corestandards.org/ELA-Literacy/introduction/key-design-consideration/

[viii] Twelve states shifted downward from strong to medium and five states shifted upward from medium to strong, netting out to a seven-state swing.




effect

Eurozone desperately needs a fiscal transfer mechanism to soften the effects of competitiveness imbalances


The eurozone has three problems: short-run national debt obligations that cannot be met, medium-term imbalances in trade competitiveness, and long-term structural flaws.

The short-run problem requires more of the monetary easing that Germany has, with appalling shortsightedness, been resisting, and less of the near-term fiscal restraint that Germany has, with equally appalling shortsightedness, been seeking. To insist that Greece meet all of its near-term debt service obligations makes about as much sense as did French and British insistence that Germany honor its reparations obligations after World War I. The latter could not be and were not honored. The former cannot and will not be honored either.

The medium-term problem is that, given a single currency, labor costs are too high in Greece and too low in Germany and some other northern European countries. Because adjustments in currency values cannot correct these imbalances, differences in growth of wages must do the job—either wage deflation and continued depression in Greece and other peripheral countries, wage inflation in Germany, or both. The former is a recipe for intense and sustained misery. The latter, however politically improbable it may now seem, is the better alternative.
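
The arithmetic behind that claim can be made concrete. With the nominal exchange rate fixed inside a currency union, relative unit labor costs can converge only through a wage-growth differential. The sketch below uses invented numbers (a 20 percent competitiveness gap, 3 percent German wage inflation, frozen Greek wages) to show roughly how long such an adjustment would take.

```python
import math

# Invented illustration: years of a wage-growth differential needed to
# close a competitiveness gap when the exchange rate cannot move.
gap = 1.20        # peripheral unit labor costs start 20% above Germany's
w_surplus = 0.03  # assumed annual wage growth in Germany
w_deficit = 0.00  # assumed annual wage growth in Greece

# Relative costs evolve as gap * ((1 + w_deficit) / (1 + w_surplus)) ** t;
# solve for the t at which the ratio returns to 1.
years = math.log(gap) / math.log((1 + w_surplus) / (1 + w_deficit))
print(f"About {years:.1f} years under these assumptions")  # roughly 6
```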

The long-term problem is that the eurozone lacks the fiscal transfer mechanisms necessary to soften the effects of competitiveness imbalances while other forms of adjustment take effect. This lack places extraordinary demands on the willingness of individual nations to undertake internal policies to reduce such imbalances. Until such fiscal transfer mechanisms are created, crises such as the current one are bound to recur.

Present circumstances call for short-term expansionary policies led, or at least accepted, by the surplus nations, notably Germany. Those nations will also have to recognize that not all Greek debts will be paid, and that debt service payments will not always be made on time or at the originally negotiated interest rates. The price for those concessions should be a current and credible commitment by the peripheral countries, notably Greece, to eventually restore and maintain fiscal balance.


Publication: The International Economy




effect

The polarizing effect of Islamic State aggression on the global jihadi movement


effect

Experts assess the nuclear Non-Proliferation Treaty, 50 years after it went into effect

March 5, 2020 marks the 50th anniversary of the entry into effect of the Treaty on the Non-Proliferation of Nuclear Weapons (NPT). Five decades on, is the treaty achieving what was originally envisioned? Where is it succeeding in curbing the spread of nuclear weapons, and where might it be falling short? Four Brookings experts on defense…


effect

Ozone hole is officially shrinking, proof that international treaties can be effective

New NASA study offers first direct proof that the ozone hole is recovering thanks to the Montreal Protocol treaty and the international ban on CFCs.




effect

Cabin project follows stress-reducing effect of living in nature -- the Swedish way (Video)

Swedes enjoy an interesting "close-to-nature" lifestyle -- this informal study shows how it might help visitors from other countries.




effect

GM Volt Versus Toyota Prius: Which Design Type Will Be More Effective At Reducing Stack & Tailpipe Emissions, And Energy Consumption?

This is one of those comparison posts that could draw many angry comments, like Could Hype Sell An Inferior Hybrid? - Ford Fusion versus Toyota Camry did. Please carefully read the caveats.




effect

Field Guide to Eco-Friendly, Efficient, Effective Print

Design like you give a damn with the second edition of Monadnock Paper Mills' how-to guide for creating more-sustainable print materials. A Field Guide: Eco-Friendly, Efficient and Effective Print, accompanied by luscious illustrations by the




effect

Hawaii’s plastic bag ban goes into effect, but…

On the first of this month, Hawaii became the first state in the U.S. to put a plastic bag ban into effect.




effect

Rethinking death to better understand the effects of chemicals

Thought experiments worked for Einstein. Can they help protect the environment too?




effect

The Thoroughly Positive Effects of Positivity & Why Environmentalism Could Use More Of It

There's a really fascinating feature over at Greater Good on the powerful transformative effects that positive emotions have on our wellbeing, our lives, our bodies, those around us. I won't relay all that Barbara




effect

Effects of Global Warming Inspire Alterations to Famous Aalto Vase

The vase designed and named after Finnish designer Alvar Aalto is an icon among the design-savvy. The now-classic piece was released in 1937 at the World Fair in Paris. Today, the vase is produced by Iittala, which has slightly changed the size and




effect

Resist the Diderot Effect!

First identified by a French philosopher more than 250 years ago, it describes how one purchase can lead to another.




effect

Only 1/3 of sunscreens are safe and effective, here's where to find them

The 2019 rating by EWG finds that most sunscreens contain sketchy ingredients and/or don't offer adequate protection.




effect

Ask Pablo: Do Solar Panels Contribute To The Heat Island Effect?

Dear Pablo: Does installing commercial rooftop solar PV (with the dark-colored PV cells) negate the effect of painting that same roof white to alleviate the "heat island" effect in




effect

Effective frequency in sustainable messaging

In our mission to close the “green-gap” through sustainable messaging, every bit of insight counts.




effect

Quote of the day: "Oil spills can have positive effects"

Pipeline company Kinder Morgan claims that they create "business and employment opportunities".




effect

William McDonough at Dwell on Design: A More "Effective" Not Just More "Efficient" Future

A compact chicken coop for city dwellers that wheels around the yard and fertilizes soil is on display at "Dwell on Design" this weekend. The show's awards recognize




effect

Fairtrade International takes prize for most effective label

Despite recent criticisms, a new report shows that Fairtrade International is doing better work than any of its competitors.




effect

8 steps for using a paper planner effectively

Paper planners are effective only if you use them properly and regularly. Here are some ways to get into the groove, if you're not yet an addict!




effect

Is shaming people for flying effective?

Greta Thunberg's sailboat journey has triggered a heated debate over how to encourage people to take climate action.





effect

Adam Neumann lawsuit will have long-term effects: WSJ's Maureen Farrell

WeWork co-founder Adam Neumann is now suing his onetime ally SoftBank. Maureen Farrell, WSJ, and CNBC's Deirdre Bosa join 'Power Lunch' to discuss whether WeWork can withstand this and how it will impact the company.